To obtain point cloud observations, see `test_flowbot3d.py` for reference. In particular, the function `get_pointcloud` in `utils.py` transforms depth images into point cloud data.
Example:
```python
import numpy as np

from utils import get_pointcloud

# Convert the depth channel of the observation into a world-frame point cloud.
pcd = get_pointcloud(observation['image'][:, :, -1], observation['image'], sim.segmentation_mask, sim._scene_cam_intrinsics, sim.cam_pose_matrix)[0]
# Keep only points more than 1 cm above the ground plane.
downsample = pcd[:, 2] > 1e-2
# Mark the points that belong to the link of the selected joint.
segmask = np.zeros_like(sim.link_id_pts)
segmask[np.where(sim.link_id_pts == sim.selected_joint)] = 1
segmask = segmask[downsample]
segmask = segmask.astype('bool')
```
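To index the point cloud with `segmask`, the same mask needs to be applied to `pcd` as well (a small addition, not part of the original snippet), so the two arrays stay aligned:
```python
pcd = pcd[downsample]    # filter points to match segmask's length
part_pts = pcd[segmask]  # points belonging to the selected joint's link
```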
To get ground-truth flows (for all parts) from the PyBullet environment, check out the `create_env_by_id` and `create_train_envs` functions in `create_flowbot_env_utils.py` for an example. You just need to feed in an environment id, and you get a segmented point cloud (3x1200) and flow vectors for all parts (a dictionary whose keys are `link_id`s).
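A minimal usage sketch, assuming `create_env_by_id` takes an environment id and returns the segmented point cloud and the per-link flow dictionary described above (the exact signature and return values may differ; check `create_flowbot_env_utils.py`):
```python
from create_flowbot_env_utils import create_env_by_id

# Hypothetical call: the argument name and return values are assumptions
# based on the description above, not the verified signature.
pcd, flow_dict = create_env_by_id(env_id=0)

print(pcd.shape)                # expected (3, 1200): segmented point cloud
for link_id, flow in flow_dict.items():
    print(link_id, flow.shape)  # one set of flow vectors per articulated part
```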
Zhenjia Xu,
Zhanpeng He,
Shuran Song
Columbia University
Robotics and Automation Letters (RA-L) / ICRA 2022
Project Page | Video | arXiv
This repo contains the PyTorch implementation for the paper "UMPNet: Universal Manipulation Policy Network for Articulated Objects".
The code is built with Python 3.6. Dependencies are listed in `requirements.txt` and can be installed with pip:
```
pip install -r requirements.txt
```
Prepare the object URDFs and the pretrained model:
- `mobility_dataset`: URDFs of 12 training and 10 testing object categories.
- `pretrained`: pretrained model weights.
Download, unzip, and organize them as follows:
```
/umpnet
    /mobility_dataset
    /pretrained
    ...
```
There are two modes of testing in the GUI: exploration and manipulation.
```
# Open-ended state exploration
python test_gui.py --mode exploration --category CATEGORY

# Goal-conditioned manipulation
python test_gui.py --mode manipulation --category CATEGORY
```
Here `CATEGORY` can be chosen from the following (a concrete example follows the list):
- [Training categories]: Refrigerator, FoldingChair, Laptop, Stapler, TrashCan, Microwave, Toilet, Window, StorageFurniture, Switch, Kettle, Toy
- [Testing categories]: Box, Phone, Dishwasher, Safe, Oven, WashingMachine, Table, KitchenPot, Bucket, Door
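For example, to run goal-conditioned manipulation on a training category:
```
python test_gui.py --mode manipulation --category Refrigerator
```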
Quantitative evaluation also supports both modes: exploration and manipulation.
```
# Open-ended state exploration
python test_quantitative.py --mode exploration

# Goal-conditioned manipulation
python test_quantitative.py --mode manipulation
```
By default, it will run quantitative evaluation for each category. You can modify `pool_list` (L91) to run evaluation for a specific category; a hypothetical illustration follows.
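A hypothetical sketch, assuming `pool_list` is a plain Python list of category names (check the actual definition around L91 of `test_quantitative.py` before editing):
```python
# Restrict quantitative evaluation to a single category.
# The element format here is an assumption; mirror whatever the file actually uses.
pool_list = ['Refrigerator']
```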
Hyper-parameters mentioned in the paper are provided as default arguments.
```
python train.py --exp EXP_NAME
```
A directory will then be created at `exp/EXP_NAME`, in which checkpoints, visualizations, and the replay buffer are stored; a sketch of the layout follows.
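A hypothetical layout (the subdirectory names are illustrative, not verified against `train.py`):
```
exp/EXP_NAME
    checkpoints/      # model snapshots saved during training
    visualization/    # rendered rollouts and debug images
    replay_buffer/    # stored interaction data
```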
```
@article{xu2022umpnet,
  title={UMPNet: Universal manipulation policy network for articulated objects},
  author={Xu, Zhenjia and He, Zhanpeng and Song, Shuran},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  publisher={IEEE}
}
```
This repository is released under the MIT license. See LICENSE for additional details.
- The code for spherical sampling is modified from area-beamforming.
- The code for UNet is modified from Pytorch-UNet.