This repo modifies Columbia's code to test FlowBot3D in PyBullet for a fair comparison.

Usage

To obtain the point cloud observation data, see test_flowbot3d.py for reference. Then look at the function get_pointcloud in utils.py, which transforms depth images into point cloud data. Example:

import numpy as np
from utils import get_pointcloud

# Back-project the depth channel into a point cloud using the camera intrinsics and pose
pcd = get_pointcloud(observation['image'][:, :, -1], observation['image'], sim.segmentation_mask, sim._scene_cam_intrinsics, sim.cam_pose_matrix)[0]
# Keep only points whose z-coordinate is above a small threshold
downsample = pcd[:, 2] > 1e-2
# Mark points whose link id matches the selected joint, apply the same filter, and convert to a boolean mask
segmask = np.zeros_like(sim.link_id_pts)
segmask[np.where(sim.link_id_pts == sim.selected_joint)] = 1
segmask = segmask[downsample]
segmask = segmask.astype('bool')

To get ground truth flows (for all parts) from the PyBullet env, check out the create_env_by_id and create_train_envs functions in create_flowbot_env_utils.py. You just need to feed in the environment id, and you get a segmented point cloud (3x1200) and flow vectors for all parts (a dictionary whose keys are link_ids).
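A minimal sketch of such a query is below; the exact return signature of create_env_by_id is not documented here, so the unpacking and variable names are assumptions to check against create_flowbot_env_utils.py:

from create_flowbot_env_utils import create_env_by_id

# Assumed unpacking: a segmented point cloud (3 x 1200) and a dict mapping
# each link_id to that part's flow vectors. Verify the actual return order
# in create_flowbot_env_utils.py before relying on this.
env_id = 0  # hypothetical environment id
pcd, flows = create_env_by_id(env_id)

print(pcd.shape)  # expected: (3, 1200)
for link_id, flow in flows.items():
    print(link_id, flow.shape)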

UMPNet: Universal Manipulation Policy Network for Articulated Objects

Zhenjia Xu, Zhanpeng He, Shuran Song
Columbia University
Robotics and Automation Letters (RA-L) / ICRA 2022

Overview

This repo contains the PyTorch implementation for paper "UMPNet: Universal Manipulation Policy Network for Articulated Objects".


Prerequisites

The code is built with Python 3.6. Libraries are listed in requirements.txt and can be installed with pip:

pip install -r requirements.txt

Data Preparation

Prepare the object URDFs and the pretrained model.

Download, unzip, and organize as follows:

/umpnet
    /mobility_dataset
    /pretrained
    ...

Testing

Test with GUI

There are two modes of testing: exploration and manipulation.

# Open-ended state exploration
python test_gui.py --mode exploration --category CATEGORY

# Goal conditioned manipulation
python test_gui.py --mode manipulation --category CATEGORY

Here CATEGORY can be chosen from:

  • [Training categories]: Refrigerator, FoldingChair, Laptop, Stapler, TrashCan, Microwave, Toilet, Window, StorageFurniture, Switch, Kettle, Toy
  • [Testing categories]: Box, Phone, Dishwasher, Safe, Oven, WashingMachine, Table, KitchenPot, Bucket, Door


Quantitative Evaluation

There are also two modes of testing: exploration and manipulation.

# Open-ended state exploration
python test_quantitative.py --mode exploration

# Goal conditioned manipulation
python test_quantitative.py --mode manipulation

By default, it will run quantitative evaluation for each category. You can modify pool_list (L91) to run evaluation for a specific category.
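For instance, assuming pool_list is a plain list of category-name strings, restricting evaluation to a single category might look like:

pool_list = ['Refrigerator']  # hypothetical edit: evaluate only this category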

Training

Hyper-parameters mentioned in the paper are provided as default arguments.

python train.py --exp EXP_NAME

A directory will then be created at exp/EXP_NAME, in which checkpoints, visualizations, and the replay buffer will be stored.

BibTeX

@article{xu2022umpnet,
  title={UMPNet: Universal manipulation policy network for articulated objects},
  author={Xu, Zhenjia and He, Zhanpeng and Song, Shuran},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  publisher={IEEE}
}

License

This repository is released under the MIT license. See LICENSE for additional details.

Acknowledgement