Some hints on training with your own data #4
Hi, as you mentioned above, there are five files we need to prepare before training. Could you explain how you preprocessed your original videos, and which tools you used to obtain these five files? Thanks!
Maybe
Hello, I'm interested in training ASH using my own multi-view video data. Could you please guide me on how to obtain these files?
Thanks! And do you know where I should place my custom video, and which command to run?
But how can I get "Subject0022/ddc.skin", "Subject0022/ddc.skeleton", "Subject0022/ddc.obj", "Subject0022/skeletoolToGTPose/poseAnglesRotationNormalized.motion", and "Subject0022/skeletoolToGTPose/poseAngles.motion" for my own dataset? I carefully reviewed the code and realized that generating the cache files you mentioned requires the skeleton information to be prepared in advance.
Since our approach requires only the motion textures as input conditions, it is possible, and fairly intuitive, to adapt it to other kinds of drivable human templates.
Assume you have a skinned/drivable template mesh with a UV parameterization.
The training dataloader already provides the tools that render the information attached to the vertices into textures, so adapting it to train on another drivable human model mainly requires preparing the following cache files:
cached_ret_canonical_delta.pkl
cached_ret_posed_delta.pkl
cached_temp_vert_normal.pkl
cached_fin_rotation_quad.pkl
cached_fin_translation_quad.pkl
cached_joints.pkl
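As one concrete example of producing such a cache, the sketch below computes area-weighted per-vertex normals for a template mesh and pickles them, in the spirit of `cached_temp_vert_normal.pkl`. This is a hypothetical illustration: the actual array shapes, dtypes, and pickle layout expected by the repository's dataloader are assumptions, so check them against the provided loading code before adapting it.

```python
# Hypothetical sketch of building a vertex-normal cache for a custom
# skinned template. The file name comes from the thread; the exact
# array layout expected by the dataloader is an assumption.
import pickle
import numpy as np

def compute_vertex_normals(verts, faces):
    """Area-weighted per-vertex normals for a triangle mesh.

    verts: (V, 3) float array, faces: (F, 3) int array.
    """
    normals = np.zeros_like(verts)
    tris = verts[faces]                                  # (F, 3, 3)
    # Cross product of two edges gives an area-weighted face normal.
    face_n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    for corner in range(3):
        # Accumulate each face normal onto its three corner vertices.
        np.add.at(normals, faces[:, corner], face_n)
    norm = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

# Toy single-triangle "template" in the z = 0 plane.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
normals = compute_vertex_normals(verts, faces)

with open("cached_temp_vert_normal.pkl", "wb") as f:
    pickle.dump(normals.astype(np.float32), f)
```

The other caches (posed/canonical deltas, per-bone rotations and translations, joint positions) would be built the same way: compute the per-vertex or per-joint quantity once per frame from your own skinned template, then pickle it so the dataloader can render it into the motion textures.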