🐛 Bugs / Unexpected behaviors
The fit_textured_mesh tutorial notebook does not work as expected. The "Mesh prediction via silhouette rendering" section works fine, but the "Mesh and texture prediction via textured rendering" section does not optimize as expected.
Instructions To Reproduce the Issue:
System:
Windows 10 Pro
RTX 3060Ti
PyTorch: 1.11.0
PyTorch3D: 0.6.2 (installed from source)
Full Versions: conda_list.txt
To reproduce, run the "fit_textured_mesh" notebook up to and including the optimization loop of the "Mesh and texture prediction via textured rendering" section. I have only just installed PyTorch3D, so there could be an issue with my installation, but since the other sections worked and I was able to work around the problem (see below), the installation seems fine.
Expected output: the optimization is (at least somewhat) successful: the mesh and texture are modified to look more like the target image.
Actual output: the optimization immediately deforms the mesh into an unrealistic solution and cannot recover.
I debugged this a bit and found that the problem is the RGB loss. Any time the RGB loss is allowed to modify the mesh I get behaviour like this. If I break that link, for example by dropping the RGB loss from the total or by keeping it from updating the mesh offsets, the optimization proceeds as expected. The best solution I've found so far that still lets me use both losses is to run two backward passes per iteration: one that updates the mesh (with all losses except RGB) and one that updates the texture (with the RGB loss); a sketch is included below. This gives me the sort of optimization I was expecting, but it's obviously not ideal: it's inefficient, and the mesh optimizer can't use any information from the RGB loss (and vice versa).
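For concreteness, here is a minimal sketch of that two-pass workaround. It assumes the tutorial's variable names (src_mesh, deform_verts, sphere_verts_rgb, Niter), and the two helpers geometry_losses and rgb_loss are hypothetical stand-ins for the tutorial's silhouette/regularizer terms and its photometric term, so it won't drop straight into the notebook without adapting those pieces:

```python
import torch
from pytorch3d.renderer import TexturesVertex

# Two optimizers so each backward pass only ever steps one set of parameters.
mesh_optimizer = torch.optim.SGD([deform_verts], lr=1.0, momentum=0.9)
texture_optimizer = torch.optim.SGD([sphere_verts_rgb], lr=1.0, momentum=0.9)

for i in range(Niter):
    # Pass 1: update the mesh using every loss except RGB.
    # The texture is detached so no geometry gradient reaches the colours.
    mesh_optimizer.zero_grad()
    new_src_mesh = src_mesh.offset_verts(deform_verts)
    new_src_mesh.textures = TexturesVertex(verts_features=sphere_verts_rgb.detach())
    loss_geom = geometry_losses(new_src_mesh)  # hypothetical: silhouette + edge/normal/laplacian terms
    loss_geom.backward()
    mesh_optimizer.step()

    # Pass 2: update the per-vertex colours using the RGB loss only.
    # The offsets are detached so no RGB gradient reaches the mesh.
    texture_optimizer.zero_grad()
    new_src_mesh = src_mesh.offset_verts(deform_verts.detach())
    new_src_mesh.textures = TexturesVertex(verts_features=sphere_verts_rgb)
    loss_rgb = rgb_loss(new_src_mesh)  # hypothetical: photometric loss against the target images
    loss_rgb.backward()
    texture_optimizer.step()
```

Keeping the parameters in two separate optimizers, together with the detach() calls, is what guarantees that the RGB gradient never touches deform_verts.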
Output with no changes:
Output with 2 losses:
Other results
If I render the prediction after every iteration, it's clear why the optimization fails: after the first iteration the vertex offsets take on extremely large values from which the mesh cannot recover (all subsequent images are completely white).
The minimum and maximum values of the deform_verts tensor after a single iteration:
max: 238345080000.0
min: -175514620000.0
This is for a sphere of radius 1, so these values are clearly wrong.
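For anyone reproducing this, a quick check like the following after each optimizer step is enough to catch the blow-up (deform_verts is the tutorial's per-vertex offset tensor):

```python
# Print the range of the vertex offsets; for a unit sphere anything
# much larger than ~1 indicates the optimization has diverged.
with torch.no_grad():
    print(f"deform_verts min: {deform_verts.min().item():.4g}, "
          f"max: {deform_verts.max().item():.4g}")
```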
I can see the GitHub notebook already has this setting. I downloaded the notebook from the website, and it looks like the website version doesn't have it (https://pytorch3d.org/tutorials/fit_textured_mesh). Maybe the website could be updated to pull the notebook from GitHub? Anyway, thanks for the help; happy for this to be closed since the issue is already being tracked.