When running PyTorch 2.1.2 with CUDA 11.8 on a Kaggle notebook using a T4 GPU, TorchInductor fails with the following error:
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: libcuda.so cannot found!
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The error only occurs when I train with splatfacto.
Is there an issue with how TorchInductor handles CUDA on Kaggle's T4 GPU setup? What would be the proper workaround to ensure TorchInductor compiles correctly in this environment?
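A minimal check, independent of nerfstudio (hypothetical snippet, not from the run below): if this also fails with the same libcuda.so assertion, the problem is the Kaggle environment rather than splatfacto, which seems to trigger it only because get_viewmat goes through torch.compile (visible in the traceback below).

import torch

def f(x):
    return torch.sin(x) + torch.cos(x)

compiled = torch.compile(f, backend="inductor")  # same backend named in the error
x = torch.randn(8, device="cuda")
print(compiled(x))  # first call triggers TorchInductor/Triton compilation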
Full error:
[13:27:37] Saving config to: outputs/unnamed/splatfacto/2025-03-02_132737/config.yml experiment_config.py:136
Saving checkpoints to: outputs/unnamed/splatfacto/2025-03-02_132737/nerfstudio_models trainer.py:142
Auto image downscale factor of 1 nerfstudio_dataparser.py:484
Dataset is overriding orientation method to none nerfstudio_dataparser.py:232
Warning: load_3D_points set to true but no point cloud found. splatfacto will use random point cloud initialization.
Dataset is overriding orientation method to none nerfstudio_dataparser.py:232
Train dataset has over 500 images, overriding cache_images to cpu. If you still get OOM errors or segfault, please
consider setting cache_images to 'disk'
Downloading: "https://download.pytorch.org/models/alexnet-owt-7be5be79.pth" to /root/.cache/torch/hub/checkpoints/alexnet-owt-7be5be79.pth
100%|█████████████████████████████████████████| 233M/233M [00:01<00:00, 175MB/s]
╭─────────────── viser ───────────────╮
│ ╷ │
│ HTTP │ http://0.0.0.0:7007/ │
│ Websocket │ ws://0.0.0.0:7007 │
│ ╵ │
╰─────────────────────────────────────╯
(viser) Share URL requested!
(viser) Generated share URL (expires in 24 hours, max 32 clients): https://street-descriptive.share.viser.studio/
[NOTE] Not running eval iterations since only viewer is enabled.
Use --vis {wandb, tensorboard, viewer+wandb, viewer+tensorboard} to run with eval.
No Nerfstudio checkpoint to load, so training from scratch.
Disabled comet/tensorboard/wandb event writers
[13:27:53] Caching / undistorting train images full_images_datamanager.py:238
Caching / undistorting train images ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:03
Printing profiling stats, from longest to shortest duration in seconds
Trainer.train_iteration: 10.9377
VanillaPipeline.get_train_loss_dict: 10.9363
Traceback (most recent call last):
File "/usr/local/envs/nerfstudio/bin/ns-train", line 8, in
sys.exit(entrypoint())
File "/kaggle/working/nerfstudio/nerfstudio/scripts/train.py", line 272, in entrypoint
main(
File "/kaggle/working/nerfstudio/nerfstudio/scripts/train.py", line 257, in main
launch(
File "/kaggle/working/nerfstudio/nerfstudio/scripts/train.py", line 190, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/kaggle/working/nerfstudio/nerfstudio/scripts/train.py", line 101, in train_loop
trainer.train()
File "/kaggle/working/nerfstudio/nerfstudio/engine/trainer.py", line 266, in train
loss, loss_dict, metrics_dict = self.train_iteration(step)
File "/kaggle/working/nerfstudio/nerfstudio/utils/profiler.py", line 111, in inner
out = func(*args, **kwargs)
File "/kaggle/working/nerfstudio/nerfstudio/engine/trainer.py", line 502, in train_iteration
_, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
File "/kaggle/working/nerfstudio/nerfstudio/utils/profiler.py", line 111, in inner
out = func(*args, **kwargs)
File "/kaggle/working/nerfstudio/nerfstudio/pipelines/base_pipeline.py", line 299, in get_train_loss_dict
model_outputs = self._model(ray_bundle) # train distributed data parallel model if world_size > 1
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/nerfstudio/nerfstudio/models/base_model.py", line 143, in forward
return self.get_outputs(ray_bundle)
File "/kaggle/working/nerfstudio/nerfstudio/models/splatfacto.py", line 534, in get_outputs
viewmat = get_viewmat(optimized_camera_to_world)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2069, in run
super().run()
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in step
getattr(self, inst.opname)(inst)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2157, in RETURN_VALUE
self.output.compile_subgraph(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 833, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/usr/local/envs/nerfstudio/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 957, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1024, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/dynamo/output_graph.py", line 1009, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/init.py", line 1568, in call
return compile_fx(model, inputs, config_patches=self.config)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1150, in compile_fx
return aot_autograd(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3891, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3429, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2212, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2392, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1573, in aot_dispatch_base
compiled_fw = compiler(fw_module, flat_args)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1092, in fw_compiler_base
return inner_compile(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 80, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/debug.py", line 228, in inner
return fn(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 54, in newFunction
return old_func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 341, in compile_fx_inner
compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 565, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/graph.py", line 970, in compile_to_fn
return self.compile_to_module().call
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/graph.py", line 941, in compile_to_module
mod = PyCodeCache.load_by_key_path(key, path, linemap=linemap)
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1139, in load_by_key_path
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_root/yq/cyq7irqbf27unmgrdvtcvenpomt3e3vnoprniu4gqjgiwecijmnw.py", line 154, in
async_compile.wait(globals())
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1418, in wait
scope[key] = result.result()
File "/usr/local/envs/nerfstudio/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1277, in result
self.future.result()
File "/usr/local/envs/nerfstudio/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/local/envs/nerfstudio/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: libcuda.so cannot found!
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
From the last assertion error, AssertionError: libcuda.so cannot found!, this is likely an issue where the CUDA driver library (libcuda.so) is not being detected by TorchInductor/Triton on the Kaggle GPU instance.
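A few things worth trying (a sketch; the library directory used below is an assumption and may differ on Kaggle):

import ctypes.util
import torch._dynamo

# 1. Check whether the dynamic loader can see the CUDA driver library at all.
#    Triton (which TorchInductor calls) needs libcuda.so, not just the CUDA runtime.
print(ctypes.util.find_library("cuda"))  # None means libcuda.so is not on the loader path

# 2. If only libcuda.so.1 is present, creating a libcuda.so symlink and refreshing the
#    loader cache is a common fix (run from a notebook cell; the directory is a guess):
#    !find / -name "libcuda.so*" 2>/dev/null
#    !ln -s /usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so
#    !ldconfig

# 3. Stopgap: let compiled regions fall back to eager instead of aborting training,
#    exactly as the error message suggests (training runs, just without Inductor speedups).
torch._dynamo.config.suppress_errors = True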