Dear all,
I am running Ubuntu 22.04 on a VM running on virtualbox.
I run python and verify that the DRJIT_LIBLLVM_PATH environment variable is properly set:

```
>>> os.environ['DRJIT_LIBLLVM_PATH']
'/usr/lib/llvm-14/lib/libLLVM.so'
```
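One way to separate "variable not visible to the process" from "library not loadable" is to try dlopening the file directly with ctypes (a diagnostic sketch; `check_libllvm` is just a name made up here):

```python
import ctypes
import os

def check_libllvm(path):
    """Try to dlopen the given shared library; return (ok, message)."""
    if not path:
        return False, "no path given (is DRJIT_LIBLLVM_PATH set in this process?)"
    try:
        # ctypes.CDLL raises OSError if the dynamic loader cannot open the file
        ctypes.CDLL(path)
        return True, f"{path} loaded successfully"
    except OSError as e:
        return False, f"could not load {path}: {e}"

ok, msg = check_libllvm(os.environ.get('DRJIT_LIBLLVM_PATH'))
print(ok, msg)
```

If this prints `False`, the problem is the library itself (missing file, wrong architecture, unresolved dependencies), not the environment variable.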
However, when I run `import sionna`, I get the following error:

```
2024-09-28 16:59:23.304419: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
jit_llvm_init(): your CPU does not support the fma instruction set, shutting down the LLVM backend...
Traceback (most recent call last):
  File "/home/vagrant/sionna/lib/python3.10/site-packages/mitsuba/__init__.py", line 107, in __getattribute__
    __import__('mitsuba.mitsuba' + variant + '_ext'),
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1050, in _gcd_import
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 674, in _load_unlocked
  File "", line 571, in module_from_spec
  File "", line 1176, in create_module
  File "", line 241, in _call_with_frames_removed
ImportError: jit_init_thread_state(): the LLVM backend is inactive because the LLVM shared library ("libLLVM.so") could not be found! Set the DRJIT_LIBLLVM_PATH environment variable to specify its path.
```
I am not sure whether the problem is that the environment variable is somehow not being picked up, or whether the error is caused by the missing FMA support, which shuts down the LLVM backend.
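Since the log complains about `fma`, it may also help to check which CPU flags the VirtualBox guest actually sees, as VirtualBox can hide host CPU features from the guest (a minimal sketch assuming a Linux guest; `guest_has_fma` is a hypothetical helper name):

```python
def guest_has_fma(cpuinfo_path='/proc/cpuinfo'):
    """Return True if the 'flags' line of /proc/cpuinfo lists the fma token."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith('flags'):
                    # exact token match, so e.g. 'fma4' does not count as 'fma'
                    return 'fma' in line.split(':', 1)[1].split()
    except OSError:
        pass
    return False

print(guest_has_fma())
```

If this prints `False` inside the VM while the host CPU does have FMA, the issue is the virtualized CPU feature set rather than Dr.Jit or the environment variable.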
Any help is appreciated.
BR
Daniel