
OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. #34

Closed
cdeepali opened this issue Feb 5, 2021 · 8 comments

cdeepali commented Feb 5, 2021

Running the test_jit tests with OpenCE produces numerous warnings:

OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.

The same warning is not observed with WMLCE for the same version of OpenBLAS.

cdeepali self-assigned this Feb 5, 2021

cdeepali commented Feb 5, 2021

As a workaround, users can set OMP_NUM_THREADS=1 to avoid using multiple threads.
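
For reference, a minimal sketch of that workaround applied from inside a Python script rather than exported in the shell (the variable has to be set before the OpenMP runtime is initialized, i.e. before torch is imported; torch.set_num_threads is a post-import alternative):

```python
import os

# OMP_NUM_THREADS must be set before the OpenMP runtime starts up,
# i.e. before importing torch (or numpy), for it to take effect.
os.environ["OMP_NUM_THREADS"] = "1"

import torch

# Alternatively, cap intra-op parallelism after the import:
torch.set_num_threads(1)
print(torch.get_num_threads())  # expected: 1
```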

@cdeepali

pytorch/pytorch#52047


lgg commented Jun 3, 2021

@cdeepali I had the same issue. Building OpenBLAS from source with USE_OPENMP=1 didn't help for me.

I have a few Jetson Nanos, and I think I found a few ways to fix it:

  1. Downgrading the JetPack version
    On a Jetson Nano with JetPack 4.4.1 everything works fine; on a new Jetson Nano with JetPack 4.5.1 I have the same issue.
    Check the difference in the jtop output:

[jtop screenshot: JetPack 4.4.1]

[jtop screenshot: JetPack 4.5.1]

NVIDIA updated VPI (https://docs.nvidia.com/vpi/index.html), and I guess that is what causes the problem.

  2. Downgrading torch
    On another Jetson Nano with JetPack 4.5.1 I tried this:
  • sudo apt-get install libopenblas-base libopenmpi-dev
  • wget https://nvidia.box.com/shared/static/wa34qwrwtk9njtyarwt5nvo6imenfy26.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl
  • ./venv/bin/pip3 install torch-1.7.0-cp36-cp36m-linux_aarch64.whl
  • and it helped (a quick check is sketched below)
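
For what it's worth, a small sanity check I'd run after installing the wheel (with the venv's Python; the matrix size is arbitrary), just to confirm the version and push one matmul through the BLAS-backed path:

```python
import torch

print(torch.__version__)  # expect 1.7.0 for the wheel above

# Quick smoke test of the BLAS-backed matmul path.
a = torch.randn(512, 512)
b = torch.randn(512, 512)
print((a @ b).sum().item())
```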

@bnemanich

This was resolved by switching PyTorch to use TBB instead of OpenMP.


lgg commented Jul 27, 2021

@bnemanich how did you do that?


@jayfurmanek

Yeah, the change was in 2ab8570.


jayfurmanek commented Jul 27, 2021

And just a note for why this was needed.

Anaconda's toolchain uses gomp for OpenMP, and gomp is known not to be fork-safe (it can hang when the program using it forks). To avoid hangs, Anaconda has disabled OpenMP in its OpenBLAS package. (conda-forge has since added llvm-openmp as an OpenMP variant, but defaults hasn't moved there yet.)

This warning appears when PyTorch is built with OpenMP support but then uses an OpenBLAS that was not.

Switching PyTorch to use TBB instead of OpenMP avoids the problem.

The downside is that we still rely on the OpenBLAS package from defaults, which has no OpenMP support and is therefore slower. In a future release we will likely include our own OpenBLAS package built with llvm-openmp, unless defaults provides one.
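
If it helps anyone checking their own build, a small sketch using torch's standard build-introspection calls to see which threading backend and BLAS a given PyTorch package ended up with (output format varies by version):

```python
import torch

# ATen/Parallel settings: shows whether the parallel backend is OpenMP,
# TBB, or the native thread pool, and which OpenMP runtime is linked in.
print(torch.__config__.parallel_info())

# Full build configuration, including the BLAS backend and build flags
# such as USE_OPENMP.
print(torch.__config__.show())
```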
