
10gb lambda #14

Open
Emveez opened this issue Jan 15, 2021 · 1 comment

Comments


Emveez commented Jan 15, 2021

So AWS Lambda now supports up to 10 GB of memory and scales compute capacity in proportion to the memory allocation. I was running inference with a 3 GB allocation and compared it with 10 GB, but did not see any major improvement. Why could this be? Maybe the statically compiled torch cannot use all vCPUs?
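For context, Lambda allocates CPU in proportion to memory: roughly 1769 MB corresponds to one full vCPU, and the 10240 MB maximum exposes about six vCPUs. A minimal C++ sketch (not from this thread, purely illustrative) to check how many hardware threads the runtime actually reports:

```cpp
#include <iostream>
#include <thread>

int main() {
  // On Lambda this should grow with the memory allocation:
  // ~1 at 1769 MB, ~6 at the 10240 MB maximum.
  std::cout << "hardware threads visible: "
            << std::thread::hardware_concurrency() << '\n';
  return 0;
}
```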

szymonmaszke (Owner) commented

I think all the cores should be used out of the box, even with the static build. You may also try changing some PyTorch flags as described in the documentation and in torchlambda build.

You can see the available flags here and specify them like this (for example: torchlambda build --pytorch USE_OPENMP=ON).
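A hedged sketch of how one might confirm at runtime what the statically built libtorch was actually compiled with: at::get_parallel_info() reports the OpenMP/MKL configuration and current thread counts (the thread count of 6 below is illustrative, not a recommendation):

```cpp
#include <ATen/Parallel.h>
#include <iostream>

int main() {
  // Prints whether OpenMP/MKL are enabled and how many intra-op
  // threads the build is currently using.
  std::cout << at::get_parallel_info() << '\n';

  // Intra-op threads can also be set explicitly to match the vCPU count.
  at::set_num_threads(6);  // illustrative value
  std::cout << "intra-op threads: " << at::get_num_threads() << '\n';
  return 0;
}
```

Comparing this output between a 3 GB and a 10 GB allocation should show quickly whether the build is thread-limited.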

You may need to profile your application somehow, and that might require manually changing the C++ code. If you find something, please let me know.
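As a starting point, a minimal wall-clock timing sketch (the model path model.ptc and the input shape are assumptions, not from this project):

```cpp
#include <torch/script.h>
#include <chrono>
#include <iostream>

int main() {
  // Load a TorchScript model; path is illustrative.
  torch::jit::script::Module module = torch::jit::load("model.ptc");
  module.eval();
  torch::NoGradGuard no_grad;

  auto input = torch::randn({1, 3, 224, 224});  // illustrative shape
  module.forward({input});  // warm-up so lazy init doesn't skew timing

  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < 10; ++i) {
    module.forward({input});
  }
  auto end = std::chrono::steady_clock::now();
  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                end - start)
                .count();
  std::cout << "mean forward time: " << ms / 10.0 << " ms\n";
  return 0;
}
```

If the mean time barely changes between 3 GB and 10 GB, the model is likely not intra-op parallel enough (or is memory-bandwidth bound) rather than vCPU-starved.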
