
Memory leak in Pipeline() on a CPU #51

Open
navotoz opened this issue May 31, 2022 · 3 comments

navotoz commented May 31, 2022

Hello,

I initialized the model like so:

```python
nlp = Pipeline('english', gpu=False, cache_dir='./cache')
```

Then called it repeatedly with:

```python
with torch.no_grad():
    for idx in range(10000):
        nlp.lemmatize('Hello World', is_sent=True)
```

When running this code, RAM usage slowly grows.
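The growth can also be quantified without a graph. Below is a minimal, trankit-free sketch (not from the report) that samples the process's peak RSS via the standard library; `workload` is a hypothetical stand-in for the `nlp.lemmatize(...)` call:

```python
import resource


def peak_rss_kb() -> int:
    """Peak resident set size of this process, in kilobytes (on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def measure_growth(workload, iterations: int = 1000) -> int:
    """Run `workload` repeatedly and return the growth in peak RSS (KB)."""
    before = peak_rss_kb()
    for _ in range(iterations):
        workload()
    return peak_rss_kb() - before
```

Passing `lambda: nlp.lemmatize('Hello World', is_sent=True)` as the workload should reproduce the steady growth described above.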

I attached a graph of the memory filling up.

I'm using python3.7, trankit=1.1.0, torch=1.7.1.

Thank you!


olegpolivin commented Sep 2, 2022

I can confirm: when running on CPU, memory consumption steadily increases.
@navotoz, could you please tell me whether you have been able to solve this issue?

@Dielianss

Hi @navotoz, I can confirm this issue also appears with Python 3.7, trankit 1.1.1, and torch 1.8.1+cu101.


navotoz commented Sep 5, 2022

Hi @Dielianss @olegpolivin,
Thanks for the comments. We managed to mitigate this issue by running inference inside a Docker container and restarting it at a fixed interval.
This is not a real fix, but at least we can work with the model.
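A similar restart-based workaround can be approximated in-process. Here is a hedged sketch (not from this thread) using the standard library's `multiprocessing.Pool` with `maxtasksperchild`, so the worker process is recycled, and with it any leaked memory, after a fixed number of calls. `lemmatize_one` is a hypothetical stand-in for the trankit call:

```python
from multiprocessing import Pool


def lemmatize_one(text: str) -> str:
    # Hypothetical stand-in: in a real worker you would build the trankit
    # Pipeline once (e.g. lazily, in a module-level global) and return
    # nlp.lemmatize(text, is_sent=True).
    return text.lower()


def run_batched(texts, calls_per_worker: int = 100):
    """Process `texts` in a single worker process that is restarted every
    `calls_per_worker` tasks, bounding any per-process memory leak."""
    with Pool(processes=1, maxtasksperchild=calls_per_worker) as pool:
        return pool.map(lemmatize_one, texts)
```

For example, `run_batched(['Hello World'] * 10000)` should keep resident memory bounded, since each recycled worker starts from a fresh address space.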
