
Possibly memory issues with SVC? #1010

Open
Stack-it-up opened this issue Jun 1, 2022 · 18 comments
Labels
bug Something isn't working

@Stack-it-up

Stack-it-up commented Jun 1, 2022

Description
I'm trying to use Intelex to accelerate training of an SVC. My dataset is pretty small (18 MB; in fact, I am attaching it, since it is a publicly available dataset, Universal Dependencies ISDT). I wasn't expecting this task to fill my 16 GB of RAM (and 16 GB of swap), so I wonder if this could be a bug. However, I am a student, so it may be an error on my part (if so, I'm sorry).

To Reproduce
Steps to reproduce the behavior:

  1. Download attached files in the same folder
  2. Change extension of train_parser from txt to py
  3. Install NLTK
  4. Run the python script
  5. See error

Expected behavior
A new file should be created with the training output. Instead, an Out Of Memory error is raised.

Note on NLTK implementation
The code for the train function is fairly straightforward; see the source here: https://www.nltk.org/_modules/nltk/parse/transitionparser.html#TransitionParser.train

Environment:

  • OS: Ubuntu 20.04
  • Intelex 2021.5
  • Python 3.9.11
  • scikit-learn 1.0.2
  • NLTK 3.7
  • conda 4.13.0
  • CPU: i5-10500

Attachments
train_parser.txt
it_isdt-ud-train.txt

EDIT:
the svmlight file generated by NLTK is actually 62 MB, and the memory used during sequential training (plain sklearn) is around 1 GB

@Stack-it-up Stack-it-up added the bug Something isn't working label Jun 1, 2022
@FischyM

FischyM commented Jun 2, 2022

How many threads are you using? SVM uses all available threads, so having N threads leads to consuming roughly N times more RAM. https://intel.github.io/scikit-learn-intelex/memory-requirements.html
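If memory really does scale with thread count, a rough upper bound can be sketched from the logical core count. This is only a back-of-the-envelope illustration; the per-thread figure below is a made-up assumption, not a measured value:

```python
import os

def estimated_peak_gb(per_thread_gb, n_threads=None):
    """Rough peak-memory estimate, assuming usage scales linearly with threads."""
    if n_threads is None:
        # os.cpu_count() reports logical cores, e.g. 12 on an i5-10500.
        n_threads = os.cpu_count() or 1
    return per_thread_gb * n_threads

# With a hypothetical 1.5 GB per thread on 12 logical cores:
print(estimated_peak_gb(1.5, 12))  # 18.0 -- enough to exhaust 16 GB of RAM
```

On that (purely illustrative) assumption, a workload that is comfortable single-threaded could still exhaust 16 GB once all 12 logical cores spin up.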

@Stack-it-up
Author

Thank you for your reply. My processor has 12 virtual cores, so I shouldn't be able to run more than 12 threads at once, is that right? I'm not sure if there is a way to set the maximum number of threads from Intelex.

@FischyM

FischyM commented Jun 2, 2022

I'm running into the same problem right now, actually. I'm not sure which environment variable controls the number of threads, so I came to this GitHub to find out! I'll let you know if I find what we are looking for.

I do have these five variables to test, to see whether they control the number of threads spawned from a single process, but I won't be able to test them until later tonight.

export OMP_NUM_THREADS=1
export BLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
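For libraries that do honor these variables, they generally must be in the environment before the heavy imports happen, since native thread pools are typically sized at import time. A minimal Python sketch of that ordering (the list of variables mirrors the exports above; whether sklearnex respects them is exactly what's in question here):

```python
import os

# Set the caps BEFORE importing numpy / sklearn / sklearnex,
# because native thread pools are usually sized at import time.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS",
            "NUMEXPR_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = "1"

# ... only now import the numeric libraries ...
print(os.environ["OMP_NUM_THREADS"])  # 1
```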

EDIT:
It doesn't appear that any of those change the thread usage of sklearnex. I tried some variations of SKLEARNEX_THREADS=1 and SKLEARNEX_NUM_THREADS=1, but they did not change the thread behavior. Hopefully, someone more knowledgeable will be able to answer this.

@PivovarA PivovarA self-assigned this Jun 14, 2022
@plenoi

plenoi commented Jun 18, 2022

I also face the same problem and am waiting for some help.
It always happens when I use SVC.

joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

EDIT:
I solved the problem by manually importing SVC and removing patch_sklearn():
from daal4py import daalinit
daalinit(1)

from daal4py.sklearn.svm import SVC

@lilybellesweet

Any update here? I would like to be able to set the number of threads, as some jobs misbehave on shared resources.

@Alexsandruss
Contributor

Alexsandruss commented Aug 11, 2022

Number of threads per SVM training/inference can be effectively limited with daalinit:

import daal4py as d4p
d4p.daalinit(1)

I checked that it works for SVM with Python's multiprocessing.

@Alexsandruss
Contributor

However, limiting the number of threads will not solve the SVM memory issues completely, because it is experiencing a memory leak, which is under investigation.

@lilybellesweet

I tried using daalinit (for RandomForestRegressor) and it did not work; the number of threads created was not affected.

@Alexsandruss
Contributor

I ran RandomForestRegressor and it used the number of threads set by daalinit. Did you check (using verbose mode) that RandomForestRegressor was actually patched?
What OS, Python, scikit-learn, and scikit-learn-intelex versions are you using?

@lilybellesweet

lilybellesweet commented Aug 16, 2022

It's running the sklearnex version, I checked.

OS: CentOS 7.9
python 3.8.12
scikit-learn 1.1.1
scikit-learn-intelex 2021.6.0

I set d4py.daalinit(2) and then call patch_sklearn(), but the number of threads per process always equals the number of CPUs available.

@Alexsandruss
Contributor

I used the same configuration and the following script while trying to reproduce:

import logging
logging.getLogger().setLevel(logging.INFO)

from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
import daal4py as d4p

from multiprocessing import Pool
from sys import argv


def train_rfr(data):
    x, y = data
    rfr = RandomForestRegressor()
    rfr.fit(x, y)
    print('Score:', rfr.score(x, y))


if __name__ == '__main__':
    n_threads = int(argv[1])
    n_forests = int(argv[2])

    dataset = [make_regression(n_samples=20000, n_features=128) for i in range(n_forests)]

    d4p.daalinit(n_threads)
    with Pool(n_forests) as p:
        p.map(train_rfr, dataset)

A total of n_threads x n_forests threads were used every time, across varying parameters.

@lilybellesweet

Thank you for this effort! I am not sure why it is behaving like this for me, but despite using very similar code, I still see as many threads created per process as there are cores available, no matter how I set daalinit(). I am working on a SLURM system - could this be causing the issue?
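One thing worth checking on SLURM: inside a job, the visible core count can differ from what the scheduler actually allocated, so thread caps are often tied to the allocation in the batch script. A hedged sketch of that pattern (the SBATCH value and script name are illustrative, not taken from this thread, and this only helps for libraries that honor these variables):

```shell
#!/bin/bash
#SBATCH --cpus-per-task=4

# Tie the common threading knobs to what SLURM actually allocated,
# so libraries that honor them do not spawn one thread per physical core.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
export MKL_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
export OPENBLAS_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

python train_model.py  # hypothetical training script
```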

@FischyM

FischyM commented Nov 9, 2022

It doesn't appear to be a SLURM issue for me: even using the same system with and without SLURM, I see an odd issue where SVC returns np.nan for different testing scores in sklearn's GridSearchCV. I'm wondering if it is a CPU-specific issue, because I don't have this problem on an Intel CPU (Xeon E5-2630 v3), but I do on an AMD one (Milan 7763). It appears that @Stack-it-up is using an Intel CPU, but in the Core series. What CPU are you using for your SLURM system, @lilybellesweet?

@lange-martin

> However, limit of threads will not solve memory issues of SVM completely, because it is experiencing memory leak, which is under investigation.

@Alexsandruss Is there any update on the memory leak for SVM? I found one post of yours here where you say the issue is on Python side. Does that mean it cannot be fixed?

@Alexsandruss
Contributor

> However, limit of threads will not solve memory issues of SVM completely, because it is experiencing memory leak, which is under investigation.
>
> @Alexsandruss Is there any update on the memory leak for SVM? I found one post of yours here where you say the issue is on Python side. Does that mean it cannot be fixed?

A fix for the memory leak has not been found yet. As a temporary alternative, you can try the SVM from the daal4py.sklearn.svm namespace; it is a wrapper for the legacy DAAL interface, where the memory leak is not expected, but it may have an outdated API compared to the latest sklearn SVM versions.

@Stack-it-up
Author

Any update on this?

@Alexsandruss
Contributor

Any update on this?

Currently - no update.

@montagne5641

montagne5641 commented Jun 15, 2024

We have confirmed that memory leaks occur in the same way with SVR. However, daal4py does not support SVR.
Does this mean that dealing with the memory leaks is inherently difficult, and there is no prospect of a solution in the future?
