GPU on Apple M1 chip support #62

Open
carschno opened this issue Nov 17, 2022 · 0 comments

This is a feature request to add support for the Apple M1 chip, which PyTorch has supported since v1.12 via the MPS backend.

Currently, Trankit only seems to use CUDA:

In [9]: from trankit import Pipeline

In [10]: p = Pipeline(lang='english')
Loading pretrained XLM-Roberta, this may take a while...
Loading tokenizer for english
Loading tagger for english
Loading lemmatizer for english
Loading NER tagger for english
==================================================
Active language: english
==================================================

In [11]: p._use_gpu
Out[11]: False

Confirming that MPS is available through PyTorch:

In [12]: import torch

In [13]: torch.has_mps
Out[13]: True

A look into pipeline.py shows that GPU support is limited to CUDA:

    def _setup_config(self, lang):
        torch.cuda.empty_cache()
        # decide whether to run on GPU or CPU
        if self._gpu and torch.cuda.is_available():
            self._use_gpu = True
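
A possible direction (just a sketch, not Trankit's actual implementation; the helper name `_select_device` is hypothetical) would be to extend the device check so that it falls back to the MPS backend when CUDA is unavailable, assuming the rest of the pipeline moves tensors via `.to(device)` rather than hard-coded `.cuda()` calls:

    import torch

    def _select_device(prefer_gpu):
        # Hypothetical helper, not existing Trankit code: prefer CUDA,
        # then Apple's MPS backend (available since PyTorch 1.12), else CPU.
        if prefer_gpu and torch.cuda.is_available():
            return torch.device('cuda')
        if prefer_gpu and torch.backends.mps.is_available():
            return torch.device('mps')
        return torch.device('cpu')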