This repository has been archived by the owner on Jan 15, 2024. It is now read-only.
Fix scripts/question_answering/data_pipeline.py requiring optional package #1013
Merged
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##           master    #1013   +/-  ##
=======================================
  Coverage   88.27%   88.27%
=======================================
  Files          67       67
  Lines        6254     6254
=======================================
  Hits         5521     5521
  Misses        733      733
```
leezu force-pushed the fixqadatapipelinespacy branch from 51eb0c4 to 6cfdfa6 on November 20, 2019 08:22
Job PR-1013/3 is complete.
eric-haibin-lin approved these changes on Nov 29, 2019
leezu force-pushed the fixqadatapipelinespacy branch from 6cfdfa6 to 49b861d on December 3, 2019 03:44
Job PR-1013/4 is complete.
leezu force-pushed the fixqadatapipelinespacy branch 2 times, most recently from a87a9e3 to 78fdd38 on December 3, 2019 07:42
…ckage

Because nlp.data.SpacyTokenizer is created as a class attribute, SpacyTokenizer is required as soon as Python parses the data_pipeline.py file. This means users always need to install the "optional" SpacyTokenizer dependencies, even if they don't plan to use them. For example, just running an unrelated test in the scripts folder currently raises the following error:

```
ImportError while loading conftest '/home/ubuntu/projects/gluon-nlp/scripts/tests/conftest.py'.
scripts/tests/conftest.py:23: in <module>
    from ..question_answering.data_pipeline import SQuADDataPipeline
scripts/question_answering/data_pipeline.py:433: in <module>
    class SQuADDataTokenizer:
scripts/question_answering/data_pipeline.py:435: in SQuADDataTokenizer
    spacy_tokenizer = nlp.data.SpacyTokenizer()
src/gluonnlp/data/transforms.py:248: in __init__
    lang=lang))
E   OSError: SpaCy Model for the specified language="en_core_web_sm" has not been
    downloaded. You need to check the installation guide in
    https://spacy.io/usage/models. Usually, the installation command should be
    `python -m spacy download en_core_web_sm`.
```
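The general pattern behind this kind of fix can be sketched as follows. This is a hypothetical minimal example, not the actual PR diff, and `make_optional_tokenizer` is a stand-in for `nlp.data.SpacyTokenizer()`: a class attribute is evaluated when Python executes the class body at import time, whereas constructing the dependency lazily inside a method defers it until first use.

```python
calls = []

def make_optional_tokenizer():
    # Stand-in for nlp.data.SpacyTokenizer(); records that it was constructed.
    # In the real code, this is where the optional spaCy dependency would be
    # imported and would fail if the model is not installed.
    calls.append("created")
    return str.split  # trivially tokenizes on whitespace

# Problematic pattern (commented out): the attribute would be evaluated as soon
# as Python parses the class body, i.e. at module import time.
#
# class SQuADDataTokenizer:
#     tokenizer = make_optional_tokenizer()

class SQuADDataTokenizerLazy:
    _tokenizer = None

    @classmethod
    def tokenize(cls, text):
        # The optional dependency is only constructed on first use.
        if cls._tokenizer is None:
            cls._tokenizer = make_optional_tokenizer()
        return cls._tokenizer(text)

# Merely importing/defining the class did not touch the optional dependency.
print(len(calls))                                # -> 0
print(SQuADDataTokenizerLazy.tokenize("a b c"))  # -> ['a', 'b', 'c']
print(len(calls))                                # -> 1
```

With this structure, unrelated tests can import the module even when the optional dependency is absent; only code paths that actually tokenize will require it.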
leezu force-pushed the fixqadatapipelinespacy branch from 78fdd38 to b048122 on December 3, 2019 07:47
Job PR-1013/7 is complete.
Job PR-1013/8 is complete.
cc @dmlc/gluon-nlp-team