No module named 'fused_layer_norm_cuda' #69

Open
Xingxl2studious opened this issue Sep 24, 2021 · 0 comments

Comments

@Xingxl2studious

Hi! I'm running the demo. I downloaded pytorch_model_9.bin, and I'm hitting this problem:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
in <module>()
     65 else:
     66     model = VILBertForVLTasks.from_pretrained(
---> 67         args.from_pretrained, config=config, num_labels=num_labels, default_gpu=default_gpu
     68     )
     69

8 frames
/content/vilbert-multi-task/vilbert/utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    933
    934         # Instantiate model.
--> 935         model = cls(config, *model_args, **model_kwargs)
    936
    937         if state_dict is None and not from_tf:

/content/vilbert-multi-task/vilbert/vilbert.py in __init__(self, config, num_labels, dropout_prob, default_gpu)
   1603         self.num_labels = num_labels
   1604
-> 1605         self.bert = BertModel(config)
   1606         self.dropout = nn.Dropout(dropout_prob)
   1607         self.cls = BertPreTrainingHeads(

/content/vilbert-multi-task/vilbert/vilbert.py in __init__(self, config)
   1292         # initilize word embedding
   1293         if config.model == "bert":
-> 1294             self.embeddings = BertEmbeddings(config)
   1295         elif config.model == "roberta":
   1296             self.embeddings = RobertaEmbeddings(config)

/content/vilbert-multi-task/vilbert/vilbert.py in __init__(self, config)
    338         # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
    339         # any TensorFlow checkpoint file
--> 340         self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12)
    341         self.dropout = nn.Dropout(config.hidden_dropout_prob)
    342

/usr/local/lib/python3.7/dist-packages/apex-0.1-py3.7.egg/apex/normalization/fused_layer_norm.py in __init__(self, normalized_shape, eps, elementwise_affine)
    131
    132         global fused_layer_norm_cuda
--> 133         fused_layer_norm_cuda = importlib.import_module("fused_layer_norm_cuda")
    134
    135         if isinstance(normalized_shape, numbers.Integral):

/usr/lib/python3.7/importlib/__init__.py in import_module(name, package)
    125             break
    126         level += 1
--> 127     return _bootstrap._gcd_import(name[level:], package, level)
    128
    129

/usr/lib/python3.7/importlib/_bootstrap.py in _gcd_import(name, package, level)

/usr/lib/python3.7/importlib/_bootstrap.py in _find_and_load(name, import_)

/usr/lib/python3.7/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)

ModuleNotFoundError: No module named 'fused_layer_norm_cuda'
```
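Note on the traceback: apex itself imports fine here; the failure happens later, when `FusedLayerNorm.__init__` tries to load the *compiled* extension module `fused_layer_norm_cuda`. That extension only exists if apex was built from source with its CUDA extensions enabled (per the apex README: `pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./` from the apex checkout). Below is a minimal sketch of a workaround, my assumption rather than anything vilbert-multi-task ships: probe for the compiled extension up front and fall back to PyTorch's built-in `nn.LayerNorm`, which apex's `FusedLayerNorm` mirrors functionally.

```python
import importlib.util
import torch.nn as nn

# Hypothetical guard (not from this thread): only alias apex's fused
# implementation to BertLayerNorm if its compiled CUDA extension is actually
# importable. Checking find_spec() here matters because the ImportError is
# raised at construction time, not when apex itself is imported.
if importlib.util.find_spec("fused_layer_norm_cuda") is not None:
    from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
else:
    # Plain PyTorch LayerNorm; slower than the fused kernel but numerically
    # equivalent for inference, so the demo can still run.
    BertLayerNorm = nn.LayerNorm
```

Rebuilding apex with `--cuda_ext` is the proper fix if you need the fused kernel's speed; the fallback above is just enough to get the demo past the import.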
