Summary with embeddings #42
Same problem.

I also encountered this problem. Has anyone solved it?
One quick fix is to change the dtype selection inside torchsummary's `summary` function:

```python
if device == "cuda" and torch.cuda.is_available():
    # dtype = torch.cuda.FloatTensor
    dtype = torch.cuda.LongTensor
else:
    # dtype = torch.FloatTensor
    dtype = torch.LongTensor
```
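To see why this works: torchsummary builds its dummy input with `torch.rand` and then casts it to `dtype`, so the choice above decides what your model's `forward` receives. A minimal sketch of that step (reconstructed from memory, not verbatim library code; `input_size` and `dtype` here mirror the snippet above):

```python
import torch

input_size = [(402,)]      # hypothetical: one input of shape (seq_len,)
dtype = torch.LongTensor   # the quick-fix dtype

# torchsummary builds a dummy batch of size 2 per input, roughly like this:
x = [torch.rand(2, *in_size).type(dtype) for in_size in input_size]
print(x[0].dtype)  # torch.int64 -> valid (all-zero) indices for nn.Embedding
```

Note that after this change, models whose inputs really are floats will fail instead, so this is a quick fix, not a general one.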
Same problem here, and changing the data type from FloatTensor to Long is not working either.
"Not working" is not a satisfactory report. What is the error message?
Hey, I'm having the same issue.

```python
class BiLSTMBaseline(nn.Module):
    def __init__(self, hidden_dim, emb_dim=300,
                 recurrent_dropout=0.1, num_linear=1):
        super().__init__()
        self.embedding = nn.Embedding(len(TEXT.vocab), emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_linear,
                               dropout=recurrent_dropout)
        self.linear_layers = []
        for _ in range(num_linear - 1):
            self.linear_layers.append(nn.Linear(hidden_dim, hidden_dim))
        self.linear_layers = nn.ModuleList(self.linear_layers)
        self.predictor = nn.Linear(hidden_dim, 6)

    def forward(self, seq):
        embeddings = self.embedding(seq)
        hdn, _ = self.encoder(embeddings)
        feature = hdn[-1, :, :]
        for layer in self.linear_layers:
            feature = layer(feature)
        preds = self.predictor(feature)
        return preds
```

Calling `summary(model, input_size=(402, 64), device='cpu')` gives this error:

```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-44-ad1f81b1c254> in <module>
----> 1 summary(model,input_size=(402, 64),device='cpu')
<ipython-input-43-4e60852cb827> in summary(model, input_size, batch_size, device)
72 # print(x.shape)
73
---> 74 model(*x)
75
76 # remove these hooks
~/anaconda3/envs/sent-analy/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
<ipython-input-12-e2f879788e90> in forward(self, seq)
13
14 def forward(self,seq):
---> 15 embeddings = self.embedding(seq)
16 hdn, _ = self.encoder(embeddings)
17 feature = hdn[-1,:,:]
~/anaconda3/envs/sent-analy/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~/anaconda3/envs/sent-analy/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
116 return F.embedding(
117 input, self.weight, self.padding_idx, self.max_norm,
--> 118 self.norm_type, self.scale_grad_by_freq, self.sparse)
119
120 def extra_repr(self):
~/anaconda3/envs/sent-analy/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1452 # remove once script supports set_grad_enabled
1453 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1454 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1455
1456
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.FloatTensor instead (while checking arguments for embedding)
```
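The traceback makes the cause clear: `nn.Embedding` performs a table lookup and therefore requires integer (Long) indices, while torchsummary feeds the model a random FloatTensor. A standalone reproduction (hypothetical sizes, independent of the model above):

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=100, embedding_dim=8)

out = emb(torch.randint(0, 100, (4, 10)))  # OK: int64 token indices
print(out.shape)                           # torch.Size([4, 10, 8])

try:
    emb(torch.rand(4, 10))  # float input, like torchsummary's dummy batch
except RuntimeError as e:
    print(e)  # Expected tensor for argument #1 'indices' to have scalar type Long ...
```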
Is there any fix for this? I'm still getting this error.
I was having the same issue, and this quick fix worked for me.
Hey @siddBanPsu, thanks for starting this thread. I was wondering if you've fixed your issue; if yes, do you mind sharing how you fixed it? Thanks a lot!
Hi, I also encountered the same issue. As a hack, just for testing purposes, I changed the tensor's data type to long in the forward method, right before calling the embedding module. It worked for me.
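For reference, that hack would look something like this in the `BiLSTMBaseline` above (a testing-only sketch based on the comment's description; the `.long()` cast is the only change):

```python
def forward(self, seq):
    # Testing-only hack: torchsummary passes a random FloatTensor, so cast
    # it back to integer indices before the embedding lookup. Remove this
    # for real training, where seq already arrives as Long token indices.
    seq = seq.long()
    embeddings = self.embedding(seq)
    hdn, _ = self.encoder(embeddings)
    feature = hdn[-1, :, :]
    for layer in self.linear_layers:
        feature = layer(feature)
    return self.predictor(feature)
```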