#3095: add dual encoder #3208
Conversation
flair/models/word_tagger_model.py
Outdated
```diff
@@ -108,18 +109,22 @@ def _get_embedding_for_data_point(self, prediction_data_point: Token) -> torch.T

     def _get_data_points_from_sentence(self, sentence: Sentence) -> List[Token]:
         # special handling during training if this is a span prediction problem
-        if self.training and self.span_prediction_problem:
+        if self.span_prediction_problem:  # do we need self.training here?
```
The conversion is only necessary during training: we take Span labels and encode them as Token-level labels. During prediction, this is not necessary.
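For context, a minimal runnable sketch (a toy class, not the Flair tagger) of the mechanism this guard relies on: torch.nn.Module keeps a .training flag that train() and eval() toggle, so the span-to-token conversion can be limited to training-time calls.

```python
import torch

class ToyTagger(torch.nn.Module):
    """Toy stand-in for the tagger; only illustrates the self.training guard."""

    def __init__(self, span_prediction_problem: bool = True):
        super().__init__()
        self.span_prediction_problem = span_prediction_problem

    def needs_span_to_token_conversion(self) -> bool:
        # mirrors the suggested condition: convert Span labels to Token-level
        # labels only while training
        return self.training and self.span_prediction_problem

tagger = ToyTagger()
tagger.train()
print(tagger.needs_span_to_token_conversion())  # True
tagger.eval()
print(tagger.needs_span_to_token_conversion())  # False
```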
Addition of self.training in word_tagger_model.py.
Thanks for adding this! Changes requested since there are some important type declarations missing, and one unnecessary if-statement.
Additionally: have you tested "cosine-similarity"? Is that working?
flair/nn/decoder.py
Outdated
```python
class LabelVerbalizerDecoder(torch.nn.Module):
    def __init__(self, label_encoder, label_dictionary: Dictionary, decoding: str = "dot-product"):
```
Type hint for `label_encoder` is missing. Perhaps also rename to `label_embedding` for clarity?
Should be `DocumentEmbeddings`.
`DocumentEmbeddings` raises an ImportError; it looks like a circular dependency. Using `Embeddings` (as in nn/model.py) instead of `DocumentEmbeddings` works. If we do not want to use `Embeddings`, I can open a new issue to inspect this.
```
tests/conftest.py:6: in <module>
    import flair
flair/__init__.py:28: in <module>
    from . import (  # noqa: E402 import after setting device
flair/models/__init__.py:1: in <module>
    from .clustering import ClusteringModel
flair/models/clustering.py:14: in <module>
    from flair.embeddings import DocumentEmbeddings
flair/embeddings/__init__.py:13: in <module>
    from .document import (
flair/embeddings/document.py:21: in <module>
    from flair.nn import LockedDropout, WordDropout
flair/nn/__init__.py:1: in <module>
    from .decoder import LabelVerbalizerDecoder, PrototypicalDecoder
flair/nn/decoder.py:15: in <module>
    from flair.embeddings import DocumentEmbeddings
E   ImportError: cannot import name 'DocumentEmbeddings' from 'flair.embeddings' (/Users/jgolde/PycharmProjects/flair/flair/embeddings/__init__.py)
```
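One common way to keep the type hint without triggering this circular import (a generic sketch, not necessarily what this PR ends up doing) is to import `DocumentEmbeddings` only for type checking and use a string annotation at runtime:

```python
from typing import TYPE_CHECKING

import torch

if TYPE_CHECKING:
    # evaluated by type checkers only, so it cannot create an import cycle at runtime
    from flair.embeddings import DocumentEmbeddings

class LabelVerbalizerDecoder(torch.nn.Module):
    def __init__(self, label_encoder: "DocumentEmbeddings"):
        super().__init__()
        self.label_encoder = label_encoder
```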
flair/nn/decoder.py
Outdated
```python
label_tensor = torch.stack([label.get_embedding() for label in self.verbalized_labels])

if self.training or not self.label_encoder._everything_embedded(self.verbalized_labels):
```
The second condition is not needed: during training, always store embeddings; otherwise, do not.
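A minimal runnable sketch of that simplification, with toy objects standing in for the Flair classes (this is one possible reading of the suggestion, not the PR's final code):

```python
import torch

class ToyLabel:
    def __init__(self, dim: int = 4):
        self._emb = torch.zeros(dim)

    def get_embedding(self) -> torch.Tensor:
        return self._emb

class ToyEncoder:
    def embed(self, labels):
        # stand-in for a real forward pass through the label encoder
        for label in labels:
            label._emb = torch.randn(4)

class ToyDecoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.label_encoder = ToyEncoder()
        self.verbalized_labels = [ToyLabel() for _ in range(3)]

    def label_tensor(self) -> torch.Tensor:
        if self.training:  # the _everything_embedded check from the diff is dropped here
            self.label_encoder.embed(self.verbalized_labels)
        return torch.stack([label.get_embedding() for label in self.verbalized_labels])

decoder = ToyDecoder()
decoder.train()
print(decoder.label_tensor().shape)  # torch.Size([3, 4])
```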
How should the decoder know about the embeddings_storage_mode? The Trainer stores embeddings depending on the mode, so we would need to pass this parameter into the forward loss call made in trainer.train().
flair/nn/decoder.py
Outdated
```python
if decoding not in ["dot-product", "cosine-similarity"]:
    raise RuntimeError("Decoding method needs to be one of the following: dot-product, cosine-similarity")
self.label_encoder = label_encoder
self.verbalized_labels = self.verbalize_labels(label_dictionary)
```
The type should be declared so it becomes easier to understand what `verbalized_labels` is (`List[Sentence]`).
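For reference, a hedged sketch of what a `verbalize_labels` helper with that return type could look like (assumed behaviour, not the PR's implementation): each label string in the Dictionary becomes a flair Sentence so the label encoder can embed it like ordinary text.

```python
from typing import List

from flair.data import Dictionary, Sentence

def verbalize_labels(label_dictionary: Dictionary) -> List[Sentence]:
    # one Sentence per label name, e.g. "PER" -> Sentence("PER")
    return [Sentence(label) for label in label_dictionary.get_items()]
```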
- type hints added
- removed unnecessary checks
- renamed attributes
I have removed the cosine-similarity logic; it worked, but only if we adjust some functions of the default model. I am currently experimenting with it and will open a new branch for it.
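For reference, a minimal sketch (not this PR's implementation) of how the two decoding options would score token embeddings against the verbalized-label embeddings; cosine similarity is just a dot product over L2-normalized vectors:

```python
import torch

def decode_scores(
    token_embeddings: torch.Tensor,  # (num_tokens, dim)
    label_embeddings: torch.Tensor,  # (num_labels, dim)
    decoding: str = "dot-product",
) -> torch.Tensor:
    if decoding == "dot-product":
        return token_embeddings @ label_embeddings.T
    if decoding == "cosine-similarity":
        tokens = torch.nn.functional.normalize(token_embeddings, dim=-1)
        labels = torch.nn.functional.normalize(label_embeddings, dim=-1)
        return tokens @ labels.T
    raise RuntimeError("Decoding method needs to be one of the following: dot-product, cosine-similarity")

scores = decode_scores(torch.randn(5, 8), torch.randn(3, 8))
print(scores.shape)  # torch.Size([5, 3]) -> one score per (token, label) pair
```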
closes #3095.