I was also wondering the same thing. We are using multitask training to train multiple SequenceTagger models, along with TextClassifier and RelationExtractor models, with shared embeddings. For inference, we would like to run one TextClassifier and two SequenceTagger models on the same sentences.
Ideally, these would load only one copy of the shared embeddings, and only compute the embeddings one time per inference. The smaller task heads could then run the rest of their prediction on the computed embeddings.
It would be ideal to combine all of the model types together, but would still be useful to combine just the SequenceTagger models.
The old MultiTagger#1791 class seemed to share the embeddings between models so they were only loaded once in memory; for prediction, however, it looked like it simply iterated over each model and ran predictions as normal.
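The desired inference pattern (load the shared embeddings once, compute them once per sentence, then run each small task head on the precomputed vectors) can be sketched in plain Python. All classes below are hypothetical stand-ins, not the flair API:

```python
class SharedEmbedder:
    """Stand-in for a shared embedding model (hypothetical, not flair)."""
    def embed(self, tokens):
        # Deterministic dummy vectors; a real embedder would run a network.
        return [[float(len(t))] * 4 for t in tokens]

class TaskHead:
    """Stand-in for a small task-specific head (tagger or classifier)."""
    def __init__(self, name):
        self.name = name

    def predict(self, tokens, embeddings):
        # Toy rule: keep tokens whose embedding sum exceeds a threshold.
        return [(t, self.name) for t, e in zip(tokens, embeddings) if sum(e) > 12]

embedder = SharedEmbedder()                 # loaded once, shared by all heads
heads = [TaskHead("ner"), TaskHead("cls")]  # each head stays small

tokens = ["Flair", "is", "great"]
embeddings = embedder.embed(tokens)         # computed once per sentence
results = {h.name: h.predict(tokens, embeddings) for h in heads}
```

The point of the structure is that `embedder.embed` runs once per sentence no matter how many heads are attached, which is exactly the saving the old MultiTagger only partially delivered.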
Question
I use flairNLP to annotate nested entities. Currently, I'm experimenting to use different models for different syntactic layers. My plan to do so was, to be a bit more efficient, to load the three models I'm using with a MultiTagger. I've seen that this has been removed since the last time I used it.
If I've read the comments in the code correctly, I should still be able to do the same by loading the models with the SequenceTagger class:
```python
tagger = Classifier.load({
    "doc": MODEL_PATH_DOC,
    "mention": MODEL_PATH_MENTION,
    "desc": MODEL_PATH_DESC,
})
```
`MODEL_PATH_DOC` etc. are paths to the models as strings. I'm absolutely clueless whether this is correct, but I didn't see any other way to include label names, which I then want to use when I predict:
```python
tagger.predict(sent, label_name=label_name)
```
E.g., `label_name` would be "doc" for the first pass over the sentence, but when annotating inside an entity I use the label_name "mention".
Is there a way to do this?
I'm currently using three independent models (although the embeddings are the same), but training them in a multitask setting is next on my list.
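If the dict-style load above turns out not to be supported, one workaround is to load the three models independently and keep them in a dict keyed by layer, picking one per pass at predict time. The sketch below uses stub taggers rather than real flair models, so the class and file names are illustrative only:

```python
class StubTagger:
    """Stand-in for a loaded SequenceTagger (hypothetical, not flair)."""
    def __init__(self, path):
        self.path = path

    def predict(self, sentence, label_name):
        # A real tagger would add labeled spans; here we just record
        # which model handled which label type.
        sentence.setdefault("labels", {})[label_name] = f"tagged by {self.path}"

# One independently loaded model per syntactic layer (paths are made up).
taggers = {
    "doc": StubTagger("model_doc.pt"),
    "mention": StubTagger("model_mention.pt"),
    "desc": StubTagger("model_desc.pt"),
}

sent = {"text": "some sentence"}
for label_name in ("doc", "mention"):  # outer layer first, then nested
    taggers[label_name].predict(sent, label_name=label_name)
```

This keeps three separate model copies in memory (no embedding sharing), but it reproduces the per-layer `label_name` behavior the question asks about.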