Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which uses multi-task fine-tuning to boost the performance of ViLBERT. May I ask whether you will release this part of the code in this repo or somewhere else? Thank you very much!
Thanks for the interest. Yes, we plan to release the code and pretrained model for the new paper (12-in-1). That code will be released under the Facebook AI GitHub and is currently under internal review; I expect the code and model to be released this month. In the meantime, I'm working on a new open-source multi-modal multi-task transformer (M3Transformer), which is optimized for the new transformer codebase. I will also release that open-source project this month.
Thank you for the update! Would you also release the data in this repo, such as the LMDB files and instructions for generating features with the new ResNeXt detector?