
Would you release the multi-task fine-tuning code for ViL-BERT? #38

Open · yangapku opened this issue Dec 7, 2019 · 4 comments

@yangapku commented Dec 7, 2019

Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which uses multi-task fine-tuning to boost the performance of ViL-BERT. May I ask whether you will release this part of the code in this repo or somewhere else? Thank you very much!

@jiasenlu (Owner) commented Jan 3, 2020

Hi,

Thanks for the interest. Yes, we plan to release the code and pretrained model for the new paper (12-in-1). That code will be released under the Facebook AI GitHub organization and is still under review; I expect the code and model to be out this month. In the meantime, I'm working on a new open-source multi-modal multi-task transformer (M3Transformer), which is optimized for the new transformer codebase. I plan to release that open-source project this month as well.

@yangapku (Author) commented Jan 4, 2020

Great, I'm delighted to hear this! I will wait for the release.

@jiasenlu (Owner) commented

Check out this release! https://github.com/facebookresearch/vilbert-multi-task

@yangapku (Author) commented

Thank you for the kind notification! Would you please release the data in that repo as well, such as the lmdb files, along with instructions on how to generate features using the new ResNeXt detector?
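
For anyone else waiting on the lmdb files, here is a minimal sketch of how pre-extracted region features are typically read from an lmdb database. The file path, key layout, and value format (pickled dicts keyed by image id) are assumptions for illustration, not the confirmed schema of the vilbert-multi-task repo:

```python
# Minimal sketch: inspecting pre-extracted region features stored in lmdb.
# Assumes values are pickled dicts keyed by image id; the actual schema
# used by vilbert-multi-task may differ.
import lmdb
import pickle

env = lmdb.open("coco_features.lmdb", readonly=True, lock=False)  # hypothetical path
with env.begin(write=False) as txn:
    # Iterate over all (key, value) pairs in the database.
    for image_id, raw in txn.cursor():
        item = pickle.loads(raw)  # assumed: dict holding boxes and features
        print(image_id.decode(), list(item.keys()))
        break  # only inspect the first record
env.close()
```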
