Hi, thanks for your work!
You mentioned that "we used better strategies to train Phi-3-Mini-based and Llama-3-8B-based Bunny". Could you elaborate on what strategies you used to train the Llama-3-8B-based Bunny? Also, when do you plan to release the finetune_lora.sh for the Llama-3-8B-based Bunny?