
About the training strategy for Llama-3-8B #96

Closed
Jancsi9981 opened this issue Jun 27, 2024 · 2 comments

Comments

@Jancsi9981

Hi, thanks for your work!
You mentioned that "we used better strategies to train Phi-3-Mini-based and Llama-3-8B-based Bunny", so I would like to ask: what strategy did you use when training the Llama-3-8B-based Bunny? And when do you plan to make the finetune_lora.sh for the Llama-3-8B-based Bunny public?

@Isaachhh
Collaborator

All of the training strategies and data for the latest Bunny have been released! Check the Technical Report, Data, and Training Tutorial for more details about Bunny.
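For readers unfamiliar with what a LoRA fine-tuning setup for a Llama-3-8B backbone involves, here is a minimal sketch using Hugging Face PEFT. The hyperparameters and target modules below are illustrative assumptions, not Bunny's released settings; see the project's Training Tutorial and finetune_lora.sh for the actual configuration.

```python
# Hypothetical sketch: attaching LoRA adapters to a Llama-3-8B backbone
# with Hugging Face PEFT. Values here are illustrative assumptions,
# not the settings used by the Bunny project.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=16,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor applied to the LoRA update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the adapter weights remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Because only the small adapter matrices are updated, a setup like this fine-tunes an 8B-parameter model at a fraction of the memory cost of full fine-tuning.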

@Isaachhh
Collaborator

Isaachhh commented Aug 6, 2024

Closing the issue for now as there is no further discussion. Feel free to reopen it if there are any other questions.

Isaachhh closed this as completed on Aug 6, 2024