
Add prediction (table) artifacts to Weights & Biases logger #490

Closed
5 tasks done
Glavin001 opened this issue Aug 27, 2023 · 3 comments · Fixed by #521
Labels
enhancement New feature or request

Comments

@Glavin001
Contributor

Glavin001 commented Aug 27, 2023

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Ideas in Discussions and didn't find any similar feature requests.
  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

One of my favourite features from LLM Studio is the validation prediction insights: https://h2oai.github.io/h2o-llmstudio/guide/experiments/view-an-experiment#experiment-tabs

Validation prediction insights : This tab displays model predictions for random, best, and worst validation samples. This tab becomes available after the first validation run and allows you to evaluate how well your model generalizes to new data.

[Screenshots: LLM Studio "Validation prediction insights" tab]

Since Axolotl is headless (no UI), this can instead be implemented with WandB logging.


✔️ Solution

See https://wandb.ai/stacey/mnist-viz/reports/Visualize-Predictions-over-Time--Vmlldzo1OTQxMTk

❓ Alternatives

No response

📝 Additional Context

I'd be interested in contributing this, if the Axolotl team is interested and I can figure it out 😅

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.
@Glavin001 Glavin001 added the enhancement New feature or request label Aug 27, 2023
@NanoCode012
Collaborator

NanoCode012 commented Aug 27, 2023

A callback could be added for this feature.

I wasn’t sure whether wandb supports saving text results.

Edit: Wandb table can save predictions.

@Glavin001
Contributor Author

@NanoCode012 : Could you give me some pointers on where this should be added to Axolotl? I'll try to find time in the next week when I'm training to add and test this new feature. Thanks!

@NanoCode012
Collaborator

Callbacks should be placed in utils/callbacks.py. Then you can add them to the Trainer in utils/trainer.py. You can see examples of callbacks and how they're added in the aforementioned files.

I think you could add a callback that fires when on_evaluate finishes (if that's an option) to also run predictions over a few eval samples and save the responses.
