
Results visualization dashboard #202

Open
GeorgePearse opened this issue Apr 19, 2022 · 4 comments
Labels
enhancement New feature or request

Comments

@GeorgePearse
Collaborator
Composer has a brilliant dashboard (https://app.mosaicml.com/explorer/imagenet) which summarises all of their experiment results. It allows you to inspect results against a set of hyperparameters, spot simple trends, and see which methods work well together and which combine poorly (the library is for fast deep learning training). This is a significant innovation in open-source documentation.

The main metrics would be runtime and some measure of performance (e.g. AUC at various fractions of the dataset, or something like that).

I've been working with Superset a lot, and I'm pretty sure their SaaS offering (Preset) has a very generous free tier that would cover the use case. I'd be happy to set this up. Let me know your thoughts.

@GeorgePearse GeorgePearse added the enhancement New feature or request label Apr 19, 2022
@Dref360
Member

Dref360 commented Apr 20, 2022

Hi George,

Oh that looks very cool!!

So what would the project look like?

Would we run many experiments on many datasets and BaaL's website would refer to these dashboards hosted on mosaicml?

@GeorgePearse
Collaborator Author

If your experiments have been run with MLFlow or Weights and Biases so far, we'd be able to import them across, but we could also log to a DB going forward.

I was just using mosaicml's tool as an example. We'd host on https://preset.io

This is very blue sky thinking, but I think they have the right idea (dashboard as demo for techniques that work)

@GeorgePearse
Collaborator Author

GeorgePearse commented Apr 21, 2022

We could either create a logger that writes to a DB that Preset reads from, or export logs from whatever tracking tool you want to use or already use (MLFlow / WandB etc.).

Could also have an option for any user to log their results directly to the dashboard in cases where they're working on an open-source dataset (probably coloured / displayed differently to demonstrate that the results are unverified).
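To make the logger-to-DB idea concrete, here is a minimal sketch using SQLite as a stand-in for whatever database Preset would actually read from. Everything here is hypothetical: the table name (`al_results`), the columns, and the `log_result` helper are illustrative only, not part of BaaL. The `verified` flag reflects the suggestion above that community-submitted results be displayed differently from maintainer-verified ones.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: one row per (run, metric) result, flat enough for a
# BI tool such as Superset/Preset to query directly.
SCHEMA = """
CREATE TABLE IF NOT EXISTS al_results (
    run_id TEXT,
    dataset TEXT,
    heuristic TEXT,
    labelled_fraction REAL,
    metric TEXT,
    value REAL,
    verified INTEGER,   -- 0 = community-submitted, 1 = maintainer-verified
    logged_at TEXT
)
"""

def log_result(conn, run_id, dataset, heuristic, labelled_fraction,
               metric, value, verified=False):
    """Append one experiment result row to the shared results table."""
    conn.execute(
        "INSERT INTO al_results VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (run_id, dataset, heuristic, labelled_fraction, metric, value,
         int(verified), datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
# e.g. a BALD run on CIFAR-10 reaching 0.71 AUC with 10% of the data labelled
log_result(conn, "run-001", "cifar10", "bald", 0.10, "auc", 0.71,
           verified=True)
rows = conn.execute(
    "SELECT dataset, metric, value FROM al_results"
).fetchall()
print(rows)
```

The same `log_result` call could sit behind an importer that walks existing MLFlow or WandB runs, so both the "log directly" and the "export existing logs" paths write into one table the dashboard reads.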

@Dref360
Member

Dref360 commented Apr 25, 2022

Oh I like that! I think we should do this. There is a lack of leaderboards in active learning, which makes research more difficult.

And websites such as paperswithcode do not show the information we need.

What would be the first step? Gathering the logs that we have currently?
