
add old benchmarks.ipynb and updated benchmark plot.jl #62

Merged
merged 1 commit on May 9, 2022

Conversation

acxz
Contributor

@acxz acxz commented Mar 23, 2022

These two files are sourced from the JuliaLang/www.julialang.org repo. I believe it is a good idea to move these plotting scripts here rather than keeping them in the website repo. Once this is merged, a PR will be sent to www.julialang.org to remove the two scripts from that repo.

This makes changing the plotting code much easier for us, since we no longer have to go through JuliaLang reviewers, except when we submit the final benchmarks.svg file to integrate with the Benchmarks webpage.

See #48 (comment) for more relevant discussion.

@ViralBShah
Member

Maybe just use Pluto notebooks if feasible?

@acxz
Contributor Author

acxz commented Apr 12, 2022

That's actually a great idea; I'll add a commit that turns the plot.jl file into a Pluto notebook.

For the older benchmarks.ipynb file I'd like to keep it as long as the current graph is up on the julialang.org website since that was the exact file used to produce the plot. Once the plot on the website is changed (i.e. closure of #48) we can confidently remove the older benchmarks file and keep only the new one around.

@ViralBShah
Member

We can just update the plot on the Julia website as well. It is really old.

@ViralBShah
Member

ViralBShah commented Apr 13, 2022

Would it be too crazy to just pull the performance timings right out of the GitHub Actions runs? Maybe that is the easiest way to actually run the benchmarks. The big issue would be that we can't get numbers for commercial software.

@acxz
Contributor Author

acxz commented Apr 13, 2022

Would it be too crazy to just pull the performance timings right out of the GitHub Actions runs?

So basically, on every commit, grab the benchmarks.csv output from CI and commit it to the repo? That should be doable; however, I'm not completely sold on having an update-timings commit for every other commit, to be honest. I think manually downloading the benchmarks.csv file from the latest commit, whenever we need to update the timings/graph/table, is probably the best method for now.

Maybe that is the easiest way to actually run the benchmarks.

For sure, GitHub Actions has been a boon.

The big issue would be that we can't get numbers for commercial software.

Yeah... The way I'm currently handling this (to get the graph shown here) is to interpolate timings for the languages we can't run: I scale their last known timing data by the ratios observed in the languages we can measure. I'm not sure whether publishing that kind of interpolated data on the JuliaLang website is honest (even with appropriate disclaimers), but I do think our graph should contain data for those languages, since no other benchmarks do. (I'm personally okay with it myself; interpolated data is better than no data.)

There are options for CI as discussed here, and if it comes down to it, I am still a student and have licenses for these commercial languages. I can try to run the tests myself on local hardware once I fix up tooling PRs such as this one.
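The ratio-based interpolation described above could be sketched roughly like this (a hypothetical illustration, not the actual script; all numbers and the function name are made up):

```python
def interpolate_timing(old_timing, old_ref_timing, new_ref_timing):
    """Estimate a fresh timing for a language we can't benchmark,
    assuming its ratio to a reference language (e.g. C) stays constant
    between the old and new benchmark runs."""
    return old_timing * (new_ref_timing / old_ref_timing)

# Illustrative numbers only: C got 2x faster between runs, so we assume
# the untested commercial language sped up by the same factor.
old_c, new_c = 1.0, 0.5      # reference timings (seconds)
old_commercial = 8.0         # last known timing for the untested language
estimated = interpolate_timing(old_commercial, old_c, new_c)
print(estimated)  # 4.0
```

The obvious caveat, as noted above, is that this assumes relative performance is stable across benchmark versions and hardware, which is exactly why it would need a disclaimer on the website.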

We can just update the plot on the Julia website as well. It is really old.

While it is old, the information in the new graph is very similar to the previous graph. Rust and Julia both overtake Lua, but that is the only significant trend change besides overall improvements in individual benchmarks. Let's try to 1) use interpolated data or 2) get the commercial software working (in CI or locally).

I'm totally fine making PRs to the julialang website with option 1) as a stopgap till we get updated data with 2).

@acxz
Contributor Author

acxz commented May 9, 2022

Sadly I wasn't able to get Pluto working. For now I'll merge these scripts from the JuliaLang website repo as-is, and people can make PRs against them to add more info to the graph, e.g. the geometric mean.
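For the geometric mean mentioned above, a minimal sketch (illustrative data only; the real script is in Julia, this just shows the computation) might look like:

```python
# Per-language geometric mean across benchmarks, as suggested for the graph.
# The geometric mean is the conventional summary for benchmark timing ratios,
# since it is insensitive to which language the ratios are normalized against.
from statistics import geometric_mean

# Hypothetical timings normalized to C (made-up numbers for illustration).
timings = {
    "Julia":  [1.1, 0.9, 2.0],
    "Python": [40.0, 17.0, 70.0],
}
geomeans = {lang: geometric_mean(ts) for lang, ts in timings.items()}
for lang, g in sorted(geomeans.items(), key=lambda kv: kv[1]):
    print(f"{lang}: {g:.2f}")
```

`statistics.geometric_mean` requires Python 3.8+; computing `exp(mean(log(x)))` by hand works on older versions.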

@StefanKarpinski
Sponsor Member

StefanKarpinski commented May 10, 2022

Maybe @fonsp can help? Can you explain the issues?

@acxz
Contributor Author

acxz commented May 10, 2022

I'll make an issue over at the Pluto repo soon enough.

Edit: Running

```julia
import Pkg
Pkg.update("Pluto")
```

did the trick!

@fonsp
Member

fonsp commented May 10, 2022

@acxz also feel free to contact me on zulip or email! fons@plutojl.org
