
Dedicated machine for benchmarks #791

Closed
mcollina opened this issue Jul 12, 2017 · 14 comments · Fixed by #800

Comments

@mcollina
Member

mcollina commented Jul 12, 2017

Intel is donating a dedicated server to run benchmarks, and nearForm is operating that machine from our HQ.
The machine is built on the Kaby Lake architecture (I'll update with the specs).

We need to hook this up to the ci system.

cc @jasnell @piccoloaiutante

Ref: nodejs/node#8157

@gibfahn
Member

gibfahn commented Jul 12, 2017

Do you know what OS it's running? We'll need to make sure it's covered by our existing Ansible scripts.

nearForm is operating that machine from our HQ.

What does that mean exactly? Intel are paying for it but it's your hardware?

@mcollina
Member Author

The HW is Intel's; we are paying the operating expenses and hosting.

@mhdawson
Member

In terms of hooking it up to the CI: for other machines, the approach is that you provide the root password to the Build WG, who will then manage the machine. For things like reboots, all existing machines have some sort of control panel that we can use to reboot when necessary. I'm guessing that won't be the case in this instance, so we'll need contacts/procedures to be able to recover in cases where the build team members can no longer access the system.

We'll need to discuss based on the system specs, but unless there is some type of virtualization that we believe will not affect performance, I'm assuming there will be a single OS instance running on each of the boxes.

@jbergstroem
Member

A better understanding of who has physical access (and intended OS-level access) would also make it easier for us to understand how the machine can be used.

@mcollina
Member Author

Our staff has physical access, but you can refer to me for any physical maintenance and I'll forward the request to our team. I can also forward account credentials for a root-level account.

As it is a dedicated server, it should be used for benchmarks.

@jbergstroem
Member

@mcollina benchmarks, of course; it's mostly a question of what code we can benchmark, so to speak. For instance, adding it to a release process where we benchmark prior to release to avoid regressions could include code from security-related releases.

@mcollina
Member Author

@jbergstroem that's ok. Maybe also check for benchmark regressions on PRs?

@rvagg
Member

rvagg commented Jul 18, 2017

Fixed @ #800

Temporary use is simple test running, more sophisticated benchmarking use is left as an exercise for the ... someone else. Let me know if there's anything I can help with in advancing that cause though.

@mcollina
Member Author

cc @AndreasMadsen

@AndreasMadsen
Member

AndreasMadsen commented Jul 18, 2017

@mcollina is there anything specific you'd like me to comment on?

Primarily, I wanted a machine so we could run a Jenkins job like the one proposed in nodejs/benchmarking#58. The biggest problem with the benchmarks right now is that they take too long to run on a personal machine, and developers tend to use the machine for other things while it's benchmarking. The latter creates systematic noise that no statistics can handle.

Running the entire benchmark suite on each release (or maybe just major/minor releases) sounds nice. However, it will take a very long time; we are talking multiple days. Also, when running a single benchmark test there is a small risk of a false positive. That is not a problem when running just a few benchmarks, but when running them all (1000+) there will be many false positives, so the result doesn't have much value. There is no theoretically nice way around this; the only thing that can really be done is to make the benchmarks run longer. This is one of the reasons why the Large Hadron Collider costs 7+ billion dollars :p
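To make the false-positive point concrete, here is a minimal sketch (not part of the actual benchmark tooling; the `alpha` value and suite size are assumptions for illustration): under the null hypothesis of "no regression", a significance test's p-value is uniformly distributed, so each of 1000+ benchmarks independently has an `alpha` chance of flagging a regression that isn't there.

```javascript
// Simulate running a full benchmark suite where NOTHING actually changed.
// Under the null hypothesis, p-values are uniform on [0, 1], so a test at
// significance level `alpha` fires spuriously with probability `alpha`.
const alpha = 0.05;       // typical significance level (assumption)
const benchmarks = 1000;  // rough suite size, per the comment above

let falsePositives = 0;
for (let i = 0; i < benchmarks; i++) {
  // Math.random() stands in for the p-value of a benchmark with no real change.
  if (Math.random() < alpha) falsePositives++;
}

console.log(`${falsePositives} of ${benchmarks} benchmarks flagged spuriously`);
// Expected: alpha * benchmarks = ~50 spurious "regressions" per full run,
// which is why a full-suite pass/fail signal has limited value on its own.
```

Longer benchmark runs shrink the variance of each measurement (lowering the effective false-positive rate), which is the trade-off described above.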

@octaviansoldea

octaviansoldea commented Jul 18, 2017

Following some previous communication with colleagues, I would like to indicate that the Node.js
benchmark servers provided by Intel have the following details:

The machines are Wildcat Pass 2U 8x3.5" HDD 10GbE Xeon DP v4 servers [R2308WTTYS-IDD] with 1x 1100W PSU:

- CPU: Intel® Xeon® Processor E5-2600 v4 family, Socket R3
- Board: Intel® Server Board S2600WTTR, Intel® C612 chipset
- Memory: 8 slots, each populated with an 8GB 2400 Reg ECC 1.2V DDR4 module (Kingston KVR24R17S8/8I, single rank)
- Network: integrated 2x 10GbE LAN (2 LAN ports)
- Storage: 1 TB SATA 6Gb/s 7200 RPM hard disk

This type of server targets cloud/datacenter use.

@mhdawson
Member

@AndreasMadsen I think nodejs/benchmarking#58 is the first thing we should aim to get running on these machines. Once we get experience with that we can see if we think regular runs (like on releases) might make sense.

@mhdawson
Member

I'll see if @gareth-ellis has time to get together in the next few weeks so we can push nodejs/benchmarking#58 forward.

@rvagg
Member

rvagg commented Jul 18, 2017

@octaviansoldea thanks for the specs! I might link to your comment from the nodes on Jenkins.

7 participants