logistics: use new pre-cached builder image in Jenkins #4631
Contains
Changes the agent label for the engine build to a new one, currently defined via https://github.com/Cervator/pre-cached-jenkins-agent (that repo will likely move, and might itself get a build in our Jenkins, but - one step at a time). See that repo's readme for more technical details.
In short, the goal is to use a Docker build of a custom Jenkins agent that pre-caches our main dependencies (at the moment for the TS engine, joml-ext, and modules) and pre-downloads the files needed for each Gradle wrapper version in use.
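As a rough illustration of the pre-caching idea (this is NOT the actual contents of the pre-cached-jenkins-agent repo - the base image, repo URL, and Gradle task here are all assumptions; see that repo for the real definition), the agent image build can warm the Gradle caches by running a throwaway resolving build:

```dockerfile
# Hypothetical sketch only - see the pre-cached-jenkins-agent repo for the real Dockerfile.
FROM jenkins/inbound-agent:latest

USER jenkins

# Clone a representative workspace and run a resolving task so Gradle downloads
# the wrapper distribution and dependency jars into ~/.gradle, where later
# Jenkins builds on this agent will find them already cached.
RUN git clone https://github.com/MovingBlocks/Terasology.git /tmp/warmup \
 && cd /tmp/warmup \
 && ./gradlew --no-daemon dependencies \
 && rm -rf /tmp/warmup
```

Repeating the warmup step once per Gradle wrapper version in use would also pre-download each wrapper distribution, matching the goal described above.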
This PR also removes the explicit Gradle Daemon setting, which only restates the default, in case anyone wants to experiment with it elsewhere (removing it has no effect normally). It has been a loooong time since that setting actually mattered - the daemon was made the default years ago.
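For context, the removed setting is of the kind shown below (the exact file contents here are illustrative, not copied from this PR). The daemon has been enabled by default since Gradle 3.0, so stating it explicitly in gradle.properties is redundant:

```properties
# gradle.properties (illustrative)
# Redundant since Gradle 3.0, where the daemon became enabled by default:
org.gradle.daemon=true
```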
The label naming also anticipates possible later changes: reintroducing light/medium/heavy agents, easily varying the Java version (and making it more explicit/visible), and diversifying agents for different archetypes (a Terasology engine build vs. a Destination Sol engine build including Android, for instance).
How to test
Logistics change - tested in Jenkins already. There is no particular way to test this separately; you can just look at the job executions on the Nanoware forks.
Performance stats are hard to pin down since build durations vary with whatever else is going on in the cluster. I closely watched a few builds while the system was otherwise idle to guesstimate "best case" improvements.
Outstanding before merging