fix: only deploy to DockerHub on tag-triggered builds #280
Conversation
Ping @timja the Windows build seems flaky. Do you mind merging, or shall we fix it?
Yes, there is a commit that broke tests; if the build stage is passing for now, it is probably OK. I am trying to address the issues in another PR.
Force-pushed from 9805756 to 144112d
Since I was slower to update, I rebased now that your PR is merged (THANKS). This PR is required to ensure that jenkinsci/docker-agent#604 and jenkinsci/docker-agent#603 can be fixed without regression.
The creation of the nmap container prior to the tests starting requires updating the tests as well. The nmap container is cleaned up in the tests, so that would need to be removed as well so it doesn't get cleaned up and then recreated anyway. Also, the builds may be on other nodes, so the nmap container needs to be part of the parallel part, or am I missing something?
Thanks for this info. I initially thought that building it would, at least, keep the image layers in the Docker Engine cache, so the
Not in the current version of the pipeline: the
But the previous version was doing that, wasn't it? We changed to an agent per image because we kept getting disk-full errors on the Windows builds.
True: I misread the
Force-pushed from d432c3c to 6879b2c
Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
Force-pushed from ef12b7e to ed29e4e
Ready to review, folks:
This is not handling the case where we have to use a different docker-agent build number than in inbound-agent.
Sometimes we have to use e.g. 4.13.5-6 due to release issues or whatever in the docker-agent repo, but here it's the first release version.
We can't just always sync them: if we have changes in this repository, e.g. to change the Dockerfile here, we need to release a new build, but that would require a new docker-agent release in a different repository, which I don't think is right.
(If they were in the same repo then I guess we could release all at once, always.)
not { buildingTag() }
}
steps {
  script {
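For context, the `not { buildingTag() }` condition above can be sketched in a minimal, hypothetical declarative Jenkinsfile (the stage names and shell steps are illustrative assumptions, not the PR's actual pipeline):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            // Build (without pushing) only on non-tag builds.
            when { not { buildingTag() } }
            steps {
                script {
                    sh 'docker buildx bake linux'
                }
            }
        }
        stage('Deploy') {
            // Push to DockerHub only on tag-triggered builds.
            when { buildingTag() }
            steps {
                script {
                    sh 'docker buildx bake linux --push'
                }
            }
        }
    }
}
```

`buildingTag()` is a built-in declarative when-condition that is true only when the multibranch build was triggered by a git tag.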
Visualisation is broken in Blue Ocean unfortunately, as it can't handle this case 😢
As far as I remember, it is the main reason why nesting matrix/parallel is not supported in declarative: to avoid breaking the UI.
But in fact, it looks better once the build is finished (whether it's successful, failing or aborted):
- Check https://ci.jenkins.io/job/Packaging/job/docker-inbound-agent/job/PR-280/12/pipeline-graph/
- Compare in Blue Ocean the following screenshots:
  - "Broken" graph while it's building (I use quotes because it is still usable)
  - Fixed graph once built
=> When using a clean agent (with no prior cache), the new build dropped from ~39 min to ~32 min
interesting that it works after complete
> interesting that it works after complete
Yep, I realized it while looking at this comment: I was sure it was working and just discovered that it is broken "while" building.
That is the goal of the variable. With your example, if you want to build the image, the change made in the bake file removes at least one case from the path (as it forces the remoting version to be the same for the parent and agent images).
default = split("-", "${IMAGE_TAG}")[0]
}

variable "BUILD_NUMBER" {
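For context, here is a minimal, hypothetical sketch of how a bake file could derive the remoting version from the image tag while keeping a separate build-number variable. Everything except the `split` line shown in the diff is an assumption, not the repository's actual file:

```hcl
# Hypothetical docker-bake.hcl fragment (a sketch, not the actual manifest).
# IMAGE_TAG is assumed to look like "<remoting-version>-<build-number>",
# e.g. "4.13.5-6".
variable "IMAGE_TAG" {
  default = "4.13.5-1"
}

variable "REMOTING_VERSION" {
  # Everything before the first "-" is the remoting version, e.g. "4.13.5";
  # this forces the parent and agent images to share one remoting version.
  default = split("-", "${IMAGE_TAG}")[0]
}

variable "BUILD_NUMBER" {
  # Build number of THIS (inbound-agent) image; it can differ from the
  # docker-agent build number when one repository is re-released alone.
  default = split("-", "${IMAGE_TAG}")[1]
}
```

`split` is one of the standard HCL functions available in `docker buildx bake` files; it returns a list, so `[0]` selects the first segment.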
there's already a build number variable in this file
This one is the inbound-agent build number (it took me 8 hours in the plane to fully get it :D).
Let me add a comment on this one.
Oooh, good catch: this 2nd occurrence is the result of me messing up the rebase.
Should be fixed now.
@timja as soon as the JDK8/11 issues are fixed, I'm willing to open a PR to
Sounds good to you?
Would be great, yeah
…uportal/only-deploy-on-tags fix: only deploy to DockerHub on tag-triggered builds
Related to jenkinsci/docker-agent#605.
Closes jenkins-infra/helpdesk#2
This PR updates the `Jenkinsfile` (pipeline) and the `docker-bake.hcl` manifest with the following change:
- Only deploy to DockerHub (the `--push` argument of the `docker buildx bake` command) when a tag triggered the build