pipeline-lib

Global shared library for SaleMove pipeline jobs. Includes support for branch deployments, Kubernetes agent provisioning helpers, and other utility functions.

See introductory video and slides for the branch deployment process.

Introduction

This is a Shared Jenkins Library. If included in a Jenkinsfile with @Library('pipeline-lib') _, then all the Global variables described below will be available in that Jenkinsfile.
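
For example, a minimal Jenkinsfile using a few of these globals might look like the following sketch (the Node image and commands are only illustrative):

@Library('pipeline-lib') _

withResultReporting(slackChannel: '#ci') {
  inPod(containers: [interactiveContainer(name: 'node', image: 'node:9-alpine')]) {
    checkout(scm)
    container('node') {
      sh('npm install && npm test')
    }
  }
}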

Global variables

deployer

deployer supports branch deploys.[1]

To migrate an existing project out from the current orchestrated release flow and to adopt Continuous Deployment via branch deploys, follow these steps:

  1. Upgrade the project’s Jenkinsfile to:

    1. Include this library with @Library('pipeline-lib') _

    2. Configure the job’s properties

    3. Wrap the arguments of podTemplate, inPod, or similar with deployer.wrapPodTemplate

    4. Call deployer.deployOnCommentTrigger after the code has been tested and a Docker image built

  2. Remove the project from all current release files [2]

  3. Add continuous-integration/jenkins/pr-merge/deploy as a required check in GitHub

Each step is explained in detail below.

Configuring the job’s properties

This adds a build trigger so that deploys can be started with !deploy PR comments. It also configures the Datadog plugin, so that we can collect metrics about deployments.

In a declarative pipeline, add a properties call before the pipeline directive as follows.

+properties(deployer.wrapProperties())
+
 pipeline {
   // ...
 }

In a scripted pipeline, include the same properties call:

properties(deployer.wrapProperties())

Or wrap an existing properties call with the provided wrapper:

-properties([
+properties(deployer.wrapProperties([
   parameters([
     string(name: 'my-param', defaultValue: 'my-val', description: 'A description')
   ])
-])
+]))

Wrapping the pod template

Here’s an example.

-inDockerAgent(containers: [imageScanner.container()]) {
+inDockerAgent(deployer.wrapPodTemplate(containers: [imageScanner.container()])) {
   // ...
 }

Enabling deploy on comment trigger

The exact changes required depend on the project, but here’s an example.

 // At the top level
 @Library('pipeline-lib') _
+@Library('SaleMoveAcceptance') __ // (1)

 // In podTemplate, inPod, or similar, after building a docker image
 def image = docker.build('call-router')
 imageScanner.scan(image)
-def shortCommit = sh(returnStdout: true, script: 'git log -n 1 --pretty=format:"%h"').trim()
-docker.withRegistry(DOCKER_REGISTRY_URL, DOCKER_REGISTRY_CREDENTIALS_ID) {
-  image.push(shortCommit) // (2)
-}

+deployer.deployOnCommentTrigger(
+  image: image,
+  kubernetesNamespace: 'default', // Optional, will deploy to `default` namespace if not specified
+  kubernetesDeployment: 'call-router',
+  automaticChecksFor: { env ->
+    env['runInKube'](
+      image: '662491802882.dkr.ecr.us-east-1.amazonaws.com/smoke-test-repo:latest', // (3)
+      command: './run_smoke_tests.sh',
+      additionalArgs: '--env="FOO=bar" --port=12345' // (4)
+    )
+    if (env.name == 'acceptance') {
+      runAcceptanceTests(
+        driver: 'chrome',
+        visitorApp: 'v2',
+        suite: 'acceptance_test_pattern[lib/engagement/omnicall/.*_spec.rb]', // (5)
+        slackChannel: '#tm-engage',
+        parallelTestProcessors: 1
+      )
+    }
+  },
+  checklistFor: { env ->
+    def dashboardURL = "https://app.datadoghq.com/dash/206940?&tpl_var_KubernetesCluster=${env.name}" // (6)
+    def logURL = "https://logs.${env.domainName}/app/kibana#/discover?_g=" + // (7)
+      '(time:(from:now-30m,mode:quick,to:now))&_a=' +
+      '(query:(language:lucene,query:\'application:call_router+AND+level:error\'))'
+
+    [[
+      name: 'dashboard', // (8)
+      description: "<a href=\"${dashboardURL}\">The project dashboard (${dashboardURL})</a> looks OK" // (9)
+    ], [
+      name: 'logs',
+      description: "No new errors in <a href=\"${logURL}\">the project logs (${logURL})</a>"
+    ]]
+  }
+)

-build(job: 'kubernetes-deploy', ...)
  1. This is needed for running acceptance tests before deploying to other environments. If you already have a @Library import followed by two underscores, then use three (___) or more underscores here, as required. The symbol has to be unique within the Jenkinsfile.

  2. There’s no need to push the image anywhere. Just build it and pass it to deployOnCommentTrigger, which tags and pushes it as required.

  3. The image defaults to the current version of the application image.

  4. Additional arguments to kubectl run.

  5. The tests and other checks run in acceptance naturally vary by project.

  6. Use env.name to customize links for the specific environment. It’s one of: acceptance, beta, prod-us, and prod-eu.

  7. Use env.domainName to customize URLs. For example, it’s beta.salemove.com in beta and salemove.com in prod US.

  8. This should be a simple keyword.

  9. The Blue Ocean UI currently doesn’t display links, while the old UI does. This means that links also have to be included in plain text, so that Blue Ocean users can see and access them.

Disabling merges for non-deployed PRs

To prevent PRs that haven’t been branch deployed from being merged, add continuous-integration/jenkins/pr-merge/deploy as a required status check in the project’s GitHub branch protection settings.[3][4][5]

inPod

inPod is a thin wrapper around the Kubernetes plugin’s podTemplate plus a nested node call. Every setting that can be provided to podTemplate can be provided to inPod and its derivatives (described below).

It provides default values for fields such as cloud and name, so that you don’t need to worry about them. It makes creating a basic worker pod very simple. For example, let’s say you want to build something in NodeJS. The following snippet is everything you need to achieve just that.

inPod(containers: [interactiveContainer(name: 'node', image: 'node:9-alpine')]) {
  checkout(scm)
  container('node') {
    sh('npm install && npm test')
  }
}
ℹ️
inPod and its derivatives also include a workaround for an issue with the Kubernetes plugin where the label has to be updated for changes to the container or volume configurations to take effect. It’s fixed by automatically providing a unique suffix to the pod label using the hash of the provided argument map.
When using inPod or its derivatives, it’s best to also use passiveContainer, interactiveContainer, and agentContainer instead of using containerTemplate directly. This is because the containerTemplate wrappers provided by this library all share the same workingDir, which makes them work nicely together.

inDockerAgent

A pod template for building docker containers.

Unlike inPod, inDockerAgent has an agent container [6] which supports building docker images. So if you need to run docker.build, use inDockerAgent instead of inPod.
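
A minimal sketch (the image name my-project is only illustrative; inDockerAgent accepts the same arguments as inPod):

inDockerAgent {
  checkout(scm)
  // The agent container supports docker.build, unlike the default inPod agent
  def image = docker.build('my-project')
}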

ℹ️
inDockerAgent is a derivative of inPod, so everything that applies to inPod also applies to inDockerAgent.

inRubyBuildAgent

A pod template for building Ruby projects. Comes with an agent container with Ruby and Docker support and PostgreSQL and RabbitMQ containers.
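
A minimal sketch (the rake invocation is only illustrative; as with passiveContainer below, the PostgreSQL and RabbitMQ containers are assumed to be reachable over their default ports at localhost):

inRubyBuildAgent {
  checkout(scm)
  // Runs directly in the Ruby-capable agent container, so no container() wrapper is needed
  sh('bundle install && rake')
}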

ℹ️
inRubyBuildAgent is a derivative of inPod, so everything that applies to inPod also applies to inRubyBuildAgent.

passiveContainer

A containerTemplate wrapper for databases and other services that will not have pipeline steps executed in them. name and image fields are required.

Example:

inPod(
  containers: [
    passiveContainer(
      name: 'db',
      image: 'postgres:9.5-alpine',
      envVars: [
        envVar(key: 'POSTGRES_USER', value: 'myuser'),
        envVar(key: 'POSTGRES_PASSWORD', value: 'mypass')
      ]
    )
  ]
) {
  // Access the PostgreSQL DB over its default port 5432 at localhost
}
⚠️
Only specify the workingDir, command, args, and/or ttyEnabled fields for passiveContainer if you know what you’re doing.

interactiveContainer

A containerTemplate wrapper for containers that will have pipeline steps executed in them. name and image fields are required. Pipeline steps can be executed in the container by wrapping them with container.

Example:

inPod(containers: [interactiveContainer(name: 'ruby', image: 'ruby:2.5-alpine')]) {
  checkout(scm)
  container('ruby') {
    sh('bundle install')
  }
}
⚠️
Only specify the workingDir, command, args, and/or ttyEnabled fields for interactiveContainer if you know what you’re doing.
ℹ️
interactiveContainer specifies /bin/sh -c cat as the entrypoint for the image, so that the image doesn’t exit. This allows you to run arbitrary commands with container + sh within the container.

agentContainer

A containerTemplate wrapper for agent containers. Only the image field is required. It replaces the default jnlp container with the one provided as the image. The specified image has to be a Jenkins slave agent.

Example:

inPod(containers: [agentContainer(image: 'salemove/jenkins-agent-ruby:2.4.1')]) {
  checkout(scm)
  sh('bundle install && rake') // (1)
  docker.build('my-ruby-project')
}
  1. Compared to the interactiveContainer example above, this doesn’t have to be wrapped in a container, because the agent itself supports Ruby.

⚠️
Only specify the name, workingDir, command, args, and/or ttyEnabled fields for agentContainer if you know what you’re doing.

withResultReporting

A scripted pipeline [7] wrapper that sends build status notifications to Slack.

Without specifying any arguments it sends Slack notifications to the #ci channel whenever a master branch build status changes from success to failure or back. To send notifications to your team’s channel, specify the slackChannel argument.

withResultReporting(slackChannel: '#tm-engage') {
  inPod {
    checkout(scm)
    // Build
  }
}
💡
If the main branch in a project is different from master, then reporting can be enabled for that branch by specifying mainBranch. E.g. withResultReporting(mainBranch: 'develop').

For non-branch builds, such as cronjobs or manually started jobs, the above status reporting strategy does not make sense. In these cases a simpler onFailure or always strategy can be used.

properties([
  pipelineTriggers([cron('30 10 * * 5')])
])

withResultReporting(slackChannel: '#tm-is', strategy: 'onFailure') {
  inPod {
    // Do something
  }
}

By default withResultReporting only includes the build status (success/failure), the job name, and links to the build in the slack message. Additional project-specific information can be included via the customMessage argument.

properties([
  parameters([
    string(name: 'buildParam', defaultValue: 'default', description: 'A parameter')
  ])
])

withResultReporting(customMessage: "Build was started with: ${params.buildParam}") {
  inPod {
    // Do something
  }
}

Developing

Guard is used to provide a preview of the documentation. Run the following commands to open a preview of the rendered documentation in a browser. Unfortunately there’s no live reload; just refresh the browser whenever you save changes to README.adoc.

bin/bundle install
bin/guard # (1)
open README.html # (2)
  1. This doesn’t exit, so the following commands have to be entered in another terminal

  2. Opens the preview in the browser. Manually refresh the browser as necessary


1. Feature branches are deployed to and validated in production before merging back to master.
2. See e.g. release#769. This ensures that the production version isn’t overwritten by a release currently in beta, for example.
3. call-router settings are linked here as an example. Click Settings → Branches → Edit master in GitHub to access.
4. The status only becomes available for selection if GitHub has seen the status on at least one commit in the project. It should appear as soon as you’ve opened a PR with the Jenkinsfile changes described above.
5. Ensure that continuous-integration/jenkins/pr-merge and review/squash are also checked.
6. A container named jnlp, in which all commands will run by default, unless the container is changed with container.
7. As opposed to declarative pipelines.