Wednesday, April 26, 2017

CI/CD Pipelines as Code

If you have full control over each aspect of your application's lifecycle, it's reasonable to assume you'd want that lifecycle to be as streamlined as possible through automated code merges, unit/regression testing, environment builds, and so on. This is achievable through a Continuous Integration and Delivery (CI/CD) pipeline: an automated way to orchestrate the steps for merging, building, testing, and deploying application code. The delivery pipeline is the tried-and-true method of achieving this level of efficiency, leaving you and your team with time to focus on what matters: your application.

Why represent my pipeline as code?

Application delivery should be considered just as critical as the application being delivered. The ability to seamlessly and safely deliver new application features and updates is key to running a modern, customer-responsive application stack. Leveraging CI/CD principles within your development lifecycle is now widely expected at the application, platform, and infrastructure layers. To apply these principles, each layer must be represented in textual form and version controlled. This is known as Everything as Code. It's a well-established pattern for codifying the infrastructure building blocks of your application and can be treated as an extension of your application code base.

Most components of an application stack are already automated and improved by being represented as code. Why not extend the same benefits to the CI/CD delivery pipeline itself? By doing so, the typical practices of traditional software development (e.g. version control, code review, branching strategies) apply to the pipeline as well. Keeping your pipeline in a code repository, where your CI tool creates and runs it automatically, is much preferred to manually navigating a confusing UI.

How Jenkins helps with delivery pipelines

While there are many CI/CD tools that support Pipelines as Code (GitLab, Bitbucket, Drone), we'll focus on Jenkins due to its open source, community-driven nature. Historically, there have been several approaches to crafting code-driven jobs into "pipelines" with Jenkins, such as the Job DSL Plugin, Jenkins Job Builder (JJB), and the Build Flow Plugin. With the release of Jenkins 2.0, however, the revamped Pipeline plugin is the recommended path forward for Pipelines as Code (and CI jobs in general).

Jenkins Pipelines are written in a Groovy DSL and live in a Jenkinsfile within your application or infrastructure code repository. Using a dynamic, feature-rich language such as Groovy enables almost limitless automation capabilities within your pipeline. Not familiar with Groovy? Don't worry – getting started is very intuitive, and the Jenkins folks have even included a Snippet Generator right within your Jenkins installation for reference.
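To get a feel for the DSL before the full example below, here is a minimal scripted pipeline sketch – the stage names and shell command are illustrative placeholders, not from a real project:

node {
  stage ('Build') {
    // Shell steps let the pipeline drive any existing tooling
    sh 'make build'
  }

  stage ('Test') {
    echo 'Run your unit/regression tests here.'
  }
}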

An example Jenkinsfile use-case

At Datapipe, we commonly focus on Infrastructure as Code deployments for our customers' applications, so our example use-case will be a simple AWS deployment using Terraform, with the code stored on GitHub.

There are a few basic steps to our pipeline:

  1. Pull the code repository (SCM).
  2. Perform a Terraform Plan to preview our deployment changes.
  3. Perform a Terraform Apply to make the changes take effect.
  4. Run a test / send a notification.

This pipeline could be triggered manually or, ideally, via a GitHub post-commit webhook that triggers the pipeline whenever an update is committed/pushed to the repo.
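As a sketch of the trigger wiring – assuming the GitHub plugin is installed and the repo's webhook points at your Jenkins instance's /github-webhook/ endpoint – the Jenkinsfile itself can register for push-triggered builds:

// Build on GitHub push events; requires the GitHub plugin,
// with the repo's webhook pointed at <jenkins-url>/github-webhook/
properties([pipelineTriggers([githubPush()])])

node {
  stage ('Checkout') {
    checkout scm
  }
  // ... remaining stages as shown below
}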

The pipeline itself is represented in a single Jenkinsfile in the same repo as the Terraform HCL code – here is a snippet from the file highlighting the discrete steps within your deployment:

node {
  // Make a pinned Terraform version available to the shell steps
  env.PATH += ":/opt/terraform_0.7.13/"

  stage ('Checkout') {
    // Pull the repository that contains this Jenkinsfile and the HCL code
    checkout scm
  }

  stage ('Terraform Plan') {
    // Preview the changes and save the plan for the apply step
    sh 'terraform plan -no-color -out=create.tfplan'
  }

  // Optional wait for human approval before changing infrastructure
  input 'Deploy stack?'

  stage ('Terraform Apply') {
    // Apply exactly the plan that was reviewed above
    sh 'terraform apply -no-color create.tfplan'
  }

  stage ('Post Run Tests') {
    echo "Insert your infrastructure test of choice and/or application validation here."
    sleep 2
    sh 'terraform show'
  }

  stage ('Notification') {
    mail from: "jenkins@mycompany.com",
         to: "devopsteam@mycompany.com",
         subject: "Terraform build complete",
         body: "Jenkins job ${env.JOB_NAME} - build ${env.BUILD_NUMBER} complete"
  }
}

You can view this Jenkinsfile + a simple Terraform stack over at this GitHub repo.

Note how each stage of the pipeline is defined with a block of tasks for that particular step in the pipeline process. The Jenkinsfile pipeline merely orchestrates your CI/CD workflow in a concise, codified format.

Once Jenkins has been configured to scan your team's GitHub Organization, any subsequently created repos containing a Jenkinsfile will be automatically loaded by Jenkins as new pipeline jobs. This is one of the most powerful features of the Pipeline plugin – Multibranch Pipelines. Alternatively, you can also "import" your Jenkinsfile into a new, manually created pipeline job, either by pointing to an individual SCM repo or by copy/pasting the Groovy itself.
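Multibranch Pipelines also expose the branch being built as env.BRANCH_NAME, so a single Jenkinsfile can vary its behavior per branch – for example, gating the apply behind the master branch. A sketch (not part of the example repo):

node {
  stage ('Terraform Plan') {
    sh 'terraform plan -no-color -out=create.tfplan'
  }

  // Feature branches stop at plan; only master applies changes
  if (env.BRANCH_NAME == 'master') {
    stage ('Terraform Apply') {
      sh 'terraform apply -no-color create.tfplan'
    }
  }
}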

Here is what the pipeline looks like after it has run – note each of the defined stages listed above:

[Screenshot: Jenkins stage view of the completed pipeline run]

While the Terraform example pipeline shown is very rudimentary, several enhancements are well worth considering (a combined sketch follows the list):

  • Parallelization of independent tasks.
  • Build parameters for handling things like multiple environments.
  • Groovy methods to keep your pipeline DRY.
  • Secrets management via a call out to something like the Credentials Binding plugin.
  • Archiving / storing artifacts for other pipeline jobs.
  • Error handling with try/catch/finally.
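Here is a rough sketch combining a few of these – the parallel task names and the 'aws-deploy-creds' credential ID are hypothetical, and the secret handling assumes the Credentials Binding plugin:

node {
  try {
    // Secrets management: bind AWS keys from the Jenkins credential store
    // ('aws-deploy-creds' is a hypothetical credential ID)
    withCredentials([usernamePassword(credentialsId: 'aws-deploy-creds',
                                      usernameVariable: 'AWS_ACCESS_KEY_ID',
                                      passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
      stage ('Terraform Apply') {
        sh 'terraform apply -no-color create.tfplan'
      }
    }

    // Parallelization: run independent checks at the same time
    stage ('Post Run Tests') {
      parallel(
        'infra test': { sh 'terraform show' },
        'app check':  { echo 'Insert application validation here.' }
      )
    }
  } catch (err) {
    // Error handling: mark the build failed, then re-throw
    currentBuild.result = 'FAILURE'
    throw err
  } finally {
    // Notify regardless of outcome
    mail to: "devopsteam@mycompany.com",
         subject: "Terraform build ${currentBuild.result ?: 'SUCCESS'}",
         body: "Jenkins job ${env.JOB_NAME} - build ${env.BUILD_NUMBER}"
  }
}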

Once your pipeline is in place, you can focus more time and energy on delivering the best application possible. Interested in learning more? Read additional blog posts about application development here.

Have something you'd like to share with us? Drop us a line on Twitter or Facebook.

About Adam Patterson

Adam is an engineer with Datapipe’s Professional Services team. He is passionate about distributed systems, automation, infrastructure, and generally anything that makes deploying applications less painful. He has been seeking new ways to turn whatever he comes across into code for over a decade.
