
December 02, 2017 / by Ian David Rossi / In jenkins, pipeline, continuous-delivery

Jenkins Pipeline - Global Shared Library Best Practices


UPDATE (11/15/2018): We have posted a follow-up to this article that we believe to be a better approach. We will leave this article in place for posterity (and for those of you who still prefer to use the src dir for your pipeline code). http://www.aimtheory.com/jenkins/pipeline/continuous-delivery/2018/10/01/jenkins-pipeline-global-shared-library-best-practices-part-2.html

While there are several great CI tools out there (like a new favorite of mine, ConcourseCI), many large organizations are still heavily invested in Jenkins, which still has a great deal to offer. Lately we have found ourselves writing a lot of Groovy code for Jenkins Pipelines. Specifically, global shared libraries.

We are passionate about best practices at aimtheory and we have observed that Jenkins Pipelines and global shared libraries are being structured and written in many different ways. There doesn’t seem to be much consistency or consensus regarding this. The Jenkins documentation does provide some direction, but it doesn’t match up with the way that many are actually using shared libraries. So we thought we would share what we have established for best practices.

The main driver for establishing these best practices is ease of use when it comes to implementation.

About Jenkins Global Shared Libraries

A Jenkins Global Shared Library is what it sounds like. It’s a library of Groovy code that can be shared by many Jenkins Pipelines (Jenkinsfiles). It lives in its own repo from which it is retrieved and executed by Jenkins.

It uses the following folder structure (straight from the docs):

+- src                     # Groovy source files
|   +- org
|       +- foo
|           +- Bar.groovy  # for org.foo.Bar class
+- vars
|   +- foo.groovy          # for global 'foo' variable
|   +- foo.txt             # help for 'foo' variable
+- resources               # resource files (external libraries only)
|   +- org
|       +- foo
|           +- bar.json    # static helper data for org.foo.Bar

The idea is that src contains the Groovy functions to be consumed by all pipelines, whereas vars contains "global variables" to be accessed by pipelines. However, we have observed that most people do not follow this usage pattern. Some teams put all their code in the src folder and ignore the vars folder entirely; we have also seen the exact reverse, with all code in the vars folder. These paths were taken for different reasons, but the main driver was usually whichever implementation felt most comfortable. Jenkins runs the code in each of these folders in a different context, so implementers chose the context they preferred, or simply whatever ended up working for them as they fought through documentation that could definitely be improved.

However, in our pursuit of best practices, we always make an effort to use tools as they were intended, which in turn helps establish community best practices. So here I will present an approach we have adopted that strives to fulfill the original intention while also proving to be a very nice implementation.

Uniform Jenkins Pipelines

This approach assumes the implementer's goal is to remove all scripting from the Jenkinsfile so that all Jenkins Pipelines conform to a particular process, which is defined in the global shared library source code.

For example, if you are continuously testing and delivering Java applications and Python applications, you would want a standard "Java pipeline" and a standard "Python pipeline" with similar steps. However, you may want to give the developer the ability to make that pipeline behave differently through inputs in the Jenkinsfile.

That brings us to…

Defining a Pipeline Type

Wouldn’t it be great if a developer never had to write any scripts in the Jenkinsfile? Instead, what if they could just configure a pipeline with standardized steps, like this:

# /path/to/a/project/repo/pipeline.yaml
pipelineType: python
runTests: true
testCommand: "pytest test.py"
deployUponTestSuccess: true
deploymentEnvironment: "staging"

Then the Jenkinsfile would only have to look like this, for every project repo that Jenkins delivers:

@Library('name_of_your_shared_lib') _
import org.acme.*
new stdPipeline().execute()

The stdPipeline().execute() call then just reads the values out of the pipeline.yaml file and runs a Groovy script that is specialized to run a standard pipeline for a Python application.

// /src/org/acme/stdPipeline.groovy
package org.acme

import org.yaml.snakeyaml.Yaml

def execute() {

  node {

    Map pipelineDefinition

    stage('Initialize') {
      checkout scm
      echo 'Loading pipeline definition'
      Yaml parser = new Yaml()
      // readFile is a Pipeline step that works on any agent,
      // unlike java.io.File, which only reads from the master
      pipelineDefinition = parser.load(readFile('pipeline.yaml'))
    }

    switch (pipelineDefinition.pipelineType) {
      case 'python':
        // Instantiate and execute a Python pipeline
        new pythonPipeline(pipelineDefinition).executePipeline()
        break
      case 'nodejs':
        // Instantiate and execute a NodeJS pipeline
        new nodeJSPipeline(pipelineDefinition).executePipeline()
        break
    }
  }
}
Now, in this case, we have a pipelineDefinition.pipelineType of 'python', so we need a corresponding script/class in the src folder that will contain the reusable pipeline code. Here's an example of how we could write that script/class to accept the inputs described above.

// /src/org/acme/pythonPipeline.groovy
package org.acme

def pythonPipeline(pipelineDefinition) {
  // Create a globally accessible variable that makes
  // the YAML pipeline definition available to all scripts
  pd = pipelineDefinition
}

def executePipeline() {
  node {
    if (pd.runTests) {
      stage('Run Tests') {
        sh pd.testCommand
      }
    }
    if (pd.deployUponTestSuccess) {
      stage('Deploy') {
        sh "path/to/a/deploy/bash/script.sh ${pd.deploymentEnvironment}"
      }
    }
  }
}

return this

Note the return this at the end of the class. This effectively inserts the class into the runtime of the Jenkinsfile. For example, looking at the above code, it is possible to do:

sh pd.testCommand

sh is a Jenkins Pipeline step that, ordinarily, can only be executed in a Jenkinsfile; calling it here works only because we did return this at the end of our class. If we hadn't, then doing sh pd.testCommand would fail, and making it work would require several additional lines of code in both our Jenkinsfile and our src script. Without getting into the nitty-gritty, we have found this to be the most convenient approach; otherwise, the syntax would be far less intuitive. (You know what I'm talking about if you've seen pipeline library code full of this.something or script.something. It's not pretty or convenient.)
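For contrast, here is a sketch of what that less intuitive alternative typically looks like without the return this trick. All names here are hypothetical; the point is that the Jenkinsfile's script object must be passed into the class explicitly, and every pipeline step call must be qualified with it.

```groovy
// Hypothetical sketch of the alternative, WITHOUT return this:
// the class lives in src and cannot call pipeline steps directly.
package org.acme

class PythonPipeline implements Serializable {
  def script  // the Jenkinsfile's 'this', passed in explicitly
  def pd

  PythonPipeline(script, pipelineDefinition) {
    this.script = script
    this.pd = pipelineDefinition
  }

  def executePipeline() {
    script.node {
      if (pd.runTests) {
        script.stage('Run Tests') {
          script.sh pd.testCommand  // every step needs the script. prefix
        }
      }
    }
  }
}
```

The Jenkinsfile would then have to call new PythonPipeline(this, pipelineDefinition).executePipeline(), which is exactly the kind of object passing we are trying to avoid.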

At this point, you could feel free to further modify the /src/org/acme/pythonPipeline.groovy class and abstract the stages defined above in the executePipeline() function out into other functions or just create helper functions for any other purpose common to object-oriented programming.
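For example, here is one such refactoring as a sketch; the helper names runTests and deploy are our own invention, not part of the code above.

```groovy
// /src/org/acme/pythonPipeline.groovy -- refactored sketch
def executePipeline() {
  node {
    if (pd.runTests) { runTests() }
    if (pd.deployUponTestSuccess) { deploy() }
  }
}

// Each stage now lives in its own helper function,
// keeping executePipeline() readable at a glance
def runTests() {
  stage('Run Tests') {
    sh pd.testCommand
  }
}

def deploy() {
  stage('Deploy') {
    sh "path/to/a/deploy/bash/script.sh ${pd.deploymentEnvironment}"
  }
}

return this
```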

Now you could use this same design to make the following possible:

# /path/to/another/project/repo/pipeline.yaml
pipelineType: java
runTests: true
testCommand: mvn test
deployUponTestSuccess: true
deploymentEnvironment: staging

which would then activate the steps for a Java pipeline defined in new scripts in the src folder.
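A minimal sketch of such a script, assuming a hypothetical javaPipeline.groovy file and a matching case 'java' branch added to the switch in stdPipeline.groovy:

```groovy
// /src/org/acme/javaPipeline.groovy -- hypothetical sketch
package org.acme

def javaPipeline(pipelineDefinition) {
  // Same pattern as the Python pipeline: expose the YAML
  // definition to all functions in this script
  pd = pipelineDefinition
}

def executePipeline() {
  node {
    if (pd.runTests) {
      stage('Run Tests') {
        sh pd.testCommand  // e.g. 'mvn test' from pipeline.yaml
      }
    }
    if (pd.deployUponTestSuccess) {
      stage('Deploy') {
        sh "path/to/a/deploy/bash/script.sh ${pd.deploymentEnvironment}"
      }
    }
  }
}

return this
```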

Of course, there are so many possibilities, but the main takeaways from our approach to global shared libraries are:

  • Make it easy for developers to configure their pipelines; try to shield them from Groovy/Jenkins Pipeline code
  • Use the return this trick at the end of your shared library classes so you can work directly with Jenkins Pipeline steps in your library code, instead of having to pass objects around
  • Try to use vars only for static global variables that would otherwise live in a singleton class
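To illustrate that last point, a vars file can expose read-only values as a global variable available to every pipeline that loads the library. This is a sketch; the file name constants.groovy and the values are hypothetical. It relies on Groovy mapping property reads to getters, so constants.deployScript resolves to getDeployScript().

```groovy
// /vars/constants.groovy -- hypothetical static globals
// Exposed to every pipeline as the global variable 'constants'.
def getDeployScript() { 'path/to/a/deploy/bash/script.sh' }
def getDefaultEnvironment() { 'staging' }
```

A Jenkinsfile (or a src script using the return this trick) could then write sh "${constants.deployScript} ${constants.defaultEnvironment}" instead of hard-coding those values in every pipeline.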