Jenkins performance test pipeline with JMeter and Docker

Alexandru Ersenie

I worked on this project about a year and a half ago, with the purpose of running some of our performance tests as a Jenkins pipeline at the end of the functional test pipeline.

Up to that point we had been using a JMeter framework that I wrote more than 7 years ago, exporting results to MySQL and interpreting them with Jasper Reports. The drawback was that this approach was nothing close to being:

  1. dynamic
  2. close to the functional test pipeline
  3. proactive
  4. automated

It required manual intervention and manual interpretation, so it was no longer a fit for the dynamic, CI-oriented, ever-changing landscape we are running.

So this needed to be refactored. Re-architected. Rethought to answer the current needs. With that in mind, these are the points I focused on:

  • Run as many JMeter Agents as I need
  • Use Docker for running the JMeter Agents
  • Use Jenkins pipelines for running performance tests
  • Use Jenkins performance plugins for interpreting test results automatically
  • Have a dynamic, understandable way of defining SLAs
  • Pass/fail the pipeline based on predefined SLAs

Let's dig into that and see how I used all these fine tools, such as Jenkins, Docker, JMeter, Groovy, and Jenkins plugins, to achieve all of the above.

If you want to skip directly to the code, I have uploaded it to my repository on GitHub:

https://github.com/alexandru-ersenie/jmeter-docker-jenkins-pipeline

JMeter Docker Image

I was against using two Docker images, one for the JMeter controller and one for the JMeter agent. So I thought: why not have a single image and run it as either controller or agent, depending on how it is started?

I therefore created an entrypoint which, depending on the JMETER_MODE environment variable, starts either the JMeter controller (the so-called master) or a JMeter agent (jmeter-server).

The JMeter Docker image is built by unpacking the binaries and adding a custom launch script. This way you can easily rebuild the image with newer binaries.

The Dockerfile looks like this (replace the JDK base image with any OpenJDK Docker image; I replaced the name of our registry with MYREGISTRY):

FROM MYREGISTRY:5000/docker-jdk8:8.0.232-1

ENV JMETER_INSTALL /opt
ENV JMETER_MODE MASTER
ENV JMETER_BIN  ${JMETER_INSTALL}/apache-jmeter/bin
ENV PATH $PATH:$JMETER_BIN
ENV SLEEP 30



ADD resources/binaries/apache-jmeter.tgz ${JMETER_INSTALL}
RUN mv ${JMETER_INSTALL}/apache-jmeter-5.0 ${JMETER_INSTALL}/apache-jmeter
COPY conf/user.properties  ${JMETER_BIN}

COPY scripts/launch.sh ${JMETER_INSTALL}
RUN chmod 0755 ${JMETER_INSTALL}/launch.sh


RUN useradd -m -u 1001 -s /bin/bash -d /home/jmeter jmeter

WORKDIR /home/jmeter
USER jmeter

EXPOSE 60000 1099 50000


ENTRYPOINT ["/opt/launch.sh"]

The launch script checks the JMETER_MODE environment variable and starts JMeter either as a controller (master) or as an agent:

#!/bin/bash

set -e

export JVM_ARGS="-Xms1024m -Xmx4024m"

echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"

# Keep entrypoint simple: we must pass the standard JMeter arguments

if [[ "${JMETER_MODE}" == "MASTER" ]]; then
    echo "starting JMeter in Master mode"
    #sleep ${SLEEP}
    exec "$@"
else
    echo "starting Jmeter in Agent mode"
    sleep ${SLEEP}
    exec jmeter-server "$@"
fi

The user.properties file defines the properties JMeter needs to locate the performance tests, since we use a lot of Include Controllers; additionally it defines the RMI and server ports for controller-to-agent communication.

Our test framework relies heavily on Groovy and BeanShell scripts that reside in the testplans/includes/scripts folder. These files need to exist on both controller and agents, so we mount the entire repository containing the tests into the container.

Additionally, the test framework needs to know where the test plans reside, so include paths have to be provided. Due to a limitation in the entrypoint (options cannot be appended after the entrypoint arguments), these properties are baked into the JMeter image in the form of user.properties, together with other options needed for distributed testing.

Basically, the idea is that we use a single JMeter Docker image and mount the performance test repository into it when running the tests.

server.rmi.ssl.disable=true
includecontroller.prefix=/home/jmeter/tests
path=/home/jmeter/tests/jmeter/testplans/includes/scripts/
server.rmi.localport=50000
server_port=1099
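
With the Dockerfile, launch script and user.properties in place, building and smoke-testing the image is straightforward. A minimal sketch, assuming you build from the folder containing the Dockerfile; the image name and tag here are only examples, adjust them to your registry:

# Build the image
docker build -t myregistry:5000/docker-edict-jmeter:5.0.1 .

# Smoke test: start it in controller mode and just print the JMeter version
docker run --rm -e JMETER_MODE=MASTER myregistry:5000/docker-edict-jmeter:5.0.1 jmeter --version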

Prepare the JMeter Docker Container for Jenkins

Since Jenkins runs its jobs from its workspaces, we need to make these known to the Docker container as well, because Jenkins needs access to the JTL files containing the test results, which have to reside in the workspace. In the following example:

  • a JMeter test will be started, called Tplan_EOC_Play_Single_Game_EGB.jmx.
  • the test results will be stored in the Jenkins workspace; we will name the file "perf-money.jtl"; this file will contain all samplers and their responses, and will later be used for interpretation
  • pass global arguments to each JMeter agent started:
    • 1 user per agent
    • 1 second ramp up time
    • 1 game round
    • host: the URL against which we will run the test
    • casino: this test is performing some transactions against a casino (this is what we do, games)

As mentioned, you need to check out your test repository first in order to be able to mount it. Make sure you start the Docker container from within the folder where you checked out the repo.

docker run -e JMETER_MODE=MASTER -v $PWD:/opt/jenkins/workspace/jmeter-test/jmeter myregistry/docker-edict-jmeter:4.0.0-1-local-1 jmeter -n -t /opt/jenkins/workspace/jmeter-test/jmeter/testplans/egb/Tplan_EOC_Play_Single_Game_EGB.jmx -l /opt/jenkins/workspace/jmeter-test/jmeter/perf-money.jtl -e -o /opt/jenkins/workspace/jmeter-test/jmeter/perf-money -Jsummariser.interval=5  -Gusers=1 -Gramptime=1 -Ggamerounds=1 -Ghost=testhost -Gcasino=testcasino

So what does this all do?

  • The test framework containing all JMeter tests is mounted into the JMeter container. Be sure to be inside the jmeter subfolder: -v $PWD:/opt/jenkins/workspace/jmeter-test/jmeter
  • Define the path to the test plan to be executed: -n -t /opt/jenkins/workspace/jmeter-test/jmeter/testplans/egb/Tplan_EOC_Play_Single_Game_EGB.jmx
  • Define the path to the test results file: -l /opt/jenkins/workspace/jmeter-test/jmeter/perf-money.jtl
  • Define the path to the JMeter report: -e -o /opt/jenkins/workspace/jmeter-test/jmeter/perf-money
  • Define the load scenario (number of users, ramp-up, host): -Gusers=1 -Gramptime=1 -Ggamerounds=1 -Ghost=testhost -Gcasino=testcasino
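
Because the workspace folder is mounted into the container, the result artifacts end up right next to your checkout once the run finishes. A quick sanity check (file names follow the command above):

ls -l perf-money.jtl          # raw sampler results, later fed to the Jenkins Performance plugin
ls -l perf-money/index.html   # entry point of the generated JMeter HTML dashboard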

Alright, so we are able to run one basic JMeter test using:

  • a JMeter Docker image
  • tests coming from our own repository, mounted into the image

It is time to take this further. What we want to achieve is:

  • Start a JMeter Controller
  • Pass the arguments to the JMeter Agents
  • Start as many JMeter Agents as we wish
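
Done by hand, this boils down to a handful of docker commands. The following is a rough sketch only; the image name, tag and paths are illustrative and match what the Jenkins pipeline below does, mounting the checked-out repository under /home/jmeter/tests (run it from the repository root, which contains the jmeter subfolder):

# Start two agents from the same image
AGENT1=$(docker run -d -e JMETER_MODE=AGENT -e SLEEP=1 -v $PWD:/home/jmeter/tests myregistry:5000/docker-edict-jmeter:5.0.1)
AGENT2=$(docker run -d -e JMETER_MODE=AGENT -e SLEEP=1 -v $PWD:/home/jmeter/tests myregistry:5000/docker-edict-jmeter:5.0.1)

# Collect the agent IPs on the default Docker bridge network
IP1=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' $AGENT1)
IP2=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' $AGENT2)

# Start the controller and distribute the test to the agents via -R
docker run --rm -e JMETER_MODE=MASTER -v $PWD:/home/jmeter/tests myregistry:5000/docker-edict-jmeter:5.0.1 \
  jmeter -n -t /home/jmeter/tests/jmeter/testplans/egb/Tplan_EOC_Play_Single_Game_EGB.jmx \
  -l /home/jmeter/tests/jmeter/perf-money.jtl -R$IP1,$IP2 \
  -Gusers=1 -Gramptime=1 -Ggamerounds=1 -Ghost=testhost -Gcasino=testcasino

# Stop and remove the agents when done
docker rm -f $AGENT1 $AGENT2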

Let's take it to the next step: automating it with a Jenkins pipeline. This is well documented by Jenkins, and you can find a lot of information in the official documentation:

https://jenkins.io/doc/book/pipeline/docker/

Create a Jenkins pipeline for running the tests

First of all, we'd like to be able to set all those global parameters we passed above (host, casino, users, ramp-up and so on) dynamically. So let's define some variables.

The pipeline will use variables injected through the Jenkins Job, such as:

  • tag='5.0.1' (the Git tag of the performance tests in Git)
  • host="${HOST}" (the host against which we will run the tests, injected as a Jenkins variable)
  • casino="${CASINO}" (the casino against which we will run the tests, injected as a Jenkins variable)

The logic in the pipeline implements the following steps:

  • define variables in the Jenkins job, such as tag, host, casino, users, ramp-up etc.
  • start the job on a node labeled with "docker"
  • check out the performance test repository from Git, using the Git credentials defined in the Jenkins job itself
  • read the SLAs (we'll come to this in a minute)
  • start three JMeter Agents as Docker containers
  • get the IPs of the JMeter Agents so we can pass them to the JMeter Controller (the controller needs to know where the Agents reside); since everything happens inside the Docker network, we just read the IPs of the containers
  • define the properties we will run the test with, such as host, users, rampup, casino, game rounds, repeat loop; we define these inside the Jenkinsfile, thus allowing us to perform multiple tests without tearing down the JMeter containers
  • start the JMeter Controller and run the tests
  • publish the performance report
  • compare the results with the constraints defined in the SLA file, and pass or fail the test accordingly

The full pipeline looks like this:
tag='5.0.1'
host="${HOST}"
casino="${CASINO}"
agentIpList=''
workspace=''


/* Used for pulling the image. withRegistry sets the registry context, so the pull must be performed
   using only the image name (without the registry prefix). Yet, when running the container, we need
   the fully qualified image name. That is why we keep two variables here. */

tagged_image=docker.image('docker-edict-jmeter:'+tag)
image=docker.image('myregistry:5000/docker-edict-jmeter:'+tag)

/* We will hold the ip's of the JMeter Agent Containers in a list so we can forward it to the JMeter Master when starting the test
The handleList is used for storing the container handles of the JMeter Agents so we can perform shutdown of Agents
when finishing the test, or cleaning up when something goes wrong */

cIpList = []
cHandleList = []

// Use the label to run the pipeline only on docker-labeled nodes. Set the timeout to 240 minutes
timeout(240) {
    node('docker') {

        cleanWs deleteDirs: true, patterns: [[pattern: '*', type: 'INCLUDE']]
        stage('checkout') {
            checkout([$class: 'GitSCM', branches: [[name: "${BRANCH}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitlab-edictci-repo', url: 'git@mygitserver:performance/jmeter-performance-tests.git']]])
            // Read threshold values for testcases from sla.json file
            data = readJSON file: 'sla.json'
        }
        // Change into jmeter subfolder, so we do not mount the entire eoc, but only the performance tests
        dir('jmeter') {
            try {

                docker.withRegistry('https://myregistry:5000') {
                    tagged_image.pull()
                    stage('startAgents') {
                        // Start 3 JMeter Agents and retrieve their IP and the container handle. Mount current folder into the container
                        for (i = 0; i < 3; i++) {
                            agent = image.run('-e SLEEP=1 -e JMETER_MODE=AGENT -v $WORKSPACE:/home/jmeter/tests', '')
                            agent_ip = sh(script: "docker inspect -f {{.NetworkSettings.IPAddress}} ${agent.id}", returnStdout: true).trim()
                            cIpList.add(agent_ip)
                            cHandleList.add(agent)
                        }
                        // Store the formatted list of JMeter Agent Ips in a String
                        agentIpList = cIpList.join(",")
                    }
                  
                    // Load test for measuring KPIs at a defined throughput of 40 games/second
                    stage('performancePlayMoneyGames'){
                        propertiesMap = [
                                'users': 30,
                                'rampup': 30,
                                'gameRounds': 100,
                                'host': host,
                                'wallet': host,
                                'casino': casino,
                                'repeatLoop': 1
                        ]
                        performTest('egb/Tplan_BI_Kafka.jmx',"${STAGE_NAME}",setPlanProperties(propertiesMap))
                    }


                
                    stage('cleanup') {
                        // Handle shutdown of previously started JMeter Agents
                        cleanup(cHandleList)
                        cleanWs deleteDirs: true, patterns: [[pattern: '*', type: 'INCLUDE']]
                    }
                }
            }
            catch (Exception e) {
                // Handle shutdown of previously started JMeter Agents, then rethrow so the build fails
                cleanup(cHandleList)
                cleanWs deleteDirs: true, patterns: [[pattern: '*', type: 'INCLUDE']]
                throw e
            }
        }
    }
}


// Method for cleaning started JMeter Agents
def cleanup(containerHandleList) {
    for (i =0; i < containerHandleList.size(); i++) {
        containerHandleList[i].stop()
    }
}

def performTest(testplan,report,propertiesList) {
    image.inside('-e JMETER_MODE=MASTER -v $WORKSPACE:/home/jmeter/tests') {
        sh "jmeter -n -t /home/jmeter/tests/jmeter/testplans/$testplan -l $WORKSPACE/jmeter/${report}.jtl -e -o $WORKSPACE/jmeter/$report -Jsummariser.interval=5 -R$agentIpList $propertiesList"
    }
    publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: ''+report, reportFiles: 'index.html', reportName: 'HTML Report '+report, reportTitles: ''])
    perfReport constraints: configureCheckList(report),
            graphType: 'PRT', modeEvaluation: true, modePerformancePerTestCase: true, modeThroughput: true, percentiles: '0,50,90,100', persistConstraintLog: true,
            sourceDataFiles:  report+'.jtl'

}

def configureCheckList(report)
{
    constraintList = []
    // Get the constraints from the JSON file by looking up the entry whose name matches the test name (given as a variable)
    readConstraints = data.find { it['name'] == report }?.get("constraints")
    println("constraints determined dynamically are:"+readConstraints)
    readConstraints.absolute.each {
        constraintList.add(absolute(escalationLevel: 'WARNING', meteredValue: 'LINE90', operator: 'NOT_GREATER', relatedPerfReport: report + '.jtl', success: false, testCaseBlock: testCase(it.name), value: it.threshold))

    }
    println("my final constraint list: "+constraintList)
    return constraintList
}
def setPlanProperties(propertiesMap)
{
    // Retrieve properties defined in the properties map for each plan, and create a string of properties
    // to be passed to JMeter
    propertiesList="-G"+propertiesMap.collect { k,v -> "$k=$v" }.join(' -G')
    println("property list"+propertiesList)
    return propertiesList
}
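
For the propertiesMap used in the performancePlayMoneyGames stage above, setPlanProperties produces a property string roughly like this (the host and casino values are illustrative):

-Gusers=30 -Grampup=30 -GgameRounds=100 -Ghost=test.example.com -Gwallet=test.example.com -Gcasino=testcasino -GrepeatLoop=1

performTest then appends this string to the jmeter command line, right after the -R list of agent IPs.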

I guess it is quite clear by now how the JMeter Controller and Agents work inside Docker. Let's take a look at how we make use of the SLAs by defining the so-called constraints, which help Jenkins decide whether a test passes or fails after comparing the results against them.

I wanted to have some sort of structure in the SLA file. I wanted it to be readable, understandable, easy to manage. So I came up with the idea of keeping it in a JSON file, which gives me programmatic access to the structures. Let's dive into this. We'll define constraints for three performance tests:

- performancePlayMoneyGames
- performanceLongRunPlayMoneyGames
- memoryMetaPlayAllGames

It is important to know that the name of the SLA group has to be identical to the name of the JMeter report used in the pipeline. This allows us, for example, to run all three tests within a single pipeline, like this:

stage('performancePlayMoneyGames') {
    propertiesMap = [
            'users': 30,
            'rampup': 30,
            'gameRounds': 100,
            'host': host,
            'wallet': host,
            'casino': casino,
            'repeatLoop': 1
    ]
    performTest('egb/Tplan_BI_Kafka.jmx', "${STAGE_NAME}", setPlanProperties(propertiesMap))
}

stage('performanceLongRunPlayMoneyGames') {
    propertiesMap = [
            'users': 10,
            'rampup': 10,
            'gameRounds': 1000,
            'host': host,
            'wallet': host,
            'casino': casino,
            'repeatLoop': 1
    ]
    performTest('egb/Tplan_BI_Kafka.jmx', "${STAGE_NAME}", setPlanProperties(propertiesMap))
}

And this is how the SLA file can look. Make sure that the test case in your JMeter plan is named exactly like the constraint name, otherwise the plugin will fail with: Performance Plugin: Could’t find a test case specified in the performance constraints! TestCase: “Game/Spin” Report: “performancePlayTest.jtl”

[
    {
      "name": "performancePlayMoneyGames",
      "loadConfiguration": {
        "users": 20,
        "rampup": 20,
        "gamerounds": 100
      },
      "constraints": {
        "absolute": [
          {
            "name": "Game/spin, Game/finish",
            "threshold": 180
          },
          {
            "name": "Game/start",
            "threshold": 350
          },
          {
            "name": "Game/close",
            "threshold": 150
          },
          {
            "name": "Portal/getGames",
            "threshold": 150
          }
        ]
      }
    },
    {
      "name": "performanceLongRunPlayMoneyGames",
      "loadConfiguration": {
        "users": 10,
        "rampup": 10,
        "gamerounds": 1000
      },
      "constraints": {
        "absolute": [
          {
            "name": "Game/spin, Game/finish",
            "threshold": 150
          },
          {
            "name": "Game/start",
            "threshold": 1000
          },
          {
            "name": "Game/close",
            "threshold": 100
          },
          {
            "name": "Portal/getGames",
            "threshold": 300
          }
        ]
      }
    },
    {
       "name": "memoryMetaPlayAllGames",
      "loadConfiguration": {
        "users": 1,
        "rampup": 1,
        "gamerounds": 5
      },
      "constraints": {
        "absolute": [
          {
            "name": "Game/spin, Game/finish",
            "threshold": 300
          },
          {
            "name": "Game/start",
            "threshold": 300
          },
          {
            "name": "Portal/getGames",
            "threshold": 150
          }
        ]
      }
    }

  ]

Let's take the first group of SLAs and see what it means:

  1. JMeter samplers performing Game/spin and Game/finish requests should have a 90% line no greater than 180 ms
  2. Samplers performing “Game/start” should have a 90% line no greater than 350 ms
  3. Samplers performing “Game/close” should have a 90% line no greater than 150 ms
  4. Samplers performing “Portal/getGames” should have a 90% line no greater than 150 ms

{
      "name": "performancePlayMoneyGames",
      "loadConfiguration": {
        "users": 20,
        "rampup": 20,
        "gamerounds": 100
      },
      "constraints": {
        "absolute": [
          {
            "name": "Game/spin, Game/finish",
            "threshold": 180
          },
          {
            "name": "Game/start",
            "threshold": 350
          },
          {
            "name": "Game/close",
            "threshold": 150
          },
          {
            "name": "Portal/getGames",
            "threshold": 150
          }
        ]
      }
}

We read these values in the pipeline by accessing the “constraints” field and adding the constraints above to the constraint list. At the end of the test, the measured results are then compared against the constraints defined in this list.

Here I added an absolute constraint, which compares measured values against fixed thresholds. Basically I said:

If the 90% line of the response times for the defined samplers is greater than the defined threshold, mark the Jenkins job with a WARNING. For example, with a threshold of 350 ms for Game/start, a measured 90% line of 200 ms passes, while 400 ms would raise the warning.

More on the implementation and usage of the Jenkins Performance Plugin (the documentation on constraints could be improved) can be found here:

https://plugins.jenkins.io/performance/

def configureCheckList(report)
{
    constraintList = []
    // Get the constraints from the JSON file by looking up the entry whose name matches the test name (given as a variable)
    readConstraints = data.find { it['name'] == report }?.get("constraints")
    println("constraints determined dynamically are:"+readConstraints)
    readConstraints.absolute.each {
        constraintList.add(absolute(escalationLevel: 'WARNING', meteredValue: 'LINE90', operator: 'NOT_GREATER', relatedPerfReport: report + '.jtl', success: false, testCaseBlock: testCase(it.name), value: it.threshold))

    }
    println("my final constraint list: "+constraintList)
    return constraintList
}

Let's put it to work and see how this looks. This is a sample from a Jenkins run, where the constraints are being read:

constraints determined dynamically are:[absolute:[[name:Game/spin, Game/finish, threshold:180], [name:Game/start, threshold:350], [name:Game/close, threshold:150], [name:Portal/getGames, threshold:150]]]
[Pipeline] echo
my final constraint list: [@absolute(escalationLevel=WARNING,meteredValue=LINE90,operator=NOT_GREATER,relatedPerfReport=performancePlayMoneyGames.jtl,success=false,testCaseBlock=@testCase(<anonymous>=Game/spin, Game/finish),value=180), @absolute(escalationLevel=WARNING,meteredValue=LINE90,operator=NOT_GREATER,relatedPerfReport=performancePlayMoneyGames.jtl,success=false,testCaseBlock=@testCase(<anonymous>=Game/start),value=350), @absolute(escalationLevel=WARNING,meteredValue=LINE90,operator=NOT_GREATER,relatedPerfReport=performancePlayMoneyGames.jtl,success=false,testCaseBlock=@testCase(<anonymous>=Game/close),value=150), @absolute(escalationLevel=WARNING,meteredValue=LINE90,operator=NOT_GREATER,relatedPerfReport=performancePlayMoneyGames.jtl,success=false,testCaseBlock=@testCase(<anonymous>=Portal/getGames),value=150)]

Let's get to the magic and see how the results are interpreted:

----------------------------------------------------------- 
There are no relative constraints to evaluate! 
-------------- 
Evaluating all absolute constraints! 
-------------- 
Absolute constraint successful! - Report: performancePlayMoneyGames.jtl 
The constraint says: 90% Line of Game/spin, Game/finish must not be greater than 180
Measured value for 90% Line: 130.0
Escalation Level: Warning
-------------- 
The constraint says: 90% Line of Game/start must not be greater than 250
Measured value for 90% Line: 200.0
Escalation Level: Warning
-------------- 
The constraint says: 90% Line of Game/close must not be greater than 150
Measured value for 90% Line: 150.0
Escalation Level: Warning
-------------- 
The constraint says: 90% Line of Portal/getGames must not be greater than 500
Measured value for 90% Line: 138.0
Escalation Level: Warning
-------------- 
There were no failing Constraints! The build will be marked as SUCCESS

Since all SLAs were met, and no sampler had a 90% line greater than its constraint, the test was marked as "passed".

Additionally, an HTML report was created, with some nice, useful charts:

[Screenshots: JMeter Jenkins Performance Report]

These are just a few samples of the available reports; try it out for yourself to see more.

This is pretty much it. I hope you enjoyed the read, and that you can use it in your pipelines as well.

Once again, you can find all these artifacts in my repository on GitHub:

https://github.com/alexandru-ersenie/jmeter-docker-jenkins-pipeline

Best,

Alex

Print Friendly, PDF & Email