How to Set Up a Local Pipeline in Jenkins and Kubernetes
Developing new features and deploying them at an enterprise company most likely involves a dedicated server for building and deploying applications. With containerization and the Kubernetes support now built into Docker Desktop, I think everyone can have their own build server running locally and publish their applications without a dedicated server.
The core concepts are nothing new. We were all capable of this before, but it was a lot more painful and time consuming to set up and maintain. The benefits of the pipeline approach to building and maintaining applications are too significant to ignore. The added benefit of implementing the solution locally is that it offers a clear and simple path for when you do need to upgrade to a dedicated server, not to mention experience that can benefit you at your everyday job.
First, you will need to follow the steps to configure Kubernetes and Jenkins. The final code is available at JobRM/websiteui:jobrm@1.0.
Outline
After setting up the base for our pipeline, we need to create a rough draft of what we want the pipeline to do.
- Get latest version of code
- Code Formatting checks
- Build
- Run Unit Tests
- Run Code Coverage
- Deploy Local
- Run Integration Tests
- Deploy Production
- Run Post Deployment Tests
This pipeline is pretty simple and gives you a straightforward local deployment process for your application.
Breaking it down
Get latest version of code: In most cases we would pull from a git repository, but since we are running locally, we are limited to how we mount the root folder.
Code Formatting checks: Depending on your language and tool set, you may have several different tools that enforce standard code formatting and styles and that run quickly; examples include eslint/jshint, tslint, and ReSharper. Compilation is usually a longer process than formatting checks, so you will want to front-load any quick and simple tasks that can fail the build early. Run any kind of cleanup and formatting checking before compilation to make sure the formatted code still compiles.
Build: Do any build calls here. This could include creating docker images if used for future deployments.
Run Unit Tests: Run any unit tests you have.
Run Code Coverage: This step depends on the previous one. If configured to run, it can fail the build when coverage drops below a set threshold.
Deploy Local: Deploy to the local cluster to verify that the application can actually be deployed. This is where you create charts for helm deployments, do Docker builds, and publish to an image repository.
Run Integration Tests: Any tests that require multiple applications should run before the production deployment. Because they are slower, delaying them until the last possible moment keeps the earlier stages providing rapid feedback. A good set of tests should give us confidence that, once deployed, our application will continue serving customers.
Deploy Production: Here we want to complete any steps that are required to deploy the application to production. This could include a manual approval step and any messaging about completion, such as a Slack message (a rough sketch follows after this breakdown).
Run Post Deployment Tests: Here we want to run any smoke tests, happy path tests, or regression tests that exist.
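For the Deploy Production step above, a minimal sketch in scripted pipeline syntax might look like the following. The channel name and message are placeholders, and slackSend assumes the Slack Notification plugin is installed.
stage('Deploy Production') {
    // Pause until a person approves the release
    input message: 'Deploy to production?'
    // Deployment commands for your production environment would go here
    // slackSend assumes the Slack Notification plugin; channel and message are placeholders
    slackSend channel: '#deployments', message: 'JobRM deployed to production'
}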
Implementation
Since I am using Jenkins as the CI/CD server, I translated the steps above into a scripted pipeline. To start, we need to define our outline.
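As a rough sketch (not the final Jenkinsfile), the outline translates into a podTemplate with one stage per step; the label is arbitrary and the container list gets filled in later in this post.
podTemplate(label: 'jobrm-build', containers: [
    // containerTemplates are added in the sections below
]) {
    node('jobrm-build') {
        stage('Get latest version of code') { /* mount or check out the code */ }
        stage('Code Formatting checks') { /* lint */ }
        stage('Build') { /* compile and build images */ }
        stage('Run Unit Tests') { /* npm run test-ci */ }
        stage('Run Code Coverage') { /* cobertura */ }
        stage('Deploy Local') { /* docker build + helm upgrade */ }
        stage('Run Integration Tests') { /* e2e tests */ }
        stage('Deploy Production') { /* manual approval, deploy, notify */ }
        stage('Run Post Deployment Tests') { /* smoke tests */ }
    }
}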
Sometimes when I tried to run a build, the slaves would start, but the jenkins-slave pods did not seem to be able to communicate back to the master. Unfortunately, I could not get to the bottom of this, so I simply deleted Jenkins with helm delete jenkins --purge and reinstalled it to start from scratch. That seemed to clear up the error and everything started working again. Many times after my initial install I found myself deleting and reinstalling Jenkins to get things working properly, especially after the initial configuration changes.
If you are using Windows, you might run into an error saying that the command you are trying to run is not found.
'ng' is not recognized as an internal or external command,
operable program or batch file.
This can happen because the .bin folder was never created while npm install was running. One reason may be that the terminal running npm install was not running as administrator. Any other errors in the install process also seem to break generation of the .bin folder. To fix this, delete the node_modules directory and the package-lock.json file and reinstall from a terminal running as administrator.
Picking a Container
Picking the right container to run the npm and ng commands proved to be difficult. My ideal containers are usually Alpine Linux images that have the tools necessary to execute the command I need. This does mean that sometimes I use very specific containers that only have one tool installed on them. I also tend to use containers that are already defined and maintained either by the official tool creator or by active open-source contributors. I encourage everyone to read the Dockerfile of any container they plan to use and to understand what is in it and how complex it is.
My requirements started with having angular-cli pre-installed on the image. I searched and tried several and settled on alexsuch/angular-cli. That worked well enough for the lint and build steps, but failed on the unit test step, because karma requires a browser to execute the tests and none was installed on the docker image. Using the 6.1.2-chromium tag would have solved that problem, but that image has an outdated version of node and npm, so it cannot be used with the latest Angular version.
Later on, I realized I did not need angular-cli to be present on the machine and could just use the ng command from node_modules. The requirements became an image with Chrome pre-installed, preferably an Alpine one. After further searching, I found Zenika/alpine-chrome, an Alpine image with Chrome installed that is cleaned up as a final step to decrease the size of the container. But using the image produced a failure of a different kind.
sh: 1: cannot create /home/jenkins/workspace/JobRelationshipManager@tmp/durable-e64609b5/jenkins-log.txt: Permission denied
touch: cannot touch '/home/jenkins/workspace/JobRelationshipManager@tmp/durable-e64609b5/jenkins-log.txt': Permission denied
mv: cannot stat '/home/jenkins/workspace/JobRelationshipManager@tmp/durable-e64609b5/jenkins-result.txt.tmp': No such file or directory
...
process apparently never started in /home/jenkins/workspace/JobRelationshipManager@tmp/durable-e64609b5
After much digging and failing, I noticed that the image they built creates a chrome user and sets the image to run as that user. According to alenkomljen:
Important: Jenkins JNLP slave agent is running with Jenkins user. The Jenkins user UID/GID is 10000, which means that workspace owner is the same user. If you are using root user in other containers to do some work you will not have any problems. But, in above example, I used the official gradle container image which is running with the gradle user. The issue is that this user has UID/GID of 1000. This means that gradle commands will probably fail because of permission issues. To fix it you would need to update gradle container to use 10000 as UID/GID for the gradle user or to use custom JNLP slave agent image.
In this case the UID/GID will not be 10000. So I forked and made my own image and removed the user from the Dockerfile completely. In hindsight, I should have just changed the user to have the same group id to see if that worked. I might do that in the future along with removing dependence on zenika/alpine-node.
Once I picked the container, all that was left was to add it to the Jenkinsfile.
containerTemplate(name: 'chrome', image: 'garunski/alpine-chrome:latest', command: 'cat', ttyEnabled: true)
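In context, this containerTemplate sits in the podTemplate's container list next to the docker and helm containers used later in this post. The docker and helm image names below are only examples; use whichever images you prefer.
podTemplate(label: 'jobrm-build', containers: [
    containerTemplate(name: 'chrome', image: 'garunski/alpine-chrome:latest', command: 'cat', ttyEnabled: true),
    // example images only; substitute the docker and helm images you actually use
    containerTemplate(name: 'docker', image: 'docker:latest', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
]) {
...
}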
Running Unit Tests
Running unit tests locally is pretty simple, npm run test
. But that does not work in a container. To get karma to play well with the container Chrome installation, I needed to modify karma.conf with appropriate Headless Chrome settings.
...
browsers: ['Chrome', 'ChromeHeadless'],
singleRun: false,
customLaunchers: {
ChromeHeadless: {
base: 'Chrome',
flags: [
'--headless',
'--disable-gpu',
'--remote-debugging-port=9222',
'--no-sandbox'
]
}
}
...
I also added a new script to npm scripts.
"test-ci": "ng test --no-progress --watch=false --browsers=ChromeHeadless --code-coverage",
The various flags make it easier to run the command on a CI system. --no-progress produces less verbose output on the command line. --watch=false is needed so the process stops instead of continuing to watch for changes; without disabling the watch flag, the pipeline will wait indefinitely. --browsers selects the ChromeHeadless launcher set up above, which has the flags karma needs to run. Code coverage is also enabled to gather metrics for the next stage of the pipeline.
To get the unit tests to show up in Jenkins, I needed to add the karma-junit-reporter package, add junit to the reporters in karma.conf.js, and set some options as follows.
...
junitReporter: {
outputDir: 'test-results',
outputFile: 'unit-test-results.xml'
},
...
After adding junit 'test-results/**/*.xml'
as a step in the Run Unit Tests
stage of the Jenkinsfile, I was able to see my unit tests in the Jenkins UI.
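Put together, the Run Unit Tests stage is roughly the following sketch: run the test-ci script inside the chrome container, then publish the JUnit XML.
stage('Run Unit Tests') {
    container('chrome') {
        // run the npm script defined above
        sh 'npm run test-ci'
        // publish the XML written by karma-junit-reporter
        junit 'test-results/**/*.xml'
    }
}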
Code Coverage and Cobertura
Next I wanted to enable and start gathering metrics about code coverage. The first step was to add the right reporter to karma so that the Cobertura plugin can pick up the files and run its tests and create graphs. I am using the karma-coverage-istanbul-reporter
which needs the following configuration:
...
coverageIstanbulReporter: {
dir: require('path').join(__dirname, 'coverage'),
reports: ['html', 'lcovonly', 'cobertura', 'text-summary'],
'report-config': {
cobertura: {
file: 'cobertura.xml'
}
},
fixWebpackSourcePaths: true
},
...
reporters:
config.buildWebpack &&
config.buildWebpack.options &&
config.buildWebpack.options.codeCoverage
? ['progress', 'junit', 'coverage-istanbul']
: ['progress', 'kjhtml'],
...
The junit and coverage-istanbul reporters are only enabled in the code-coverage case, because I only need them in my CI build. Currently the coverage reports only show up in the old Jenkins UI and not in the Blue Ocean UI.
Once everything is set up, the main page of the old Jenkins UI shows two graphs with trends. They are helpful for seeing whether I am keeping up with unit testing.
The process to get the Cobertura parameters for the scripted pipeline syntax is quite simple. In the Jenkins UI, go to the project and select Pipeline Syntax. From the Sample Step dropdown, select Cobertura, enter **/cobertura.xml for the Cobertura xml report pattern, and select Generate Pipeline Script.
This is also how you can figure out how to run other plugins that have scripts within the pipeline.
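For Cobertura specifically, the generated step, dropped into the Run Code Coverage stage, comes out roughly like the minimal form below; the snippet generator also emits a number of optional threshold parameters that I have left out.
stage('Run Code Coverage') {
    // '**/cobertura.xml' matches the file written by the coverage-istanbul reporter
    cobertura coberturaReportFile: '**/cobertura.xml'
}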
Deploy to Local Kubernetes
Build the Image
In this step, I tried to figure out the simplest container I could make that is still production-ready.
...
container('docker') {
    // create a minimal Dockerfile on the fly that serves the built app from nginx
    sh "echo \$'FROM nginx:stable-alpine \nCOPY ./dist /usr/share/nginx/html \n' > Dockerfile"
    // build the image into the local Docker repository
    sh 'docker build -t jobrm:latest .'
}
...
This script creates a Dockerfile in the pipeline and then uses the newly created file to build the image with whatever tags you want to pass to it. I chose not to create a separate Dockerfile because I did not want to maintain it. The container image will be built and available in my local Docker repository. In the case of Docker Desktop, this is the same repository you can inspect with the docker image list command from the terminal. If you are implementing this on minikube, the container images live in the local repository on minikube's virtual machine.
The docker container lets me run docker build commands against the docker.sock that has been mounted onto the agent. There are two ways to add docker.sock to your containers. One is to add volumes to the podTemplate.
podTemplate(
...
volumes: [
hostPathVolume(
mountPath: '/var/run/docker.sock',
hostPath: '/var/run/docker.sock')
]) {
...
The other is to specify the volumes in the Jenkins Helm chart values when deploying Jenkins onto the Kubernetes cluster, so they are included when the slave pod spins up.
Agent:
volumes:
- type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
Deploy Container
I considered having a very long kubectl configuration file, but since I already had Helm experience, I decided to add a small helm chart. The chart is pretty much the template that is generated for you when you run helm create {new name}
with a few cleanups.
container('helm') {
sh 'helm upgrade --install --force jobrm-static ./charts/jobrm-static --set image.repository=jobrm --set image.tag=latest --set service.type=NodePort --set service.nodePort=31001'
}
I am using --install --force to avoid checking whether the chart has already been deployed to the cluster; any issues get cleared out by deleting and recreating the release. For a production scenario this may not be an ideal setup, since forcing the deployment can hide issues in the helm deployment process. A few strings should also be extracted into variables, such as the repository name, tag, and port.
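As a rough sketch of that cleanup (the variable names are just illustrative), the repeated strings could be pulled out like this:
// illustrative only: values defined once and reused in the helm command
def imageRepository = 'jobrm'
def imageTag = 'latest'
def nodePort = 31001

def helmCmd = "helm upgrade --install --force jobrm-static ./charts/jobrm-static " +
    "--set image.repository=${imageRepository} --set image.tag=${imageTag} " +
    "--set service.type=NodePort --set service.nodePort=${nodePort}"

container('helm') {
    sh helmCmd
}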
Run Integration Tests
This deserved its own post, Angular E2E Testing with Protractor.
Almost Completed Pipeline
I have left the last two steps blank, since I am not currently deploying the application anywhere. The final Post Deployment Tests can be run easily by changing the --base-url in the npm scripts to point at the production URL.
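If I were to fill that stage in, it might look something like the sketch below; e2e-prod is a hypothetical npm script that runs the e2e suite with --base-url pointed at the production URL.
stage('Run Post Deployment Tests') {
    container('chrome') {
        // e2e-prod is a hypothetical script that passes --base-url=<production URL>
        sh 'npm run e2e-prod'
    }
}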
Conclusion
The final code is available at JobRM/websiteui:jobrm@1.0. I plan on expanding and improving the project and giving more examples of implementations of the back end and front end.
I have to apologize for the git history on the project. I could have squashed all the commits and created one commit with all the changes, but I think it's important to show that all development is an iterative process. There is no single way of doing things, and finding answers can take hundreds of iterations of very small steps. Some of these steps work and some do not, and having small commits lets you go through your history, revert, and see what changed and how. It also shows how long it can really take to build a pipeline.