Mimpi Development

Our CICD Pipeline Explained

Posted on 22 Sep 2020 in Systems

For the second article in the series, I thought I would discuss parts of our CICD pipeline. My predecessor (Guy Gershoni) had already set up Gitblit, Jenkins, and Minio in Docker stacks, and had created one or two working examples, which I extended for production use. That work is what I will be discussing here.

Probably the best place to start is with our git platform. Guy and our boss had decided to go with Gitblit for our git repository. One of Gitblit's unique features is its approach to merge requests, or pull requests: its version is called a ticket, which is essentially a branch with a special tag. For a deeper explanation of what Gitblit tickets are, please read the following: GitBlit Tickets. The Docker image we used in our git stack can be found at the following URL on Dockerhub.
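As a rough illustration of how tickets are driven from the command line, here is a sketch based on the Gitblit Tickets documentation rather than on our own repos (the ticket number is made up):

```shell
# Gitblit tickets are manipulated by pushing to "magic" refs.
# Open a new ticket from the current branch:
git push origin HEAD:refs/for/new
# Add a new patchset to an existing ticket (ticket 42 is a placeholder):
git push origin HEAD:refs/for/42
```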

Our Jenkins implementation is a custom build of the Jenkins Docker container, which gives Jenkins the ability to run Docker containers. Following the instructions found here will give you the beginnings of a Jenkins Docker setup that allows you to execute jobs inside Docker containers in your pipeline.
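For readers who want the general shape of that setup without following the link, a minimal sketch is to mount the host's Docker socket into the Jenkins container. The image and container names below are placeholders, not our actual configuration:

```shell
# "myjenkins" stands for a custom image built on jenkins/jenkins with the
# docker CLI installed on top. Mounting the host's Docker socket lets
# pipeline steps start sibling containers on the host's Docker daemon.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myjenkins:latest
```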

We currently configure our Jenkins projects to poll the related Gitblit repo every minute for new commits. Since our git repo and Jenkins system both reside internally on our network, and we don't have many Jenkins projects at this time, polling git every minute rather than using a webhook is not much of an issue. At the time of writing I believe Gitblit can handle webhooks, and that is something I will be investigating in the future.
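In a declarative Jenkinsfile, the one-minute poll looks roughly like this (a minimal sketch, not our actual pipeline definition):

```groovy
pipeline {
    agent any
    triggers {
        // standard cron syntax: check the Gitblit repo for new commits
        // every minute
        pollSCM('* * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
}
```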

Our Jenkins projects are configured to watch a particular branch on the git repo. When polling notices a new commit on the nominated branch, Jenkins processes our pipeline script. For a Java application, the pipeline script executes Ant on the project to compile the code into jars, and then uses fpm to build our RPM file for deployment. The Ant build and the RPM creation each run in their own Docker container, so we don't have dependency issues between packages. This is the reason for the custom Jenkins implementation.
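A simplified sketch of such a pipeline might look like the following; the project name, image tags, and paths are invented for illustration, not taken from our repos:

```groovy
pipeline {
    agent none
    stages {
        stage('Compile jars') {
            // run the Ant build in its own container so its toolchain
            // doesn't clash with the packaging stage
            agent { docker { image 'openjdk:8-jdk' } }
            steps {
                sh 'ant dist'
            }
        }
        stage('Package RPM') {
            // fpm is distributed as a Ruby gem, hence the Ruby image
            agent { docker { image 'ruby:2.7' } }
            steps {
                sh 'gem install --no-document fpm'
                sh "fpm -s dir -t rpm -n myapp -v 1.0.${env.BUILD_NUMBER} dist/=/opt/myapp"
            }
        }
    }
}
```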

Once Jenkins builds the RPM, it connects to our Minio server and copies the RPM across. One of the Minio features we use is its webhook system. Each of our buckets (one for testing and one for production, for example) has a webhook attached, so that when a new file is copied into the bucket, the RPM repository index is refreshed. On the other side of Minio, a webserver hosts the directory out as our RPM repository, from which our client systems grab their RPMs.
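Wiring up such a bucket notification with the mc client looks roughly like this. The alias, bucket, webhook id, and endpoint are all hypothetical, and the exact flags vary between Minio releases:

```shell
# register a webhook target called "repoindex" on the server
mc admin config set myminio notify_webhook:repoindex \
    endpoint="http://indexer.internal/refresh"
mc admin service restart myminio
# fire that webhook whenever an object is PUT into the production bucket
mc event add myminio/production arn:minio:sqs::repoindex:webhook --event put
```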

Source code for the Jenkins Docker setup and a pipeline example: github

Important lessons I learned:

  1. Clean out the RPM repo regularly. We learned this early on: when we started, we gave our Infrastructure virtual machine only about 50 GB of hard drive space, and with multiple daily commits from bug fixes and small feature requests, we quickly burned through it.
  2. Similar to point 1, clean out the Jenkins artifacts regularly. We chose to archive both the built jar files and the RPMs as artifacts, so having the RPMs in two places on the system meant double the fun in hard drive space.
  3. When using the Jenkins pipeline, the poll checks not only the main repository the project is attached to, but also the two required repositories defined in the Jenkins pipeline file. This means that if we push an update to either of those two repositories, the project is automatically rebuilt.
  4. To help our support team, we have recently added the ability for Jenkins and Minio to leave messages in a private Slack channel on the success or failure of operations, giving us better process visibility.
  5. We found that if two projects feed into the same Minio bucket, we can end up with a race condition: some files, although present in the directory, were not appearing in the RPM repo catalogue. We are currently working on a remedy for this.
  6. I have also recently changed our Jenkins pipeline definitions across our projects to tag each commit with its successful build number. This will allow us to better diagnose client issues, as we will be able to check out a tagged commit to confirm them. Tags take the form “Production-<build num>” or “Testing-<build num>”.
  7. When we migrated to a rebuilt Infrastructure VM, it was at this point that I learnt all about webhooks in Minio. The new Minio stack had its buckets created manually, and we didn't transfer any of the old RPM files. Because the buckets were created manually, without the script Guy had written, they were missing the all-important webhook. Up until this point I had thought the RPM repo updating was just built into Minio, as I'd never actually looked into Minio and what it does.
  8. Another problem that plagued our setup: whenever we had to restart the Infrastructure virtual machine, we then had to manually remove the CICD stack and deploy it again to restore the connection it needs to execute steps in their own Docker containers. The fix turned out to be creating a docker group inside the Jenkins Docker container with the same group id as on the host system, and making sure the jenkins user was a member of that group. I didn't have time to track this problem down myself, but one of our other sysadmins found the solution.
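Lessons 1 and 2 can be partly mitigated from the Jenkinsfile itself. The buildDiscarder option below is a generic sketch with arbitrary retention numbers, not our production settings:

```groovy
pipeline {
    agent any
    options {
        // keep logs for the last 10 builds, but artifacts (jars and RPMs)
        // for only the last 5, so duplicated files don't eat the disk
        buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
}
```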
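The fix from lesson 8 boils down to something like the following, run on the host; the container name "jenkins" is a placeholder for whatever your stack names it:

```shell
# find the gid of the docker group on the host
DOCKER_GID=$(getent group docker | cut -d: -f3)
# recreate the group with the same gid inside the container and add the
# jenkins user to it, so Jenkins can talk to the mounted Docker socket
docker exec -u root jenkins sh -c \
  "groupadd -g ${DOCKER_GID} docker 2>/dev/null || true; usermod -aG docker jenkins"
```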
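The tagging scheme from lesson 6 can be demonstrated in a throwaway repo. In the real pipeline, BUILD_NUMBER comes from Jenkins and the tag is then pushed back to the Gitblit repo:

```shell
cd "$(mktemp -d)"                   # throwaway repo for the demo
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "successful build"
BUILD_NUMBER=42                     # supplied by Jenkins in the real pipeline
git tag "Production-${BUILD_NUMBER}"
git tag -l                          # prints Production-42
```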
