Continuous integration and delivery help DevOps teams ship higher-quality software, faster. But is all CI/CD created equal? What does a successful CI/CD implementation look like, and how do you know you’re on the right track?
In this four-part series, we talk about modernizing your CI/CD: Challenges, impact, outcomes, and solutions. In part one, we focused on common CI/CD challenges. In part two, we talked about the revenue impacts. Today, we’ll talk about what CI/CD can deliver and how to measure its success.
If these problems hit a little too close to home, stay tuned for part four where we dive deeper into finding the right CI/CD solution for you.
What are some of the benefits of a good CI/CD strategy?
1. Increased speed of innovation and ability to compete in the marketplace
Picture two otherwise identical companies: One implements CI/CD technology and the other doesn’t. Who do you think deploys applications faster? The comparison may seem silly, because of course the company with more automation deploys faster. Yet some organizations remain convinced they don’t need CI/CD, often because they aren’t watching their competition. Organizations that understand the importance of CI/CD are setting the pace of innovation for everyone else.
2. Code in production is making money instead of sitting in a queue waiting to be deployed
Organizations that have implemented CI/CD are making revenue on the features they deploy, not waiting for a manual check to see if the code is up to par. They already know the code is good because they have tests that are automated, and continuous delivery means that code is deployed automatically if it meets certain standards. They’ve removed human error and delays from the process.
3. Greater ability to attract and retain talent
Engineers who can focus on what they’re best at will be happier and more productive, and that has far-reaching impact. Turnover can be expensive and disruptive. A good CI/CD strategy means engineers can work on important projects instead of time-consuming manual tasks. They can also work confidently, knowing that errors are caught automatically rather than right before deployment. This kind of cooperative engineering culture inevitably attracts talent.
4. Higher quality code and operations due to specialization
Dev can focus on dev. Ops can focus on ops. Bad code rarely makes it to production because testing is automated. Developers can focus on the code rather than the production environment, and operations doesn’t have to feel like a gatekeeper or a barrier. Both teams can work to their strengths, and automated handoffs make for seamless processes. This kind of cooperation makes DevOps possible.
What capabilities are required to make this happen?
1. Robust CI/CD
When we use the term “robust,” it’s all about avoiding half-baked or partial solutions. There are plenty of CI/CD solutions out there, but they vary widely in effectiveness. Continuous integration and continuous delivery go hand in hand, so a solution that offers both is ideal. The tool you use should offer all the automation you need, not just some of it. If your CI/CD tool is brittle or prone to failure, it becomes just one more thing to manage. That was precisely why the team at Ticketmaster replaced Jenkins CI and moved to weekly releases, decreasing their pipeline execution time from two hours to only eight minutes to build, test, and publish artifacts.
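To make “robust” concrete, here is a minimal, hypothetical `.gitlab-ci.yml` sketch in which build, test, and publish all run automatically on every push; the job names, image, and commands are illustrative assumptions, not a prescription:

```yaml
# Hypothetical minimal pipeline: build, test, and publish run
# automatically, with no manual gatekeeping between stages.
stages:
  - build
  - test
  - publish

build-job:
  stage: build
  image: node:20          # illustrative; use your project's runtime
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/             # pass the build output to later stages

test-job:
  stage: test
  image: node:20
  script:
    - npm test            # a failing test stops the pipeline before release

publish-job:
  stage: publish
  script:
    - echo "Publishing build artifacts..."   # e.g. push to a package registry
  rules:
    - if: $CI_COMMIT_BRANCH == "main"        # only publish from the main branch
```

The point is coverage: every stage from commit to artifact is automated, so a broken change is caught by the pipeline rather than by a human right before deployment.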
2. Containers and Kubernetes
Containers have made a huge impact on the way companies build and deploy code. While it was once difficult to develop applications with a microservices architecture, over the past five years it has become considerably easier with container orchestration tools like Kubernetes, comprehensive CI/CD tools that automate testing and deployments, and APIs that update automatically. Breaking up services so they can run independently reduces dependencies and creates better workflows.
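As an illustration of how orchestration lets services run independently, here is a minimal, hypothetical Kubernetes Deployment manifest for a single microservice; the service name, labels, and image are assumptions for the sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service            # hypothetical microservice name
spec:
  replicas: 3                       # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.4.2   # illustrative image tag
          ports:
            - containerPort: 8080
```

Because each service ships as its own image, a pipeline can build, test, and roll out this one service without touching the rest of the system.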
3. Functionality for the entire DevOps lifecycle
Visibility is a huge asset when improving DevOps workflows. Some teams use several tools to handle different facets of the SDLC, which creates integration, maintenance, and visibility issues, and is simply expensive. A complex toolchain can also weaken security: In a Forrester survey of IT professionals, 45% said they had difficulty ensuring security across the toolchain.
How would you measure success?
1. Cycle time
Cycle time is how long it takes a DevOps team to deliver a functional application, from the moment work begins to the moment it provides value to an end user. See how the team at Axway was able to achieve a 26x faster DevOps cycle with GitLab.
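Cycle time can be computed directly from timestamps most teams already track (when work began, when the change reached production). A minimal sketch in Python, with hypothetical dates:

```python
from datetime import datetime

def cycle_time_days(work_started: str, deployed: str) -> float:
    """Days elapsed from the start of work to running in production."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(deployed, fmt) - datetime.strptime(work_started, fmt)
    return delta.total_seconds() / 86400  # seconds per day

# Hypothetical issue: work began March 1, deployed exactly a week later.
print(cycle_time_days("2024-03-01 09:00", "2024-03-08 09:00"))  # 7.0
```

Tracked per issue or merge request and averaged over time, this one number makes it easy to see whether process changes are actually speeding up delivery.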
2. Time to value
Once code is written, how long before it’s released? The delay between code being written and code running in production is its time to value, and it is a bottleneck for many organizations. Continuous delivery, along with examining trends in the QA process, can help overcome this barrier to quick deployments.
3. Uptime, error rate, infrastructure costs
Uptime is one of the biggest priorities for the ops team, and with a good CI/CD strategy that automates different processes, they should be able to focus more on that goal. Likewise, error rates and infrastructure costs can be easily measured once CI/CD is put in place. Operations goals are a key indicator of process success.
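Uptime and error rate are simple ratios once the raw counts are collected. A minimal sketch, with all numbers hypothetical:

```python
def uptime_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Percentage of the period the service was available."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def error_rate_pct(total_requests: int, failed_requests: int) -> float:
    """Percentage of requests that returned an error."""
    return 100.0 * failed_requests / total_requests

# A 30-day month has 43,200 minutes; about 43 minutes of downtime
# is roughly "three nines" of availability.
print(round(uptime_pct(43_200, 43), 3))
print(error_rate_pct(1_000_000, 500))  # 0.05
```

Tracking these before and after a CI/CD rollout gives the ops team a concrete baseline for whether the new process is paying off.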
4. Team retention rate
Happy developers stick around, so looking at retention rates is a reliable way to gauge how well new processes and applications are working for the team. It might be tough for developers to speak up if they don’t like how things are going, but looking at retention rates can be one step in identifying potential problems.
The benefits of a good CI/CD strategy are felt throughout an organization: From HR to operations, teams work better and achieve goals. In such a competitive development landscape, having the right CI/CD in place gives any company an edge.
So what makes “good” CI/CD? We invite you to compare GitLab CI/CD to other CI tools and see why we were rated #1 in the Forrester CI Wave™.