Dec 17 4 min read

How Well Are We Transitioning to Continuous Delivery Best Practices?


Natalie Breuer

The authors of Accelerate surveyed over 23,000 individuals in 2,000 distinct software companies to uncover the methodologies that set top-performing organizations apart. Their research suggests that “speed and stability are outcomes that enable each other” and that any software organization can measure and improve these outcomes.

The Continuous Delivery (CD) best practices they recommend, such as keeping batch size small, automating repetitive tasks, and investing in quick issue detection, all promote speed and quality while instilling a culture of continuous improvement on the team.

While most of the industry is actively adopting CD, few have set up any way to measure their progress. Concrete metrics, such as those found within Velocity, are a prerequisite to ensuring success in this transition.

In this guide, we’ve outlined how you can use Velocity to answer:

  • How effectively is my team adopting CD practices?
  • What’s hindering faster adoption?

Measure Improvement to Shipping Speed and Throughput

There are two success metrics that can represent how “continuously” your organization is shipping: speed (Cycle Time) and throughput (Pull Requests Merged or Deploy Volume).

Start by looking at the Analytics report to see how well you’ve been improving on one of those metrics. We recommend looking back at least 90 days.

cycle time graph

Cycle Time, or the time from an engineer’s first commit to the merge to production, should be trending down. Good coding habits, such as working in small batches, keep changesets moving through the process with little friction, while CI/CD tooling automates much of the late-stage engineering and QA work that frequently blocks merging to production.
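As a rough illustration of the definition above (using hypothetical timestamps, not Velocity’s actual calculation), Cycle Time for a changeset is simply the elapsed time between those two events:

```python
from datetime import datetime

def cycle_time_days(first_commit: str, merged_to_prod: str) -> float:
    """Days elapsed between an engineer's first commit and the merge to production."""
    start = datetime.fromisoformat(first_commit)
    end = datetime.fromisoformat(merged_to_prod)
    return (end - start).total_seconds() / 86400

# Hypothetical changesets: (first commit, merged to production)
changesets = [
    ("2024-12-01T09:00", "2024-12-03T15:00"),  # 2.25 days
    ("2024-12-02T10:00", "2024-12-02T18:00"),  # 8 hours
]
average = sum(cycle_time_days(s, e) for s, e in changesets) / len(changesets)
print(f"Average Cycle Time: {average:.2f} days")  # the number you want trending down
```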

Here’s how you can read this trend:

  • Decreasing means you’re moving in the right direction. You’ve already adopted some Continuous Delivery best practices that have unblocked engineers and enabled them to move a single unit of work through the pipeline as quickly as possible.
  • Flat is what you’d expect when you’re not in a state of transition. Typically, teams hit a local maximum in process efficiency once they’ve optimized as much as they can. If you’re in the middle of transitioning to CD, however, a flat Cycle Time is a bad sign. It means that even if you’ve changed some of the tooling or the messaging around how to ship software, this has not had the intended effect.
  • Spiky indicates inconsistencies, and that your process is not delivering predictable results. You’ll want to take a closer look at days or weeks with particularly high Cycle Times to diagnose why work is getting stuck.
  • Increasing is not a state you want to be in for a prolonged period of time, but it can be normal during change management, as your team learns new practices and transitions to new tooling.
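The four readings above can be turned into a crude classifier. This is only a sketch with made-up thresholds (a 10% drift band and a spikiness cutoff of half the mean), not how Velocity labels trends:

```python
from statistics import mean, pstdev

def classify_trend(weekly_cycle_times: list[float]) -> str:
    """Roughly label a weekly Cycle Time series as one of the four trends."""
    half = len(weekly_cycle_times) // 2
    early = mean(weekly_cycle_times[:half])
    late = mean(weekly_cycle_times[half:])
    overall = mean(weekly_cycle_times)
    # "Spiky": swings are large relative to the average, regardless of direction.
    if pstdev(weekly_cycle_times) > 0.5 * overall:
        return "spiky"
    if late < 0.9 * early:
        return "decreasing"
    if late > 1.1 * early:
        return "increasing"
    return "flat"

print(classify_trend([5.0, 4.6, 4.1, 3.2, 2.8, 2.5]))  # prints "decreasing"
```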

Alternatively, you can use Pull Requests Merged or Deploys as your yardstick. In this case, you can invert how you interpret results. Increasing throughput is desired, while flat and decreasing trends are a red flag that your team’s new practices are not yet yielding better outcomes.

Dive Into Key Health Metrics Team by Team

After understanding your overall engineering speed, you’ll want to investigate health metrics to find specific areas for improvement.

Velocity metrics can serve as strong proxies for the following Continuous Delivery practices:

  • Good Coding Hygiene, which means working in small batches (PR size) and opening Pull Requests early (Time to Open).
  • High Review Effectiveness, which means balancing review thoroughness (Review Coverage) and speed (Review Speed), while ensuring that comments lead to action (Review Influence).
  • High Engineering Capacity, which means developers have enough time for engineering work (Weekly Coding Days).

In Velocity’s Compare report, you can look at these metrics across teams or for individuals to identify coaching opportunities or process improvements.

velocity compare report

Click on a team to drill down and see performance on an individual basis:

compare report

Finally, get more context on a team or individual by seeing how they’ve performed on a specific metric, historically. The Analytics report lets you pick how far back you look and then drill down into any units of work that are dragging the average up or down.

pr size graph

Find the Biggest Opportunities for Improvement in Your Software Delivery Process

Now that you have all the context for how your team is working, create a mental model of your software delivery pipeline to see where a unit of work is most likely to get stuck. This will help you prioritize where you should start making optimizations.

We recommend breaking your process into three portions:

  • Time to Open, or how long development takes.
  • Review Speed, or how long work sits before getting picked up for review.
  • Time to Merge, or how long the entire code review process takes.
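Given the four timestamps in a pull request’s life (all hypothetical here), the three portions above can be computed side by side, which makes the slowest stage obvious:

```python
from datetime import datetime

def stage_durations(first_commit: str, pr_opened: str,
                    first_review: str, merged: str) -> dict[str, float]:
    """Split one pull request's lifecycle into the three portions (in hours)."""
    t = datetime.fromisoformat
    return {
        "time_to_open": (t(pr_opened) - t(first_commit)).total_seconds() / 3600,
        "review_speed": (t(first_review) - t(pr_opened)).total_seconds() / 3600,
        "time_to_merge": (t(merged) - t(pr_opened)).total_seconds() / 3600,
    }

stages = stage_durations(
    "2024-12-01T09:00", "2024-12-02T09:00",  # 24h of development
    "2024-12-02T15:00", "2024-12-04T09:00",  # 6h to first review, 48h open-to-merge
)
print(max(stages, key=stages.get))  # the stage most worth a closer look
```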

You can look at these three metrics side by side by selecting them in the Analytics report and viewing them as clustered bar graphs by week or by month.

velocity analytics report

With this data, you’ll be able to determine which area of your process is worth a closer look. If multiple stages need attention, we recommend starting with the one that comes earliest in your development pipeline, as improvements at early stages can have an impact downstream.

To dig into your own data and start measuring your progress towards Continuous Delivery, sign up for a free trial of our engineering analytics product, Velocity.


Actionable metrics for engineering leaders.

Try Velocity Free

Start your 15-day free trial today.

See what Velocity reveals about your productivity and discover
opportunities to improve your processes, people and code.