What are DORA Metrics and Why Do They Matter?

By: Code Climate
July 26, 2022 (Updated: February 26, 2024)

Born from frustration at the silos between development and operations teams, the DevOps philosophy encourages trust, collaboration, and the creation of multidisciplinary teams. As DevOps rose in popularity, DevOps Research and Assessment (DORA) set out to better understand the practices, processes, and capabilities that enable teams to achieve high velocity and performance in software delivery. The startup identified four key metrics, the “DORA Metrics,” that engineering teams can use to measure their performance in four critical areas.

These metrics empower engineering leaders to benchmark their teams against the rest of the industry, identify opportunities to improve, and make changes to address them.

What is DORA?

DevOps Research and Assessment (DORA) is a startup founded by Gene Kim and Jez Humble, with Dr. Nicole Forsgren at the helm. Kim and Humble are best known for books such as The DevOps Handbook; Forsgren joined the pair to co-author Accelerate in 2018.

The company provided assessments and reports on organizations’ DevOps capabilities, aiming to understand what makes a team successful at delivering high-quality software quickly. The startup was acquired by Google in 2018 and continues to run the largest research program of its kind. Each year, DORA surveys tens of thousands of professionals, gathering data on the key drivers of engineering delivery and performance. Its annual reports include key benchmarks, industry trends, and learnings that can help teams improve.

What are DORA metrics?

DORA metrics are a set of four measurements that DORA identified as most strongly correlated with success; DevOps teams can use them to gauge their performance. The four metrics are Deployment Frequency, Mean Lead Time for Changes, Mean Time to Recovery, and Change Failure Rate. They were identified by analyzing survey responses from over 31,000 professionals worldwide over a period of six years.

The team at DORA also identified performance benchmarks for each metric, outlining characteristics of Elite, High-Performing, Medium, and Low-Performing teams. 

Deployment Frequency

Deployment Frequency (DF) measures the frequency at which code is successfully deployed to a production environment. It is a measure of a team’s average throughput over a period of time, and can be used to benchmark how often an engineering team is shipping value to customers.
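
As a concrete illustration, here’s a minimal Python sketch of one way to compute Deployment Frequency, assuming you can export a list of timestamps for successful production deploys. The `deploy_times` input and the 30-day window are illustrative assumptions, not a standard DORA calculation:

```python
from datetime import datetime, timedelta

def deployment_frequency(deploy_times: list[datetime], window_days: int = 30) -> float:
    """Average successful production deploys per day over a trailing window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [t for t in deploy_times if t >= cutoff]  # deploys inside the window
    return len(recent) / window_days
```

A team averaging 1.0 or more by this measure is deploying at least daily; teams could just as reasonably report a count per week or month, which is one reason standardizing the calculation matters.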

Engineering teams generally strive to deploy as quickly and frequently as possible, getting new features into the hands of users to improve customer retention and stay ahead of the competition. More successful DevOps teams deliver smaller deployments more frequently, rather than batching everything into a larger release deployed during a fixed window. High-performing teams deploy at least once a week, while elite teams deploy multiple times per day.

Low performance on this metric can tell teams that they may need to improve their automated testing and validation of new code. Other areas of focus could include breaking changes down into smaller chunks, creating smaller pull requests (PRs), or improving overall Deploy Volume.

Mean Lead Time for Changes

Mean Lead Time for Changes (MLTC) helps engineering leaders understand the efficiency of their development process once coding has begun. This metric measures how long it takes for a change to make it to a production environment by looking at the average time between the first commit made in a branch and when that branch is successfully running in production. It quantifies how quickly work will be delivered to customers, with the best teams able to go from commit to production in less than a day. Average teams have an MLTC of around one week. 
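
As an illustration, MLTC could be computed from commit and deploy timestamps, assuming you can join each deployed change’s first commit time (from Git history) with the time that change reached production. The `changes` pair format below is a hypothetical input, not a prescribed one:

```python
from datetime import datetime
from statistics import mean

def mean_lead_time_for_changes(changes: list[tuple[datetime, datetime]]) -> float:
    """Average hours from a change's first commit to its production deploy.

    Each element is an assumed (first_commit_time, deployed_time) pair.
    """
    return mean(
        (deployed - committed).total_seconds() / 3600
        for committed, deployed in changes
    )
```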

Deployments can be delayed for a variety of reasons, including batching up related features and ongoing incidents, and it’s important that engineering leaders have an accurate understanding of how long it takes their team to get changes into production.

When trying to improve on this metric, leaders can analyze metrics corresponding to the stages of their development pipeline, such as Time to Open, Time to First Review, and Time to Merge, to identify bottlenecks in their processes.

Teams looking to improve on this metric might consider breaking work into smaller chunks to reduce the size of PRs, boosting the efficiency of their code review process, or investing in automated testing and deployment processes. 

Change Failure Rate

Change Failure Rate (CFR) is the percentage of deployments that cause a failure in production, found by dividing the number of incidents by the total number of deployments. This gives leaders insight into the quality of code being shipped and, by extension, the amount of time the team spends fixing failures. Most DevOps teams can achieve a change failure rate between 0% and 15%.
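
Because the calculation is a simple ratio, it’s easy to make explicit in code. In this sketch, the incident and deployment counts are assumed to come from your incident tracker and deploy logs:

```python
def change_failure_rate(incident_count: int, deployment_count: int) -> float:
    """Percentage of deployments that resulted in a production failure."""
    if deployment_count == 0:
        return 0.0  # no deployments, so no failed changes to report
    return 100 * incident_count / deployment_count

# Example: 3 incidents across 40 deployments -> 7.5%, within the 0-15% range.
print(change_failure_rate(3, 40))
```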

When changes are frequently deployed to production environments, bugs are all but inevitable. Sometimes these bugs are minor, but in some cases they can lead to major failures. These incidents shouldn’t be used as an occasion to place blame on a single person or team; however, it’s vital that engineering leaders monitor how often they happen.

This metric is an important counterpoint to the DF and MLTC metrics. Your team may be moving quickly, but you also want to ensure they’re delivering quality code — both stability and throughput are important to successful, high-performing DevOps teams.

To improve in this area, teams can look at reducing the work-in-progress (WIP) in their iterations, boosting the efficacy of their code review processes, or investing in automated testing. 

Mean Time to Recovery

Mean Time to Recovery (MTTR) measures the time it takes to restore a system to its usual functionality after a failure. Elite teams can recover in under an hour, whereas for many teams recovery is more likely to take up to a day.
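
As a rough sketch, MTTR can be computed from incident records, assuming each incident carries a detection timestamp and a resolution timestamp. The `incidents` pair format below is a hypothetical input, e.g. exported from an incident-management tool:

```python
from datetime import datetime
from statistics import mean

def mean_time_to_recovery(incidents: list[tuple[datetime, datetime]]) -> float:
    """Average hours from failure detection to restored service.

    Each element is an assumed (detected_at, resolved_at) pair.
    """
    return mean(
        (resolved - detected).total_seconds() / 3600
        for detected, resolved in incidents
    )
```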

Failures happen, but the ability to quickly recover from a failure in a production environment is key to the success of DevOps teams. Improving MTTR requires DevOps teams to improve their observability so that failures can be identified and resolved quickly. 

Additional actions that can improve this metric are: having an action plan for responders to consult, ensuring the team understands the process for addressing failures, and improving MLTC. 

Making the most of DORA metrics

The DORA metrics are a great starting point for understanding the current state of an engineering team, or for assessing changes over time. But DORA metrics only tell part of the story. 

To gain deeper insight, it’s valuable to view them alongside non-DORA metrics, like PR Size or Cycle Time. Correlations between certain metrics will help teams identify questions to ask, as well as highlight areas for improvement.

And though the DORA metrics are perhaps the most well-known element of the annual DORA report, the research team frequently branches out to discuss other factors that influence engineering performance. In the 2023 report, for example, it dives into two additional areas of consideration for top-performing teams: code review and team culture. Looking at these dimensions in tandem with DORA metrics can help leaders enhance their understanding of their team.

It’s also important to note that, because there are no standard calculations for the four DORA metrics, they’re frequently measured differently, even among teams in the same organization. To draw accurate conclusions about speed and stability across teams, leaders will need to ensure that definitions and calculations for each metric are standardized throughout their organization.

Why should engineering leaders think about DORA metrics?

Making meaningful improvements to anything requires two elements: goals to work towards and evidence of progress. That evidence, in turn, can motivate teams to keep working towards the goals they’ve set. DORA benchmarks give engineering leaders concrete objectives, which break down further into the metrics that can be used for key results.

DORA metrics also provide insight into team performance. Looking at Change Failure Rate and Mean Time to Recovery, leaders can help ensure that their teams are building robust services that experience minimal downtime. Similarly, monitoring Deployment Frequency and Mean Lead Time for Changes gives engineering leaders peace of mind that the team is working quickly. Together, the metrics provide insight into the team’s balance of speed and quality.

Learn more about DORA metrics from the expert himself. Watch our on-demand webinar featuring Nathen Harvey, Developer Advocate at DORA and Google Cloud.