Using DORA Metrics: What is Deployment Frequency and Why Does it Matter?

DORA, or DevOps Research and Assessment, is a research group formed by Nicole Forsgren, Gene Kim, and Jez Humble. Acquired by Google Cloud in 2019, the DORA team established four key metrics to measure critical areas of engineering team performance. The framework also offers benchmarks for different stages of production, so that teams can check their performance against others in the industry.
Mar 6, 2023
7 min read

DORA metrics are one tool engineering leaders can use to gain a high-level understanding of outcomes and home in on areas of their software delivery process that need improvement. With these insights, leaders can stay up to date on SDLC (software development lifecycle) best practices, which can have a significant impact on the success of the business.

One of the four DORA metrics is Deployment Frequency, a measure of throughput that can be a starting point for examining one aspect of your engineering practices.

What is the Deployment Frequency metric?

Deployment Frequency is a measure of how often engineering teams successfully deploy code to production. As with all DORA metrics, Deployment Frequency can help leaders understand their team’s throughput and quantify how much value is being delivered to customers.

Tip: For Velocity users: Use Velocity’s Deploy API to ingest data from your CI/CD tool to automate the calculation of Deployment Frequency.

Velocity uses your actual deployment data rather than error-prone proxy data, for a more accurate measurement of how often your team is successfully deploying code.
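Once real deploy events are flowing in, the metric itself is simple arithmetic: count successful production deploys over a time window. A minimal sketch in Python (the timestamps and field names are illustrative, not Velocity’s actual data model):

```python
from datetime import datetime

def deployment_frequency(deploy_times, window_days):
    """Average number of successful production deploys per day."""
    return len(deploy_times) / window_days

# Illustrative deploy timestamps, e.g. as reported by a CI/CD pipeline
deploys = [
    datetime(2023, 3, 1, 9, 30),
    datetime(2023, 3, 1, 14, 5),
    datetime(2023, 3, 2, 11, 45),
    datetime(2023, 3, 3, 10, 0),
]

# 4 deploys over a 3-day window
print(round(deployment_frequency(deploys, window_days=3), 2))  # 1.33 deploys/day
```

The hard part, as noted above, is sourcing real deploy events rather than approximating them from proxy signals like merges or tags.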

Why is Deployment Frequency important?

Deployment Frequency tracks how quickly teams can release new features or fixes, helping engineering leadership benchmark how often the team is able to get work out and receive actionable feedback from end-users.

Innovating quickly is the key to maintaining a competitive edge over other organizations. Deployment Frequency gives engineering leaders a clearer understanding of how often their customers are receiving updates to the products on offer.

Tip: For Velocity users: Use our Application functionality to associate repositories and teams with an Application. This allows users to filter and group deploys by Application and Team, making it easier to identify areas of improvement within your organization.

How to improve Deployment Frequency

If your team isn’t deploying multiple times per day, it’s time to focus on improving your Deployment Frequency metric. Look for opportunities to release smaller changes or new components in isolation. Sprint refinement or planning sessions are an ideal time to identify these opportunities. You can even set the goal of a sprint to be a smaller unit of the overall functionality.

Low Deployment Frequency can indicate that PRs, or units of work, are too large. To improve this, leaders can look at the PR Size metric. If PR Size is too large, leaders can coach teams to practice good coding hygiene and break work into smaller units. Top-performing engineering organizations keep PRs to fewer than 140 lines of code.
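As a rough illustration of how such a check might work (the PR record shape and the use of a 140-line cutoff here are assumptions for the sketch, not a specific Velocity feature):

```python
def pr_size(additions, deletions):
    """PR Size: total lines of code added, changed, or removed."""
    return additions + deletions

def flag_large_prs(prs, threshold=140):
    """Return the IDs of PRs above the ~140-line threshold mentioned above."""
    return [pr["id"] for pr in prs
            if pr_size(pr["additions"], pr["deletions"]) > threshold]

prs = [
    {"id": 101, "additions": 40, "deletions": 12},   # small, easy to review
    {"id": 102, "additions": 300, "deletions": 85},  # large, worth splitting up
]
print(flag_large_prs(prs))  # [102]
```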

Low Deployment Frequency could also indicate unnecessary or complex barriers to releasing to a production environment, like inefficiencies in the CI/CD pipeline, or requiring sign-off from a team member who’s unavailable. To attempt to remove these blockers, start by running focused retrospectives with the engineering teams to identify areas that could be improved or blockers that could be removed entirely. Improvements to automatic testing and validation of new code can also help to increase Deployment Frequency.

Lack of confidence in the stability of the functionality being produced could also be a barrier to deployment. To build confidence in new deliveries, engineering leaders can push for higher test coverage and improve Mean Time to Recovery, a DORA metric that focuses on the team’s ability to recover from incidents in production environments.

What is a good Deployment Frequency?

According to benchmarks determined by DORA through surveys of over 35,000 organizations of varying sizes and industries, high-performing teams are able to deploy on demand, meaning they deploy multiple times per day. The higher your Deployment Frequency, the more often code is going out to end users. Teams are considered medium-performing if they deploy between once per week and once per month.
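Those buckets can be expressed as a simple lookup over deploy cadence; the thresholds below are illustrative approximations of the ranges described above, not DORA’s official cutoffs:

```python
def performance_tier(deploys_per_month):
    """Map average deploy cadence to a rough DORA-style performance bucket."""
    if deploys_per_month >= 30:   # roughly daily or more: on-demand deploys
        return "high"
    if deploys_per_month >= 1:    # between about once a week and once a month
        return "medium"
    return "low"                  # less than once a month

for cadence in (90, 4, 0.5):      # multiple/day, weekly, every other month
    print(performance_tier(cadence))
```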

Using this rubric, engineering leaders can determine where they fall among competitors. Engineering leaders can use Deployment Frequency to figure out which software teams are excelling within the organization, and apply their best practices across other teams. You can also benchmark your team against itself to measure progress over time, and see whether changes you’re implementing have the desired effect on your Deployment Frequency.

Putting Deployment Frequency into Context

As with all metrics, Deployment Frequency should be considered in the context of other data. Other DORA metrics, like Change Failure Rate, can tell you about the quality of your team’s deployments, while Mean Lead Time for Changes offers insight into your team’s efficiency. It’s important not to optimize for a single metric, but instead to strike a balance between speed metrics and stability metrics.

With Velocity’s Analytics module, engineering leaders can view multiple metrics in tandem to see how they affect one another in real time. To gain even more actionable insights, you can view DORA metrics with key, non-DORA engineering metrics.

For example, if your Deployment Frequency is low, you might want to start by viewing it alongside PR Size, which is the number of lines of code added, changed, or removed.

You might notice that the larger the PR Size, the lower a team’s Deployment Frequency, which offers concrete reasoning for developers to submit smaller PRs. While a correlation between these metrics is not diagnostic, it does offer engineering leaders a starting point to investigate a low Deployment Frequency, and identify necessary changes to processes.
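To make “correlation” concrete: with periodic aggregates of the two metrics in hand, a Pearson coefficient gives a quick signal of how strongly they move together. The weekly numbers below are hypothetical:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly aggregates: median PR size vs. deploys per day
pr_sizes = [60, 110, 180, 260, 340]
deploy_freq = [4.1, 3.2, 2.0, 1.1, 0.6]

r = pearson_r(pr_sizes, deploy_freq)
print(round(r, 2))  # strongly negative: bigger PRs, fewer deploys
```

As the text notes, a strong negative r is a prompt for investigation, not a diagnosis.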

If an engineering team demonstrates a favorable correlation between Deployment Frequency and PR Size, or another metric, engineering leaders have incentive to scale that team’s best practices across the organization.

Communication is Key

Engineering leaders should use objective data like DORA metrics as a starting point to identify opportunities for improvement. Only by talking with developers and looking at data holistically can you make lasting improvements to your delivery and processes.

DORA metrics can also be shared to communicate progress towards goals with broader leadership; they are great introductory metrics to share with technical and non-technical executives.

To learn more about how DORA metrics can help your team streamline and enhance delivery, speak with a Velocity Product Specialist.

DORA Assessment is Tricky — Here’s How We Calculate the 4 Metrics

The four DORA metrics — Deployment Frequency, Change Failure Rate, Mean Lead Time for Changes, and Mean Time to Recovery — were identified by the DevOps Research & Assessment group as the four metrics most strongly statistically correlated with success as a company.
Feb 22, 2023
7 min read

Within those four metrics, the DORA group defined ranges that are correlated with meaningfully different company outcomes. They describe companies based on the outcomes they achieve as “High Performing,” “Medium Performing,” or “Low Performing.”

Moving between categories — for example, improving your Deployment Frequency from “between once per month and once every six months” to “between once per week and once per month” — leads to a statistically significant change in the success of a business. Moving within a bucket (for example, from once per month to twice per month) may be an improvement, but was not shown to drive the same level of shift in outcome.

DORA calculations are used as reference points across the industry, yet there is no agreed-upon approach to DORA assessment, or to accurately measuring DORA metrics. To set the original performance benchmarks, the DORA group surveyed more than 31,000 engineering professionals across the world over a span of six years, but responses were not based on standardized, precise data.

DORA metrics have been interpreted and calculated differently for different organizations, and even for teams within the same organization. This limits leaders’ ability to draw accurate conclusions about speed and stability across teams, organizations, and industries.

Because of this, there are subtle pitfalls to using automated DORA metrics as performance indicators.

Code Climate Velocity measures DORA metrics with real data, as opposed to proxy data, for the most useful understanding of your team health and CI/CD processes.

Code Climate’s Approach to DORA Assessment

As mentioned, the market offers many different approaches to automating the measurement of DORA metrics. To enable engineering executives to understand how their organization is performing across the four DORA metrics, we wanted to provide the most accurate and actionable measurement of outcomes in Velocity.

Our approach relies on analytical rigor rather than gut feel, so engineering leaders can understand where to investigate issues within their software practices, and demonstrate to executives the impact of engineering on business outcomes.

Using Real Incident Data, Not Proxy Data for DORA Calculations

Not every platform tracks Incident data the same way; many platforms use proxy data, resulting in lower-quality insights. Velocity instead uses actual Incident data, leading to more accurate assessment of your DevOps processes.

Velocity can ingest your team’s Incident data directly from Jira and Velocity’s Incident API. These integrations provide a way for every team to track metrics in the way that most accurately reflects how they work.

The Most Actionable Data

We made it possible for engineering leaders to surface DORA metrics in Velocity’s Analytics module, so that customers can see their DORA metrics alongside other Velocity metrics, and gain a more holistic overview of their SDLC practices.

Teams can evaluate their performance against industry benchmarks, as well as against other teams within the organization, to see which performance bucket they fall under: high, medium, or low. Based on that information, they can scale effective processes across the organization, or change processes and measure their impact.

Balancing Speed with Stability: How Velocity Metrics Contextualize DORA Metrics

If teams evaluated DORA metrics in isolation and discovered a high Deployment Frequency — deploying multiple times a day — they may be considered “high performing,” yet we know this does not tell the whole story of their software delivery. Velocity metrics and other DORA metrics within the Analytics module help contextualize the data, so that teams can understand how to balance speed with stability.

For example, the Velocity Metric PR size (number of lines of code added, changed, or removed) can be a useful counterpoint to Deployment Frequency. If you view these metrics together in Velocity’s Analytics module, you can see a correlation between the two — does a low Deployment Frequency often correlate with a larger PR size? If so, leaders now have data-backed reasoning to encourage developers to submit smaller units of work.

This doesn’t necessarily mean that your Deployment Frequency will be improved with smaller PR sizes, but it does provide a starting point to try and improve that metric. Leaders can note when this change was implemented and observe its impact over time. If Deployment Frequency is improved, leaders can scale these best practices across the organization. If not, it’s time to dig deeper.

DORA Metrics Definitions

Deployment Frequency – A measurement of how frequently the engineering team is deploying code to production.

Deployment Frequency helps engineering leadership benchmark how often the team is shipping software to customers, and therefore how quickly they are able to get work out and learn from those customers. The best teams deploy multiple times per day, meaning they deploy on-demand, as code is ready to be shipped. The higher your Deployment Frequency, the more often code is going out to end users. Overall, the goal is to ship as small and as often as possible.

Mean Lead Time for Changes – A measurement of how long, on average, it takes to go from code committed to code successfully running in production.

Mean Lead Time for Changes helps engineering leadership understand the efficiency of their development process once coding has begun and serves as a way to understand how quickly work, once prioritized, is delivered to customers. The best teams are able to go from code committed to code running in production in less than one day, on average.

Change Failure Rate – The percentage of deployments causing a failure in production. If one or more incidents occur after deployment, that is considered a “failed” deployment.

Change Failure Rate helps engineering leaders understand the stability of the code that is being developed and shipped to customers, and can improve developers’ confidence in deployment. Every failure in production takes away time from developing new features and ultimately has negative impacts on customers.

It’s important, however, that leaders view Change Failure Rate alongside Deployment Frequency and Mean Lead Time for Changes. The less frequently you deploy, the lower (and better) your Change Failure Rate will likely be. Thus, viewing these metrics in conjunction with one another allows you to assess holistically both throughput and stability. Both are important, and high-performing organizations are able to strike a balance of delivering high quality code quickly and frequently.

Mean Time to Recovery – A measurement of how long, on average, it takes to recover from a failure in production.

Even with extensive code review and testing, failures are inevitable. Mean Time to Recovery helps engineering leaders understand how quickly the team is able to recover from failures in production when they do happen. Ensuring that your team has the right processes in place to detect, diagnose, and resolve issues is critical to minimizing downtime for customers.

Additionally, longer recovery times detract from time spent on features, and account for a longer period of time during which your customers are either unable to interact with your product, or are having a sub-optimal experience.
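The definitions above all reduce to simple aggregations over change, deploy, and incident records. A minimal sketch, with hypothetical record shapes standing in for whatever your tooling actually exports:

```python
from datetime import datetime, timedelta

def mean_lead_time(changes):
    """Mean Lead Time for Changes: average of (deployed_at - committed_at)."""
    deltas = [c["deployed_at"] - c["committed_at"] for c in changes]
    return sum(deltas, timedelta()) / len(deltas)

def change_failure_rate(deploys):
    """Percentage of deployments followed by one or more incidents."""
    failed = sum(1 for d in deploys if d["incidents"] > 0)
    return 100 * failed / len(deploys)

def mean_time_to_recovery(incidents):
    """Average of (resolved_at - detected_at) across production incidents."""
    deltas = [i["resolved_at"] - i["detected_at"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

changes = [
    {"committed_at": datetime(2023, 2, 1, 9),  "deployed_at": datetime(2023, 2, 1, 15)},
    {"committed_at": datetime(2023, 2, 2, 10), "deployed_at": datetime(2023, 2, 3, 10)},
]
deploys = [{"incidents": 0}, {"incidents": 0}, {"incidents": 1}, {"incidents": 0}]
incidents = [
    {"detected_at": datetime(2023, 2, 3, 11), "resolved_at": datetime(2023, 2, 3, 13)},
]

print(mean_lead_time(changes))           # 15:00:00 (15 hours on average)
print(change_failure_rate(deploys))      # 25.0 (percent)
print(mean_time_to_recovery(incidents))  # 2:00:00 (2 hours)
```

The arithmetic is the easy part; as the articles above stress, the quality of the result depends on feeding it real incident and deployment records rather than proxies.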

Doing DORA Better

Though there is no industry standard for calculating and optimizing your DORA metrics, Velocity’s use of customers’ actual Incident data, and ability to contextualize that data in our Analytics module, can help teams better understand the strengths and weaknesses of their DevOps process and work towards excelling as an engineering organization.

Interested in learning which performance benchmark your team falls under, and how you can scale or alter your engineering processes? Reach out to a Velocity specialist.

Get More Out of DORA Metrics With Code Climate Velocity

For organizations looking to fine-tune their DevOps practices or understand where they should improve to excel, DORA metrics are an essential starting point. Code Climate Velocity now surfaces the four DORA metrics in our Analytics module, allowing engineering leaders to share the state of their DevOps outcomes with the rest of the organization, identify areas of improvement, establish goals, and observe progress towards those goals.
Feb 21, 2023
7 min read

When used thoughtfully, DORA metrics can help you understand and improve your engineering team’s speed and stability. Here’s how Code Climate Velocity delivers the most accurate and reliable measurements for each metric, and how to maximize their impact in your organization.

What is DORA?

The DORA metrics were defined by the DevOps Research & Assessment group, formed by industry leaders Nicole Forsgren, Jez Humble, and Gene Kim. These metrics, along with their performance benchmarks, were largely popularized by the book Accelerate, co-authored by the DORA group. Not only do they represent the most critical areas in the DevOps process, they’re also the metrics most statistically correlated with a company’s organizational performance.

What are the four DORA metrics?

The four DORA metrics fall under two categories:

Stability Metrics (Measuring the impact of incidents in production):

  • Change Failure Rate: The percentage of deployments causing a failure in production.
  • Mean Time to Recovery: How long, on average, it takes to recover from a failure in production.

Speed Metrics (Measuring the efficiency of your engineering processes):

  • Deployment Frequency: How frequently the engineering team is deploying code to production.
  • Mean Lead Time for Changes: The time it takes to go from code committed to code successfully running in production.

How Velocity Does DORA Differently

While many leaders rely on homegrown calculations to surface their DORA metrics, a tool like Velocity allows teams to get even more out of DORA. Not only do we help teams standardize measurement and ensure accuracy, we also make it possible for leaders to go a level deeper — digging into the aspects of the SDLC that influence DORA metrics, so they can identify specific opportunities for high-impact changes.

Our approach to DORA metrics is unique because:

We use the most accurate data

Velocity uses our customers’ real incident and deployment data, ingested from Jira and through Velocity’s Incidents API, for accurate calculations of each metric.

Many platforms rely on proxy data because they lack integrations with incident tools. This approach yields lower-quality, error-prone insights, which can lead to inaccurate assessments of your DevOps processes.

You can view trends over time

The Velocity Analytics module gives users the ability to see DORA metrics trend over time, allowing you to select specific timeframes, including up to a year of historical data.

You can use annotations to add context and measure the impact of organizational changes on your processes

Additionally, users can use annotations to keep a record of changes implemented, allowing you to understand and report on their impact. For example, if your team recently scaled, you can note that as an event on a specific day in the Analytics module, and observe how that change impacted your processes over time. Reviewing DORA metrics after growing your team can give you insight into the impact of hiring new engineers or the efficacy of your onboarding processes.

You can surface DORA metrics alongside Velocity metrics

The platform also allows customers to surface these four metrics in tandem with non-DORA metrics.

Why is this important? DORA metrics measure outcomes — they help you determine where to make improvements, and where to investigate further. With these metrics surfaced in Analytics, it’s now easier for engineering leaders to investigate. Users can see how other key SDLC metrics correlate with DORA metrics, and pinpoint specific areas for improvement.

For example, viewing metrics in tandem may reveal that when you have a high number of unreviewed PRs, your Change Failure Rate is also higher than usual. With that information, you have a starting point for improving CFR, and can put in place processes for preventing unreviewed PRs from making it to production.

Engineering leaders can coach teams to improve these metrics, like reinforcing good code hygiene and shoring up CI/CD best practices. Conversely, if these metrics comparisons indicate that things are going well for a specific team, you can dig in to figure out where they’re excelling and scale those best practices.

Metrics offer concrete discussion points

DORA metrics are one tool engineering leaders can use to gain a high-level understanding of their team’s speed and stability, and home in on areas of their software delivery process that need improvement. With these insights, leaders can have a significant impact on the success of the business.

Sharing these discoveries with your engineering team is an excellent way to set the stage for retrospectives, stand ups, and 1-on-1s. With a deeper understanding of your processes, as well as areas in need of improvement or areas where your team excels, you can inform coaching conversations, re-allocate time and resources, or extrapolate effective practices and apply them across teams.

Leaders can also use these insights in presentations or conversations with stakeholders in order to advocate for their team, justify resource requests, and demonstrate the impact of engineering decisions on the business.

Ready to start using DORA metrics to gain actionable insights to improve your DevOps processes? Speak with a Velocity product specialist.

For engineering teams, disruption to the business can have a significant impact on the ability to deliver and meet goals. These disruptions are often a result of reprioritization and budget changes on an organizational level, and are amplified during times of transition or economic instability.

In a survey led by CTO Craft in partnership with Code Climate, engineering leaders were asked to name the main cause for disruption to their businesses in 2022, and to offer their predictions for productivity challenges in 2023. The survey also included questions about engineering leadership in particular, including how leaders intend to keep engineering teams motivated in the coming year, and how their leadership has been impacted by disruptive times.

In total there were 114 respondents, composed mainly of CTOs, followed by Engineering Managers, then heads of technology, development, and engineering.

Read on for the key takeaways, and see the full survey results on CTO Craft.

Hiring challenges ranked as the #1 cause for disruption

Attracting and retaining talented software engineers is top of mind for many engineering leaders, and with developers in short supply but high demand, this remains a challenge for organizations. Over half of survey respondents said hiring challenges were the leading cause of business disruption in 2022.

Many survey respondents said that recruiting top talent will continue to be a challenge in 2023.

Other common responses were reprioritization of business objectives, followed by a drop in revenue. Over half of respondents (54%) predict that issues with budgets will be a threat to productivity in 2023.

Leaders aim to assign more engaging work in 2023

Nearly half (45%) of respondents said that they plan to motivate engineers in 2023 with more engaging work, followed by 20% of respondents who said they would focus on developers’ career paths. Another 8% said they will use compensation as a motivator next year.

Teams are still getting work done

Despite these challenges, 70% of survey participants said that they almost always deliver on business commitments.

In terms of identifying root causes of going off track and not delivering on commitments, 60% of respondents said they can assess the problem with relative ease, while 25% said it’s difficult for them to do so.

Learn more about CTO Craft on their website.  

Buyer’s Checklist: How to Choose an Engineering Management Platform

An engineering management platform, or EMP, is a comprehensive tool that offers data-driven insight into your engineering team. The platform allows engineering leaders to demonstrate the impact of their investments, advocate for more resources, understand engineering team health, and measure team progress.
Dec 14, 2022
7 min read

Download the Buyer’s Checklist here.

With this deep level of visibility, companies can enhance their software engineering practices and gain a competitive edge in the marketplace. Features across different EMPs can vary, and it can be challenging to evaluate which platform works best for your organization. We’ve put together a checklist that can help you make your choice, with selection criteria categorized by:

  • Process and team health
  • Aligning engineering initiatives to business value
  • Consistent, high-quality delivery
  • Integrations into your existing systems
  • Scalability and customization
  • Security

To give you a better understanding of what to look for in an EMP, download a free copy of our Buyer’s Checklist, which breaks down features by level of importance, from nice to have, to should have, to essential.

Sign up for our newsletter to be the first to know when our EMP Buyer’s Guide is available.

As the adage goes, the best-laid plans go awry, and that also applies to building software. The planning phase, including maintaining alignment, is critical in engineering, but even for the most mature teams, plans can change and evolve as they’re executed.

Engineering leaders need to keep teams aligned with the overall goals of their organization, and that includes setting and managing expectations for stakeholders so they won’t be blindsided when roadmaps need to change. How do CTOs successfully align on strategic priorities, navigate planning, and course-correct when things go off-track?

As part of Code Climate’s Next event, we invited engineering leaders Catherine Miller, CTO at Flatiron Health; Juan Pablo Buriticá, SVP of Engineering at Ritchie Bros.; Ryan Grimard, SVP of Engineering at EverQuote; and D Orlando Keise, Head of Banking Foundational Platform at UBS, to share their experience leading engineering teams while keeping the C-Suite at ease.

Read on for highlights from the panel discussion, led by Peter Bell, CTO and Founder of CTO Connection.

Peter Bell: An engineering leader’s responsibility is keeping the team aligned with a larger organization, and that starts at the top. What do you do to ensure that you’re on the same page with the rest of the C-Suite around things like resourcing, roadmap planning, and strategic priorities?

Ryan Grimard: Our teams use the Scaled Agile Framework, SAFe, for our planning and execution, and we involve our business leaders way in advance. They’re helping us with strategic planning for the company, and then our product and engineering teams are working through portfolio management and bringing things into the conveyor belt. When things are ready for final prioritization for a particular quarter, we use that process to go through “big room planning” and a one-week prioritization planning process. The end of that is effectively a confidence vote for all of the teams. We have 17 Agile teams, and the business leaders are in those meetings and hearing the confidence vote, and giving the thumbs up that they agree that these priorities that the teams have picked actually match up with the OKRs that we set at the company level.

Juan Pablo Buriticá: I have two techniques. One is to force the C-Suite to compromise on a single thing that they care about through a guiding principle. So, do we care about speed, do we care about quality? And then, I use that guiding principle to force decision making on the group for alignment.

The second thing is, a set of cascading strategies: you have business strategy, the product strategy that cascades from the business strategy, and the engineering strategy, which should enable both. And then, it forces resourcing, staffing, and prioritization to be aligned with the guiding principle. That guiding principle is the tiebreaker for everything.

D Orlando Keise: What I’ve found is important is to align on the mission. And I’m using “mission” in a sense that’s more than just the objective. I actually mean it almost in a military sense. What is our target? What are our threats? What’s the landscape and the terrain that we’re operating in? I think we all know that we’re going to come up with a plan, that we have a roadmap, but I find that the plan is not going to survive first light with the outside world.

I want to make sure that when that happens, we’re aligned not only on what we’re trying to do, but why we’re doing it, and the environment that we’re doing it in. Because when we start changing that plan, if we’re not aligned on all those other things, we’re going to run into problems.

Peter: What kind of conversation do you find the most difficult? Is it when you’re in the planning phase, and you have to say, ‘We’re not going to fit all that into this quarter,’ or is it once you’ve said, ‘Sure, we’ll do this by the end of the year,’ and then it’s December 12th and you’re realizing that Christmas might have to be canceled this year?

Catherine Miller: Planning conversations are about feelings, and execution conversations are about data. I like the execution conversations better, because the planning conversation ends up being a very abstract conversation that is based on trust. The more trust you have the better that goes, but you’re all just guessing, and at the end of the day, you are trading on some relationship or some extrapolation, and you know it’s wrong.

Then you get to the execution, and first of all, no one actually expects it to go on track, but what I love about execution is, you can check in along the way, you can see how it’s going, and it forces a conversation. What are you going to do? We literally cannot hit this deadline because of where we are. That is a fact. There is no hoping or wishing that will make that go away. So you’re forced into decision-making in a way that less mature teams often avoid at the planning stage where they just hope everything will be okay.

Ryan: I would say that our company has really evolved over the last two or three years. It used to be a much more difficult conversation upfront when business leaders asked, ‘These are the strategic priorities; how many of these can we complete in a quarter?’ Because they weren’t necessarily involved in the process that I described earlier, they were surprised when we couldn’t do everything at the same time and get everything out the door in that particular quarter. So since we’ve introduced this process, I feel like the more difficult part of that is through the backside, or partway through the quarter when the conversation might be: Why aren’t we executing? And those really get broken down into retrospectives. When something doesn’t quite go right, teams will do a full retrospective. They will come up with a root cause analysis through that retrospective, and line up whatever we need to fix, then present that to those business leaders. I think that builds confidence across the board.

More recently it’s been boiling down to either high levels of tech debt for a particular team, or a team that isn’t onboarded to what we call our “paved path,” our delivery pipeline.

Juan Pablo: I like solving problems by preventing them from happening, so I’m usually more uncomfortable when things have gone off track, because that means that we didn’t prevent what we were trying to prevent. Whereas, when I’m on the planning aspect, I feel very comfortable. Plans are lies we tell ourselves to get to the next stage and motivate our teams and motivate ourselves, so I have a much easier time on that end than the former.

Peter: When you realize for whatever reason that your team isn’t going to hit a date — there’s some commitment you’ve made, it’s clear now that it’s not going to happen — what’s the first thing you do?

Juan Pablo: The moment I find out, I broadcast that state to the organization, because rather than rebuilding trust, I prefer not losing trust. I’ve learned that people will usually handle transparency better than we think.

Since we’re building software and things tend to go wrong in software, I like building a culture where being off-track is not uncommon, and people don’t react with fear. We’re a good engineering team if we’re good at reacting quickly.

D: I want to underscore something that Juan Pablo said. The very first thing is to communicate it. The second thing is to understand what kind of problem we have. Did we plan poorly for the amount or type of resources that we have? Did we have great resources but the ground shifted under us?

I find that people think they’re solving the real problem but they’re really solving some symptom. So, I try to go a few layers down to find the real problem that caused us to be off track to begin with, and attack that.

Catherine: Part of what I would do is figure out whose problem this is to solve. Is this something that I should really be [calling on] one of my VPs or even directors for, and setting a framework for them to solve, or is this significant enough for me to dig into directly?

Ryan: Keeping it amongst the team is important, at least for a place to start, where the team is a handful of engineers and the product manager or the product owner. There’s a lot of trust that should have been built up over time. We’re pretty big on keeping teams whole and not rotating people around unless there’s some sort of internal mobility opportunity for folks.

Peter: So you found out things are off track, and you have to prepare for a difficult conversation with a senior stakeholder. Are there any things you do when preparing for that to try to make it just a little more likely to go smoothly?

D: A lot of my preparation for moments like that happens long before the moment. A lot of us have used the word trust at various times, and that’s huge. I’m trying to establish trust with stakeholders and partners from day one, because so many of these moments are going to rely on trust to go the way they need to go. The other thing I try to do in the moment, when it comes to explaining a difficult truth, is to take any of the emotions out of it. I try to lay out something that’s very fact-based.

Cat: I very much agree with the emotional regulation piece as a key part. If I have the leeway and the time, the best thing I can do is sit down with my coach. As we talk, I lose my emotional attachment to things, until I’m just thinking about options that are good and bad and what the cost to the business is.

Ryan: Data gathering, absolutely. Pre-reads I think are huge, meaning sending out what you plan on talking about beforehand, and making it clear that reading the pre-read is a requirement. You shouldn’t be going into the meeting and learning the material for the first time. Pre-reads give everyone the opportunity to be informed before the meeting, so that the meeting itself is for making decisions.

We’ve run into situations where the pre-reads are pretty in-depth, all of the decision making and our suggestions on where to go next are already laid out, and the meeting is five minutes long.

Juan Pablo: I use paper as my coach. I believe writing is thinking, and I try to anticipate questions that might come from the conversation. If I’m walking into a board meeting, I try to minimize the times I say: I don’t know, let me get back to you.

I have my talking points. What is the outcome of the conversation that I’m seeking? Am I trying to inform? Am I trying to convince? Am I trying to drive to a decision? Based on that, I bring options or I shape my talking points.

Peter: Things didn’t go according to plan, you missed the deadlines, how do you rebuild trust both within your engineering org, and with your executive stakeholders?

Cat: It’s not inherently breaking trust to miss your deadlines if you’re being transparent about your metrics and what you’re seeing as it goes along. It’s mostly that transparent communication that will leave no one surprised.

Ryan: I think rebuilding trust is really about consistency and showing continuous improvement. We’ve been using some of the DORA Metrics that Code Climate has been putting out.

We have custom reports for DORA Metrics, and they’ve been really awesome. Being able to show that the team is performing, that the issue that we ran into was kind of out of our control, and showing there’s an upward progression to how they’re performing…it’s big to show that they’re not not performing.

Juan Pablo: I usually lead product engineering teams, and so the focus is less the plan or the deadline and more the outcome, so ensuring that the teams understand the goal that they’re supposed to be driving. And in product, your goals are so variable, and you can invest 6 months launching a feature that tanks or spend a week and suddenly you have a great product, and so what I care about is… less about times and deadlines and estimates and plans and more about, are you being impactful and are you shipping as fast as you can so we can learn?

D: I would underscore that rebuilding trust is really about being reliable in the things that you say. Communication is important. We had talked about diagnosing the real problem; now you’re going and acting on that and executing. The second time around, are you hitting what you aimed to hit based on the problem that you diagnosed? Hopefully you are doing a good job of identifying what that real problem is, and then when you drive the team to execute towards that milestone, it actually solves what you said it was going to solve. As you do that, you’re rebuilding trust.

Audience member: [To Juan Pablo] Tactically, how often are you resetting your principles because you can’t make the mark or realize priority two has to be priority one?

Juan Pablo: I’m doing a good job if I’m not resetting them that much, or they at least have an 18-24 month lifespan. I do start from a mission. When I was at Splice, the mission of engineering was to enable Splice to learn faster than the market before we ran out of money. So that was driving engineers, that was our mission, anything you do is in service of that, because we’re serving musicians.

And from there, the principles that we set were about sensible choices in technology. We were like, yes it’s fun to learn, but our job is not to learn; our job is to learn faster than the market. I also crowdsourced feedback from the team as far as what resonated with them in regards to our principles, because usually I don’t have the full perspective. If there’s ownership from the group, it’s easier for them to also embrace the principles.

Audience member: There’s a known standard in the tech industry of listening to your customers versus understanding business objectives from leadership. What has been your experience trying to carve in both business objectives and customer feedback?

Cat: More and more, I think every dichotomy in tech is long- versus short-term planning. For example, I think that tech debt is long term, and product features and desires are short term, and I think what you’re talking about here is what your customers are beating down your door for right now, whereas presumably your business metrics are about your long-range plans, what your business needs to grow, what our burn rate needs to be… And I don’t necessarily have an answer for you, but I think dispassionately thinking about how much we are investing immediately — how much do we need to do this month or this year, versus three years from now? — is an interesting way to think about your portfolio across a lot of different things like this.

Ryan: I worked at a company where it was very referral based, and the business metrics were: How many referrals did we make, and how much revenue did we make from that referral? The other side of that is, how happy is your customer? Does a happier customer actually produce more revenue? Maybe not in the very short term, but repeat customers and happier folks in the marketplace of that particular referral business go a long way. The business is able to interact with a consumer that’s happy and potentially upsell them and so on.

Juan Pablo: Usually we’ve done a good job as a leadership team if we’ve set business metrics that empower teams to take these customer needs and act on them. Because customer needs and business metrics should not be diametrically opposed, and if they are, there’s something wrong.

4 Ways to Improve Your Engineering Team Health During a Transition

Changes within a company, like growth, restructuring, down-sizing, or shifting goals, can be disruptive for engineering teams. Developers often look to team leads to understand how these organizational changes will impact them. Amid transitions and uncertainty, engineering leaders can be a beacon for their teams and help maintain the culture that they’ve cultivated.
Dec 1, 2022
7 min read


Read on for four steps leaders can take to continue to have a positive impact on engineering team health:

Prioritize psychological safety

A phrase often referenced in leadership is psychological safety, the idea of fostering a culture within your team where engineers are comfortable taking risks, openly expressing their concerns, and sharing ideas. Tactics for enhancing psychological safety can include:

  • Leading with curiosity to find a root cause of a failure or bottleneck.
  • Focusing on the work, not the engineer themselves, when identifying issues.
  • Creating a safe culture free from microaggressions.
  • Getting to know direct reports on an interpersonal level.

Psychological safety promotes employee satisfaction, and it also often leads to better team performance.

As Heidi Waterhouse, Principal Developer Advocate at LaunchDarkly said during our webinar on using data to promote psychological safety, “The faster a team moves, the more psychologically safe they are, the more they feel like they can take risks, but also the more they feel like they can take risks, the faster they move.”

Keep an eye out for burnout

Burnout is present across all industries, especially after the work-from-home era upended the way we work and challenged us to reevaluate our priorities.

Understandably, feelings of burnout are not something employees often want to share with team leads, and once they get to that point, it might be too late to offer solutions. Managers might need to do some exploring in order to address burnout before it becomes a problem.

Code Climate identified and addressed burnout on our own engineering teams by dogfooding Velocity.

After several 1-on-1s revealed that engineers were excited about their work but feeling under pressure to ship new features, our engineering manager used Velocity’s Reports Builder to find out the number of pushes that happened after 8 pm. This inquiry revealed the team was working late more often than in previous months.

Their manager used this data to make two key changes: adjusting the scope of their work to something more manageable, and giving engineers days off. She continued to check in with her team and make sure work was evenly distributed among ICs.

Spotting and tackling undesirable trends, like engineers working on nights and weekends, allows leaders to make necessary changes before their teams experience burnout.
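The underlying check is simple to sketch. Assuming you can export push timestamps from your git provider or CI system (Velocity’s Reports Builder handles this for you; the function name and data below are hypothetical), counting late-night pushes per month might look like:

```python
from datetime import datetime
from collections import Counter

def late_push_counts(pushes, cutoff_hour=20):
    """Count pushes at or after the cutoff hour (default 8 pm), grouped by month."""
    counts = Counter()
    for ts in pushes:
        if ts.hour >= cutoff_hour:
            counts[ts.strftime("%Y-%m")] += 1
    return dict(counts)

# Hypothetical push timestamps pulled from a git provider or CI system
pushes = [
    datetime(2022, 9, 14, 17, 30),   # daytime push, not counted
    datetime(2022, 10, 3, 21, 15),
    datetime(2022, 10, 18, 22, 40),
    datetime(2022, 11, 7, 20, 5),
]
print(late_push_counts(pushes))
```

A rising count month over month is the kind of trend the manager in the example acted on, long before anyone reported feeling burned out.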

Use metrics to communicate the business value of your team

During board meetings and stakeholder conversations, engineering leaders act as the advocates and representatives for engineering teams. It’s important for developers to know that their team leaders are championing their work, and communicating their impact with stakeholders.

Software engineering teams power organizations, but because it’s so hard to quantify engineering work, it can be difficult to demonstrate the true value that work brings to the business.

With metrics, engineering leaders can make a business case for their teams by demonstrating trends in numbers to stakeholders. For example, the CTO of WorkTango, Mike Couvillion, uses revenue-specific metrics when presenting to his board of directors.

In board slides, Couvillion now uses the following metrics to demonstrate efficiency:

  • Innovation Rate
  • PR Throughput
  • Cycle Time
  • Average Coding Days

Objective data also helps leaders advocate for new talent or tools, because they can clearly demonstrate how those resources will help the company meet its goals.

Address engineers’ concerns with enhanced 1-on-1s

As we mentioned above, metrics are highly useful for showcasing the impact of your engineering teams, spotting burnout in advance, and identifying when and where teams might be stuck.

Metrics can also enhance 1-on-1s, and ensure that conversations are actionable. You can bring concerning trends or units of work that need to be addressed to a meeting, and work with that team member to come up with a plan for improvement. Or, you can call out and celebrate successes, such as positive trends or progress towards goals.

Yet metrics can’t replace conversations with direct reports, and 1-on-1s should leave room for topics beyond progress and performance.

Part of fostering a positive culture is getting to know employees on an interpersonal level, demonstrating that you recognize their hard work, and offering collaboration and coaching when things go off track.

Taking the right steps

If your company is going through a transition, it’s important to address your engineering team directly. Pay attention to cultivating or maintaining a culture of psychological safety so they can express their concerns or take a risk by trying something new; be on the lookout for conditions that might cause burnout; be their champion when talking to stakeholders; and have more impactful 1-on-1 conversations to address issues and offer support.

To learn more about using metrics to enhance board meetings and 1-on-1s, speak with a Velocity product specialist.

For many CTOs, communicating with the CEO (or any member of the executive team) can be an unending source of frustration. Though I’m a CEO today, I’ve also been a CTO, and I’ve seen firsthand the challenges that brings. It’s not easy to convey a complete picture of the state of an engineering organization to a technical leader who isn’t involved in the team’s day-to-day, and it’s even harder when you’re speaking to someone without a technical background. You may be working towards the same goal, but if you’re not aligned on how to get there or how things are going, you’ll face unnecessary challenges at every turn.

Unless you know the secret: the key to enhancing alignment with executive stakeholders, including the CEO, is clear, objective reporting.

Reporting isn’t just for your boss

CTOs often face difficulties securing budget for critical initiatives, facilitating agreement on the state of a situation, explaining that engineering isn’t always the bottleneck, and more. These may seem like distinct challenges, but in reality they share a common foundation — they’re all difficulties rooted in issues of communication and alignment.

A key responsibility of executive leadership is improving communication and facilitating alignment. No matter how well your team performs, no matter how stellar your software, your department’s success will likely be limited if you can’t get stakeholders on the same page. In order to promote alignment, you’ll need to leverage one of the most underappreciated, oft-maligned tools at your disposal: reporting.

Though it has a bad reputation — Office Space’s TPS reports always come to mind — reporting has a lot to offer. Not timecards, not compulsory bureaucratic tracking, but great reporting (more on what that means in a moment) can offer enormous benefit to you and your team. Done well, reporting allows you to frame the conversations you need to have, and inform the decisions that need to be made.

Every other department has already learned this lesson. Sales, Marketing, HR, and Finance are all reporting on objective data, using it to advocate for their departments and drive critical discussions with the rest of the executive team. It’s time for engineering leaders to do the same.

What is great reporting?

In this context, reporting is the process of gathering and sharing quantitative and qualitative information in order to create the opportunity for shared, fact-based understanding. It ensures that everyone comes to the table with the same data, and that they’re operating on the basis of facts, not feelings. Understanding occurs when that data is contextualized and internalized, and can be used to drive conversations and decisions.

Great reporting goes above and beyond the requirements of that definition. It involves:

  • Consistent data — Tracking the same metrics in every report makes it possible to track trends and surface patterns.
  • Curated data — Sticking to the most relevant data makes reporting more useful; too much information can be just as useless as none at all.
  • Predictable intervals — Reporting on a regular cadence helps establish and strengthen understanding.
  • Appropriate context — Sharing additional information — for instance, pairing data with industry benchmarks, past trends, or other relevant metrics — can help tell a more complete story.
  • Necessary precision — Using the most logical unit of measurement is important; if you measure something in hours instead of minutes or days, it can be a distraction unless the reason for that interval is clear.
  • Correct elevation — Choosing data with the right level of granularity can make it easier for your report’s recipient to understand.
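As a sketch of what those properties look like in practice, here is a hypothetical weekly report structure: the same metrics every interval (consistent data), only a handful of them (curated data), each paired with a benchmark and a recent trend (appropriate context). All names and numbers are illustrative:

```python
# Hypothetical weekly engineering report: the same few metrics every week,
# each paired with context (an industry benchmark and the prior four-week average).
report = {
    "week_of": "2023-02-27",
    "metrics": {
        "cycle_time_hours":      {"value": 18.0, "benchmark": 24.0, "prior_4wk_avg": 21.5},
        "pr_throughput_per_dev": {"value": 3.2,  "benchmark": 2.5,  "prior_4wk_avg": 2.9},
        "deploys_per_week":      {"value": 14,   "benchmark": 10,   "prior_4wk_avg": 12},
    },
}

for name, m in report["metrics"].items():
    trend = "up" if m["value"] > m["prior_4wk_avg"] else "down"
    print(f"{name}: {m['value']} (benchmark {m['benchmark']}, trending {trend})")
```

The structure itself does the work: because the fields never change week to week, a recipient can scan for trends instead of re-learning the report each time.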

Reporting establishes a shared foundation for having critical conversations and making key decisions, but it’s just a starting point. Your report might show your CEO that things are going well, or that a major initiative is off-track, but it can’t explain why, nor can it solve problems. Still, when done well, reporting can be the basis for productive collaboration, and can help you drive success in your organization.

To find out how to leverage clear, objective reporting to meet your organizational goals, request a consultation.

To deliver maximum value within an organization, engineering teams must balance strategic, long-term vision with enough flexibility to react to unforeseen, yet inevitable, changes along the way. An operational cadence — a scaffolding of benchmarks and meetings that provide structure for a team’s activities — is one way successful teams stay focused on goals, improve performance, respond to changes, and communicate with other business units within their organizations. We recently went in-depth on the subject in a virtual conversation between Code Climate Founder and CEO, Bryan Helmkamp, and Juan Pablo Buriticá, SVP of Engineering at Ritchie Bros.

Read on for highlights and check out the on-demand recording here to see the full conversation.

What is an operational cadence anyway?

Juan Pablo kicked the conversation off by summarizing an operational cadence as a collection of meetings, ceremonies, or regular communications that “help create a rhythm for an organization on which to operate, on which to become predictable and do things better.”

Bryan quickly agreed, “I think about operational cadences as really being almost like an operating system… [a cadence is] a set of interlocking practices, some of them might be daily, weekly, monthly, quarterly, annually…together these give the organization the ability to have a shared understanding of what’s happening.” He elaborated that in addition to creating a predictable schedule, operational cadences allow organizations “to have a shared language to talk about, ‘this is what we do each month and this is what that means,’ without having to explain it from scratch every time.”

Fostering alignment, predictability, observability, and autonomy

With the structure and high level benefits of an operational cadence introduced, the conversation turned to the specific value that observing this cadence delivers to engineering teams and their organizations. Bryan zeroed in on one crucial benefit, alignment, and the ripple effect that ensues once alignment is achieved: “With an established cadence, you have the opportunity to increase alignment. And alignment is… a huge factor into how much impact a given effort’s going to have.”

He elaborated on the importance of this alignment, “Parts of a cadence can serve as a way to identify issues earlier. Because if you have an issue and it doesn’t get surfaced until it’s been going on for months, the cost associated with that and the waste associated with that can be quite significant. By having a regular cadence and set of interlocking processes, you get this backstop at a minimum that makes sure that if there’s an alignment issue, if there’s a change in context that requires a change in priorities, that it’s going to get surfaced and be able to be addressed with appropriate visibility and appropriate collaboration and stakeholders.”

Juan Pablo added his own perspective, “I’d distill the value to three things. When I was in startups, the principal value I got was predictability. I didn’t like running a team where the strategy was changing every week. By pushing strategic reviews into a monthly cadence or a little bit of a longer stretch, we got air cover to work, and we also got the leadership group some checkpoints. Next, observability. If it is a larger organization, if it’s grown past 40, 50, 60 engineers, it’s hard to know — and you shouldn’t be looking to know — what everyone is doing, but rather trying to observe the system or monitor the system from outside…And then the last thing is autonomy. If you have predictability and observability, then teams can be autonomous… to not have a ton of oversight, but there’s still this broadcast of information that is flowing.”

Identifying the optimal cadence

As the conversation progressed, Bryan and Juan Pablo transitioned from the abstract to a more detailed discussion of how and when to implement an organizational cadence. One thing was immediately apparent — there is no universal “optimal cadence.” With variables such as size, structure, goals, and more affecting each business differently, teams must identify what works best for their unique situation. Juan Pablo shared his personal preference, “I generally like… having some weekly ceremonies, some biweekly ceremonies, monthly, quarterly, and then either every six months or a year.” He emphasized that this varies by organization however, “depending on your stage, if you’re trying to move really, really quickly, doing quarterly or yearly planning doesn’t really make sense.”

Bryan expanded on Juan Pablo’s assessment, “I 100% agree that there’s no single right answer. There might be a this-is-best-for-us-right-now answer that everybody can work towards, and it’s a moving target.” He then encouraged people to think about their own organizational rhythms that may be on a longer timeline than a weekly sprint, suggesting that teams supplement their sprints with sessions on a monthly, or even quarterly basis, which “can be very helpful both in terms of the planning side of things, to give people more information about what to expect… and also for the standpoint of that continuous improvement process.”

Bryan then compared the benefits of a strategic organizational cadence against the commonly-used retrospective, “I think retrospectives are fantastic tools, but they tend to gravitate towards small incremental improvements.” He then clarified, “They don’t naturally serve as well to press the really big questions… when you have 45 minutes to try to make some improvements for the next week. And I think what we’ve seen is there’s a benefit to being able to ask bigger questions and to think about bigger time horizons as a supplement to whatever’s working well for you at the tactical execution level.”

Borrowing cross-functional best practices

As the conversation touched on the strategies and practices Juan Pablo and Bryan’s own organizations employ, Bryan relayed several things that he has picked up from watching non-engineering departments within Code Climate. In particular, his sales team caught his attention, “Some of the things that they do that I think have a really interesting opportunity for helping on the engineering side as well, are things like internal QBRs, or Quarterly Business Reviews. They’re not really running team-level weekly retrospectives, but each quarter they pull out, ‘What were all of our successes and opportunities for improvement that we learned over the past quarter? What are our key focus areas going forward for next quarter?’ Maybe there are new ways we’re going to do things or there’s new content that we’re going to introduce into our sales process, and that’s at the quarterly level.”

Juan Pablo responded to Bryan’s point about QBRs with a practice he has put into place, “The QBR is a written exercise. So all engineering groups report on their business metrics because that helps engineers understand what levers they have… it starts giving insight to other people about how things are working, how they’re being impactful or not, and how to be a little bit more business-oriented.”

Bryan tied their ideas on the topic together, “Two elements of that that I think are really powerful: One is that shift from verbal communication or informal communication to written and structured communication. And that’s something that as organizations get larger, I think becomes more and more important. And you just hit this tipping point where if it’s not written down, then this is not going to work.”

He continued on, “But with respect to sort of data and metrics, part of what I’m hearing in there is that there’s advantage to using regular operating cadences as an opportunity to push information out to the collaborators and to those other stakeholders who would benefit from having that same understanding of what’s going on. And I think that that’s an area where every department can always be improving, but engineering in some ways has been a little bit further behind than some of the other functional areas in organizations.”

Metrics as a shared language

With the conversation pivoting to the idea of using metrics as a shared language to ensure cross-functional alignment, Juan Pablo relayed a fitting anecdote. When a former boss approached him to ask why their team was operating slowly, he was initially unable to answer the question. However, after a few months of digging into the metrics, “I was able to talk a lot more about Cycle Time and throughput, and not only talk about it, but visualize it. I started… to understand how to distill all of that to the board of directors who shouldn’t really know that much about capabilities or many of the underlying reasons for these metrics, but every board meeting I could show, ‘This is how far we are. Here’s an industry comparison, what a high performing engineering organization is… how far we are, and the impact we’ve had.’”

With Juan Pablo’s board and team aligned on metrics and strategy, the results followed shortly after, “Two or three quarters after we had started our acceleration strategy, you could clearly see the reduction of Cycle Time… In the period of 18 months, we reduced it by 10 times because we had visibility, and I could explain it and continue to track it with the executive team and with the board, but also I was able to give concrete direction to the engineering group.”

Balancing proactive planning with reactive needs  

In a perfect world, organizations would identify their optimal cadence, align business units and goals based on universally understood metrics, and proactively address anything looming on the horizon. Unfortunately, real life gets more complicated and situations arise that teams are forced to address reactively. Juan Pablo discussed how he manages this reality, “I’ve learned that once I’ve moved to a certain level of direction, I can’t be prescriptive on how we achieve our goals. Where I need to focus is on drawing the line and ensuring that our product strategy and our business strategy is solid… Product engineering needs to find ways to achieve those outcomes. Because then the agility is really only on how they are getting it.”

Bryan distilled his thoughts on the balance succinctly, “There’s a value in being proactively reactive.” He elaborated with an example, “I’m thinking about how there’s this tension between, for example, roadmap feature work and things that might come up from incidents, escalations, or customer requests… I think that’s the first piece, to plan for the fact that some of the work is going to need to be reactive and you’re not going to know what it is until it comes along, but you know that something is going to come along.”  

Implementing and optimizing an organizational cadence

To close out the conversation, Bryan and Juan Pablo turned to the practical matter of who should be responsible for deciding upon and implementing an organization’s cadence, and how to do so. Juan Pablo laid out his perspective that while cadence should be coordinated with the executive group to ensure company-wide alignment, it “should be sponsored by the leaders who have ownership over it. I think engineering managers can only get as far as their own group, some of their peers groups, product, or other functions that they work in, but they’re going to have zero influence on strategic executive planning or other things.”

Bryan added, “I would say don’t make perfect the enemy of good. Get started with understanding that you’re going to iterate the cadence itself. Everything’s going to be iterated. And I agree 100% with what Juan Pablo said, that leaders do need to lead, and this is an area where leadership is really important.”

Using data to drive engineering success beyond the sprint

Successful engineering teams can leverage data in tandem with an organizational cadence to stay aligned and perform at their highest level. To learn more, request a consultation.
