
Using DORA Metrics: What is Change Failure Rate and Why Does it Matter?

An increasingly common starting point for leaders is the four DORA metrics — Deployment Frequency, Mean Lead Time for Changes, Mean Time to Recovery, and Change Failure Rate — key engineering metrics established by the DevOps Research and Assessment group. DORA metrics fall into two categories: incident metrics and deploy metrics. These metrics track critical markers of performance, and help software organizations balance the tradeoff between speed and stability in software delivery.
Mar 23, 2023
7 min read

The four DORA Metrics — Deployment Frequency, Change Failure Rate, Mean Time to Recovery, and Mean Lead Time for Changes — were identified by the DevOps Research and Assessment group as the metrics most strongly correlated to a software organization’s performance.

These metrics are a critical starting point for engineering leaders looking to improve or scale DevOps processes in their organizations. DORA metrics measure incidents and deployments, which can help you balance speed and stability. When viewed in isolation, however, they only tell part of the story about your engineering practices.

To begin to identify how to make the highest-impact adjustments, we recommend viewing these DORA metrics in tandem with their non-DORA counterparts, which can be done through Velocity’s Analytics module. These pairings are a great starting point if you’re looking for opportunities to make improvements, and they can also highlight teams that are doing well and may have best practices worth scaling across the organization.

While there is no one-size-fits-all solution to optimizing your DevOps processes, certain pairings of metrics are logical places to start.

DORA Metric: Change Failure Rate

Velocity Metric: Unreviewed Pull Requests

Change Failure Rate is the percentage of deployments causing a failure in production, while Unreviewed Pull Requests (PRs) refers to the percentage of PRs merged without review (either comments or approval).
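
Both metrics are simple ratios, so if you want to sanity-check a dashboard, they’re easy to compute yourself. Here is a minimal Python sketch; the Deployment and PullRequest record shapes are hypothetical stand-ins for whatever your deploy tracker and VCS actually export.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    caused_failure: bool  # hypothetical flag: did this deploy trigger a production failure?

@dataclass
class PullRequest:
    merged: bool
    comment_count: int
    approved: bool

def change_failure_rate(deployments: list[Deployment]) -> float:
    """Percentage of deployments that caused a failure in production."""
    if not deployments:
        return 0.0
    failures = sum(d.caused_failure for d in deployments)
    return 100 * failures / len(deployments)

def unreviewed_pr_rate(prs: list[PullRequest]) -> float:
    """Percentage of merged PRs with neither comments nor an approval."""
    merged = [pr for pr in prs if pr.merged]
    if not merged:
        return 0.0
    unreviewed = sum(1 for pr in merged if pr.comment_count == 0 and not pr.approved)
    return 100 * unreviewed / len(merged)
```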

How can you identify the possible causes of high rates of failures in production? One area to investigate is Unreviewed PRs. Code review is the last line of defense to prevent mistakes from making it into production. When PRs are merged without comments or approval, you’re at a higher risk of introducing errors into the codebase.

In Velocity’s Analytics module, choose Unreviewed PRs and Change Failure Rate to see the relationship between the two metrics. If you notice a high Change Failure Rate correlates to a high percentage of Unreviewed PRs, you have a basis for adjusting processes to prevent Unreviewed PRs from being merged.

Engineering leaders may start by coaching teams on the importance of code review so that they make it a priority, and, if necessary, setting up a process that assigns reviews or otherwise makes them more automatic. If you’re using Velocity, you can note the date of this change right in Velocity in order to observe its impact over time. You can take this data to your team to celebrate successes and motivate further improvements.

For reference, according to the 2022 State of DevOps report, high-performing teams typically maintain a CFR between 0% and 15%.

DORA Metric: Deployment Frequency

Velocity Metric: PR Size

Deployment Frequency measures how frequently the engineering team is successfully deploying code to production, and PR Size is the number of lines of code added, changed, or removed.
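
As a rough sketch (Velocity’s exact definitions may differ), PR Size can be approximated from diff stats, and Deployment Frequency is a count of successful deploys over a time window. The function names and thresholds below are illustrative, not Velocity’s API.

```python
from datetime import date

def pr_size(lines_added: int, lines_removed: int) -> int:
    # A changed line typically shows up in a diff as one removal plus one
    # addition, so added + removed approximates "added, changed, or removed."
    return lines_added + lines_removed

def deployment_frequency(deploy_dates: list[date], start: date, end: date) -> float:
    """Average successful production deploys per day over [start, end]."""
    days = max((end - start).days, 1)
    return sum(start <= d <= end for d in deploy_dates) / days
```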

Our research shows that smaller PRs pass more quickly through the development pipeline, which means that teams with smaller PRs are likely to deploy more frequently. If you’re looking to increase Deployment Frequency, PR size is a good place to start your investigation.

If you view these two metrics in tandem and notice a correlation, i.e. that a larger PR Size correlates to a lower Deployment Frequency, encourage your team to break units of work into smaller chunks.

While this may not be the definitive solution for improving Deployment Frequency in all situations, it is the first place you might want to look. It’s important to note this change and observe its impact over time. If Deployment Frequency is still trending low, you can look at other metrics to see what is causing a slowdown. Within Velocity’s Analytics module, you also have the ability to drill down into each deploy to investigate further.

DORA Metric: Mean Time to Recovery

Velocity Metric(s): Revert Rate or Defect Rate

Mean Time to Recovery (also referred to as Time to Restore Service) measures how long it takes an engineering team to restore service by recovering from an incident or defect that impacts customers.
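
Arithmetically, MTTR is just the mean of each incident’s restore duration. A minimal sketch, assuming your incident tracker can export (started_at, resolved_at) timestamp pairs:

```python
from datetime import datetime, timedelta

def mean_time_to_recovery(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean duration from incident start to service restored."""
    if not incidents:
        return timedelta(0)
    durations = [resolved - started for started, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)
```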

Debugging could account for a significant amount of the engineering team’s time. Figuring out specifically which areas in the codebase take the longest time to recover could help improve your MTTR.

In Analytics, you can view MTTR and Revert Rate or Defect Rate by Application or Team. Revert Rate is the total percentage of PRs that are “reverts” — changes that made it through the software development process before being reversed — which can be disruptive to production. These reverts could represent defects or wasted efforts (undesirable changes). Defect Rate represents the percentage of merged pull requests that are addressing defects.

By viewing these metrics side by side in the module, you can see which parts of the codebase have the most defects or reverts, and if those correlate to long MTTRs (low-performing teams experience an MTTR of between one week and one month).

If you notice a correlation, you can drill down into each revert, speak to the team, and see whether the issue is a defect or an undesirable change. To prevent defects in the future, consider implementing automated testing and/or code review. To prevent wasted efforts, the solution may lie further upstream. This can be improved by focusing on communication and planning from the top down.

DORA Metric: Mean Lead Time for Changes

Velocity Metric: Cycle Time

Mean Lead Time for Changes is the time it takes from when code is committed to when that code is successfully running in production, while Cycle Time is the time from when a commit is authored to when the PR is merged. Both are speed metrics, and can offer insight into the efficiency of your engineering processes.
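
Both are time deltas you can compute from timestamps you likely already have. A sketch, under the assumption that you can export commit, merge, and deploy times from your tooling:

```python
from datetime import datetime, timedelta

def mean_lead_time_for_changes(changes: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean of (deployed_at - committed_at) across shipped changes."""
    if not changes:
        return timedelta(0)
    deltas = [deployed - committed for committed, deployed in changes]
    return sum(deltas, timedelta()) / len(deltas)

def cycle_time(first_commit_at: datetime, merged_at: datetime) -> timedelta:
    """Time from a PR's first commit being authored to the PR being merged."""
    return merged_at - first_commit_at
```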

Low-performing teams have an MLTC between one and six months, while high-performing teams can go from code committed to code running in production in between one day and one week.

If your team is on the lower-performing scale for MLTC, it could indicate that your Cycle Time is too high or that you have issues in QA and testing. View these metrics in tandem in Velocity in order to check your assumptions. If your Cycle Time is high, you can dig deeper into that metric by investigating corresponding metrics, like Time to Open, Time to Merge, and Time to First Review.

Conversely, if your Cycle Time is satisfactory, the problem could lie with deployments. You should investigate whether there are bottlenecks in the QA process, or with your Deploy Frequency. If your organization only deploys every few weeks, for example, your team’s PRs could be merged but are not being deployed for a long time.

The power of DORA metrics in Analytics

DORA metrics are outcome-based metrics which help engineering teams identify areas for improvement, yet no single metric can tell the whole story of a team’s performance. It’s important to view DORA metrics with engineering metrics to gain actionable insights about your DevOps processes.

To learn more about using DORA metrics in Velocity, talk to a product specialist.

Recent headlines might lead one to conclude that it’s more difficult than ever to build a high-performing team. Hiring is increasingly competitive, salaries are on the rise, and a growing number of people are choosing to switch jobs or exit the workforce entirely. But building a stellar team is about more than just recruiting great talent — it’s about investing in the talent you have. And that’s not investment in the financial sense (though salaries and benefits are important!), it’s a commitment to coaching and upskilling your existing team.

Focusing on professional development is a win-win. Helping developers excel will boost team performance. Ensuring that developers feel both challenged and supported will increase their job satisfaction and make them more likely to stick around.

How Can You Help Developers Upskill?

Of course, helping engineers level up their skills is a multi-layered process. Time and money set aside for learning is important, but it’s not enough. As a leader, there are things you can do to create a culture where positive feedback is welcomed, missteps are seen as learning opportunities, and developers feel comfortable openly discussing their professional goals. Once the cultural foundation is set, you can make adjustments to incorporate coaching into your team’s processes and help ensure that it remains a priority.

Culture Is Key to Leveling Up

Psychological safety is a prerequisite to the success of any coaching or professional development initiatives. In order for developers to have honest conversations about their career goals, or to be comfortable receiving feedback, they must trust that they will not be penalized for aspirations that are out of alignment with current responsibilities.

Though psychological safety is essential, it is just a baseline. An organization looking to prioritize professional development may also benefit from adopting elements of Continuous Improvement. In Continuous Improvement, every member of a team is on the lookout for opportunities to make incremental improvements. The underlying belief is that even small changes to processes, products, and more can have a big impact.

At the individual level, of course, you wouldn’t engage every team member in a conversation about one engineer’s professional development. The critical takeaway from Continuous Improvement is that improving should not be a top-down process. When it comes to coaching, it’s important to empower individuals with an active role in their professional development. They can actively contribute by identifying areas of incremental improvement, making plans for their own development, and setting and tracking progress toward goals. When they are involved in making plans, they’ll be more likely to see them through. As they realize the value of making small, positive changes, they’ll be motivated to keep learning.

Create Regular Touchpoints

At the process level, effective upskilling requires consistent check-ins and conversations. Regular 1:1s are a great place to surface opportunities for upskilling and to evaluate progress toward goals. Come prepared with observations and discussion points, and encourage your team members to do the same. Give them the chance to raise their questions and concerns first, so you can get a more complete understanding of which blockers are impacting them most, and what skills they’d most like to improve. Make their goals a priority whenever possible, and seek out opportunities to challenge team members to envision how their goals align with business priorities.

These touchpoints will be most effective when a baseline of safety has already been established, though it’s still important to be proactive about reinforcing trust during 1:1s. Practicing vulnerability can help establish the right tone. You may also want to remind team members that 1:1s are not meant for work-related status updates, but for bigger picture conversations about their role, skills, and aspirations.

Leverage Data To Coach More Effectively

Leaders can supplement qualitative conversations with Engineering Intelligence data from a platform like Code Climate. With the help of objective data, it’s possible to cut through biases, check assumptions, and more accurately assess how a particular developer is working.

For example, you may observe that a particular team member rarely contributes in meetings, and only speaks when spoken to. You may conclude that this team member is not engaged or invested in their work, or that they don’t value collaboration. Engineering data can help you test that hypothesis. You might find that this same team member is an active participant in Code Reviews, frequently leaving thorough, impactful feedback for their peers. Where you once might have encouraged this team member to be more collaborative, you can now offer more specific feedback around participating in meetings. Alternatively, you may decide to accept their participation in reviews as evidence of their commitment to teamwork, and instead, work with them on another area of growth.

You can also use engineering data to identify specific units of work that may present learning opportunities. For example, if you notice that a developer has an abnormally large or long-running PR, you can have a conversation about the circumstances that are drawing things out. This allows you to surface potential anti-patterns or areas of weakness that may benefit from coaching. You may learn that the developer is having an issue with that particular area of the codebase, or you may find that they would benefit from coaching around coding hygiene.

It’s important to remember that metrics are not diagnostic, and quantitative data must always be placed in context. Different projects will naturally progress at different speeds, and non-code-related factors can impact the data. One engineer may appear to be struggling when in reality, they’re simply working through a tricky problem. Another engineer may be adding value through glue work that isn’t as recognizable as shipped code. If you’re gathering relevant context and having open, honest conversations with your team, you’ll be able to determine whether a concerning data point has a reasonable explanation, is an anomaly, or indicates something that needs to be addressed.

Data can do more than help you surface potential areas for improvement. It can help you make those improvements a reality. Goals are more effective when paired with objective data. Metrics make it possible to set and track progress toward specific, actionable targets, which will set your team members up for success. You and your team members will be able to align on exactly what they’re working toward and see how effectively they’re getting there. If progress seems to stall, you can check in and re-evaluate your tactics — or the goal itself.

Upskilling Is Key to Building a High-Performance Team

Coaching and professional development take time, but they’re critical to driving success and retaining your top performers. It’s not enough to simply hire talented people, as even the most skilled developers will be looking for opportunities to keep growing. With a mixture of cultural and process-level adjustments, you can help create an environment that encourages development while still advancing business priorities.

To find out how to leverage data from a Software Engineering Intelligence (SEI) platform to upskill team members and boost retention, request a consultation.

It’s no secret that performance reviews are flawed. Not only is occasional feedback unlikely to effect meaningful growth, but the feedback itself can also be suspect — studies indicate that numerical ratings reveal more about the rater than the person being reviewed, while open-ended evaluations are subject to a host of performance review biases.

Despite this research, most companies still rely on some form of performance review. They’re not ready to pivot from the idea of using reviews to promote professional development, and many employees aren’t either. As an engineering leader, you may not be able to overhaul your company’s review process, but you can still take steps to minimize some of its flaws. Engineering data found in your VCS and project management tools can help by acting as a check against anecdotal evidence and gut feel, helping you combat some common performance review biases.

Common Performance Review Biases

Biases may be evident in nearly every aspect of your day-to-day, but the open-ended format of most performance review frameworks is particularly vulnerable to some common biases. If you’re aware of their existence, you can take steps to counteract them.

Recency Bias

When reviews happen infrequently, the period of time right around review season is freshest in the reviewer’s mind and tends to be given the most weight.

What you can do: A skim of the Issues a developer worked on over the past year can be an important reminder of all they contributed, and a great way to refresh your memory. In addition, a deep dive into specific engineering metrics can help you distinguish longstanding patterns from recent anomalies. For example, you may have noticed that a developer is prioritizing their own work and putting off reviewing teammates’ Pull Requests. By looking at trends in a metric like Review Speed, you can determine whether or not that’s a new development, so you can calibrate your conversation accordingly.

Halo/Horns Effect

The Halo/Horns Effect occurs when a manager lets one trait — good or bad — skew their entire impression of an individual.

What you can do: There may be an engineer on your team who rarely speaks during meetings, and only participates when directly spoken to. You could take that as evidence that they’re generally disengaged at work, but data might reveal otherwise. If that same engineer has a consistently high PR Throughput and frequently participates in Code Review, it’s more likely that they just don’t like speaking up in group settings. With this information, you can offer specific feedback about their participation in meetings, rather than general (and inaccurate) feedback about their overall level of engagement, or you can adapt your expectations to match their work style.

Gender Bias

Studies show that reviewers tend to focus more on the personality traits of women and female-presenting individuals than on their skills or accomplishments.

What you can do: Make a conscious effort to focus on an individual’s work, and be sure to check any of your assumptions against objective data. For example, you can look at a developer’s history of commits, pushes, and review activity to confirm whether their work is in line with your expectations for a developer in their role. You might expect a more senior developer to spend less time committing new code and more time giving feedback to their teammates, but your expectations may be the opposite for a more junior team member.

Similarity Bias

This is the tendency of a manager to look more favorably on team members who remind them of themselves, perhaps due to a particular personality trait, a shared work style, or an aspect of their training or background.

What you can do: You may feel a particular kinship with a developer and assume that they’re achieving at the level you would in their role, but a look at the data — whether it’s their PR Throughput or a review of their contributions to Code Reviews —  can help you regain perspective and ground your assessment in the reality of their work.

Using Data to Check Your Performance Review Biases

Data is not only a valuable tool for dismantling performance review biases, it can help you deliver specific, actionable feedback and collaborate with developers to set clear goals for the future. As with any tool, however, it must be used carefully.

Quantitative data should always be contextualized with qualitative data, and it’s important to resist the urge to compare or rank the members of your team. Developers coding new features will naturally work at a different pace than those working through technical debt, and team leads will likely be focused more on coaching or project management than committing code. You’ll need to pair data with an understanding of the circumstances surrounding each team member’s work in order to have a complete picture of their overall performance.

If you’re interested in learning more about incorporating data into your performance reviews, request a consultation.

Effective engineering leaders today deftly balance the needs of their organization with the needs of their developers. Tasked with making strategic decisions, coaching their teams, and driving process improvements to meet business objectives and key results, leaders are often distanced from the actual work of writing code. As such, leaders must empower their team members to excel, and engineering intelligence data can help in a number of ways.  

Find out how data can help you cultivate an engineering environment that drives success in our new ebook, The Engineering Leader’s Guide to Empowering Excellence with Data.

Focusing on four key areas—removing blockers, minimizing micromanagement, personalizing coaching, and fostering a culture of psychological safety—our ebook will help you gain actionable insights from data, rather than gut feelings, to achieve a developer-focused work environment.  

With the right data, you can determine when to step in to lend support, and when to step back and encourage autonomy, so you can empower your team members to go above and beyond. Used thoughtfully, data can help you build stronger, more successful teams and drive continuous improvement.

An empowered team is a successful team. Download our ebook (for free!) today.

The final principle in the Agile manifesto urges developers to reflect on the past and use that knowledge to improve future outcomes. Since we learn from the past, holding sprint retrospectives is key to improving the results of future iterations. Conducted well, sprint retrospectives can boost outputs and propel teams forward; conducted poorly, they may breed toxicity. The careful use of objective data can help you steer your retro in the right direction — read on to find out how to leverage data from the beginning to the end of the retrospective process, so you can maximize the value of this key opportunity for continuous improvement.

Preparing for Your Sprint Retrospective  

Practices vary by organization, but sprint retrospectives may be facilitated by anyone familiar with the sprint, from a developer on the team to a stakeholder from another department. If you find yourself in the facilitator role, it’s crucial that you build a strong foundation for your retro by performing an audit to collect data in advance.

Look back on the lifetime of the sprint, and ask yourself questions like:

  1. What was finished?
  2. Did we deliver all the items we intended to? If not, why?  
  3. What specific units of work didn’t get shipped?
  4. What bottlenecks or blockers arose, and are they part of a pattern?

The answers will help you identify patterns and problem areas and formulate meaningful conversation points to guide the retrospective.

For example, if your sprint finished with a lot of unshipped work, you’ll want to know that in advance, so you can dig into the reasons during the retrospective. Look for unassigned tickets, which may indicate that some units of work were not prioritized correctly or that tickets were lost or overlooked unintentionally — though you’ll need to bring these tickets up at the retro to know for sure.

You’ll also want to look at the Issues from the iteration that are still categorized as In Progress, and see how many days they’ve been open. You can dig deeper by looking at the Pull Requests (PRs) associated with that Issue, and taking a look at relevant activity and comments for each. This can help you formulate a hypothesis as to why a unit of work was unshipped. For example, Issues with many PRs may indicate that work was not batched efficiently, while PRs with high levels of Rework may signal that an engineer was struggling with a difficult area of the codebase, or unclear technical direction. You can further investigate that hypothesis during your retro by discussing particular units of work to gain additional context and information.

While you can piece together this information from your VCS and project management tools, gaining a holistic view can be tedious as this data is typically dispersed. A Software Engineering Intelligence solution, like Code Climate, can save time and add a layer of valuable insights by aggregating that data in a series of customizable dashboards and rich visualizations.

Prioritize Your Sprint Retrospective

Typically, retrospectives last approximately 30 minutes for each week of the sprint, so if your sprint was three weeks long, you may want to carve out an hour and a half for your retro. Keep this time frame in mind to help you prioritize speaking points and focus on conversation topics that will keep your team engaged and on task.

Once you have compiled a list of topics, see if you discover any common themes and group them together. It may be helpful to get the perspective of your team members when you reach this point. Here at Code Climate, our facilitators ask the team to vote on which items should be talked through first to ensure engagement and alignment.

Open the Floor to Collaboration

In order to have a productive retrospective — one that surfaces meaningful opportunities for improvement — the team must feel safe talking through any missteps. The purpose of a retrospective is to measure processes, not individuals, so it’s important to remind your team to focus on the work, and not on the people behind it. As you set the stage for your retro, keep in mind that the data you gathered during preparation is to be used purely as an empowerment tool. When used appropriately, data can keep the conversation grounded in facts and tackle negative biases, allowing you and your team to have genuine conversations about things that could have been done better without making developers feel singled out.

Discuss

Now, on to the discussion. Based on the topics you prioritized, you can split the discussion portion of your sprint retrospective into easily digestible parts. Though the format can vary based on team and personal preference, many teams focus on three categories using a “Start, Stop, Continue” exercise, which asks developers to provide feedback on the following:

  • Start: Actions we should start taking
  • Stop: Actions we should stop or do away with
  • Continue: Actions we should continue and codify

It can be helpful to use a visual aid to facilitate this exercise and keep the conversation on track. For in-person teams, that might mean distributing sticky notes that can be written on and affixed to a board; for remote teams, that might mean using a collaborative online platform like Trello. Take time to talk through each part, and…

Develop an Action Plan

By the end of the sprint retrospective, you and your team should have several actionable ideas to put into practice to help the next iteration go more smoothly. While these ideas are qualitative in nature, their impact can be measured against the quantitative data (such as PR Size) they are meant to improve during the next sprint, enabling you to refine your software development strategies over time.

Standardize Sprint Retrospectives and Keep Improving

Best practices are best utilized when reinforced. Each new retro you hold keeps you on the path of continuous improvement.  

While there is no golden rule as to how retros should be structured and held, some form of review is vital to achieving continuous improvement. By incorporating data into your retros, you can maximize the value of your discussions and help build a highly capable team that can successfully drive business goals.

Every day, engineering leaders ask their team members the same three standup questions to ensure the software development engine hasn’t come to a complete stop. It’s excruciatingly boring to be asked the same questions day in and day out, and it’s not a productive use of engineers’ time. These questions — the “are things moving?” questions — can be answered with the help of a Software Engineering Intelligence (SEI) solution, like Code Climate, freeing up valuable standup time for an even more impactful set of questions, the “how can we move faster?” questions. These are the questions that dig deeper into a team’s processes and practices, helping leaders identify opportunities for improvement and drive excellence on their teams.

In this two-post series, I’ll explain how data can help you answer the classic standup questions, then walk through the “next three questions” that every engineering leader should be asking to level up their standups — and level up their team.

First, the classic questions:

  1. What did you do yesterday?
  2. What will you do today?
  3. What (if anything) is blocking your progress?

Let’s dig in.

Standup questions one and two: What did you do yesterday? What will you do today?

These questions are meant to help you understand what progress your team has made so far, so you can assess your team’s output within the context of your current sprint or cycle. In comparing what’s been done so far to what’s planned, you can get a sense of the sprint’s status.

Rather than ask your developers for a rundown of yesterday’s completed tasks and today’s to-do list, let the data speak for itself. You could do this by running a Git diff for every branch, or you could let Code Climate's SEI platform do the work, drawing connections between Commits and Issues and providing an accurate view of what your team is working on. This can also help you check that the team is prioritizing appropriately, and working on the most impactful Issues. You can also review the Issues that have yet to be started to determine whether the team is saving the trickier work for last. This isn’t necessarily a problem, but it could indicate that a slowdown is to come, and may be worth discussing during standup.

Of course, data won’t tell you for sure whether your team will hit its deadline, but it will give you a big picture view of how your sprint is progressing and where the team might need your help.

Standup question three: What (if anything) is blocking your progress?

Once you have a high-level understanding of the progress of your current iteration, you’ll want to know if there’s anything that might throw it off track. Developers might be hesitant to call out blockers, preferring instead to solve problems themselves, or they may lack the context to foresee potential slowdowns. Data addresses both of those problems, though the relevant information has historically been harder to find, as it’s spread throughout your VCS and project management tools. This is where a Software Engineering Intelligence (SEI) solution is particularly valuable — Code Climate can even send you a customized alert when a particular unit of work is stuck or at risk.

Work that is stuck may have gone a few days without a Commit, or may be marked “In Progress” in Jira for an atypical length of time. Work that is at risk is work that is not moving through the software development pipeline as expected, and which may become a bottleneck if not addressed. Though the signs of risk vary from team to team, Pull Requests with high levels of Rework, or many comments or Review Cycles are worth further investigation, as are PRs that haven’t been picked up for review in a timely fashion.
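
Those heuristics are straightforward to express in code. Here is a sketch of a stuck/at-risk check; the thresholds are made up for illustration, since real thresholds vary from team to team (which is why tools like Code Climate let you customize them).

```python
from datetime import datetime, timedelta

# Illustrative thresholds only -- tune these per team.
STALE_COMMIT_AFTER = timedelta(days=3)
MAX_IN_PROGRESS = timedelta(days=7)
MAX_REVIEW_CYCLES = 4
MAX_REVIEW_PICKUP = timedelta(days=2)

def is_stuck(last_commit_at: datetime, in_progress_since: datetime, now: datetime) -> bool:
    """Stuck: no recent Commit, or marked In Progress for an atypical length of time."""
    return (now - last_commit_at > STALE_COMMIT_AFTER
            or now - in_progress_since > MAX_IN_PROGRESS)

def is_at_risk(review_cycles: int, opened_at: datetime,
               first_review_at: datetime | None, now: datetime) -> bool:
    """At risk: many Review Cycles, or no one has picked up the review in time."""
    waiting = (now - opened_at) if first_review_at is None else timedelta(0)
    return review_cycles > MAX_REVIEW_CYCLES or waiting > MAX_REVIEW_PICKUP
```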

Data can also help you identify possible blockers in developers’ collaboration patterns. Every engineer will have their own opinion of how the team is working, and though it’s important to understand how engineers are feeling about the current state of collaboration on the team, it’s critical to check any hidden biases with objective data.

Start by looking at the distribution of work across the team. Is there someone who is overloaded, or someone who isn’t getting work? Code Climate makes this easy with a Work in Progress metric that tells you how much each engineer is working on at a given time. Then, to get a sense of the team’s broad collaboration patterns, it can be helpful to determine how many engineers are working on the same Issue or Pull Request. This way, you can be aware of a possible “too many cooks” situation, or dependencies that may impact delivery.
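
Given an export of active assignments, both checks reduce to a couple of counters. A sketch, where the assignment rows, roster, and thresholds are all hypothetical:

```python
from collections import Counter

# Hypothetical export: one (engineer, issue_id) row per active assignment.
assignments = [
    ("ana", "ISS-1"), ("ben", "ISS-1"), ("cam", "ISS-1"), ("dee", "ISS-1"),
    ("ana", "ISS-2"), ("ana", "ISS-3"),
]
roster = {"ana", "ben", "cam", "dee", "eli"}

wip_per_engineer = Counter(engineer for engineer, _ in assignments)
engineers_per_issue = Counter(issue for _, issue in assignments)

overloaded = [e for e, n in wip_per_engineer.items() if n >= 3]   # too much WIP
idle = roster - set(wip_per_engineer)                             # no active work at all
crowded = [i for i, n in engineers_per_issue.items() if n >= 4]   # possible "too many cooks"
```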

So, what standup questions should you be asking instead?

When you enter standup with this information, you can skip past the classic three standup questions in favor of three more impactful questions:

  • How can I help remove distractions?
  • How can the team help resolve that risk?
  • Are we working on the right things? (Look at what people are working on, and blend that with your own context.)

The answers to these questions will help you move beyond a short-term focus on getting work done and help you get to the next level, where you’re focused on helping your team excel. Find out how in my next post.

This is the first post in a two-part series. Read the second post here.

A good leader can respond to issues and course correct in the moment, but a great leader is proactive about staying ahead of the curve. With the help of data provided by Engineering Intelligence tools, an Engineering Manager can gain visibility into the development pipeline and stay equipped with the knowledge needed to cut problems off at the pass. No matter where an EM is in their professional lifecycle, being proactive helps them prioritize more successfully, strengthen coaching strategies, and boost team effectiveness in the short, mid, and long term.

Short Term Strategy: Spot Risk and Prevent Blockages

A lot of dedication goes into keeping sprints on track and software delivery on schedule. With so many moving parts in the development pipeline, an EM may find it tough to determine what needs their attention first, making it challenging to triage risks. However, by using a Software Engineering Intelligence platform — like Code Climate —  that conveys insights based on aggregated data, an EM can swiftly analyze coding progress and prioritize their task list to focus on what’s most important.  

For example, an EM can assess PR and Commit data to surface at-risk work — work that could benefit from their time and attention. If a Commit has had several days of inactivity and work on the associated PR has seemingly halted, it may indicate that the scope of the task is not clear to ICs, and that they require guidance. Or it may be a sign of task-switching, where too much Work In Progress pulls an IC’s focus and makes it difficult to complete work.

This is where data from a Software Engineering Intelligence platform is critical, as it can signal to a manager that a specific PR or Issue needs attention. Code Climate enables EMs to set Risk Alerts for PRs based on custom criteria, since risk thresholds vary from team to team. From there, the EM can use that information in standups, retros, and other conversations with ICs to help identify the root cause of the blocker and provide coaching where needed.

Mid-term Strategy: Improve Collaboration

As a proactive leader, an EM must understand the nuances of collaboration between all parties to ensure ICs and teams are working together effectively and prioritizing issues that are aligned with company goals. If teams fail to work cohesively, roadmaps may be thrown off course, deadlines may be missed, and knowledge silos may be created. Using their Engineering Intelligence tools, an EM can easily surface the quantitative data needed to gain visibility into team collaboration and interdepartmental alignment.  

When it comes to collaboration on their team, an EM might want to look at review metrics, like Review Count. Viewing the number of PRs reviewed by each contributor helps an EM understand how evenly reviews are distributed amongst their team. Using these insights, a manager can see which contributors are carrying out the most reviews, and redistribute if the burden for some is too high. Doing so will not only help keep work in balance, but the redistribution will expose ICs to different parts of the codebase and help prevent knowledge silos.

To look at collaboration between teams, an EM can rely on quantitative data to help surface signs of misalignment. Looking at coding activity in context with information from Jira can help an EM identify PRs that signal a lack of prioritization, such as untraceable or unplanned PRs. Since these PRs are not linked back to initial project plans, it may indicate possible misalignment.  

Long Term Strategy: Support Professional Growth and Improve Team Health

A proactive EM also needs to identify struggling ICs, while simultaneously keeping high performers engaged and challenged to prevent boredom. This starts with understanding where each individual IC excels, where they want to go, and where they need to improve.

Using quantitative and qualitative data, an EM can gain a clearer understanding of what keeps each IC engaged, surface coaching opportunities, and improve collective team health. Quantitative data on each IC’s coding history — Commits, Pushes, Rework, Review Speed — can help signal where an IC performs well and surface areas where it might be useful for an EM to provide coaching. An EM can then use qualitative data from 1:1s and retros to contextualize their observations, ask questions about particular units of work, or discuss recurring patterns.

For example, if an EM notes high levels of Rework, this signals an opportunity to open up a meaningful discussion with the IC to surface areas of confusion and help provide clarity. Or, an EM might see that an IC has infrequent code pushes and can coach the IC on good coding hygiene by helping them break work down into smaller, more manageable pieces that can be pushed more frequently.

Using a combination of both data sets, a manager can initiate valuable dialogue and create a professional development roadmap for each IC that will nurture engagement and minimize frustration.

Proactivity as an EM – The Long and Short of It

Proactivity is a skill that can be developed over time and enhanced with the help of data. Once empowered with the proper insights, EMs can more effectively monitor the health of their sprints, meet software delivery deadlines, keep engineers happy, and feel confident that they are well-informed and can make a marked, positive impact.

Developers who are struggling to keep up, as well as those developers who are excelling but no longer growing, are likely to be unsatisfied in their current roles. But engineering leaders, especially managers of managers or those who are not involved in the day-to-day writing of code, may not have insight into who is in need of support and who is craving their next challenge.

With the right combination of quantitative and qualitative data, you’ll be able to spot opportunities to coach developers of all levels. You’ll also have an easier time setting and tracking progress towards concrete targets, which will empower your team members to reach their goals.

Start by Gathering Qualitative Engineering Data

Use your 1 on 1 time to initiate conversations with each of your team members about where they think their strengths and weaknesses lie, and what they’d like to improve. You may want to give your team members some time to prepare for these conversations, as it can be hard to make this sort of assessment on the spot.

Pay extra attention in standups and retros, keeping an eye out for any patterns that might be relevant, like a developer who frequently gets stuck on the same kind of problem or tends to surface similar issues — these could represent valuable coaching opportunities. It can also be helpful to look for alignment between individual goals and team objectives, as this will make it easier to narrow your focus and help drive progress on multiple levels.

Dig Into Quantitative Engineering Data

Next, you’ll want to take a closer look at quantitative data. A Software Engineering Intelligence platform like Code Climate can pull information from your existing engineering tools and turn it into actionable reports and visualizations, giving you greater insight into where your developers are excelling, and where they might be struggling. Use this information to confirm your team member’s assessment of their strengths and weaknesses, as well as your own observations.

You may find that an engineer who feels like a slow coder isn’t struggling to keep up with their workload because of their skill level, but because they’ve been pulled into too many meetings and aren’t getting as much coding time as other team members. In other cases, the quantitative data will surface new issues or confirm what you and your team member already know, underscoring the need to focus on a particular area for improvement.

James, a Code Climate customer and CTO, finds this kind of quantitative data particularly helpful for ensuring new hires are onboarding effectively — and for spotting high performers early on. As a new member of the team, a developer may not have an accurate idea of how well they’re getting up to speed, but data can provide an objective assessment of how quickly they’re progressing. James finds it helpful to compare the same metrics across new developers and seasoned team members, to find out where recent hires could use a bit of additional support. This type of comparison also makes it easier to spot new engineers who are progressing faster than expected, so that team leaders can make sure to offer them more challenging work and keep them engaged. Of course, as is true with all data, James cautions that these kinds of metrics-based comparisons must always be viewed in context — engineers working on different types of projects and in different parts of the codebase will naturally perform differently on certain metrics.

Set Concrete Engineering Goals

Once you’ve identified areas for improvement, you’ll want to set specific, achievable goals. Quantitative data is critical to setting those goals, as it provides the concrete measurements necessary to drive improvement. Work with each team member to evaluate the particular challenges they’re facing and come up with a plan of action for addressing them. Then, set realistic, yet ambitious goals that are tied to specific objective measurements.

This won’t always be easy — if a developer is struggling with confidence in their coding ability, for example, you won’t be able to apply a confidence metric, but you can find something else useful to measure. Try setting targets for PR Size or Time to Open, which will encourage that developer to open smaller Pull Requests more frequently, opening their work up to constructive feedback from the rest of the team. Ideally, they’ll be met with positive reinforcement and a confidence boost as their work moves through the development pipeline, but even if it does turn out their code isn’t up to par, smaller Pull Requests will result in smaller changes, and hopefully alleviate the overwhelming feeling that can come when a large Pull Request is met with multiple comments and edits in the review process.

Targets like these can be an important way to help developers reach their individual goals, but you can also use them to set team- or organization-wide goals and encourage progress on multiple fronts. Roger Deetz, VP of Engineering at Springbuk, used Code Climate to identify best practices and help coach developers across the organization to adopt them. With specific goals and concrete targets, the team was able to decrease their Cycle Time by 48% and boost Pull Request Throughput by 64%.

Though it’s certainly possible to coach developers and drive progress without objective data, it’s much harder. If you’re looking to promote high-performance on your team, look to incorporate data into your approach. You’ll be able to identify specific coaching opportunities, set concrete goals, and help every member of your team excel.


Request a consultation to learn more.

How Velocity Metrics Can Help Your Team Achieve Continuous Delivery [Webinar]

Engineering leaders looking to drive high performance and achieve Continuous Delivery often hear that metrics are the answer. With metrics, it’s possible to objectively evaluate your team’s progress, measure it against industry benchmarks, and set targets for improvement.
Mar 9, 2021
7 min read

Engineering leaders looking to drive high performance and achieve Continuous Delivery often hear that metrics are the answer.

With metrics, it’s possible to objectively evaluate your team’s progress, measure it against industry benchmarks, and set targets for improvement.

But how should metrics be used? If you’re looking to translate Continuous Delivery ideals into actual practices and processes, where should you start?

In this free, 30-minute on-demand webinar, Code Climate Engineering Data Specialist Nico Snyder explains how Velocity metrics can help you implement Continuous Delivery best practices on your team.

He offers actionable strategies for:

  • Identifying the metrics that matter most in your organization
  • Using metrics to understand where your team stands
  • Setting quantitative targets and driving progress towards those goals

To find out more about how engineering metrics can help your team, reach out to one of our product specialists.
