How to Unblock Engineers and Boost Engineering Productivity

One of the first and most essential uses of Velocity is to cut through the noise and help managers identify the signals of stuck engineers. This enables management to eliminate unnecessary check-ins, while still having the ability to unblock engineers and boost engineering productivity by stepping in to help an engineer who might hesitate to raise their hand.
Dec 23, 2020
7 min read

Velocity provides visibility into:

  • Who on your team is blocked
  • Whose work has been churning

Look out for four main behavioral patterns in Velocity to help address these concerns.

Engineers Who Haven’t Committed for a Long Time

A quick scan of the Activity tab will help you identify developers who aren’t checking in code.

Head into the Team360 report, select the Activity tab, and look for team members with few or no commits, represented as purple circles.

In the example above, Hecate hasn’t committed in a couple of days. This could indicate that she is:

  • Working on a large chunk of work locally
  • Stuck for whatever reason
  • Tied up in non-engineering related projects

If you see a similar work pattern in your team’s Activity Log, you might want to check in to identify the bottleneck and help your developer get back on track.
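
If you'd rather script this check outside of Velocity, the same signal can be approximated from commit timestamps. Here's a minimal sketch, assuming you've already parsed each contributor's latest commit date (for example, from `git log --format='%an %aI'`); the names and the two-day threshold are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical last-commit timestamps per contributor,
# e.g. parsed from `git log --format='%an %aI'`.
last_commit = {
    "Hecate": datetime(2020, 12, 18),
    "Banquo": datetime(2020, 12, 22),
}

def quiet_contributors(last_commit, now, max_idle_days=2):
    """Return contributors with no commits in the last `max_idle_days` days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [name for name, ts in last_commit.items() if ts < cutoff]

# Hecate's last commit is five days old, so she is flagged.
print(quiet_contributors(last_commit, now=datetime(2020, 12, 23)))
```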

Engineers Who Are Committing but Churning

Any developers who are committing, but not opening PRs, might be churning. Once again, go to the Activity tab in the Team360 report to see which engineers’ work appears to be blocked.

As noted in the key, Commits and Merged Commits are indicated by dark and light purple circles, respectively, and open PRs by light blue diamonds. You’ll want to look out for clusters with a high count of circles and few or no diamonds.

As you can see in the top row, Donalbain has been consistently committing code, but not opening any PRs.

This could be because he is:

  • Committing a lot then planning to open one big PR (which isn’t ideal)
  • Committing something, then redoing the work he just did for some reason
  • Committing something, then heading off in a different direction and starting a new track of work

Take this opportunity to dive in and identify the issue.

Engineers Who Have Long-running PRs

Long-running PRs may indicate that an engineer is stuck on that particular unit of work, or that they’re multi-tasking, causing delays for multiple PRs.

Investigate all open and active PRs in the Pull Requests report. (Note that if you look at this report in the morning, it might look bare, since it automatically shows “today’s” activity. In this case, use the date picker to extend the range to yesterday or the past two days to see what’s in progress.)

To surface the oldest PRs, sort by age by clicking on the “AGE” header. Pay close attention to anything that’s been open for over 72 hours.

A PR might be long-running because:

  • An engineer is having trouble with this PR and keeps adding on commits.
  • It’s unclear whether this PR is done.
  • An engineer’s PR hasn’t been picked up for review, either because it was overlooked or because it’s perceived as complex.
  • An engineer is blocked by a third party.
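
The 72-hour rule of thumb above is easy to sketch in code. Here's a minimal example, assuming a list of open PRs with opened-at timestamps pulled from your Git host; the IDs and dates are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical open PRs with their opened-at timestamps,
# e.g. pulled from your Git host's API.
open_prs = [
    {"id": 101, "opened_at": datetime(2020, 12, 19, 9, 0)},
    {"id": 102, "opened_at": datetime(2020, 12, 22, 15, 0)},
]

def stale_prs(prs, now, max_age_hours=72):
    """Return PRs open longer than `max_age_hours`, oldest first."""
    cutoff = now - timedelta(hours=max_age_hours)
    stale = [pr for pr in prs if pr["opened_at"] < cutoff]
    return sorted(stale, key=lambda pr: pr["opened_at"])

# PR 101 has been open 96 hours, so it is flagged; PR 102 (~18 hours) is not.
for pr in stale_prs(open_prs, now=datetime(2020, 12, 23, 9, 0)):
    print(pr["id"])
```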

Engineers Whose Work is Stuck in the Review Process

Finally, the Analytics tab is a good place to go to identify late-stage churn. You’ll want to run a query for Review Cycles, or the number of times a Pull Request has gone back and forth between the author and reviewer.

To obtain this report, select Review Cycles as your metric, and group by contributor. Run a query for the last week or two, and scroll to the bottom until you see the following bar graph visualization:

When Review Cycles are high, it may indicate:

  • There are differing opinions about what “done” means.
  • There’s misalignment around what kind of changes are expected to come out of the review process.
  • There are conflicting ideas about how a solution should be implemented.
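
Review Cycles, as defined above, counts the back-and-forth between author and reviewer. Here's a rough sketch of how that count could be derived from a PR's ordered timeline of events; the event format is a simplified assumption for illustration, not Velocity's actual data model:

```python
# Hypothetical PR timeline events, ordered by time. A "cycle" here is
# each time control passes from the reviewer back to the author
# (a review requesting changes, followed by new commits).
events = ["commits", "review", "commits", "review", "commits", "review"]

def review_cycles(events):
    """Count reviewer-to-author handoffs: each 'review' followed by 'commits'."""
    cycles = 0
    for prev, curr in zip(events, events[1:]):
        if prev == "review" and curr == "commits":
            cycles += 1
    return cycles

print(review_cycles(events))  # two rounds of requested changes -> 2
```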

Boost Your Own Engineering Productivity with Data

With the right data, you can identify which of your team members are stuck right now, so you can help remove the roadblock and get things moving again.

If you want to boost engineering productivity, but don’t have a way to track and analyze your engineering metrics, reach out to find out more about our Software Engineering Intelligence platform, Velocity.

How Well Are We Transitioning to Continuous Delivery Best Practices?

The Continuous Delivery (CD) best practices they recommend, such as keeping batch size small, automating repetitive tasks, and investing in quick issue detection, all promote speed and quality while instilling a culture of continuous improvement on the team.
Dec 17, 2020
7 min read

The authors of Accelerate surveyed over 23,000 individuals in 2,000 distinct software companies to uncover the methodologies that set top-performing organizations apart. Their research suggests that “speed and stability are outcomes that enable each other” and that any software organization can measure and improve these outcomes.

While most of the industry is actively adopting CD, few have set up any way to measure their progress. Concrete metrics, such as those found within Velocity, are a prerequisite to ensuring success in this transition.

In this guide, we’ve outlined how you can use Velocity to answer:

  • How effectively is my team adopting CD practices?
  • What’s hindering faster adoption?

Measure Improvement to Shipping Speed and Throughput

There are two success metrics that can represent how “continuously” your organization is shipping: speed (Cycle Time), and throughput (Pull Requests Merged or Deploy Volume).

Start by looking at the Analytics report to see how well you’ve been improving on one of those metrics. We recommend looking back at least 90 days.

Cycle Time, or the time from an engineer’s first commit to merging to production, should be trending down. Good coding habits, such as working in small batches, should keep changesets moving through the process with little friction, while CI/CD tooling should automate a lot of late engineering or QA work that frequently blocks merging to production.

Here’s how you can read this trend:

  • Decreasing means you’re moving in the right direction. You’ve already adopted some Continuous Delivery best practices that have unblocked engineers and enabled them to move a single work in progress through the pipeline as quickly as possible.
  • Flat is what you’d expect when you’re not in a state of transition. Typically, teams hit a local maximum with process efficiency when they’ve optimized as much as they can. If you’re in the middle of transitioning to CD, however, a flat Cycle Time is a bad thing. It means that even if you’ve changed some of the tooling or the messaging around how to ship software, this has not had the intended effect.
  • Spiky indicates inconsistencies, and that your process is not delivering predictable results. You’ll want to take a closer look at days or weeks with particularly high Cycle Times to diagnose why work is getting stuck.
  • Increasing is not a state you want to be in for a prolonged period of time, but can be normal during change management, as your team learns new practices and transitions to new tooling.

Alternatively, you can use Pull Requests Merged or Deploys as your yardstick. In this case, you can invert how you interpret results. Increasing throughput is desired, while flat and decreasing trends are a red flag that your team’s new practices are not yet yielding better outcomes.
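
Cycle Time itself is straightforward to compute once you have the two timestamps. Here's a minimal sketch using the definition above (first commit to merge); the PR data is hypothetical:

```python
from datetime import datetime

# Cycle Time, as defined above, runs from first commit to merge.
# These timestamps are hypothetical stand-ins for your Git history.
prs = [
    {"first_commit_at": datetime(2020, 12, 14, 9), "merged_at": datetime(2020, 12, 15, 9)},
    {"first_commit_at": datetime(2020, 12, 16, 9), "merged_at": datetime(2020, 12, 18, 21)},
]

def cycle_time_hours(pr):
    """Hours between a PR's first commit and its merge."""
    return (pr["merged_at"] - pr["first_commit_at"]).total_seconds() / 3600

times = [cycle_time_hours(pr) for pr in prs]
print(times)                     # [24.0, 60.0]
print(sum(times) / len(times))   # average: 42.0
```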

Dive Into Key Health Metrics Team by Team

After understanding your overall engineering speed, you’ll want to investigate health metrics to find specific areas for improvement.

Velocity metrics can serve as strong proxies for the following Continuous Delivery practices:

  • Good Coding Hygiene, which means working in small batches (PR Size) and opening Pull Requests early (Time to Open).
  • High Review Effectiveness, which means balancing review thoroughness (Review Coverage) and speed (Review Speed), while ensuring that comments lead to action (Review Influence).
  • High Engineering Capacity, which means developers have enough time for engineering work (Weekly Coding Days).

In Velocity’s Compare report, you can look at these metrics across teams or for individuals to identify coaching opportunities or process improvements.

Click on a team to drill down and see performance on an individual basis:

Finally, get more context on a team or individual by seeing how they’ve performed on a specific metric, historically. The Analytics report lets you pick how far back you look and then drill down into any units of work that are dragging the average up or down.

Find the Biggest Opportunities for Improvement in Your Software Delivery Process

Now that you have all the context for how your team is working, create a mental model of your software delivery pipeline to see where a unit of work is most likely to get stuck. This will help you prioritize where you should start making optimizations.

We recommend breaking your process into three portions:

  • Time to Open, or how long development takes.
  • Review Speed, or how long work sits before getting picked up for review.
  • Time to Merge, or how long the entire code review process takes.

You can look at these three metrics side by side, by selecting them in the Analytics report and viewing them as bar graph clusters by week or by month.

With this data, you’ll be able to determine which area of your process is worth a closer look. If multiple stages need attention, we recommend starting with the one that comes earliest in your development pipeline, as improvements at early stages can have an impact downstream.

To dig into your own data and start measuring your progress towards Continuous Delivery, sign up for a demo of our Software Engineering Intelligence platform, Velocity.

Velocity vs. GitPrime: Choosing an Engineering Intelligence Tool

Velocity or GitPrime? Read a head-to-head analysis to decide which is best for your team.
Dec 7, 2020
7 min read

In competitive markets, the viability of a business depends on engineering performance. In their 2020 study of 400+ enterprises across 12 industries, McKinsey concluded that engineering departments performing in the top quartile of the Developer Velocity Index (DVI) “outperform others in the market by four to five times.”

Historically, however, engineering has been a black box. The average company invests millions of dollars a year into the department, but most have no way of assessing the returns.

This is why many market-leading organizations, like Kickstarter, Gusto and VMware, are starting to adopt Engineering Intelligence to get visibility into their software development workflows. Doing so has enabled them to effectively improve performance, boost Time to Market, and out-innovate competitors.

The two most popular engineering analytics platforms, Velocity and GitPrime (recently acquired by Pluralsight Flow), both offer transparency into engineering performance and process efficiency but differ in their approaches.

To help you make a decision about which approach to engineering metrics works best for your team, we put together a thorough head-to-head comparison of Velocity and GitPrime. Read the post through, or click on a link to skip to the section that’s most important to you.

Setting Up
Coaching
Tracking Progress
Goal Setting
Scope of Visibility
Surfacing Issues
Customization
Cost

Setting Up

Tl;dr: The setup process can be just as fast for both GitPrime and Velocity, so you can be up and running as soon as your data imports.

First, you’ll want to know the time and effort it takes to get set up, so you can have an accurate expectation of how soon you’ll be up and running. Both analytics tools recognize the friction involved with process changes, so they’ve done their best to streamline this experience.

Velocity

Start setting up Velocity by first signing in with your GitHub or Bitbucket account. Once you’re in, you’ll be prompted to add your repositories, so you can start seeing your engineering data in the app.

GitPrime

GitPrime has a similar setup process. You start by creating a new GitPrime account and then setting up integrations with whichever Git or product management tools you might be using.

GitPrime supports more version control systems than Velocity, and each has a slightly different workflow. You can import repos accessible over HTTPS or SSH from any server, or use OAuth to connect to your GitHub, GitLab, or Bitbucket organization.

From there, you’ll also have to organize your data. You won’t be able to assign repos to applications, but you can organize them by tag. Contributors can similarly be hidden from reports, merged, or assigned to teams.

Coaching

Tl;dr: Velocity has a more robust set of coaching features than GitPrime. Whereas GitPrime offers a few metrics per developer, Velocity offers a 360-degree view that covers day-to-day activity, week-to-week improvement, and long-term development.

A top priority that we often hear from organizations looking to invest in engineering analytics is the need to improve team and individual performance.

Velocity’s 360 reports combine all coaching features in one comprehensive report that provides a complete picture of developers’ and teams’ work habits. GitPrime reduces developer performance to a few key metrics, and offers more prescriptive guidelines.

Velocity

Velocity’s Developer360 report gives managers instant visibility into a developer’s active work, improvements along key metrics, and skills.

The report includes four tabs:

  • Snapshot shows what a developer is working on right now and the impact of that work. A manager can leverage this data to spot bottlenecks before they have significant, negative downstream effects.
  • Activity provides a visual summary of what an engineer has been working on over the past month. Many managers scan this report to see how an individual’s workload has changed over time and to ensure that work distribution matches their expectations.
  • Foundations depicts how a member of your team is trending according to every critical Velocity metric. Incorporate this data into 1:1s and/or performance conversations to check your biases, come to a shared understanding of where strengths and weaknesses lie, and set quantitative, actionable goals.
  • Skills displays what coding languages a developer has been working with. This data can be helpful to glance over before a coaching session, so you can get a sense for an engineer’s language gaps and work with them to improve their expertise.

Velocity’s Developer360 report focuses on objective metrics and does not presume what they may indicate. We recommend Velocity for teams who are looking to avoid reductive metrics.

GitPrime

GitPrime has two main reports for coaching developers:

  • The Player Card, which is limited to performance along three key categories: the core metrics included in the Code, Review, and Submit Fundamentals. At a glance, a manager can see a contributor’s percentile performance, a work log of what an engineer has been working on, as well as how collaborative engineers are in the review process. This report can be used to inform 1:1s or quarterly reviews.
  • Snapshot, a report which plots contributors on a quadrant, based on their average throughput and churn. This report shows how the contributor compares to other engineers org-wide and offers feedback suggestions, based on where the engineer falls on the graph.

GitPrime’s coaching reports are a fit for leaders who want specific, action-oriented suggestions based on how a given contributor is performing relative to their peers. For those who prefer GitPrime’s more prescriptive approach to coaching, however, we recommend keeping in mind that metrics don’t always paint a full picture.

For example, if you look at PR Throughput on this graph, you’ll see how many changes a given developer has shipped in contrast to his or her team members. But a data point on the top right of the graph doesn’t include the context that many of the deploys were relatively small in impact.

Tracking Progress

Tl;dr: Both tools provide at-a-glance dashboards that let you see trends over weeks, months or quarters. Velocity provides more PR-related metrics and has a real-time view into how you’re doing this sprint. These metrics allow you to evaluate progress across projects, sprints, and cohorts, making it possible to implement high-level process changes that can fundamentally improve the way your team works. GitPrime has more contributor-based metrics, which make it more difficult to help your entire team improve together.

The same insights that previously required hours of digging through repos and countless 1:1s are available at-a-glance in both analytics tools. But each application tracks “progress” slightly differently. Where Velocity makes it easy to track process-level metrics like Push Volume and compare progress across teams and time periods, GitPrime prioritizes reports that track metrics by individual contributor.

Velocity

Velocity has two main features that allow for progress tracking:

  • Overview: This is the home dashboard, and it offers a summary of the progress your team has made over time, based on metrics like Impact, PRs Merged, or Push Volume. By pulling these together in one place, the Overview provides an at-a-glance look at the way your team’s progress is trending across a variety of metrics, so you can dig deeper into the ones that are most aligned with your goals.
  • Analytics: Every team works differently, which is why the Analytics feature is designed to give you the data you need most. Managers can create customized reports, slicing and dicing 50+ metrics available within the app to understand how various development behaviors have changed over time on an org, team, or individual level.

Velocity makes it easier to do things like identify and learn from your highest-performing teams, or track the success of particular initiatives. For example, you might track new developers’ Deploy Volume to evaluate how they’re progressing with onboarding based on how much of their work is making it into the codebase. And if our standard reports don’t include the insights you need, you can use our customizable Analytics report to dig even deeper into your data.

Velocity’s progress tracking reports are most suitable for managers who interpret metrics as insights about the work, not the person.

GitPrime

GitPrime has its own report for progress tracking:

  • Project Timeline: This dashboard is similar to Velocity’s Overview dashboard, reporting work progress over time in terms of Impact, Commit Volume, and Velocity. The subtle difference is that instead of including PR-related metrics, like Velocity’s PRs Merged, it looks at Velocity, which it measures by the number of valuable commits per person.

GitPrime’s Project Timeline report best complements a management style that prioritizes tracking contributor performance over PR- and process-related metrics.

Goal Setting

Tl;dr: Both applications include robust goal-setting features. The approaches differ in the types of goal-setting capabilities provided.

The goal of adopting an Engineering Intelligence tool is to use the greater visibility found in metrics to drive positive change in your organization.

Both Velocity and GitPrime include target-setting reports, but whereas Velocity tracks progress in terms of success rates, GitPrime tracks averages in their goal-setting system.

Since high performance in engineering is critical to business success, you can use Velocity’s Targets feature to measure, improve, and communicate progress using objective metrics that support departmental initiatives. This report serves as concrete data to inform any OKR- or KPI-related conversation, while the ability to drill down into outliers enables team members to diagnose why targets aren’t met.

Velocity

Within Velocity’s Targets feature, executives, leaders, and front-line managers can build a dashboard of reports that visualize progress toward goals in terms of success rates or averages.

  • Targets: Velocity has a first-class, highly structured goal-setting system that goes beyond simple averages, with percentile-based OKR-style goals. To meet this type of goal, a target percent of data points must be above or below an agreed-upon benchmark. For example: Keep 95% of Pull Requests under 250 lines of code.

When setting a goal, many leaders find that tracking averages over time doesn’t properly represent the progress that’s being made toward that goal.

If you’re tracking PR size, for example, a single, long-running PR might obscure the dozens of PRs that moved quickly through the pipeline. If you’re tracking Review Speed, a single neglected review inaccurately suggests inefficiencies in the review process.

Thus, Velocity’s Targets report is tailored to engineering leaders who acknowledge anomalies and believe that it’s acceptable for a few data points to be outside an expected target.
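
Here's a small illustration of why a success rate can be more honest than an average, echoing the 250-line benchmark above. The PR sizes are made up, with one deliberate outlier:

```python
# Hypothetical PR sizes in lines of code: many small PRs plus one outlier.
pr_sizes = [80, 120, 90, 150, 200, 110, 95, 130, 100, 2000]

# Average-based view: the single outlier drags the mean over the
# 250-line benchmark, even though most PRs are well under it.
average = sum(pr_sizes) / len(pr_sizes)

# Success-rate view (e.g. "keep 90% of PRs under 250 lines"):
under_limit = sum(1 for size in pr_sizes if size < 250)
success_rate = under_limit / len(pr_sizes)

print(round(average, 1))  # 307.5 -- looks like a miss
print(success_rate)       # 0.9   -- 9 of 10 PRs met the benchmark
```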

GitPrime

Instead of success rates, GitPrime tracks averages in its goal-setting system.

  • Fundamentals: At the core of GitPrime’s product are four fundamental metrics that they assert are signals of productivity: Active Days, Commits per Day, Impact, and Efficiency. They offer dashboards that show the 30-day average, industry benchmarks, and custom targets you can set.

GitPrime’s Fundamentals report is most compatible with managers who prefer the more common approach of tracking averages. However, it is important to note that if you have an outlier in your data — maybe one particularly complicated PR required a lot of back and forth in Code Review — that outlier will throw off your average. This can make it difficult to see the overall trend, and inaccurately suggest inefficiencies.

Scope of Visibility

Tl;dr: If you want to evaluate your process from end-to-end, you’re better off going with Velocity, which was built specifically for CD. Conversely, GitPrime was built for coding efficiency with an emphasis on Code Review and doesn’t include data from before a PR is opened or after it is merged.

While most of the industry is actively adopting Continuous Delivery, few have set up any way to measure their progress.

To optimize or adopt CD processes, organizations need a complete, end-to-end picture of their engineering processes. Concrete metrics, such as those found within Velocity and GitPrime, are a prerequisite for ensuring success in this transition.

Velocity

Velocity is the only application in its category to shine a light on the entire software development process. Key metrics you need when measuring CD include: Cycle Time, Deploy Volume, Time to Open, Time to Review, and Time to Merge, the majority of which are not available in GitPrime.

Our objective is to eventually incorporate data from every important tool that an engineer touches.

Teams looking to optimize each part of their software delivery pipeline, not just Code Review, are better off going with Velocity.

GitPrime

GitPrime was originally built to improve coding efficiency and has since built Code Review features as an add-on. This leaves important parts of the software delivery process obscure, such as what happens before a PR is opened or after it is merged.

Teams focused exclusively on optimizing their Code Review processes will benefit more from the granularity found in GitPrime’s Review Workflow report.

Surfacing Issues

Tl;dr: Velocity, with PR-related metrics at the core of the product, does a better job drawing attention (inside and outside of the app) to actual artifacts of work that could be stuck or problematic. GitPrime, with mostly people-focused metrics, draws attention to contributors who could be stuck or problematic.

Engineering is expected to continuously deliver business value to your organization, but a single bottleneck can hold up the entire team during any given sprint. The larger your team gets, the harder it becomes for you to discern what work is stuck in the pipeline and why.

Velocity and GitPrime take different approaches to identifying outliers or irregular work patterns.

Velocity

Velocity employs a variety of visualizations to help you find the root cause of any issue that might slow down your team:

  • Activity: This report displays the number and size of commits, merge commits, and PRs on a developer-by-developer or team-by-team basis over time. Scanning this page will enable you to see whether what developers are working on meets your expectations.
  • Pull Requests: This view surfaces each contributor’s work in progress at the top and shows you how far along each PR is from being merged. You can see at-a-glance which PRs are at-risk and who’s working on them. Click on an item to navigate to the original Pull Request in GitHub.

Your team is also able to spot issues outside the application through daily standup reports, available via email or Slack. Velocity, thus, isn’t an analytics tool for top-down management but for leaders wishing to keep the whole team on track.

GitPrime

GitPrime’s core product ties each issue to a contributor, which gives managers an easy way to determine who to go to when something goes wrong on a particular week or month. Only in the collaboration reports, available in higher tiers, is there insight into problematic work products, such as PRs.

Here’s where you’d look to find inefficiencies, bottlenecks, and stuck engineers:

  • Work Log: The work log is the application’s homepage. Similar to Velocity’s Activity Log, this page displays different types of work that a contributor produces. In addition to commits, merges, and PRs, they also display ticket comments, since GitPrime also offers an integration with JIRA.
  • Snapshot: This feature is basically an automated performance review of each contributor. It gives a summary of how their performance stacks up against their team members’, and plots them on a matrix that shows the speed (measured by Impact) and the quality (measured by Churn) of their work. A further breakdown can be seen below.
  • Spot Check: This feature displays how contributor performance has increased or decreased as compared to the last week or last month. At-a-glance you can quickly spot abnormalities and know who to approach for more information.

We recommend GitPrime for managers who prefer visibility into low-performance developers over visibility into stuck work.

Customization

Tl;dr: Velocity includes customizable reports that allow you to ask questions of your data to derive more meaningful insights. GitPrime does not have custom reporting, but they do offer an API.

If you have unique requirements or track a unique metric, you might require a more flexible platform. Here’s how your two options compare.

Velocity

Velocity has an entire feature set dedicated to making the product more flexible for teams who work off the beaten path:

  • Analytics Report: Velocity exposes all of your Pull Request, Code Review and commits data for you to create custom reports. You simply pick your data set, determine how you want it summarized (by average, sum, etc.), and then how you’d like it displayed. You can choose from 9 different views, including line graphs, bar graphs, and area graphs.
  • Reports and Metric Permissions: Not all teams find the same analytics valuable, so Velocity gives users the ability to turn on and off whatever metrics or full features they’d like. You can also control whether each metric can be segmented by team or individual, or only available at the organization level.
  • Contributor-wide, team-wide, or org-wide targets: Once you’re familiar with how your team performs week to week or month to month, you can set targets to push your team to improve along whatever criteria you’re prioritizing. You can attach tags and metadata to these targets, and they’ll be sent out to your team on a weekly basis.

Velocity is the best option for engineering organizations who’d like the flexibility to build any charts that aren’t already available out-of-the-box.

GitPrime

GitPrime does not have custom reporting, but they do offer an API in their Enterprise package for customers who have the resources to build out their own reports.

There is also a portion of the application where users can set simple targets for the entire organization, teams, and contributors.

GitPrime is a good fit for customers who have the resources to build out their own reports.

Cost

Tl;dr: While pricing of the two products is competitive, GitPrime restricts more features in their lower tiers. Velocity offers more capabilities for less, and the flexibility of their platform allows for customizability irrespective of cost.

The two products do not differ much in terms of pricing, so if you’re operating within significant budget constraints, a build-it-yourself solution is probably most feasible. Otherwise, the two products are tiered slightly differently, so make sure you’re getting the core features that are most important to your team.

Velocity

Velocity has four pricing packages based on team size, including a free option for teams of 10 or fewer. For teams of 10+, pricing starts at $449/seat per year. Each tier includes access to all metrics and reports (including the flexible Analytics report) and gives teams access to unlimited historical data.

The small and medium tiers are limited in the number of repos (50 and 100, respectively), while the largest tier is not. The team reporting function, which lets you see metrics summarized on a team-by-team basis, is not available until the largest tier.

GitPrime

GitPrime has a more complex pricing system. They have 3 tiers with different features, and a sliding pricing scale, based on how many engineers are in your organization. Their pricing starts at $499, but they limit a lot of their features in the lower tiers.

The lowest tier does not include their “code review collaboration insights.” They also restrict the historical data they make available: 12 months for the first tier and 36 months for the second tier.

Different Strokes for Different Management Folks

Engineering excellence drives business performance. The teams that are excelling in the space are the ones that have the vernacular to talk about developer performance and the tools to improve it.

To this end, Velocity data serves three primary purposes. It’s used to:

  • Help improve team and developer performance,
  • Drive continuous improvement across engineering processes, and
  • Communicate engineering’s progress both within the department and to others in the organization.

Most importantly, Velocity has a few more tools to put your learnings into action. You can set up Slack and email alerts for irregular activity and you have a first-class targets system to encourage your team to improve.

Conversely, GitPrime’s main focus is individual performance, and it imports data from Git, which means the tool primarily works off source-code-level data, not collaborative work data.

GitPrime equips a manager to keep closer track of their engineers, so they have a clear idea of the strongest and weakest performers of the team. This approach is for hands-on managers who still want an active role in how their direct reports work.

Using Velocity to Identify Patterns of Top Performing Teams

In this time of global and economic uncertainty, it’s never been more important to have a quick way of knowing which engineering processes are working and which are broken.
Nov 18, 2020
7 min read

In this time of global and economic uncertainty, it’s never been more important to have a quick way of knowing which engineering processes are working and which are broken.

And while it can be tempting to focus on bottlenecks and struggling team members, it can be even more useful to look at the practices, behaviors, and culture of your strongest teams.

This post runs through a framework for using Velocity, our Software Engineering Intelligence (SEI) platform, to identify and scale the most successful habits of high-performing teams.

Finding the Top 5% of Your Organization

While all engineering organizations might define success slightly differently, there are two metrics within Velocity that indicate an extremely proficient engineering team:

  • Throughput, measured in deploys or PRs merged, which indicates how much your developers are getting done.
  • Cycle Time, measured in hours from first commit to when a PR is merged to production, which indicates how fast your developers are getting that work done.
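
As a rough illustration of how these two metrics relate to the underlying data, both can be derived from a handful of PR timestamps. The record shape below is hypothetical, not Velocity's actual schema:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M"

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

# Hypothetical merged-PR records for one reporting period.
prs = [
    {"first_commit": "2020-11-02T09:00", "merged": "2020-11-03T15:00"},
    {"first_commit": "2020-11-04T10:00", "merged": "2020-11-04T18:00"},
]

throughput = len(prs)  # PRs merged in the period
cycle_times = [hours_between(p["first_commit"], p["merged"]) for p in prs]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

print(throughput, avg_cycle_time)  # 2 19.0
```

Throughput counts completed units of work; Cycle Time averages how long each unit took from first commit to merge.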

In Velocity’s Compare report, you can select both of these metrics and compare them across your organization to identify the teams that are merging the most work the fastest.

Once you’ve identified your strongest teams using success metrics like Throughput and Cycle Time, you’ll want to dig into what has made them successful. For this, you’ll need different, diagnostic metrics.

Identifying the Best Practices of Your Strongest Engineering Team

You can think of your software delivery process in three phases:

  • Time to Open, or how long development takes before a pull request is opened.
  • Review Speed, or how long work sits before getting picked up for review.
  • Time to Merge, or how long the entire code review process takes.
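
To make the phase boundaries concrete, here is a minimal sketch of how the three measures could be derived from one PR's timestamps. The field names are illustrative, not Velocity's schema:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M"

def hours(start: str, end: str) -> float:
    """Elapsed hours between two timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

# Hypothetical timestamps for a single pull request.
pr = {
    "first_commit": "2020-11-02T09:00",
    "pr_opened":    "2020-11-02T17:00",
    "first_review": "2020-11-03T11:00",
    "merged":       "2020-11-03T15:00",
}

time_to_open  = hours(pr["first_commit"], pr["pr_opened"])   # development: 8.0 h
review_speed  = hours(pr["pr_opened"], pr["first_review"])   # waiting for review: 18.0 h
time_to_merge = hours(pr["pr_opened"], pr["merged"])         # whole review process: 22.0 h
```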

Typically, a strong engineering team will move faster in one of these stages.

Velocity makes it easy to look at these three metrics side-by-side. You can view them as bar graph clusters by week or by month.

Or, you can view these metrics by team.

Here, we can see that the top team — Gryffindor — is most distinguished by its extremely fast Review Speed. Its Time to Open and Time to Merge are long, but not unusually so compared with the other teams. The other teams (especially the Hogwarts team) frequently had work stuck in the review process.

Pair your quantitative analysis with qualitative information, and speak to the members of the Gryffindor team. Find out what makes their review process different from the other teams’ processes, and think about ways the other teams can apply those learnings.

DORA metrics are also useful for identifying high-performing teams within the organization.

Creating a Blueprint For Your Entire Organization

Now that you’ve identified your top-performing teams and their defining characteristics, you can create a blueprint for better processes across your organization.

One of our customers, a health data analytics company, used Velocity following a Series B funding round to level up the way they coached engineers at scale.

Their VP of Engineering had been brought on to help build out the engineering department. But after getting to know his teams, he realized that there wasn’t any consistency in how and when teams shipped features to end-customers.

The VP of Engineering worked with his engineering leads and product managers to identify agile practices that worked for his team, then shared them org-wide. Together, they created documentation and set up onboarding and mentoring around encouraging healthy coding habits at scale.

With stronger processes in place, the team was able to increase PR throughput by 64%. With objective data and learnings from your highest-performing teams, you’ll be able to replicate successful practices across your organization and boost productivity at scale.

Find out how Code Climate Velocity can help your team improve Cycle Time and PR Throughput by booking a demo.

Introducing Developer360: The First Full Picture Of Engineering Work

Velocity now features a comprehensive view of developer work-in-progress, performance, and skill development — all in one place.
Jul 23, 2020
7 min read

Velocity now features a comprehensive view of developer work-in-progress, performance, and skill development — all in one place.

The information required to lead an engineering team is increasingly scattered.

To answer even simple questions like ‘How is my team performing?’ and ‘Is our current sprint on track?’ an engineering manager may need to check as many as ten different systems — version control, project management, feature flags, DevOps tools, and incident management, to name a few. Even so, this fragmented means of information seeking often doesn’t provide a clear answer.

What’s more, a lack of visibility upstream can lead to significant negative effects downstream, like:

  • Broken sprints and missed milestones
  • Burnt-out developers and high employee turnover
  • Inability to actively manage developer performance and achieve excellence as a team

We believe that in order to create a culture that both motivates engineers and improves their ability to drive innovation, managers need a comprehensive picture of where their team members are succeeding and where they can improve.

Our mission at Code Climate is to empower leaders with tools to drive high performance. Today, as the next step in this mission, we’re launching Velocity’s newest feature, Developer360, to enable managers to build elite organizations with data-driven insights.

Support engineering excellence with a comprehensive view of developer work, performance, and skills — all in one place.

What’s Developer360?

In order to empower their team to achieve excellence, every manager needs a quick way of knowing:

  • What are engineers on my team working on? Is anyone stuck right now?
  • How are junior team members developing their foundational skills? What challenges have they been facing?
  • What’s in each developer’s technical toolbox? Are there any language gaps on my team?

Developer360 gives you instant visibility into each developer’s active work, improvement along key metrics, and skills.

Identify High-Risk Pull Requests

Frontline managers typically rely on stand-ups to check in on work-in-progress. But despite great intentions, even the best engineering teams don’t always bring up issues early.

The Snapshot report brings potential WIP issues to your attention before they derail one of your sprints.

The all-new Velocity Feed (far right) provides a chronological visualization of all an engineer’s recent work, including commits, requested changes, and opened or closed PRs.

With a scan of this report, you can start each day already aware of what’s been taking up a developer’s attention as well as what challenges they’ve been facing.

Scope the Opportunity for Improvement

The more time engineering managers spend providing engineers with proper guidance, the more they’re investing in their team’s future.

The Foundations tab is a source of quantitative data managers can bring to coaching sessions. At a glance, a manager can see each contributor’s average over a given time period, how they’ve trended over that period, and their percentile performance.

Dig into a capacity metric like Impact, which measures the magnitude of changes to the codebase over a period of time. This metric can help you uncover high performers who may deserve recognition, or serve as an early warning sign that this team member may be in need of some coaching.

Incorporate this data into 1:1s and performance conversations to check your biases, come to a shared understanding of where strengths and weaknesses lie, and set quantitative, actionable goals.

Support Your Engineers’ Professional Growth

Part of an engineering manager’s job is knowing what coding languages each developer has been working in so that they can distribute upcoming work, track migrations, and support professional development.

The Skills tab provides a visual summary of a developer’s technical toolbox, so that managers can come to planning and coaching sessions already aware of what skills each engineer has mastered (and what they’re still learning).

Get a sense of an engineer’s language gaps, and work with them to improve their expertise.

Data-Driven Coaching for High-Performance Teams

Engineering is only as strong as its contributors, and as such, building a culture of excellence starts at the individual level. Establishing a complete and shared understanding of how contributors are performing on a micro level will allow you to level up the way your team is working on a macro level.

When developers are empowered to perform at their best, the entire organization benefits:

  • Product is happy when changes can be deployed at a predictable pace.
  • Sales can more effectively drive revenue with new features to show prospects.
  • Stakeholders see value delivered to customers more frequently.

We’re excited to build on top of Developer360 in our mission to provide engineering leaders with the visibility required to level up their teams. This is just the start of our undertaking to establish Velocity as the single source of truth for software engineering.

Sign up for Velocity to drive high performance on your team with a 360° view into developer work.

The Future of DevOps Data is Open

With Connectors, we’re helping the broader software development community build the first open source standards for all software engineering data.
Mar 5, 2020
7 min read

Introducing the first open platform for software engineering data

The data necessary to understand the entire software delivery process is increasingly fragmented. To ship a single feature, a team may take advantage of 10+ tools, from project management to version control, CI/CD, feature flags, and more.

At Code Climate, we’ve learned that combining data from different sources enables higher-level insights. For example, PagerDuty can tell you how often your developers are getting woken up—but it can’t tell you the impact on innovation. Adding Jira data enables you to understand the delays those incidents have on feature initiatives.
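
As a toy illustration of that kind of join, the sketch below combines PagerDuty-style incident records with Jira-style issue records. All record shapes and numbers are invented, not the real payloads of either API:

```python
# Invented incident records (PagerDuty-style) and issue records (Jira-style).
incidents = [
    {"service": "api", "hours_firefighting": 4},
    {"service": "api", "hours_firefighting": 2},
]
issues = [{"service": "api", "key": "FEAT-12", "planned_hours": 40}]

# Sum unplanned firefighting time per service.
delay_by_service = {}
for inc in incidents:
    delay_by_service[inc["service"]] = (
        delay_by_service.get(inc["service"], 0) + inc["hours_firefighting"]
    )

# Estimate the drag incidents put on planned feature work for each issue.
for issue in issues:
    overhead = delay_by_service.get(issue["service"], 0) / issue["planned_hours"]
    print(issue["key"], f"{overhead:.0%} overhead")  # FEAT-12 15% overhead
```

Neither source alone shows the 15% figure; it only emerges once the two datasets share a common key.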

To unlock the full value from these data sources, we need:

  1. A standard language for understanding DevOps concepts
  2. A simple way to connect to the wide array of engineering tools

And that’s exactly what we’re excited to be announcing today.

We’re launching the Code Climate Engineering Data Platform with two pieces: a draft open data schema for core DevOps concepts, and open source Connectors for popular engineering tools.

Each Connector requests data from a DevOps tool’s API and outputs a set of records conforming to the data schema. Here’s how it works:
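
In spirit, a Connector is a thin fetch-and-map layer. The reference implementations are written in Node.js; the Python sketch below, along with its endpoint and output fields, is purely illustrative:

```python
import json
from urllib.request import Request, urlopen

def fetch_incidents(api_token: str, url: str = "https://api.pagerduty.com/incidents") -> list:
    """Pull raw incident payloads from the tool's API (PagerDuty-style auth header)."""
    req = Request(url, headers={"Authorization": f"Token token={api_token}"})
    with urlopen(req) as resp:
        return json.load(resp)["incidents"]

def to_records(incidents: list) -> list:
    """Map tool-specific payloads onto a shared, schema-conforming record shape."""
    return [
        {"type": "incident", "id": i["id"], "created_at": i["created_at"]}
        for i in incidents
    ]
```

The fetch step is tool-specific; the mapping step is what makes every Connector's output interchangeable.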

To kick things off, we’re open sourcing Connector reference implementations for PagerDuty, CircleCI, and Codecov, all written in Node.js.

We believe that the future of DevOps data is open, and today, we’re taking the first step towards making this vision a reality.

The Code Climate Community

We created the first extensible ecosystem for code quality four years ago. There are now 50+ static analysis and test coverage tools available, developed by thousands of community contributors, benefiting millions of engineers. Companies like GitLab have used it as the foundation for their code quality solutions.

With the new Engineering Data Platform, we’re helping the broader software development community build the first open standards for all engineering and DevOps data. We’d love your feedback or contributions on everything we are announcing today (it’s all in draft form!).

Here’s how you can get involved:

We look forward to building the next generation of Velocity and Quality features using this new, open standard. We’re also excited to see how others will take advantage of the Code Climate Engineering Data Platform to build products and tools we haven’t even thought of yet.

How Splice Used Engineering Measuring Tools to Unblock Deployment

The following article is based on a talk Juan Pablo Buriticá, VP of Engineering at Splice, gave at our annual Engineering Leadership Conference and a subsequent interview.
Oct 31, 2019
7 min read

The following article is based on a talk Juan Pablo Buriticá, VP of Engineering at Splice, gave at our annual Engineering Leadership Conference and a subsequent interview. Watch the full talk here, and see the slides here.

“The most shocking thing is that software engineering is one of the least rigorous practices that exists,” says Splice’s VP of Engineering, Juan Pablo Buriticá. After graduating with a degree in Pharmaceutical Chemistry, he eventually taught himself to code and transitioned to a career in software. Juan Pablo was astonished to find software engineering to be a much more fragmented discipline. Few standards exist around even the most common software production methods, like Agile and Continuous Delivery.

When managing a small team, the lack of industry standards was rarely an issue for Juan Pablo. Whenever something felt inefficient, he’d get the whole engineering team in a room, identify the source of friction, adjust, and move on. After Juan Pablo scaled his team from 15 to 80 distributed developers, however, all their processes broke. “I had to go back and fix the mess I created by growing the team so fast,” said Juan Pablo.

But fixing them wasn’t so easy anymore. So, Juan Pablo turned to engineering measuring tools and the Scientific Method.

Experiment 1: Applying Metrics to Engineering

Forming a Hypothesis

Before experimenting to determine which actions would yield improvement, Juan Pablo and the rest of the management team agreed that they needed to determine what specific outcome they were optimizing for. The team agreed that everything felt slow— but a gut feeling wasn’t enough to start making changes in the team.

They wanted to:

  • First, decide what they were working towards. They weren’t willing to settle for a vague anti-goal of “slowness”— they wanted a clear vision of what the team should look like.
  • Second, decide how they would measure progress. Within the newly agreed-upon context and direction of the team, they wanted a system to measure how far away they were from their goals.

Thus, their hypothesis was: A Defined Direction + A Focus on Success Metrics = Increased Tempo.

The product and engineering leadership spent an offsite deliberating on what was next for Splice. They came up with the following vision for the organization: Shipping working code is one of the fastest, safest, and most effective ways to learn and to test new ideas. This meant that engineers were confident enough in their processes and tooling to take risks. They also felt able to mitigate issues when, inevitably, something did break.

To test how they were doing and how far they had to go, they leveraged engineering measuring tools to investigate three metrics: Time to Merge, Deploy Frequency, and End to End Test Coverage. Combined, the team believed optimizing for these metrics would give their team confidence in the stability and speed of their processes.

Conducting the Experiment

Juan Pablo and the leadership team communicated this new vision and supporting metrics to the team. They were careful to note that this was an experiment designed to help improve collaborative processes— not a change in response to performance issues.

These are the goals they communicated:

  • Time to Merge: 100% of pull requests should be merged within 36 hours (or 3 days)
  • Deploy Frequency: All product teams had to deploy once a day
  • End to End Test Coverage: 100% of engineers had to write an end-to-end test in an improved testing environment
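
Checking a week of data against these three targets is mechanically simple. Here is a sketch with made-up numbers; the field names are hypothetical:

```python
from datetime import timedelta

# Hypothetical observations for one week; the three checks mirror the goals above.
week = {
    "slowest_merge": timedelta(hours=40),  # longest-open merged PR this week
    "deploys_per_day": 0.6,
    "pct_engineers_wrote_e2e_test": 85,
}

targets_met = {
    "time_to_merge": week["slowest_merge"] <= timedelta(hours=36),
    "deploy_frequency": week["deploys_per_day"] >= 1.0,
    "e2e_coverage": week["pct_engineers_wrote_e2e_test"] >= 100,
}

print(targets_met)  # all three targets missed this week
```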

The specific targets they chose for each metric were a guess. “I picked 36 hours, because why not?” says Juan Pablo. The team was experimenting with metrics for the first time, so they had to start with a number. He predicted that enabling his team to track and measure these metrics alone would be enough to create change.

Analyzing the Results

After one quarter, Juan Pablo didn’t observe the results he anticipated.

Although one engineer did put in work to make staging less of a blocker to production, there were few other changes to how the team worked. Pull Requests were not being merged within 3 days, and product teams were not deploying once a day.

These metrics revealed that they hadn’t moved the needle, but didn’t reveal what to do about it.

Experiment 2: Applying Actionable* Metrics to Engineering

Forming a Hypothesis

Juan Pablo was convinced that their direction was right, but he realized the metrics they had chosen weren’t actionable. It wasn’t clear what any individual engineer or manager could do to improve the process. “I knew I needed better metrics and measurements,” Juan Pablo told us.

So he scoured the internet for all the reading material he could find. Two sources moved him toward better measurements:

  • State of DevOps reports, which regularly survey thousands of engineers to identify positive work patterns and the best indicators of improvement.
  • Accelerate, in which the authors of the DevOps reports distilled their findings after four years and identified four crucial metrics to measure and improve engineering tempo.

These resources were based on research that had been conducted over several years with scientific rigor— exactly what Juan Pablo was looking for.

One major pattern that the researchers promoted was to distinguish product design from product delivery. Juan Pablo had been thinking of all of product and engineering as a single entity— but the process could be separated into predictable and unpredictable portions of the workflow.

Product design and planning are, by nature, unpredictable. They often involve scoping work that has never been done before, which results in imprecise estimates of scope and effort. Delivery, on the other hand, can be made predictable. Engineers can ship changes incrementally, irrespective of the scope of the feature they’re working on.

Thus Juan Pablo’s new hypothesis was: Decoupling Engineering from Product + Actionable Deliverability Metrics = Increased Engineering Tempo. The metrics they chose were Cycle Time, Mean Time to Restore and Deploy Frequency.

With a new hypothesis and a “plan for the plan,” as Juan Pablo calls it, the engineering team was ready to try again.

Conducting the Experiment

Decoupling engineering from product would take some heavy lifting, so Juan Pablo put together a Production Engineering team. “Their job was to build the tooling, services, and expertise that enables teams to deliver and operate high quality, production-level software and services,” says Juan Pablo.

This team was responsible for owning improvement on key metrics:

  • Cycle Time (commit to production), targeting less than one hour.
  • Mean Time to Restore (MTTR), targeting less than one hour.
  • Deploy Frequency, targeting more than once a day.

To be able to track Cycle Time and Deploy Frequency, Juan Pablo found an engineering analytics tool, Velocity. Out of the box, it shows three years of historical data, so Juan Pablo could measure how scale had impacted the team, and whether they were trending in the right direction.

Getting there meant giving engineering more ownership over product delivery, which required a few major transitions:

  • Engineers became responsible for deploying code, while product decided when to release features to customers.
  • Testing shifted left and became part of developers’ responsibilities, so they no longer had to wait for a QA team to ship changes.
  • The department invested in more testing and monitoring tooling, so the team could ship with more confidence.

Over the next quarter, the Production Engineering team worked with the organization to bring down Cycle Time.

Analyzing the Results

At the end of that quarter, the results spoke for themselves. On his Velocity dashboard, Juan Pablo saw Cycle Time had decreased by 25%. Even more importantly, however, it had become consistent:

The team’s throughput had increased 3x without a significant change in headcount:

“We saw results—and we also finally had the language to talk about performance,” said Juan Pablo.

The actionable metrics Juan Pablo had discovered, monitored within Velocity, gave the whole team a means to communicate how far away they were from their goals. When an engineer was blocked for any reason, they could point to the effect it had on Cycle Time. This focus helped them solve the immediate problem of increasing tempo, and it equipped the team with the visibility to solve upcoming problems.

Building Scientific Rigor into Continuous Improvement

While the metrics and practices in Accelerate aren’t quite industry standards yet, the researchers have applied a level of scientific rigor that has yielded predictable results for organizations of all sizes. The DevOps report has shown that over the past four years, an increasing number of organizations are practicing Continuous Delivery. More of the industry is using engineering measuring tools to look at metrics like Cycle Time and Deploy Frequency, and seeing tangible improvements in engineering speed.

Through these recent studies and his own research, Juan Pablo had the unbiased data to finally approach software engineering like a scientist.

Thanks to the hard work of the Splice engineering team and their investment in engineering measuring tools like Velocity, Juan Pablo told us: “We have created a culture of continuous systems and process improvement. We also have a language and a framework to measure this change.” Sign up for a Velocity demo to see how your team can benefit from similar visibility.

Announcing Velocity 2.0: The Most Powerful Platform for Engineering Intelligence

Today, we’re proud to introduce Velocity 2.0, the most powerful Engineering Intelligence platform ever built. The all-new platform empowers any development team to eliminate bottlenecks and make lasting improvements to their team’s productivity.
Apr 23, 2019
7 min read

“I tried everything in the industry: planning vs. execution, burndown charts, story points, time estimation. Finally, this product opened my mind.” – Boaz Katz, CTO at Bizzabo

When we launched Velocity about a year ago, we were driven by our theory that the happiest engineers work on the most productive teams, and vice versa. Our analytics tool gave managers an understanding of engineering speed and exactly when slowdowns occur.

After getting the product into the hands of thousands of early users, we were thrilled to discover that customers wanted ways to apply their new insights. They had the visibility to see when something went wrong, but not yet the diagnostic tools to determine the best remedy.

We also uncovered a wide range of priorities and needs across engineering teams. Data that was insightful to a large team was less valuable to a small team. Metrics that revealed coaching opportunities to managers of engineers were less useful to managers of managers. We knew early on that we had to build flexibility into the heart of Velocity.

One year and hundreds of meaningful conversations later, we’ve completely revamped the product.

Today, we’re proud to introduce Velocity 2.0, the most powerful Engineering Intelligence platform ever built. The all-new platform empowers any development team to eliminate bottlenecks and make lasting improvements to their team’s productivity.

Here’s how it works.

Actionability with deeper insights and concrete goals

Velocity 2.0 gives users the ability to drill down to the individual unit of work or habit that makes up a trend. This enables engineering teams to understand the underlying drivers of their productivity, so they can work together to improve the speed of innovation and the quality of software delivered.

With Velocity 2.0, engineering leaders can empower the entire engineering organization to participate in a culture of continuous improvement:

  • Engineering Executives can gauge engineering speed over months or quarters, but also slice and dice that same data however they’d like. They can see ROI across various initiatives, such as restructuring teams or investing in new technologies.
  • Engineering Managers have the visibility to see slowdowns and drill down into precisely why engineers are stuck. This helps them keep current sprints on track, and work with engineers to create optimizations that prevent future bottlenecks.
  • Individual Developers now have concrete success metrics to track their individual and team progress. This allows them to grow as engineers while supporting team and org objectives.

After engineering teams uncover opportunities for improvement, they can quickly translate them to action by setting measurable Targets within the application. They can then visualize and track progress to hit all their continuous improvement goals.

The industry’s first Targets feature lets you and each team member check in on your goal and how much progress you’ve made as a team.

Flexibility for teams to track what matters most

No two engineering teams are alike.

Some teams are heads-down, trying to ship as many features as possible before a target date, while others are trying to buckle down and prepare for scale. All-remote engineering teams require more communication and check-ins than teams in open offices. Velocity 2.0 is the only engineering analytics platform flexible enough to accommodate any software engineering team.

While Velocity 2.0 works right out of the box, it’s fully configurable. Users have the power to turn on and off whatever reports they care about and set their own definitions of success. Teams can customize:

  • Metrics: Any metrics that aren’t currently a focus for your team can simply be turned off. If your team prefers not to measure activity metrics, like commits or pushes, turn those off to focus on metrics that represent value delivered, like pull requests merged or deploy volume.
  • Algorithms: Build your own health checks based on the metrics you care about most. If “healthy” means a team that pushes small commits, gets through the review process quickly, and deploys weekly, then Velocity 2.0 will adjust alerts based on those parameters.
  • Thresholds: Mark anything you consider a red flag for the org, a team, or an individual, and Velocity will highlight anything that might merit your attention, such as a large pull request, an overburdened developer, or a troublesome code review.
  • Reports and Dashboards: Ask any questions that aren’t answered out of the box within the Reports Builder, then save the results to your own custom-designed dashboards.
  • Permissions: Some teams prefer to keep reports on a need-to-know basis, based on role or user. This is particularly useful when new team members are still working on their engineering chops. Set limited permissions so only you and the team member know how a metric is trending.
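
To give a feel for what threshold-style configuration might look like, here is a hypothetical sketch. The keys, values, and record shape are invented for illustration and are not Velocity's actual settings format:

```python
# Invented red-flag thresholds for pull requests.
thresholds = {
    "large_pr_lines": 400,      # flag PRs that change more lines than this
    "max_review_comments": 20,  # flag troublesome, high-churn code reviews
}

def red_flags(pr: dict) -> list:
    """Return the red flags a single pull request trips."""
    flags = []
    if pr["lines_changed"] > thresholds["large_pr_lines"]:
        flags.append("large pull request")
    if pr["review_comments"] > thresholds["max_review_comments"]:
        flags.append("troublesome code review")
    return flags

print(red_flags({"lines_changed": 520, "review_comments": 4}))  # ['large pull request']
```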

Velocity 2.0 allows you to gauge success by your own organization’s definition, not ours.

The new, completely programmable Health Check report enables you to see at a glance how your team is doing this iteration compared to the previous three.

The future of engineering analytics

Velocity 2.0 is just the next step in our mission to create a more data-driven engineering world. With more visibility and the tools to take action, every software engineering team can boost their speed of innovation, which, in turn, allows us as an industry to overcome some of our biggest challenges, faster.

We’re incredibly grateful to our early users whose feedback was integral to the development of Velocity 2.0. Here’s what some of them had to say about it:

“Velocity’s Reports Builder has helped our team gain new insights into any potential bottlenecks in our sprints allowing us to definitively track our team progress, accelerate code reviews, pull data to match insights in retros and one-on-ones, and ultimately ship value to our customers significantly faster.” – Jacob Boudreau, CTO at Stord

“Thanks to Velocity, we’ve been able to actually get to a point where we’re doing atomic commits, everyone is committing more frequently, and reviews are easier.” – Chelsea Worrel, Software Engineer at Tangoe

“I’ve never seen anything quite like Velocity.” –Josh Castagno, Engineering Manager at Springbuk

Velocity 2.0 is most powerful when you can see it with your own data. Request a demo here.

How to Advocate for Solving Invisible Technical Problems, According to Kickstarter’s Mark Wunsch

“Effective engineering leadership is being outcome-oriented,” says Kickstarter’s Mark Wunsch. When Mark first joined Kickstarter as VP of Engineering, one of his first decisions was to re-organize the engineering department, based on business-oriented objectives.
Mar 26, 2019
7 min read

“Effective engineering leadership is being outcome-oriented,” says Kickstarter’s Mark Wunsch.

When Mark first joined Kickstarter as VP of Engineering, one of his first decisions was to re-organize the engineering department, based on business-oriented objectives.

Each 3-7 person scrum team focused on a different “essence,” or a different customer-facing part of the product. This helped engineering understand their work from a business perspective. Mark tells us, “The objective is not to ask, ‘Is the code good?’ but to ask ‘Is Kickstarter good?’”

The engineers’ perspective, however, was difficult to communicate to leadership.

The engineers were constantly bogged down by problems that were invisible to non-technical colleagues, such as legacy code. Kickstarter has been around for about 10 years, so a big portion of the codebase was troublesome to work with. Mark told us, “To an engineer, it’s so obvious when a piece of code is brittle, but it’s really hard to advocate for putting engineering resources into solving technical debt.”

Mark decided to use metrics to further align engineering and leadership.

Diagnosing technical debt with data

Every developer knows that legacy code undoubtedly slows down engineering. But taking weeks away from shipping new features compromises how much new value the company is delivering to customers.

Before making a case for refactoring to leadership, Mark decided to do a deep dive into where technical debt was slowing down the team. He used engineering analytics tool Velocity to learn how each engineering team was working and where they might be getting stuck.

Mark started by looking at his team’s weekly throughput, as measured by merged pull requests. Whenever the throughput dipped significantly below their average, he’d know to investigate further.
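
A detection rule like Mark's can be sketched in a few lines. Here, "significantly below average" is arbitrarily taken to mean more than 25% under the mean, and the numbers are made up:

```python
# Hypothetical weekly merged-PR counts for one team.
weekly_merged = [14, 16, 13, 15, 6, 14]

avg = sum(weekly_merged) / len(weekly_merged)
flagged_weeks = [i for i, n in enumerate(weekly_merged) if n < 0.75 * avg]

print(avg, flagged_weeks)  # 13.0 [4] -> week 4 merits a closer look
```

A rolling average or per-team baseline would be more robust, but the idea is the same: compare each week against the team's own norm, not an absolute number.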

A low count of Pull Requests Merged at the end of the week can be a red flag that a team is stuck.

Unlike subjective measures common on most engineering teams, like story points completed, Velocity metrics are represented by a concrete unit of work: the Pull Request. This enabled Mark to objectively understand when a scrum team was really bogged down, compared to the last sprint or last month.

Once he spotted productivity anomalies, Mark would pull up a real-time report of his teams’ riskiest Pull Requests. Pull Requests that were open longer and had a lot of activity (comments and back-and-forths between author and reviewer) were at the top of the list.

An example of Velocity’s Work In Progress report, which shows the old and active PRs that may be holding up the team.

Because trickier parts of the application tend to require more substantial changes, the most “active” pull requests often pointed Mark to the most troublesome areas of the codebase.

After a few weeks of investigation, Mark was able to find concrete evidence for what his intuition was telling him. “The data showed that we were consistently slowed down because of legacy code,” said Mark.

Bringing transparency to engineering practices

During meetings with the executive team, Mark could now point to weeks with less output and prove that technical debt was impeding the engineering team from their primary business objective: continuously delivering new features.

To communicate how the team was doing, he’d present a Pull Request throughput chart with a trend line:

A Velocity report showing Pull Requests Merged/Day over the last 3 months.

This helped leadership visualize not only how much Kickstarter’s engineering efficiency was improving, but also where opportunities for further improvement remained.

Mark also shared Cycle Time (i.e., how quickly code goes from a developer’s laptop to being merged into master).

A Velocity report showing the fluctuation of Cycle Time over the last 3 months.

Cycle Time was a great indicator of how much trouble it was to make a change to the codebase. High Cycle Time would often correspond to low output a day or two later, showing that some form of obstruction existed for a developer or team.
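The metric itself is simple to state. As a hedged sketch (the data shape and function are invented, and tools may define the start and end points slightly differently), Cycle Time is the elapsed time from a branch's first commit to its merge into master:

```python
from datetime import datetime

# Hypothetical sketch: Cycle Time as elapsed time from a branch's
# first commit to its merge into master, in days.

def cycle_time_days(first_commit_at, merged_at):
    return (merged_at - first_commit_at).total_seconds() / 86400

# A change started Monday morning and merged Friday afternoon:
print(round(cycle_time_days(datetime(2020, 12, 14, 9, 0),
                            datetime(2020, 12, 18, 17, 0)), 1))  # → 4.3
```

Because it spans the whole path from first commit to merge, a rising Cycle Time can surface friction (hard-to-change code, slow reviews) that a raw commit count would miss.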

These two charts, along with Mark’s summary of his more technical findings, aligned all of leadership around temporarily scaling back on new features and dedicating more time to refactoring.

Bridging engineering and leadership

After spending time investigating which legacy code was slowing down the team, Mark was able to take a strategic approach to tackling technical debt.

Rather than jump on the opportunity to fix anything that looked broken, he could have teams focus on the biggest productivity blockers first. Engineers were happy because they had the time to rework the legacy code that was constantly slowing them down. Leadership was happy when they could see long-term improvements in engineering speed. Six months after refactoring, Kickstarter saw a 17% increase in Pull Requests merged and a 63% decrease in Cycle Time. It was a win-win-win.

Mark tells us, “Being able to talk about technical debt in the same way we talk about business metrics is incredibly powerful.”

If you want to learn exactly how much technical debt is slowing down your own engineering team, talk to one of our Velocity product specialists.
