

One of the first and most essential uses of Velocity is to cut through the noise and help managers spot the signals of stuck engineers. This lets management eliminate unnecessary check-ins while still being able to unblock engineers and boost productivity by stepping in to help a developer who might hesitate to raise their hand.
Velocity provides visibility into:
Look out for four main behavioral patterns in Velocity to help address these concerns.
A quick scan of the Activity tab will help you identify developers who aren’t checking in code.
Head into the Team360 report, select the Activity tab, and look for team members with few or no commits, represented as purple circles.
In the example above, Hecate hasn’t committed in a couple of days. This could indicate that she is:
If you see a similar work pattern in your team’s Activity Log, you might want to check in to identify the bottleneck and help your developer get back on track.
Any developers who are committing, but not opening PRs, might be churning. Once again, go to the Activity tab in the Team360 report to see which engineers’ work appears to be blocked.
As noted in the key, Commits and Merged Commits are indicated by dark and light purple circles, respectively, and open PRs by light blue diamonds. Look out for clusters with many circles and no diamonds.
As you can see in the top row, Donalbain has been consistently committing code, but not opening any PRs.
This could be because he is:
Take this opportunity to dive in and identify the issue.
Long-running PRs may indicate that an engineer is stuck on that particular unit of work, or that they’re multi-tasking, causing delays for multiple PRs.
Investigate all open and active PRs in the Pull Requests report. (Note that if you look at this report in the morning, it might look bare, since it automatically shows “today’s” activity. In this case, use the date picker to extend to yesterday, or the past two days to see what’s in progress).
To surface the oldest PRs, sort by age by clicking on the “AGE” header. Pay close attention to anything that’s been open for over 72 hours.
A PR might be long-running because:
Finally, the Analytics tab is a good place to go to identify late-stage churn. You’ll want to run a query for Review Cycles, or the number of times a Pull Request has gone back and forth between the author and reviewer.
To obtain this report, select Review Cycles as your metric, and group by contributor. Run a query for the last week or two, and scroll to the bottom until you see the following bar graph visualization:
When Review Cycles are high, it may indicate:
With the right data, you can identify which of your team members are stuck right now, so you can help remove the roadblock and get things moving again.
If you want to boost engineering productivity, but don’t have a way to track and analyze your engineering metrics, reach out to find out more about our Software Engineering Intelligence platform, Velocity.

The authors of Accelerate surveyed over 23,000 individuals in 2,000 distinct software companies to uncover the methodologies that set top-performing organizations apart. Their research suggests that “speed and stability are outcomes that enable each other” and that any software organization can measure and improve these outcomes.
The Continuous Delivery (CD) best practices they recommend, such as keeping batch size small, automating repetitive tasks and investing in quick issue detection, all perpetuate speed and quality while instilling a culture of continuous improvement on the team.
While most of the industry is actively adopting CD, few have set up any way to measure their progress. Concrete metrics, such as those found within Velocity, are a prerequisite to ensuring success in this transition.
In this guide, we’ve outlined how you can use Velocity to answer:
There are two success metrics that can represent how “continuously” your organization is shipping: speed (Cycle Time), and throughput (Pull Requests Merged or Deploy Volume).
Start by looking at the Analytics report to see how well you’ve been improving on one of those metrics. We recommend looking back at least 90 days.
Cycle Time, or the time from an engineer’s first commit to the merge to production, should be trending down. Good coding habits, such as working in small batches, should keep changesets moving through the process with little friction, while CI/CD tooling should automate much of the late-stage engineering and QA work that frequently blocks a merge to production.
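The arithmetic behind a Cycle Time metric can be sketched in a few lines. Note that the PR data shape below is a hypothetical illustration for this sketch, not Velocity’s actual data model:

```javascript
// Hypothetical merged-PR data: first commit and merge timestamps.
const pullRequests = [
  { firstCommitAt: "2021-03-01T09:00:00Z", mergedAt: "2021-03-02T17:00:00Z" },
  { firstCommitAt: "2021-03-03T10:00:00Z", mergedAt: "2021-03-03T16:00:00Z" },
];

const HOUR = 1000 * 60 * 60; // milliseconds per hour

// Cycle Time for one PR: elapsed hours from first commit to merge.
function cycleTimeHours(pr) {
  return (Date.parse(pr.mergedAt) - Date.parse(pr.firstCommitAt)) / HOUR;
}

// Average Cycle Time for the period; comparing this across successive
// weeks or months shows whether the trend is moving down.
const avgCycleTime =
  pullRequests.map(cycleTimeHours).reduce((a, b) => a + b, 0) /
  pullRequests.length;
console.log(avgCycleTime); // hours
```

Tracking this average per week or per month, rather than per PR, is what makes the downward (or upward) trend visible.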
Here’s how you can read this trend:
Alternatively, you can use Pull Requests Merged or Deploys as your yardstick. In this case, you can invert how you interpret results. Increasing throughput is desired, while flat and decreasing trends are a red flag that your team’s new practices are not yet yielding better outcomes.
After understanding your overall engineering speed, you’ll want to investigate health metrics to find specific areas for improvement.
Velocity metrics can serve as strong proxies for the following Continuous Delivery practices:
In Velocity’s Compare report, you can look at these metrics across teams or for individuals to identify coaching opportunities or process improvements.
Click on a team to drill down and see performance on an individual basis:
Finally, get more context on a team or individual by seeing how they’ve performed on a specific metric, historically. The Analytics report lets you pick how far back you look and then drill down into any units of work that are dragging the average up or down.
Now that you have all the context for how your team is working, create a mental model of your software delivery pipeline to see where a unit of work is most likely to get stuck. This will help you prioritize where you should start making optimizations.
We recommend breaking your process into three portions:
You can look at these three metrics side by side, by selecting them in the Analytics report and viewing them as bar graph clusters by week or by month.
With this data, you’ll be able to determine which area of your process is worth a closer look. If multiple stages need attention, we recommend starting with the one that comes earliest in your development pipeline, as improvements at early stages can have an impact downstream.
To dig into your own data and start measuring your progress towards Continuous Delivery, sign up for a demo of our Software Engineering Intelligence platform, Velocity.

Velocity or GitPrime? Read a head-to-head analysis to decide which is best for your team.
In competitive markets, the viability of a business depends on engineering performance. In their 2020 study of 400+ enterprises across 12 industries, McKinsey concluded that engineering departments performing in the top quartile of the Developer Velocity Index (DVI) “outperform others in the market by four to five times.”
Historically, however, engineering has been a black box. The average company invests millions of dollars a year into the department, but most have no way of assessing the returns.
This is why many market-leading organizations, like Kickstarter, Gusto and VMware, are starting to adopt Engineering Intelligence to get visibility into their software development workflows. Doing so has enabled them to effectively improve performance, boost Time to Market, and out-innovate competitors.
The two most popular engineering analytics platforms, Velocity and GitPrime (recently acquired by Pluralsight and rebranded as Pluralsight Flow), both offer transparency into engineering performance and process efficiency but differ in their approaches.
To help you make a decision about which approach to engineering metrics works best for your team, we put together a thorough head-to-head comparison of Velocity and GitPrime. Read the post through, or click on a link to skip to the section that’s most important to you.
Setting Up
Coaching
Tracking Progress
Goal Setting
Scope of Visibility
Surfacing Issues
Customization
Cost
Tl;dr: The setup process can be just as fast for both GitPrime and Velocity, so you can be up and running as soon as your data imports.
First, you’ll want to know the time and effort it takes to get set up, so you can have an accurate expectation of how soon you’ll be up and running. Both analytics tools recognize the friction involved with process changes, so they’ve done their best to streamline this experience.
Start setting up Velocity by first signing in with your GitHub or Bitbucket account. Once you’re in, you’ll be prompted to add your repositories, so you can start seeing your engineering data in the app.
GitPrime has a similar setup process. You start by creating a new GitPrime account and then setting up integrations with whichever Git or product management tools you might be using.
GitPrime supports more version control systems than Velocity, and each has a slightly different workflow. You can import repos accessible over HTTPS or SSH from any server, or use OAuth to connect to your GitHub, GitLab, or Bitbucket organization.
From there, you’ll also have to organize your data. You won’t be able to assign repos to applications, but you can organize them by tag. Contributors can similarly be hidden from reports, merged, or assigned to teams.
Tl;dr: Velocity has a more robust set of coaching features than GitPrime. Whereas GitPrime offers a few metrics per developer, Velocity offers a 360 degree view that covers the day-to-day, week-to-week improvement, and long-term development.
A top priority that we often hear from organizations looking to invest in engineering analytics is the need to improve team and individual performance.
Velocity’s 360 reports combine all coaching features in one comprehensive report that provides a complete picture of developers’ and teams’ work habits. GitPrime reduces developer performance to a few key metrics, and offers more prescriptive guidelines.
Velocity’s Developer360 report gives managers instant visibility into your developer’s active work, improvements along key metrics, and skills.
The report includes four tabs:
Velocity’s Developer360 report focuses on objective metrics and does not presume what they may indicate. We recommend Velocity for teams who are looking to avoid reductive metrics.
GitPrime has two main reports for coaching developers:
GitPrime’s coaching reports are a fit for leaders who desire suggestions towards specific action based on how a given contributor is performing relative to their peers. For those who prefer GitPrime’s more prescriptive approach to coaching, however, we recommend keeping in mind that metrics don’t always paint a full picture.
For example, if you look at PR Throughput on this graph, you’ll see how many changes a given developer has shipped in contrast to his or her team members. But a data point on the top right of the graph doesn’t include the context that many of the deploys were relatively small in impact.
Tl;dr: Both tools provide at-a-glance dashboards that let you see trends over weeks, months or quarters. Velocity provides more PR-related metrics and has a real-time view into how you’re doing this sprint. These metrics allow you to evaluate progress across projects, sprints, and cohorts, making it possible to implement high-level process changes that can fundamentally improve the way your team works. GitPrime has more contributor-based metrics, which make it more difficult to help your entire team improve together.
The same insights that previously required hours of digging through repos and countless 1:1s are available at-a-glance in both analytics tools. But each application tracks “progress” slightly differently. Where Velocity makes it easy to track process-level metrics like Push Volume and compare progress across teams and time periods, GitPrime prioritizes reports that track metrics by individual contributor.
Velocity has two main features that allow for progress tracking:
Velocity makes it easier to do things like identify and learn from your highest-performing teams, or track the success of particular initiatives. For example, you might track new developers’ Deploy Volume to evaluate how they’re progressing with onboarding based on how much of their work is making it into the codebase. And if our standard reports don’t include the insights you need, you can use our customizable Analytics report to dig even deeper into your data.
Velocity’s progress tracking reports are most suitable for managers who interpret metrics as insights about the work, not the person.
GitPrime has its own report for progress tracking:
GitPrime’s Project Timeline report best complements a management style that prioritizes tracking contributor performance over PR- and process-related metrics.
Tl;dr: Both applications include robust goal-setting features. The approaches differ in the types of goal-setting capabilities provided.
The goal of adopting an Engineering Intelligence tool is to use the greater visibility found in metrics to drive positive change in your organization.
Both Velocity and GitPrime include target-setting reports, but whereas Velocity tracks progress in terms of success rates, GitPrime tracks averages in their goal-setting system.
Since high performance in engineering is critical to business success, you can use Velocity’s Targets feature to measure, improve, and communicate progress using objective metrics that support departmental initiatives. This report serves as concrete data to inform any OKR or KPI-related conversation, while the ability to drill down into outliers enables team members to diagnose why targets aren’t being met.
Within Velocity’s Targets feature, executives, leaders, and front-line managers can build a dashboard of reports that visualize progress toward goals in terms of success rates or averages.
When setting a goal, many leaders find that tracking averages over time doesn’t properly represent the progress that’s being made toward that goal.
If you’re tracking PR size, for example, a single, long-running PR might obscure the dozens of PRs that moved quickly through the pipeline. If you’re tracking Review Speed, a single neglected review inaccurately suggests inefficiencies in the review process.
Thus, Velocity’s Targets report is tailored to engineering leaders who acknowledge anomalies and believe that it’s acceptable for a few data points to be outside an expected target.
Instead of success rates, GitPrime tracks averages in its goal-setting system.
GitPrime’s Fundamentals report is most compatible with managers who prefer the more common approach of tracking averages. However, it is important to note that if you have an outlier in your data — maybe one particularly complicated PR required a lot of back and forth in Code Review — that outlier will throw off your average. This can make it difficult to see the overall trend, and inaccurately suggest inefficiencies.
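The difference between the two approaches is easy to see with a small worked example. The review-time numbers below are hypothetical, chosen only to show how a single outlier skews an average while barely moving a success rate:

```javascript
// Hypothetical review times (hours) for six PRs in a sprint.
// Five moved quickly; one complicated PR required days of back-and-forth.
const reviewHours = [2, 3, 1, 4, 2, 96];

// Average: the single outlier dominates the result.
const average =
  reviewHours.reduce((sum, h) => sum + h, 0) / reviewHours.length;
console.log(average.toFixed(1)); // 18.0 hours — suggests a slow review process

// Success rate against a target (say, "reviewed within 8 hours"):
// the outlier counts as one miss, and the healthy trend stays visible.
const target = 8;
const successRate =
  reviewHours.filter((h) => h <= target).length / reviewHours.length;
console.log(`${(successRate * 100).toFixed(0)}%`); // 83% — five of six PRs hit the target
```

The average reports a review process four times slower than five of the six PRs actually experienced, while the success rate correctly reflects that only one PR missed the target.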
Tl;dr: If you want to evaluate your process from end to end, you’re better off going with Velocity, which was built specifically for CD. Conversely, GitPrime was built for coding efficiency with an emphasis on Code Review, and doesn’t include data from before a PR is opened or after it is merged.
While most of the industry is actively adopting Continuous Delivery, few have set up any way to measure their progress.
To optimize or adopt CD processes, organizations need a complete, end-to-end picture of their engineering processes. Concrete metrics, such as those found within Velocity and GitPrime, are a prerequisite for ensuring success in this transition.
Velocity is the only application in its category to shine a light on the entire software development process. Key metrics you need when measuring CD include: Cycle Time, Deploy Volume, Time to Open, Time to Review, and Time to Merge, the majority of which are not available in GitPrime.
Our objective is to eventually incorporate data from every important tool that an engineer touches.
Teams looking to optimize each part of their software delivery pipeline, not just Code Review, are better off going with Velocity.
GitPrime was originally built to improve coding efficiency and has since built Code Review features as an add-on. This leaves important parts of the software delivery process obscured, such as what happens before a PR is opened or after it is merged.
Teams focused exclusively on optimizing their Code Review processes will benefit more from the granularity found in GitPrime’s Review Workflow report.
Tl;dr: Velocity, with PR-related metrics at the core of the product, does a better job drawing attention (inside and outside of the app) to actual artifacts of work that could be stuck or problematic. GitPrime, with mostly people-focused metrics, draws attention to contributors who could be stuck or problematic.
Engineering is expected to continuously deliver business value to your organization, but a single bottleneck can hold up the entire team during any given sprint. The larger your team gets, the harder it becomes for you to discern what work is stuck in the pipeline and why.
Velocity and GitPrime take different approaches to identifying outliers or irregular work patterns.
Velocity employs a variety of visualizations to help you find the root cause of any issue that might slow down your team:
Your team is also able to spot issues outside the application through daily standup reports, available via email or Slack. Velocity, then, isn’t an analytics tool for top-down management, but for leaders who want to keep the whole team on track.
GitPrime’s core product ties each issue to a contributor, which gives managers an easy way to determine who to go to when something goes wrong on a particular week or month. Only in the collaboration reports, available in higher tiers, is there insight into problematic work products, such as PRs.
Here’s where you’d look to find inefficiencies, bottlenecks, and stuck engineers:
We recommend GitPrime for managers who prefer visibility into low-performing developers over visibility into stuck work.
Tl;dr: Velocity includes customizable reports that allow you to ask questions of your data and derive more meaningful insights. GitPrime does not have custom reporting, but they do offer an API.
If you have unique requirements or track a unique metric, you might require a more flexible platform. Here’s how your two options compare.
Velocity has an entire feature set dedicated to making the product more flexible for teams who work off the beaten path:
Velocity is the best option for engineering organizations who’d like the flexibility to build any charts that aren’t already available out-of-the-box.
GitPrime does not have custom reporting, but they do offer an API in their Enterprise package for customers who have the resources to build out their own reports.
There is also a portion of the application where users can set simple targets for the entire organization, teams, and contributors.
GitPrime is a good fit for customers who have the resources to build out their own reports.
Tl;dr: While pricing of the two products is competitive, GitPrime restricts more features in their lower tiers. Velocity offers more capabilities for less, and the flexibility of their platform allows for customizability irrespective of cost.
The two products do not differ much in terms of pricing, so if you’re operating within significant budget constraints, a build-it-yourself solution is probably most feasible. Otherwise, both products tier slightly differently, so make sure you’re getting the core features that are most important to your team.
Velocity has four pricing packages based on team size, including a free option for teams of 10 or fewer. For teams of 10+, pricing starts at $449/seat per year. Each tier includes access to all metrics and reports (including the flexible Analytics report) and gives teams access to unlimited historical data.
The small and medium tiers are limited in number of repos (50 and 100, respectively), while the largest priced tier is not. The team reporting function, which lets you see metrics summarized on a team-by-team basis, is not available until the largest tier.
GitPrime has a more complex pricing system. They have 3 tiers with different features, and a sliding pricing scale, based on how many engineers are in your organization. Their pricing starts at $499, but they limit a lot of their features in the lower tiers.
The lowest tier does not include their “code review collaboration insights.” They also restrict the historical data they make available: 12 months for the first tier and 36 months for the second tier.
Engineering excellence drives business performance. The teams that are excelling in the space are the ones that have the vernacular to talk about developer performance and the tools to improve it.
To this end, Velocity data serves three primary purposes. It’s used to:
Most importantly, Velocity has a few more tools to put your learnings into action. You can set up Slack and email alerts for irregular activity and you have a first-class targets system to encourage your team to improve.
GitPrime, conversely, focuses mainly on individual performance and imports its data from Git, which means the tool works primarily off source-code-level data, not collaborative work data.
GitPrime equips a manager to keep closer track of their engineers, so they have a clear idea of the strongest and weakest performers of the team. This approach is for hands-on managers who still want an active role in how their direct reports work.

In this time of global and economic uncertainty, it’s never been more important to have a quick way of knowing which engineering processes are working and which are broken.
And while it can be tempting to focus on bottlenecks and struggling team members, it can be even more useful to look at the practices, behaviors, and culture of your strongest teams.
This post runs through a framework for using Velocity, our Software Engineering Intelligence (SEI) platform, to identify and scale the most successful habits of high-performing teams.
While all engineering organizations might define success slightly differently, there are two metrics within Velocity that indicate an extremely proficient engineering team:
In Velocity’s Compare report, you can select both of these metrics and compare them across your organization to identify the teams that are merging the most the fastest.
Once you’ve identified your strongest teams using success metrics like Throughput and Cycle Time, you’ll want to dig into what has made them successful. For this, you’ll need different, diagnostic metrics.
You can think of your software delivery process in three phases:
Typically, a strong engineering team will move faster in one of these stages.
Velocity makes it easy to look at these three metrics side-by-side. You can view them as bar graph clusters by week or by month.
Or, you can view these metrics by team.
Here, we can see that the top team — Gryffindor — is most distinguished by its extremely fast Review Speed. Its Time to Open and Time to Merge are long, but not unusual compared to the other teams’. The other teams (especially the Hogwarts team) frequently had work stuck in the review process.
Pair your quantitative analysis with qualitative information, and speak to the members of the Gryffindor team. Find out what makes their review process different from the other teams’ processes, and think about ways the other teams can apply those learnings.
DORA metrics are also useful to identify high performing teams within the organization.
Now that you’ve identified your top-performing teams and their defining characteristics, you can create a blueprint for better processes across your organization.
One of our customers, a health data analytics solution, used Velocity following a Series B funding round to level up the way they coached engineers at scale.
Their VP of Engineering had been brought on to help build out the engineering department. But after getting to know his teams, he realized that there wasn’t any consistency in how and when teams shipped features to end-customers.
The VP of Engineering worked with his engineering leads and product managers to identify agile practices that worked for his team, then shared them org-wide. Together, they created documentation and set up onboarding and mentoring around encouraging healthy coding habits at scale.
With stronger processes in place, the team was able to increase PR throughput 64%. With objective data and learnings from your highest performing teams, you’ll be able to replicate successful practices across your organization, and help boost productivity at scale.
Find out how Code Climate Velocity can help your team improve Cycle Time and PR Throughput by booking a demo.

The information required to lead an engineering team is increasingly scattered.
To answer even simple questions like ‘How is my team performing?’ and ‘Is our current sprint on track?’ an engineering manager may need to check as many as ten different systems — version control, project management, feature flags, DevOps tools, and incident management, to name a few. Even then, this fragmented information-gathering often doesn’t yield a clear answer.
What’s more, a lack of visibility upstream can lead to significant negative effects downstream, like:
We believe that in order to create a culture that both motivates engineers and improves their ability to drive innovation, managers need a comprehensive picture of where their team members are succeeding and where they can improve.
Our mission at Code Climate is to empower leaders with tools to drive high-performance. Today, as the next step in this mission, we’re launching Velocity’s newest feature, Developer360, to enable managers to build elite organizations with data-driven insights.
Support engineering excellence with a comprehensive view of developer work, performance, and skills — all in one place.
In order to empower their team to achieve excellence, every manager needs a quick way of knowing:
Developer360 gives you instant visibility into your developer’s active work, improvements along key metrics, and skills.
Frontline managers typically rely on stand-ups to check in on work-in-progress. But despite great intentions, even the best engineering teams don’t always bring up issues early.
The Snapshot report brings potential WIP issues to your attention before they derail one of your sprints.
The all-new Velocity Feed (far right) provides a chronological visualization of all an engineer’s recent work, including commits, requested changes, and opened or closed PRs.
With a scan of this report, you can start each day already aware of what’s been taking up a developer’s attention as well as what challenges they’ve been facing.
The more time engineering managers spend providing engineers with proper guidance, the more they’re investing in their team’s future.
The Foundations tab is a source of quantitative data managers can bring to coaching sessions. At a glance, a manager can see each contributor’s average over a given time period, how they’ve trended over that period, and their percentile performance.
Dig into a capacity metric like Impact, which measures the magnitude of changes to the codebase over a period of time. This metric can help you uncover high performers who may deserve recognition, or serve as an early warning sign that this team member may be in need of some coaching.
Incorporate this data into 1:1s and performance conversations to check your biases, come to a shared understanding of where strengths and weaknesses lie, and set quantitative, actionable goals.
Part of an engineering manager’s job is knowing what coding languages each developer has been working in so that they can distribute upcoming work, track migrations, and support professional development.
The Skills tab provides a visual summary of a developer’s technical toolbox, so that managers can come to planning and coaching sessions already aware of what skills each engineer has mastered (and what they’re still learning).
Get a sense of an engineer’s language gaps, and work with them to improve their expertise.
Engineering is only as strong as its contributors, and as such, building a culture of excellence starts on the individual level. Establishing a complete and shared understanding of how contributors are performing on a micro level will allow you to level-up the way your team is working on a macro level.
When developers are empowered to perform at their best, the entire organization benefits:
We’re excited to build on top of Developer360 in our mission to provide engineering leaders with the visibility required to level up their teams. This is just the start of our undertaking to establish Velocity as the single source of truth for software engineering.
Sign up for Velocity to drive high-performance on your team with a 360° view into developer work.

The data necessary to understand the entire software delivery process is increasingly fragmented. To ship a single feature, a team may take advantage of 10+ tools, from project management to version control, CI/CD, feature flags, and more.
At Code Climate, we’ve learned that combining data from different sources enables higher-level insights. For example, PagerDuty can tell you how often your developers are getting woken up—but it can’t tell you the impact on innovation. Adding Jira data enables you to understand the delays those incidents have on feature initiatives.
To unlock the full value from these data sources, we need:
And that’s exactly what we’re excited to be announcing today.
We’re launching the Code Climate Engineering Data Platform with:
Each Connector requests data from a DevOps tool’s API and outputs a set of records conforming to the data schema. Here’s how it works:
To kick things off, we’re open-sourcing Connector reference implementations for PagerDuty, CircleCI, and Codecov, all written in Node.js.
We believe that the future of DevOps data is open, and today, we’re taking the first step towards making this vision a reality.
We created the first extensible ecosystem for code quality four years ago. There are now 50+ static analysis and test coverage tools available, developed by thousands of community contributors, benefiting millions of engineers. Companies like GitLab have used it as the foundation for their code quality solutions.
With the new Engineering Data Platform, we’re helping the broader software development community build the first open standards for all engineering and DevOps data. We’d love your feedback or contributions on everything we are announcing today (it’s all in draft form!).
Here’s how you can get involved:
We look forward to building the next generation of Velocity and Quality features using this new, open standard. We’re also excited to see how others will take advantage of the Code Climate Engineering Data Platform to build products and tools we haven’t even thought of yet.

The following article is based on a talk Juan Pablo Buriticá, VP of Engineering at Splice, gave at our annual Engineering Leadership Conference and a subsequent interview. Watch the full talk here, and see the slides here.
“The most shocking thing is that software engineering is one of the least rigorous practices that exists,” says Splice’s VP of Engineering, Juan Pablo Buriticá. After graduating with a degree in Pharmaceutical Chemistry, he eventually taught himself to code and transitioned to a career in software. Juan Pablo was astonished to find software engineering to be a much more fragmented discipline. Few standards exist around even the most common software production methods, like Agile and Continuous Delivery.
When managing a small team, the lack of industry standards was rarely an issue for Juan Pablo. Whenever something felt inefficient, he’d get the whole engineering team in a room, identify the source of friction, adjust, and move on. After Juan Pablo scaled his team from 15 to 80 distributed developers, however, all their processes broke. “I had to go back and fix the mess I created by growing the team so fast,” said Juan Pablo.
But fixing them wasn’t so easy anymore. So, Juan Pablo turned to engineering measuring tools and the Scientific Method.
Before experimenting to determine which actions would yield improvement, Juan Pablo and the rest of the management team agreed that they needed to determine what specific outcome they were optimizing for. The team agreed that everything felt slow, but a gut feeling wasn't enough to start making changes.
They wanted to:
Thus, their hypothesis was: A Defined Direction + A Focus on Success Metrics = Increased Tempo.
The product and engineering leadership spent an offsite deliberating on what was next for Splice. They came up with the following vision for the organization: Shipping working code is one of the fastest, safest, and most effective ways to learn and to test new ideas. This meant that engineers were confident enough in their processes and tooling to take risks, and that they felt able to mitigate issues when something inevitably broke.
To test how they were doing and how far they had to go, they leveraged engineering measuring tools to investigate three metrics: Time to Merge, Deploy Frequency, and End to End Test Coverage. Combined, the team believed optimizing for these metrics would give their team confidence in the stability and speed of their processes.
Juan Pablo and the leadership team communicated this new vision and supporting metrics to the team. They were careful to note that this was an experiment designed to improve collaborative processes, not a change in response to performance issues.
These are the goals they communicated:
The specific targets they chose for each metric were a guess. “I picked 36 hours, because why not?” says Juan Pablo. The team was experimenting with metrics for the first time, so they had to start with a number. He predicted that enabling his team to track and measure these metrics alone would be enough to create change.
After one quarter, Juan Pablo didn’t observe the results he anticipated.
Although one engineer did put in work to make staging less of a blocker to production, there were few other changes to how the team worked. Pull Requests were not being merged within 3 days, and product teams were not deploying once a day.
These metrics revealed that they hadn’t moved the needle, but didn’t reveal what to do about it.
Juan Pablo had a conviction that their direction was right, but he realized the metrics they had chosen weren't actionable. It wasn't clear what any individual engineer or manager could do to improve the process. "I knew I needed better metrics and measurements," Juan Pablo told us.
So he scoured the internet for all the reading material he could find. Two sources moved him toward better measurements:
These resources were based on research that had been conducted over several years with scientific rigor, exactly what Juan Pablo was looking for.
One major pattern that the researchers promoted was to distinguish product design from product delivery. Juan Pablo had been thinking of all of product and engineering as a single entity, but the process could be separated into predictable and unpredictable portions of the workflow.
Product design and planning are, by nature, unpredictable. They often involve scoping work that has never been done before, so estimates of scope and effort are often imprecise. Delivery, on the other hand, can be made predictable. Engineers can ship changes incrementally, irrespective of the scope of the feature they're working on.
Thus Juan Pablo's new hypothesis was: Decoupling Engineering from Product + Actionable Deliverability Metrics = Increased Engineering Tempo. The metrics they chose were Cycle Time, Mean Time to Restore, and Deploy Frequency.
With a new hypothesis and a “plan for the plan,” as Juan Pablo calls it, the engineering team was ready to try again.
Decoupling engineering from product would take some heavy lifting, so Juan Pablo put together a Production Engineering team. “Their job was to build the tooling, services, and expertise that enables teams to deliver and operate high quality, production-level software and services,” says Juan Pablo.
This team was responsible for owning improvement on key metrics:
To be able to track Cycle Time and Deploy Frequency, Juan Pablo found an engineering analytics tool, Velocity. Out of the box, it shows three years of historical data, so Juan Pablo could measure how scale impacted the team, and whether they were trending in the right direction.
To start trending in the right direction, they had to work towards giving engineering more ownership over product delivery. Decoupling meant a few major transitions:
Over the next quarter, the Production Engineering team worked with the organization to bring down Cycle Time.
At the end of that quarter, the results spoke for themselves. On his Velocity dashboard, Juan Pablo saw Cycle Time had decreased by 25%. Even more importantly, however, it had become consistent:
The team’s throughput had increased 3x without a significant change in headcount:
“We saw results—and we also finally had the language to talk about performance,” said Juan Pablo.
The actionable metrics Juan Pablo had discovered, monitored within Velocity, gave the whole team a means to communicate how far they were from their goals. When an engineer was blocked for any reason, they could point to the effect it had on Cycle Time. This focus helped them solve the immediate problem of increasing tempo, and also equipped the team with the visibility to solve upcoming problems.
While the metrics and practices in Accelerate aren't quite industry standards yet, the researchers have applied a level of scientific rigor that has yielded predictable results for organizations of all sizes. The DevOps report has shown that over the past 4 years, an increasing number of organizations are practicing Continuous Delivery. More of the industry is using engineering measuring tools to look at metrics like Cycle Time and Deploy Frequency, and seeing tangible improvements in engineering speed.
Through these recent studies and his own research, Juan Pablo had the unbiased data to finally approach software engineering like a scientist.
Thanks to the hard work of the Splice engineering team and their investment in engineering measuring tools like Velocity, Juan Pablo told us: "We have created a culture of continuous systems and process improvement. We also have a language and a framework to measure this change." Sign up for a Velocity demo to see how your team can benefit from similar visibility.

“I tried everything in the industry: planning vs. execution, burndown charts, story points, time estimation. Finally, this product opened my mind.” – Boaz Katz, CTO at Bizzabo
When we launched Velocity about a year ago, we were driven by our theory that the happiest engineers work on the most productive teams and vice versa. Our analytics tool gave managers an understanding of engineering speed and exactly when slowdowns occur.
After getting the product into the hands of thousands of early users, we were thrilled to discover that customers wanted ways to apply their new insights. They had the visibility to see when something went wrong, but not yet the diagnostic tools to determine the best remedy.
We also uncovered a wide range of priorities and needs across engineering teams. Data that was insightful to a large team was less valuable to a small team. Metrics that revealed coaching opportunities to managers of engineers were less useful to managers of managers. We knew early on that we had to build flexibility into the heart of Velocity.
One year and hundreds of meaningful conversations later, we’ve completely revamped the product.
Today, we’re proud to introduce Velocity 2.0, the most powerful Engineering Intelligence platform ever built. The all-new platform empowers any development team to eliminate bottlenecks and make lasting improvements to their team’s productivity.
Here’s how it works.
Velocity 2.0 gives users the ability to drill down to the individual unit of work or habit that makes up a trend. This enables engineering teams to understand the underlying drivers of their productivity, so they can work together to improve the speed of innovation and the quality of software delivered.
With Velocity 2.0, engineering leaders can empower the entire engineering organization to participate in a culture of continuous improvement:
After engineering teams uncover opportunities for improvement, they can quickly translate them to action by setting measurable Targets within the application. They can then visualize and track progress to hit all their continuous improvement goals.
The industry’s first Targets feature lets you and each team member check in on your goal and how much progress you’ve made as a team.
No two engineering teams are alike.
Some teams are heads-down, trying to ship as many features as possible before a target date, while others are trying to buckle down and prepare for scale. All-remote engineering teams require more communication and check-ins than teams in open offices. Velocity 2.0 is the only engineering analytics platform flexible enough to accommodate any software engineering team.
While Velocity 2.0 works right out of the box, it’s fully configurable. Users have the power to turn on and off whatever reports they care about and set their own definitions of success. Teams can customize:
Velocity 2.0 allows you to gauge success by your own organization’s definition, not ours.
The new completely programmable Health Check report enables you to see at a glance how your team is doing this iteration compared to the previous three.
Velocity 2.0 is just the next step on our mission to create a more data-driven engineering world. With more visibility and the tools to take action, every software engineering team can boost their speed of innovation, which, in turn, allows us as an industry to overcome some of our biggest challenges, faster.
We’re incredibly grateful to our early users whose feedback was integral to the development of Velocity 2.0. Here’s what some of them had to say about it:
“Velocity’s Reports Builder has helped our team gain new insights into any potential bottlenecks in our sprints allowing us to definitively track our team progress, accelerate code reviews, pull data to match insights in retros and one-on-ones, and ultimately ship value to our customers significantly faster.” – Jacob Boudreau, CTO at Stord
“Thanks to Velocity, we’ve been able to actually get to a point where we’re doing atomic commits, everyone is committing more frequently, and reviews are easier.” – Chelsea Worrel, Software Engineer at Tangoe
“I’ve never seen anything quite like Velocity.” –Josh Castagno, Engineering Manager at Springbuk
Velocity 2.0 is most powerful when you can see it with your own data. Request a demo here.

“Effective engineering leadership is being outcome-oriented,” says Kickstarter’s Mark Wunsch.
When Mark first joined Kickstarter as VP of Engineering, one of his first decisions was to re-organize the engineering department, based on business-oriented objectives.
Each 3-7 person scrum team focused on a different “essence,” or a different customer-facing part of the product. This helped engineering understand their work from a business perspective. Mark tells us, “The objective is not to ask, ‘Is the code good?’ but to ask ‘Is Kickstarter good?’”
The engineers’ perspective, however, was difficult to communicate with leadership.
The engineers were constantly bogged down by problems that were invisible to non-technical colleagues, such as legacy code. Kickstarter has been around for about 10 years, so a big portion of the codebase was troublesome to work with. Mark told us, “To an engineer, it’s so obvious when a piece of code is brittle, but it’s really hard to advocate for putting engineering resources into solving technical debt.”
Mark decided to use metrics to further align engineering and leadership.
Every developer knows that legacy code undoubtedly slows down engineering. But taking weeks away from shipping new features compromises how much new value the company is delivering to customers.
Before making a case for refactoring to leadership, Mark decided to do a deep dive into where technical debt was slowing down the team. He used engineering analytics tool Velocity to learn how each engineering team was working and where they might be getting stuck.
Mark started by looking at his team’s weekly throughput, as measured by merged pull requests. Whenever the throughput dipped significantly below their average, he’d know to investigate further.
Seeing a low Pull Requests Merged count at the end of the week can be a red flag that a team is stuck.
Unlike subjective measures that are common on most engineering teams, like story points completed, Velocity metrics are represented by a concrete unit of work: the Pull Request. This enables Mark to objectively understand when a scrum team is really bogged down, compared to the last sprint or last month.
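The idea behind this check is simple enough to sketch. The following is an illustration only, not Velocity's implementation: assuming you have a list of PR merge dates (for example, pulled from your Git host's API), you can bucket them by week and flag weeks that fall well below the team's average.

```python
from collections import Counter
from datetime import date

def weekly_throughput(merge_dates):
    """Count merged PRs per ISO (year, week) bucket from a list of merge dates."""
    weeks = Counter(d.isocalendar()[:2] for d in merge_dates)
    return dict(sorted(weeks.items()))

def flag_dips(weekly_counts, threshold=0.6):
    """Flag weeks whose throughput falls well below the overall average.

    The 0.6 cutoff is an arbitrary illustrative choice; tune it per team.
    """
    counts = list(weekly_counts.values())
    average = sum(counts) / len(counts)
    return [week for week, n in weekly_counts.items() if n < threshold * average]

# Two normal weeks followed by a week with a single merge:
merges = [date(2024, 1, 1)] * 5 + [date(2024, 1, 8)] * 5 + [date(2024, 1, 15)]
dips = flag_dips(weekly_throughput(merges))  # the third ISO week is flagged
```

A dashboard like Velocity automates this bucketing and baselining, but the same signal is recoverable from raw merge timestamps.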
Once he spotted productivity anomalies, Mark would pull up a real-time report of his teams’ riskiest Pull Requests. Pull Requests that were open longer and had a lot of activity (comments and back-and-forths between author and reviewer) were at the top of the list.
An example of Velocity’s Work In Progress report, which shows the old and active PRs that may be holding up the team.
Because trickier parts of the application tend to require more substantial changes, pull requests that are most “active” often point Mark to the most troublesome areas of the codebase.
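Velocity's actual ranking is its own; as a rough sketch of the same idea, a risk score can combine how long a PR has been open with how much review activity it has attracted, then sort descending. The weights here are illustrative assumptions, not Velocity's.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    title: str
    age_days: float  # how long the PR has been open
    comments: int    # review comments and author/reviewer back-and-forths

def risk_score(pr, age_weight=1.0, activity_weight=0.5):
    """Heuristic: older, chattier PRs score higher (weights are arbitrary)."""
    return age_weight * pr.age_days + activity_weight * pr.comments

def riskiest(prs):
    """Return PRs sorted from most to least risky."""
    return sorted(prs, key=risk_score, reverse=True)

prs = [
    PullRequest("refactor-legacy-billing", age_days=10, comments=20),
    PullRequest("fix-typo", age_days=1, comments=2),
]
ranked = riskiest(prs)  # the long-open, high-activity PR ranks first
```

The point of such a score is triage, not judgment: the top of the list tells a manager where to start asking questions.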
After a few weeks of investigation, Mark was able to find concrete evidence for what his intuition was telling him. “The data showed that we were consistently slowed down because of legacy code,” said Mark.
During meetings with the executive team, Mark could now point to weeks with less output and prove that technical debt was impeding the engineering team from their primary business objective: continuously delivering new features.
To communicate how the team was doing, he’d present a Pull Request throughput chart with a trend line:
A Velocity report showing Pull Requests Merged/Day, over the last 3 months.
This helped leadership visualize Kickstarter’s growth in engineering efficiency, as well as opportunities for further improvement.
Mark also shared Cycle Time (i.e., how quickly code goes from a developer’s laptop to being merged into master.)
A Velocity report showing the fluctuation of Cycle Time over the last 3 months.
Cycle time was a great indicator of how much trouble it was to make a change to the codebase. High cycle time would often correspond to low output a day or two later, showing that some form of obstruction existed for a developer or team.
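Given the article's definition (time from a developer's first commit to the merge into master), cycle time is easy to compute per PR; the sketch below assumes you have those two timestamps for each PR and reports the median, which resists skew from a few long-lived branches.

```python
from datetime import datetime
from statistics import median

def cycle_time_hours(first_commit, merged):
    """Hours between the first commit on a branch and its merge."""
    return (merged - first_commit).total_seconds() / 3600

def median_cycle_time(prs):
    """prs: iterable of (first_commit_time, merged_time) datetime pairs."""
    return median(cycle_time_hours(c, m) for c, m in prs)

prs = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 9)),  # 24 hours
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 3, 9)),  # 48 hours
]
typical = median_cycle_time(prs)  # 36.0 hours
```

Tracked over time, a rising median here is the early-warning signal Mark describes: changes are getting harder to land, and a throughput dip often follows.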
These two charts, along with Mark’s summary of his more technical findings, aligned all of leadership around temporarily scaling back on new features and dedicating more time to refactoring.
After spending time investigating what legacy code was slowing down the team, Mark was able to take a strategic approach to how they tackled technical debt.
Rather than jump on the opportunity to fix anything that looked broken, he could have teams focus on the biggest productivity blockers first. Engineers were happy because they had the time to rework the legacy code that was constantly slowing them down. Leadership was happy when they could see long-term improvements in engineering speed. Six months after refactoring, Kickstarter saw a 17% increase in Pull Requests merged and a 63% decrease in Cycle Time. It was a win-win-win.
Mark tells us, “Being able to talk about technical debt in the same way we talk about business metrics is incredibly powerful.”
If you want to learn exactly how much technical debt is slowing down your own engineering team, talk to one of our Velocity product specialists.