
As part of this year’s Engineering Leadership Summit: Virtual Edition, we spoke to Edith Harbaugh, CEO and Co-Founder of LaunchDarkly. She discussed Progressive Delivery and the importance of Cycle Time. Below is an excerpt from the fireside chat portion of Edith’s session, edited for length and clarity.
Hillary Nussbaum, Content Marketing Manager, Code Climate: Welcome to the closing session of this year’s Engineering Leadership Summit, featuring a conversation with Edith Harbaugh, CEO and Co-Founder of LaunchDarkly. Edith and I will be talking about continuous delivery, specifically talking a little bit about what she wrote in the foreword of our upcoming book, The Engineering Leader’s Guide to Cycle Time. Let’s jump right in. Edith, do you mind introducing yourself? Tell us a little bit about what you do and how you got to where you are today.
Edith Harbaugh, CEO and Co-Founder of LaunchDarkly: Hey, I’m Edith Harbaugh, I’m CEO and Co-Founder of LaunchDarkly, a feature management platform. We have customers all over the world, like Atlassian and BMW, who rely on us to get the right features to the right customer at the right time. And we do this at massive scale — we serve trillions, trillions with a T, of features every day.
That’s great. Tell me a little bit about what software development and delivery were like when you first started, and how things got to where they are today, and then where LaunchDarkly fits into all that.
So I started back in the ’90s, and I always loved software because it was a way to bring stuff to life, it was a way to create something from scratch and get it out to the world. The way software was built back in the ’90s was very different than it is now, and that was because you had to physically ship software. So you would have a team of engineers at Microsoft, for example, who would work for three years, and they would release it to manufacturing and print all the bits on a bunch of disks, and then ship them out. And you would actually go to the store and you would buy a box of software, and then you would sit at your computer and you would feed all those disks into your computer, and that would create Microsoft Excel or Microsoft Word. This was pretty cool because then you could use Microsoft Excel, which I still love.
It also created a lot of habits around software — when it went to manufacturing, it had to be perfect. Somebody’s going to buy a 25-disk box of Microsoft Word and then install it on their computer, and it better be good because they’re not going to do that every day. They might not even update it every year. You might have service packs where somebody would go and get an update, but most people are fine if it works okay.
Microsoft had a very small bug in Excel where if you did some esoteric computation, they were off by some fraction, and that was actually a huge deal. So because software was so hard to get, there was a lot of pressure all through the release train to make it perfect. This has to be right, we only get one chance. We only get one chance to get this out to our customers and this has to be absolutely right, we don’t get a second chance.
How exactly do you feel that impacted things moving forward? Even as people started to change their delivery model, do you still see holdovers from that today?

The massive sweeping change that happened — and it’s still happening, to be honest — is from packaged software to software as a service in the cloud. I’m looking at my computer right now, it doesn’t even have a slot for disks, it just doesn’t work that way anymore. You get your software from the cloud, virtually. And what this has really changed is that it’s okay to move a little bit faster, because you can change it. The whole idea that we’re going to have Windows 3.7 Service Pack 6 is just gone now, now you can get a release whenever you want.
And how that’s changed software is it’s really freeing and liberating in a weird sort of way. Think of the poor engineers of the ’90s, when you only had a three-year window to work and then ship. It got very political, it got very hard, because you’re like, “I have to make a guess about what people will want three years from now. If I’m wrong, I won’t have a chance to fix it.” So there’s just a lot of polishing and then an inability to adjust on the fly. If you were a year and a half into your release cycle and you realized you were completely wrong, it was just really hard to scrap that. So the biggest change has been with Agile and software in the cloud, that you have a lot more freedom. You could do a week-long sprint and say, “Hey, let’s get this out to market. Let’s test this, let’s do this.”
And where do Continuous Delivery and Progressive Delivery fit into all that? Agile is a step past waterfall, but I would say that Continuous Delivery takes that even a step further. How do you see that all fitting together?
So Continuous Delivery was a step past Agile in that it was this idea that instead of doing a release every year or every six months, that you could just release whenever you wanted. Jez Humble and David Farley really popularized this theory with a book called Continuous Delivery, where it’s like, okay, let’s make it so you can release any time you want. This was extremely popular and extremely scary at the same time. With big sites like Facebook, it just gave them this extreme competitive advantage. If you can ship 10 times an hour, or 50 times an hour, or 100 times an hour, and your competitors are shipping once a quarter, of course you’re going to have better software. You can fine-tune things, you can dial them in, you can get them right. The downside to Continuous Delivery, and why we came up with Progressive Delivery, is that it’s terrifying. What I just said about how you can ship 10 times, 50 times an hour — if you’re a big bank or an insurance company, that’s absolutely horrifying. I’ve got customers who are depending on me, and I don’t want that. Or if you’re an airline, for example, you have regulations, you’ve got people in the field, you cannot have people just check in 50 times an hour and cross your fingers and hope that everything works out.
Even Facebook has moved away. Their mantra used to be, “move fast and break things,” now it’s “move fast with stable infrastructure.” How they got there was Progressive Delivery, which is basically having a framework which decouples the deploy, which is pushing out code, from the release, which is who actually gets it. And with Progressive Delivery, what you have is you have the ability to say, “Okay, I have this new amazing feature for looking at your bank balance, which makes it much clearer where the money is going.” And instead of pushing it out to everybody and crossing your fingers and being like, “Please let people not have negative balances,” let me deploy this code safely, and then turn it on, maybe, for my own internal QA team, my own internal performance testing team, and make sure I’m not overloading my systems, and let me roll this out maybe just for people in Michigan, because they might also have Canadian currency, because they’re so close to Canada. Let me just do some very methodical and thoughtful releases. So you get all the benefits of Continuous Delivery, but you have this progression of features.
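The deploy/release decoupling Edith describes is usually implemented with feature flags plus targeting rules. Here is a minimal illustrative sketch of that pattern — the user fields, group names, and functions are hypothetical, not LaunchDarkly’s actual API:

```python
# Minimal feature-flag evaluation sketch: the code is deployed everywhere,
# but who actually *sees* the feature is controlled by targeting rules.
# All names below are illustrative, not a real LaunchDarkly client.
from dataclasses import dataclass

@dataclass
class User:
    key: str
    group: str = "external"   # e.g. "qa", "perf-test", "external"
    region: str = ""          # e.g. a US state code

def new_balance_view_enabled(user: User, rollout_regions=("MI",)) -> bool:
    """Release progression: internal teams first, then selected regions."""
    if user.group in ("qa", "perf-test"):   # internal testers are always on
        return True
    if user.region in rollout_regions:      # regional rollout, e.g. Michigan
        return True
    return False                            # everyone else keeps the old view

def render_balance(user: User) -> str:
    # The new code path ships dark; the flag decides which path runs.
    if new_balance_view_enabled(user):
        return "new-balance-view"
    return "old-balance-view"
```

Rolling the feature back is then just flipping the flag off — no redeploy required, which is the point Edith makes next about moving quickly in both directions.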
What are the benefits to those more frequent releases? Obviously, as you’ve said, you can move more quickly, you don’t have a three year lag on what people want, but what are the other advantages?
You can move more quickly forward, and you can also move more quickly back. Because if, from the very beginning, you say, “I am going to think carefully about how to segment this feature so that I can release it discretely to some people,” that also means that you can roll it back very easily. The way we used to build code, you had this massive, huge, monolithic codebase and you pushed it out, and then if something broke, you couldn’t fix it. You just had to take the entire release back, which was very, very painful. But if you can say, “Okay, I have this new feature for bank balances, I’m just going to push this out alone, and if it’s not working and buggy, I can just turn it off,” that’s really awesome. That gives you, as a developer, a lot of stress relief.
Yeah. And to look at it again from the developer perspective, how can you empower developers to move quickly and to implement Continuous Delivery practices so it’s not just top down, someone telling everyone to move faster, but developers actually being in control and having the ability to move quickly?
I think it’s a restructuring of breaking down units of work into smaller and smaller and smaller units. If I want to move quickly, I need to break down this release that used to take six months into the smallest unit that I can do in a week. And then here’s a bitter secret I learned the hard way: some features don’t need to get built. Some features, if you get two weeks into it and you put it out to the world, and the world says, “You know, we don’t really care about that at all,” it’s much easier just to say, “Hey, we had these bright hopes for this feature, but we don’t need to release it. And if this was a six-month project, we can save five and a half months of work.”
Making a small increment of progress and figuring out, “Hey, you know what? We really thought everybody wanted this, but they don’t.” Or, “Hey, we have this six-month plan, and once we started going this way, we figured out that we really need to go a little more that way.” And if you are doing a whole six-month release, you’re just basically drawing a line. If you’re doing it in two-week sprints, you could be like, “Hmm,” and end up where you really want to be. Or even just say, “Hey, you know what? We don’t need to do this. This was a bad idea.” And it’s a lot easier, politically, to realize that two weeks in, than six months in.
Absolutely. It seems like you’d give the team a little bit more control over direction as well. And so given that a lot of teams have gone remote now, either permanently or temporarily — does the importance of Continuous Delivery change at all on a fully remote team? And then, are there particular considerations, is it harder, easier, how does it look?

Continuous and Progressive Delivery are even more important than ever. A lot of the things that you could do when you’re all in the same office, you just can’t do anymore. You can’t just yell out, “Hey, is everybody ready to ship right now?” You could try to do that on a Slack, but people might be getting coffee or taking care of a kid. So you have to have even more controls about what is happening when, what is being shipped out, and who has rights to see that. And you also may need to be able to react very quickly. I keep coming back to financial customers, because in the last six months, they have had, I think, 30 years of progress crushed into six months, in terms of people not being able to go into banks anymore. Suddenly everyone wants to check their balance at home, wire money at home, or transfer money. Before the answer was kind of, “Just go to your branch.” You don’t have that anymore, you have to be online. It’s not a choice, it’s not an either-or anymore. You absolutely have to have online support for a lot of functions.
The same thing is happening, sadly, to a lot of functions all over the world, where it used to be like, “Just get to your office fax machine.” You have to have a truly digital function for everything right now.
Yeah, it’s interesting. So it’s a change, not just for the developers, but a change for the end user, and it’s even more important that this innovation is happening, and it’s happening quickly.
Yeah. It’s happening extremely quickly. A lot of processes used to be half digital, half paper and now, you have to be fully digital.
Do you have any tips for teams that are working to coordinate while they’re distributed and working to implement Progressive Delivery?
Take it in small steps. I talked in the beginning about Microsoft transforming from a three-year release to releasing every week or so, and I also said you need to do this very quickly, but you also need to do it right. So get comfortable with some low-risk features first. Don’t say, “Hey, we’re going to move from a three-month cycle to a daily cycle,” that’s terrifying. But if you’re at a three-month cycle, look at, “Hey, are there some low-risk features that we could try to do in a week-long sprint? Is there something that we can carefully detach and figure out a basic rhythm before we get more into this? Let’s figure out the issues that we have, whether they be communication, tools, or process, on something where if it fails, of course failure’s never awesome, but it’s not hugely not awesome.”
I mentioned that you wrote the foreword to the Engineering Leader’s Guide to Cycle Time. Where does Cycle Time fit into all this? Why is it important when you’re trying to move to Continuous or Progressive Delivery?
Cycle Time is so important. The smaller the batches of work you do, the more impact each batch can have. If everything takes six months, you just don’t have many opportunities, you just don’t have many at-bats. If you break it down so that you can say, “Okay, let’s try something new every month,” that alone is a 6X improvement. If you break it down further, you just get more and more chances.

And the really seductive part of Continuous Delivery for a developer is that it’s really fun to see your stuff out in the world. If you’re an engineer, like I was in the ’90s, and you’re grinding away for months on a project, you don’t really get much of a feedback loop. You get your manager saying, “Hey, keep working,” but you don’t get any real validation from the customer. So if you can break down your Cycle Time so that you can say, “Every week, this is out in the hands of our customer, and they tweeted that they loved it, or they put in a support ticket that they had some issues but they liked it and wanted to work with us,” that’s really, really fun for a developer.
The worst thing for a developer is, number one, the worst thing is you ship something and it just totally breaks. The second worst is you work really hard on something and then nobody cares, and you’re like, “Well, I gave up seeing my kid’s practice, or I gave up this happy hour with a friend because I was trying to get this done, and nobody cares.” Which is just hard. So if you’re in this shorter Cycle Time as a developer, where you’re like, “Okay, we worked hard this week and we got it into the hands of the customer, and here’s the five tickets they already opened about how to improve it,” that actually feels really good. It feels like I am helping somebody.


Navigating the world of software engineering or developer productivity insights can feel like trying to solve a complex puzzle, especially for large-scale organizations. It's one of those areas where having a cohesive strategy can make all the difference between success and frustration. Over the years, as I’ve worked with enterprise-level organizations, I’ve seen countless instances where a lack of strategy caused initiatives to fail or fizzle out.
In my latest webinar, I break down the key components engineering leaders need to consider when building an insights strategy.
At the heart of every successful software engineering team is a drive for three things:
These goals sound simple enough, but in reality, achieving them requires more than just wishing for better performance. It takes data, action, and, most importantly, a cultural shift. And here's the catch: those three things don't come together by accident.
In my experience, whenever a large-scale change fails, there's one common denominator: a lack of a cohesive strategy. Every time I’ve witnessed a failed attempt at implementing new technology or making a big shift, the missing piece was always that strategic foundation. Without a clear, aligned strategy, you're not just wasting resources—you’re creating frustration across the entire organization.

Sign up for a free, expert-led insights strategy workshop for your enterprise org.
The first step in any successful engineering insights strategy is defining why you're doing this in the first place. If you're rolling out developer productivity metrics or an insights platform, you need to make sure there’s alignment on the purpose across the board.
Too often, organizations dive into this journey without answering the crucial question: Why do we need this data? If you ask five different leaders in your organization, are you going to get five answers, or will they all point to the same objective? If you can’t answer this clearly, you risk chasing a vague, unhelpful path.
One way I recommend approaching this is through the "Five Whys" technique. Ask why you're doing this, and then keep asking "why" until you get to the core of the problem. For example, if your initial answer is, “We need engineering metrics,” ask why. The next answer might be, “Because we're missing deliverables.” Keep going until you identify the true purpose behind the initiative. Understanding that purpose helps avoid unnecessary distractions and lets you focus on solving the real issue.
Once the purpose is clear, the next step is to think about who will be involved in this journey. You have to consider the following:
It’s also crucial to account for organizational changes. Reorgs are common in the enterprise world, and as your organization evolves, so too must your insights platform. If the people responsible for the platform’s maintenance change, who will ensure the data remains relevant to the new structure? Too often, teams stop using insights platforms because the data no longer reflects the current state of the organization. You need to have the right people in place to ensure continuous alignment and relevance.
The next key component is process—a step that many organizations overlook. It's easy to say, "We have the data now," but then what happens? What do you expect people to do with the data once it’s available? And how do you track if those actions are leading to improvement?
A common mistake I see is organizations focusing on metrics without a clear action plan. Instead of just looking at a metric like PR cycle times, the goal should be to first identify the problem you're trying to solve. If the problem is poor code quality, then improving the review cycle times might help, but only because it’s part of a larger process of improving quality, not just for the sake of improving the metric.
It’s also essential to approach this with an experimentation mindset. For example, start by identifying an area for improvement, make a hypothesis about how to improve it, then test it and use engineering insights data to see if your hypothesis is correct. Starting with a metric and trying to manipulate it is a quick way to lose sight of your larger purpose.
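The hypothesis-test loop described above can be sketched as a simple before/after comparison. The function names, metric, and thresholds here are illustrative assumptions, not part of any particular platform:

```python
# Sketch of the experimentation mindset: state a hypothesis, run the change,
# then compare the metric before and after. All names and numbers below are
# hypothetical examples.

def percent_change(baseline: float, after: float) -> float:
    """Relative change of a metric; negative means the metric went down."""
    return (after - baseline) / baseline * 100

def evaluate(hypothesis: str, baseline: float, after: float,
             min_improvement_pct: float = 10.0) -> str:
    # For a "lower is better" metric like PR review time (in hours),
    # an improvement is a sufficiently large *decrease*.
    change = percent_change(baseline, after)
    if -change >= min_improvement_pct:
        return f"supported: {hypothesis} ({change:+.1f}% change)"
    return f"not supported: {hypothesis} ({change:+.1f}% change)"

# e.g. hypothesis: "PR description guidelines reduce review time"
result = evaluate("guidelines reduce review time", baseline=20.0, after=14.0)
```

Note the direction of reasoning: the hypothesis and the problem come first, and the metric is only the evidence — starting from the metric and working backward is exactly the trap the paragraph above warns against.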
The next piece of the puzzle is your program and rollout strategy. It’s easy to roll out an engineering insights platform and expect people to just log in and start using it, but that’s not enough. You need to think about how you'll introduce this new tool to the various stakeholders across different teams and business units.
The key here is to design a value loop within a smaller team or department first. Get a team to go through the full cycle of seeing the insights, taking action, and then quantifying the impact of that action. Once you've done this on a smaller scale, you can share success stories and roll it out more broadly across the organization. It’s not about whether people are logging into the platform—it’s about whether they’re driving meaningful change based on the insights.
And finally, we come to the platform itself. It’s the shiny object that many organizations focus on first, but as I’ve said before, it’s the last piece of the puzzle, not the first. Engineering insights platforms like Code Climate are powerful tools, but they can’t solve the problem of a poorly defined strategy.
I’ve seen organizations spend months evaluating these platforms, only to realize they didn't even know what they needed. One company in the telecom industry realized that no available platform suited their needs, so they chose to build their own. The key takeaway here is that your platform should align with your strategy—not the other way around. You should understand your purpose, people, and process before you even begin evaluating platforms.
To build a successful engineering insights strategy, you need to go beyond just installing a tool. An insights platform can only work if it’s supported by a clear purpose, the right people, a well-defined process, and a program that rolls it out effectively. The combination of these elements will ensure that your insights platform isn’t just a dashboard—it becomes a powerful driver of change and improvement in your organization.
Remember, a successful software engineering insights strategy isn’t just about the tool. It’s about building a culture of data-driven decision-making, fostering continuous improvement, and aligning all your teams toward achieving business outcomes. When you get that right, the value of engineering insights becomes clear.
Want to build a tailored engineering insights strategy for your enterprise organization? Get expert recommendations at our free insights strategy workshop. Register here.
Andrew Gassen has guided Fortune 500 companies and large government agencies through complex digital transformations. He specializes in embedding data-driven, experiment-led approaches within enterprise environments, helping organizations build a culture of continuous improvement and thrive in a rapidly evolving world.

Most organizations are great at communicating product releases—but rarely do the same for process improvements that enable those releases. This is a missed opportunity for any leader wanting to expand a “growth mindset,” as curiosity and innovation are as critical for process improvement as they are for product development.
Curiosity and innovation aren’t limited to product development. They’re just as essential in how your teams deliver that product. When engineering and delivery leaders share what they’re doing to find efficiencies and unclog bottlenecks, they not only improve Time to Value — they help their peers level up too.
Below is a template leaders can use via email or a communication app (Slack, Microsoft Teams) to share process changes with their team. I’ve personally seen updates like this generate the same level of energy as product announcements—complete with clap emojis 👏 and follow-up pings like “Tell me more!” Even better, they’re useful for performance reviews and make great resume material for the leads who author them (excluding any sensitive or proprietary content, of course).
Subject: [Experiment update]
[Date]
Experiment Lead: [Name]
Goal: [Enter the longer term goal your experiment was in service of]
Opportunity: [Describe a bottleneck or opportunity you identified for some focused improvement]
Problem: [Describe the specific problem you aimed to solve]
Solution: [Describe the very specific solution you tested]
Metric(s): [What was the one metric you determined would help you know if your solution solved the problem? Were there any additional metrics you kept track of, to understand how they changed as well?]
Action: [Describe, in brief, what you did to get the result]
Result: [What was the result of the experiment, in terms of the above metrics?]
Next Step: [What will you do now? Will you run another experiment like this, design a new one, or will you rollout the solution more broadly?]
Key Learnings: [What did you learn during this experiment that is going to make your next action stronger?]
Please reach out to [experiment lead’s name] for more detail.
Subject: PR Descriptions Boost Review Speed by 30%
March 31, 2025
Experiment Lead: Mary O’Clary
Goal: We must pull a major capability from Q4 2025 into Q2 2025 to increase our revenue. We believe we can do this by improving productivity by 30%.
Opportunity: We found that a lack of clear descriptions was a primary cause of churn and delay during the review cycle. How might we improve PR descriptions with the information reviewers need?
Problem: PR reviewers can’t reliably understand the scope of PRs without asking developers a bunch of follow-up questions.
Solution: Issue simple guidelines for what we are looking for in PR descriptions.
Metric(s): PR Review Speed. We also monitored overall PR Cycle Time, assuming it would also improve for PRs closed within our experiment timeframe.
Action: We ran this experiment over one two-week sprint, with no substantial changes in the complexity of work or the composition of the team. We kept the timeframe tight to help eliminate additional variables.
Result: We saw PR Review Speed increase by 30%.
Next Step: Because of such a great result and low perceived risk, we will roll this out across Engineering and continue to monitor both PR Review Speed & PR Cycle Time.
Key Learnings: Clear, consistent PR descriptions reduce reviewer friction without adding developer overhead, giving us confidence to expand this practice org-wide to help accelerate key Q2 2025 delivery.
Please reach out to Mary for more detail.
My recommendation is to appoint one “editor in chief” to issue these updates each week. They should CC the experiment lead on the communication to provide visibility. In the first 4-6 weeks, this editor may need to actively solicit reports and coach people on what to share. This is normal—you’re building a new behavior. During that time, it's critical that managers respond to these updates with kudos and support, and they may need to be prompted to do so in the first couple of weeks.
If these updates become a regular ritual, within ~3 months, you’ll likely have more contributions than you can keep up with. That’s when the real cultural shift happens: people start sharing without prompting, and process improvement becomes part of how your org operates.
I’ve seen this work in large-scale organizations, from manufacturing to healthcare. Whether your continuous improvement culture is just getting started or already mature, this small practice can help you sustain momentum and deepen your culture of learning.
Give it a shot, and don’t forget to celebrate the wins along the way.
Jen Handler is the Head of Professional Services at Code Climate. She’s an experienced technology leader with 20 years of building teams that deliver outcome-driven products for Fortune 50 companies across industries including healthcare, hospitality, retail, and finance. Her specialties include goal development, lean experimentation, and behavior change.

Output is not the same as impact. Flow is not the same as effectiveness. Most of us would agree with these statements—so why does the software industry default to output and flow metrics when measuring success? It’s a complex issue with multiple factors, but the elephant in the room is this: mapping engineering insights to meaningful business impact is far more challenging than measuring developer output or workflow efficiency.
Ideally, data should inform decisions. The problem arises when the wrong data is used to diagnose a problem that isn’t the real issue. Using misaligned metrics leads to misguided decisions, and unfortunately, we see this happen across engineering organizations of all sizes. While many companies have adopted Software Engineering Intelligence (SEI) platforms—whether through homegrown solutions or by partnering with a company that specializes in SEI, like Code Climate—a clear divide has emerged. Successful and mature organizations leverage engineering insights to drive real improvements, while others collect data without extracting real value—or worse, make decisions aimed solely at improving a metric rather than solving a real business challenge.
From our experience partnering with large enterprises with complex structures and over 1,000 engineers, we’ve identified three key factors that set high-performing engineering organizations apart.
When platform engineering first emerged, early innovators adopted the mantra of “platform as a product” to emphasize the key principles that drive successful platform teams. The same mindset applies to Software Engineering Intelligence (SEI). Enterprise organizations succeed when they treat engineering insights as a product rather than just a reporting tool.
Data shouldn’t be collected for the sake of having it—it should serve a clear purpose: helping specific users achieve specific outcomes. Whether for engineering leadership, product teams, or executive stakeholders, high-performing organizations ensure that engineering insights are:
Rather than relying on pre-built dashboards with generic engineering metrics, mature organizations customize reporting to align with team priorities and business objectives.
For example, one of our healthcare customers is evaluating how AI coding tools like GitHub Copilot and Cursor might impact their hiring plans for the year. They have specific questions to answer and are running highly tailored experiments, making a custom dashboard essential for generating meaningful, relevant insights. With many SEI solutions, they would have to externalize data into another system or piece together information from multiple pages, increasing overhead and slowing down decision-making.
High-performing enterprise organizations don’t treat their SEI solution as static. Team structures evolve, business priorities shift, and engineering workflows change. Instead of relying on one-size-fits-all reporting, they continuously refine their insights to keep them aligned with business and engineering goals. Frequent iteration isn’t a flaw—it’s a necessary feature, and the best organizations design their SEI operations with this in mind.
Many software engineering organizations focus primarily on code-related metrics, but writing code is just one small piece of the larger business value stream—and rarely the area with the greatest opportunities for improvement. Optimizing code creation can create a false sense of progress at best and, at worst, introduce unintended bottlenecks that negatively impact the broader system.
High-performing engineering organizations recognize this risk and instead measure the effectiveness of the entire system when evaluating the impact of changes and decisions. Instead of focusing solely on PR cycle time or commit activity, top-performing teams assess the entire journey, from idea through development and review to deployment and realized customer value.
For example, reducing code review time by a few hours may seem like an efficiency win, but if completed code sits for six weeks before deployment, that improvement has little real impact. While this may sound intuitive, in practice, it’s far more complicated—especially in matrixed or hierarchical organizations, where different teams own different parts of the system. In these environments, it’s often difficult, though not impossible, for one group to influence or improve a process owned by another.
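The arithmetic behind this example is worth making concrete: total lead time is the sum of all stage times, so a saving in one fast stage barely moves the whole. The stage names and durations below are hypothetical:

```python
# Why stage-level wins can be negligible: total lead time is the sum of all
# stages, so shaving hours off review barely moves a six-week deploy queue.
# Stage names and durations are hypothetical.
HOURS_PER_WEEK = 7 * 24

stages_hours = {
    "coding": 40,
    "code review": 24,
    "waiting for deployment": 6 * HOURS_PER_WEEK,  # six weeks sitting idle
}

before = sum(stages_hours.values())

stages_hours["code review"] -= 4   # the "efficiency win": 4 hours saved
after = sum(stages_hours.values())

# End-to-end improvement from the review-stage win, as a percentage.
improvement_pct = (before - after) / before * 100
```

With these numbers, a 4-hour review improvement moves end-to-end lead time by well under half a percent — the deployment queue dominates, which is why the whole system, not a single stage, is the right unit of measurement.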
One of our customers, a major media brand, had excellent coding metrics yet still struggled to meet sprint goals. While they were delivering work at the expected rate and prioritizing the right items, the perception of “failed sprints” persisted, creating tension for engineering leadership. After further analysis, we uncovered a critical misalignment: work was being added to team backlogs after sprints had already started, without removing any of the previously committed tasks. This shift in scope wasn’t due to engineering inefficiency—it stemmed from the business analysts’ prioritization sessions occurring after sprint commitments were made. The fix was a simple rescheduling of prioritization ceremonies, ensuring that business decisions were finalized before engineering teams committed to sprint goals. This small yet system-wide adjustment significantly improved delivery consistency and alignment—something that wouldn’t have been possible without examining the entire end-to-end process.
There are many frameworks, methodologies, and metrics often cited as critical to the engineering insights conversation. While these can be useful, they are not inherently valuable on their own. Why? Because it all comes down to strategy. Focusing on managing a specific engineering metric or framework (e.g., DORA or SPACE) is missing the forest for the trees. Our most successful customers have a clear, defined, and well-communicated strategy for their software engineering insights program—one that doesn’t focus on metrics by name. Why? Because unless a metric is mapped to something meaningful to the business, it lacks the context to be impactful.
Strategic engineering leaders at large organizations focus on business-driven questions, such as:
Tracking software engineering metrics like cycle time, PR size, or deployment frequency can be useful indicators, but they are output metrics—not impact metrics. Mature organizations go beyond reporting engineering speed and instead ask: "Did this speed up product releases in a way that drove revenue?"
While challenging to measure, this is where true business value lies. A 10% improvement in cycle time may indicate progress, but if sales remain flat, did it actually move the needle? Instead of optimizing isolated metrics, engineering leaders should align their focus with overarching business strategy. If an engineering metric doesn’t directly map to a key strategic imperative, it’s worth reevaluating whether it’s the right thing to measure.
One of our retail customers accelerated the release of a new digital capability, allowing them to capture additional revenue a full quarter earlier than anticipated. Not only did this directly increase revenue, but the extended timeline of revenue generation created a long-term financial impact—a result that finance teams, investors, and the board highly valued. The team was able to trace their decisions back to insights derived from their engineering data, proving the direct connection between software delivery and business success.
Understanding the broader business strategy isn’t optional for high-performing engineering organizations—it’s a fundamental requirement. Through our developer experience surveys, we’ve observed a significant difference between the highest-performing organizations and the rest in how well developers understand the business impact they are responsible for delivering. Organizations that treat engineers as task-takers, isolated from business impact, consistently underperform—even if their coding efficiency is exceptional. The engineering leaders at top-performing organizations prioritize alignment with strategy and avoid the distraction of tactical metrics that fail to connect to meaningful business outcomes.
Learn how to shift from micro engineering adjustments to strategic business impact. Request a Code Climate Diagnostic.

Code Climate has supported thousands of engineering teams of all sizes over the past decade, enhancing team health, advancing DevOps practices, and providing visibility into engineering processes. According to Gartner®, the Software Engineering Intelligence (SEI) platform market is expanding as engineering leaders increasingly leverage these platforms to enhance productivity and drive business value. As pioneers in the SEI space, the Code Climate team has identified three key takeaways from partnerships with our Fortune 100 customers:
The above takeaways have prompted a strategic shift in Code Climate’s roadmap, now centered on enterprise organizations with complex engineering team structures and workflows. As part of this transition, our flagship Software Engineering Intelligence (SEI) platform, Velocity, has been replaced by an enhanced SEI platform, custom-designed for each leader and their organization. With enterprise-level scalability, Code Climate provides senior engineering leaders complete autonomy over their SEI platform, seamlessly integrating into their workflows while delivering the customization, flexibility, and reliability needed to tackle business challenges.
Moreover, we understand that quantitative metrics from a data platform alone cannot transform an organization, which is why Code Climate is now a Software Engineering Intelligence Solutions Partner, offering five key characteristics that define our approach.
"During my time at Pivotal Software, Inc., I met with hundreds of engineering executives who consistently asked, “How do I improve my software engineering organization?” These conversations revealed a universal challenge: aligning engineering efforts with business goals. I joined Code Climate because I'm passionate about helping enterprise organizations address these critical questions with actionable insights and data-driven strategies that empower engineering executives to drive meaningful change." - Josh Knowles, CEO of Code Climate
Ready to make data-driven engineering decisions to maximize business impact? Request a consultation.

Today, we’re excited to share that Code Climate Quality has been spun out into a new company: Qlty Software. Code Climate is now focused entirely on the next phase of Velocity, our Software Engineering Intelligence (SEI) solution for enterprise organizations.

I founded Code Climate in 2011 to help engineering teams level up with data. Our initial Quality product was a pioneer for automated code review, helping developers merge with confidence by bringing maintainability and code coverage metrics into the developer workflow.
Our second product, Velocity, was launched in 2018 as the first Software Engineering Intelligence (SEI) platform to deliver insights about the people and processes in the end-to-end software development lifecycle.
All the while, we’ve been changing the way modern software gets built. Quality is reviewing code written by tens of thousands of engineers, and Velocity is helping Fortune 500 companies drive engineering transformation as they adopt AI-enabled workflows.
Today, Quality and Velocity serve different types of software engineering organizations, and we are investing heavily in each product for their respective customers.
To serve both groups better, we’re branching out into two companies. We’re thrilled to introduce Qlty Software, and to focus Code Climate on software engineering intelligence.
Over the past year, we’ve made more significant upgrades to Quality and our SEI platform, Velocity, than ever before. Much of that work is in limited early access, and we’ll have a lot to share publicly soon. As separate companies, each can double down on its product.
Qlty Software is dedicated to taking the toil out of code maintenance. The new company name represents our commitment to code quality. We’ve launched a new domain, with a brand new, enhanced edition of the Quality product.
I’m excited to be personally moving into the CEO role of Qlty Software to lead this effort. Josh Knowles, Code Climate’s General Manager, will take on the role of CEO of Code Climate, guiding the next chapter as an SEI solutions partner for technology leaders at large, complex organizations.
We believe the future of developer tools to review and improve code automatically is brighter than ever – from command line tools accelerating feedback loops to new, AI-powered workflows – and we’re excited to be on that journey with you.
-Bryan
CEO, Qlty Software

Technology is evolving very quickly, but I don't believe it's evolving as quickly as the expectations for it. This has become increasingly apparent to me in conversations with Code Climate's customers, senior software engineering leaders across a range of organizations: the technology itself is advancing rapidly, but the expectations placed on it are growing even faster, perhaps twice as fast.
There are Generative AI tools such as Copilot, the no-code/low-code space, and the concept of Software Engineering Intelligence (SEI) platforms, a term coined by Gartner®. The promises associated with these tools seem straightforward:
However, the reality isn’t as straightforward as the messaging may seem:
When I joined Code Climate a year ago, one recurring question from our customers was, "We see our data, but what's the actionable next step?" While the potential of these technologies is compelling, it's critical to address and understand their practical implications. Often, business or non-technical stakeholders embrace the promises while engineering leaders, responsible for implementation, grapple with the complex realities.
Software engineering leaders now face increased pressure to achieve more with fewer resources, often measured by metrics that oversimplify their complex responsibilities. It's no secret that widespread layoffs have affected the technology industry in recent years. Despite this, the scope of engineering leaders' responsibilities, and the outcomes the business expects from them, haven't diminished. In fact, with the adoption of new technologies, those expectations have only increased.
Viewing software development solely in terms of the number of features produced overlooks critical aspects such as technical debt or the routine maintenance necessary to keep operations running smoothly. Adding to that, engineering leaders are increasingly pressured to solve non-engineering challenges within their domains. This disconnect between technical solutions and non-technical issues highlights a fundamental gap that can't be bridged by engineering alone—it requires buy-in and understanding from all stakeholders involved.
This tension isn't new, but the promises of the new technologies mentioned above have pushed it front-and-center. Those promises raise expectations among business leaders, which trickle down to the engineering leaders expected to navigate them, and from there to the teams doing the work. Recently, I had a conversation with a Code Climate customer undergoing a significant adoption of GitHub Copilot, a powerful tool. This particular leader's finance team told her, "We bought this new tool six months ago and you don't seem to be operating any better. What's going on?" This scenario reflects the challenges many large engineering organizations face.
Here's how Code Climate is helping software engineering leaders take actionable steps to address challenges with new technology:
In addition, we partner with our enterprise customers to experiment and assess the impact of new technologies. For instance, let's use the following experiment template to justify the adoption of Copilot:
We believe offering Copilot to _______ for [duration] will provide sufficient insights to inform our purchasing decision for a broader, organization-wide rollout.
We will know what our decision is if we see ______ increase/decrease.
Let’s fill in the blanks:
We believe offering Copilot to one portfolio of 5 teams for one quarter will provide sufficient insights to inform our purchasing decision for a broader, organization-wide rollout.
We will know what our decision is if we see:
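To make the template concrete, a decision rule of this shape can be expressed as a short script. This is only an illustrative sketch: the metric (median cycle time per team), the sample values, and the 10% improvement threshold are all hypothetical assumptions, not Code Climate's actual evaluation criteria.

```python
# Hypothetical decision rule for a tool pilot: compare each pilot team's
# median cycle time (in hours) before and after the pilot period.
# The 10% threshold and the numbers below are illustrative assumptions.

def pilot_decision(before, after, min_improvement=0.10):
    """Return ('adopt', change) if average cycle-time reduction across
    the pilot teams meets the threshold, else ('hold', change)."""
    changes = [(b - a) / b for b, a in zip(before, after)]
    avg_change = sum(changes) / len(changes)
    verdict = "adopt" if avg_change >= min_improvement else "hold"
    return verdict, avg_change

# Five pilot teams' median cycle times (hours), before vs. after the quarter:
before = [48.0, 52.0, 40.0, 61.0, 45.0]
after = [40.0, 47.0, 36.0, 50.0, 43.0]
decision, change = pilot_decision(before, after)
print(decision, round(change, 3))
```

In practice the blank in the template would be filled with whichever metric the organization has mapped to a business outcome, and the threshold agreed on with stakeholders before the pilot starts, so the decision cannot be argued after the fact.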
Andrew Gassen leads Code Climate's enterprise customer organization, partnering with engineering leaders for organization-wide diagnostics to identify critical focus areas and provide customized solutions. Request a consultation to learn more.