

Successful software projects are always changing. Every new requirement comes with the responsibility to determine exactly how the new or changed behaviors will be codified into the system, often in the form of objects.
For the longest time, when I had to change a behavior in a codebase I followed these rough steps:
Simple, right? Eventually I realized that this simple workflow leads to messy code.
The temptation when changing an existing system is to implement the desired behavior within the structure of the current abstractions. Repeat this without adjustment, and you’ll quickly end up contorting existing concepts or working around legacy behaviors. Conditionals pile up, and shotgun surgery becomes standard operating procedure.
One day, I had an epiphany. When making a change, rather than surveying the current landscape and asking “How can I make this work like I need?”, take a step back, look at each relevant abstraction and ask, “How should this work?”.
The names of the modules, classes and methods convey meaning. When you change the behavior within them in isolation, the cohesion between those names and the implementation beneath them may begin to fray.
If you are continually ensuring that your classes work exactly as their names imply, you’ll often find that the change in behavior you seek is better represented by adjusting the landscape of types in your system. You may end up introducing a collaborator, or you might simply need to tweak the name of a class to align with its new behavior.
This type of conscientiousness is difficult to apply rigorously, but like any habit it can be built up over time. Your reward will be a codebase maintainable years into the future.

One struggle for software development teams is determining when it is appropriate to refactor. It is quite a quandary.
Refactor too early and you could over-abstract your design and slow down your team. YAGNI! You’ll also make sub-optimal design decisions because of the limited information you have.
On the other hand, if you wait too long to refactor, you can end up with a big ball of mud. The refactoring may have grown to be a Herculean effort, and all the while your team has been suffering from decreased productivity as they tiptoe around challenging code.
So what’s a pragmatic programmer to do? Let’s take a look at a concrete set of guidelines that can help answer this question. Generally, take the time to refactor now if any of the following are true:
Small refactorings are a low-cost investment that always pays dividends. Take advantage of that every time.
“If I pass on doing the refactoring now, how long would it take to do later?”
If it would take less than a day to perform later, there is less urgency to do it now. It means that if a change needs to be made later, you can be confident you won’t be stuck in the weeds for days on end to whip the code into a workable state in order to implement the feature or bug fix.
Conversely, if passing on the refactoring creates a risk of digging technical debt that would take more than a day to resolve, it should probably be dealt with immediately. If you wait, that day could become two, or three or four days. The longer the time a refactoring takes, the less likely it is to ever be performed.
So it’s important to limit the technical debt you carry to issues that can be resolved in short order if they need to be. Violate this guideline, and you increase the risk of having developers feel the need to spend days cleaning things up, a practice that is sure to (rightly) make your whole organization uneasy.

It’s a new year and that makes it a great time to tackle some high-value development chores that get your team started on the right foot. Each of the 10 quick development tasks below can be done in a few hours at most, and will immediately start paying dividends on your team’s productivity.
We’ll start with an easy one. If your team is anything like ours, it creates branches and pull requests on a regular basis. The problem comes when they get abandoned or merged (perhaps via a rebase) and the branch and/or pull request gets left behind for all eternity. Code Climate Quality only has three people working on it, but we have over 100 branches and a dozen open PRs across our three main repositories – almost none of those need to be looked at ever again.
Get all the developers in a room and go through Git branches and open pull requests in alphabetical order. Give everyone a chance to claim any branch they want to keep around, but terminate any unclaimed branches with extreme prejudice. You’ll be left with a lot less clutter in your repos and GitHub views.
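If you want a head start before the meeting, a couple of ordinary Git commands will surface and clean up the likeliest candidates (the branch name below is just a placeholder):

# list remote branches that have already been merged into master
git branch -r --merged master

# delete a stale branch locally and on the origin remote
git branch -d old-feature
git push origin --delete old-feature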
Time required: 15 minutes as a team. If you hold a daily stand-up, do it immediately following.
Deploys are like a mutex in your development pipeline and it pays to keep them as fast as possible. There’s no excuse for a deploy that takes longer than 30 seconds, and in many cases you can optimize them to be significantly faster than that without too much effort. The main Code Climate Rails app can usually be deployed in less than 5 seconds.
Often the culprit is simply doing a lot of unnecessary work (updating dependencies with NPM or Bundler, copying large directories, or precompiling assets) on every deploy. Making better use of Git (and its ability to know exactly what is changing on each deploy) can eliminate these extra steps except when they are needed, yielding big performance wins.
In October we described the upgrades we made to our Capistrano-based deployment process. Give that post a read and apply the techniques to your own project. Everyone on your team will be happier – well, except for the fact that you’ll take away one of their legitimate excuses for slacking off.
Time required: Three hours.
Your code has a style whether you define it or not, and chances are if you have more than one programmer on your team that your natural styles will differ in at least a few ways. Ideally, you wouldn’t be able to tell the difference (at least stylistically) between code written by any of your team members.
Rather than waging coding style war in diffs and GitHub comments, take some time as a development team to ratify a style guide for your organization. It doesn’t really matter what rules you choose, just that you all agree to follow them. The most important principle is:
Proposed changes to the team’s style guides will be litigated out-of-band.
This helps avoid developers wasting time reformatting code every time they start working in a new file, and code reviews that devolve into style nitpicking sessions. After you ratify your style guide, pick a date about a month in the future to review it as a team and discuss potential changes. You could even create a GitHub repository to store it, and discuss change proposals in pull requests.
There’s no need to start from scratch. Feel free to crib from other widely available style guides — for example, Airbnb’s JavaScript style guide might be a good starting point for you. GitHub itself also publishes their style guide, which has sections on Ruby, JavaScript and even the voice and tone to use on the site.
Time required: One hour as a team.
Quick: How long do you have to wait for tests to run in order to give you 95% confidence that your latest code change is safe to ship to production? If it’s any longer than a few minutes, your team is paying a hefty tax each and every time they sit down at a keyboard.
Rigorously applying the principles of the testing pyramid will yield a fast test suite that still gives strong confidence the application is functioning as desired. What if your main test suite is already too slow? You’ve got a few options:
Remember that even a few seconds shaved off your build will save you several developer-hours over the course of the year.
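To make the base of the pyramid concrete, here’s a minimal sketch of the kind of dependency-free unit test that should make up the bulk of your suite (the function and expected values are invented for illustration). Because it touches no database, browser or network, thousands of tests like it can run in seconds:

// test/pricing_test.js - a hypothetical fast, isolated unit test
var assert = require("assert");

// The pure function under test; in a real project you would require it
// from your application code instead of defining it inline.
function discountedPrice(price, percentOff) {
  return price - (price * percentOff / 100);
}

assert.equal(discountedPrice(100, 25), 75);
assert.equal(discountedPrice(80, 0), 80);
console.log("pricing tests passed");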
Time required: Up to four hours upfront.
It seemed like 2013 was a particularly rough year for security on the Web, with a string of high profile breaches at startups and established companies alike. Now is a great time to take a step back and look at your code and operations to ensure you’re taking the appropriate precautions to avoid becoming a victim.
Although common Web application security risks transcend development tools, the detection of vulnerabilities and the appropriate remedies tend to be framework- and language-specific. For Rails apps, we recommend running Brakeman, an open source vulnerability detection scanner based on static analysis, and bundler-audit. If you’ve never run tools like this against your code base, it’s very likely they’ll uncover at least one issue worth addressing, making it time well spent.
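Both tools take only a couple of commands to try out against a Rails app (assuming a standard Bundler setup; flags and output vary a bit by version):

gem install brakeman bundler-audit

# run from the root of the Rails app to scan for common vulnerability patterns
brakeman

# refresh the Ruby advisory database, then check Gemfile.lock against it
bundle-audit update
bundle-audit check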
If you add your Rails repo to Code Climate Quality, Code Climate’s Security Monitor feature will notify your team when new security vulnerabilities are introduced.
If you know of tools for other programming languages that are helpful, please share them in the comments.
Time required: 15 minutes to run a scan.
Lots of us have a TODO on our list to extract some useful component of one of our applications and release it as an open source project. The turn of the calendar is a great time to take that opportunity.
Why open source code? There are lots of reasons. First, the act of extracting the code often tends to lead to cleanup and better definition of the boundaries, making the component more maintainable. Then, once the code is published, you may very well get contributions. For example, in the short time since we released our codeclimate-test-reporter gem, we’ve already received a few valuable contributions in the form of pull requests. Those were features we had on our list, and we saved time because other generous individuals spent their time improving our code! Finally, open source is a great way to get your organization’s name out into the community, which certainly doesn’t hurt when it comes time to grow your team through hiring.
Often developers get hung up on the need for code to be exemplary before they share it. While it’s understandable that no one wants to be represented by messy code, remember that the sooner you flip the OSS bit the sooner other people can start benefiting from your work and making contributions. Plenty of the important OSS projects you use every day are messy codebases, but I bet you are happier that they are available as-is.
In short, obey the squirrel!
Time required: Up to four hours.
Developers in 2014 benefit from access to a wide range of services that strive to solve a variety of pain points that we used to have to roll up our sleeves and tackle on our own. Continuous integration (CI) is one of those pains, and it can be a big one.
Rather than operating your own CI server (like Jenkins), consider trying out a hosted alternative. They’ve advanced significantly over the past couple years and usually the time spent to make the switch is quickly paid back by not having to perform maintenance tasks like software updates and adding capacity on your own. Further, most of these tools have whole teams working every day to improve the continuous integration tooling. That’s just not something most dev shops can afford to do – at least until you’re Facebook-sized.
There are a lot of vendors competing in the CI space these days, but to give you some starting points, Code Climate has partnerships with Semaphore, Solano Labs (now part of GE Digital) and Travis CI.
Once you’re up and running, as a final step be sure to test intentionally breaking the build to ensure you are properly notified.
Time required: Two hours.
Good READMEs lay a foundation for getting developers up and running with a codebase. However, unless they are actively maintained, they start to drift away from the reality of the underlying project.
We like to begin our READMEs with a one sentence summary of the role of the codebase. Beyond that, we may include some additional context about how the code fits into the overall architecture, but we try to keep this brief. The meat of the README is comprised of install steps and any instructions necessary to execute the code. (Hopefully the install steps are few, as we try to automate as much as possible.)
Take a few minutes to glance at the READMEs of your most important codebases (or write them if they don’t exist or are empty). Most importantly, remove outdated information that could send your future teammates on a wild goose chase. No documentation is better than wrong documentation. But if you have the time while you’re in there, sync up the content of the README with the context in your head.
Protip: Whenever a new developer joins your team, have them follow the README instructions to get set up. Hopefully it goes smoothly, but should they run into any missing or outdated documentation, have them commit README updates before they move on.
Time required: 30 minutes.
Here’s one that I’ve gone an embarrassingly long time in my development career without addressing. Searching through log files sucks. Searching through log files across three applications and eight servers leveraging all of the power of Terminal.app’s tabs functionality sucks even more. There’s no reason for this.
Today it’s simple to get up and running with log aggregation tools, either hosted (like Papertrail or Loggly) or on-premise (with Logstash and Kibana). Both kinds of systems can be set up to accept log messages already flowing through rsyslog or, with a bit more configuration, being written to log files on disk.
Either way, once you’re done, you’ll have a single place for your developers and operations people to go to search logs for valuable clues when diagnosing issues. This is a lifesaver in emergencies. Once you’ve got a handle on the basics, you can play around with fancier techniques like using values emitted in log files to push data into a time series database like Librato Metrics and archiving old logs to S3 (because storage is cheap and who knows if you’ll need them).
Time required: Two hours.
Every day we see experienced, well meaning teams struggling to achieve their code maintainability goals. A big part of the problem is that you can’t tell whether your code is getting better or worse simply by pulling up your project in GitHub.
Developers these days leverage increasing visibility to tackle all sorts of thorny problems from failing tests to slow databases. Code Climate Quality lets you see how your code is changing from day-to-day in a way that’s clear, timely and actionable. Derek Hopper, from Ideal Project Group, writes about the impact Code Climate has had for them:
We’ve eliminated numerous code smells. We’ve properly extracted business logic which transformed F classes into A classes. We have better test coverage for the important pieces of our application. The application is more stable overall.
If you haven’t tried it yet, we offer a 14-day free trial and we’d be happy to talk with you about how you can get off on the right foot with Code Climate in 2014.
Time required: Less than 15 minutes to get the results.

David Byrne, in the seminal Talking Heads song “Once in a Lifetime,” sings, “How did I get here?” The meaning of those lyrics has been fodder for some great discussion, but I think there’s one interpretation which has been sorely missing from that discourse — that the narrator is a software developer. The software developer, reflecting back on his work, is wondering how he accumulated so much technical debt.
“Letting the days go by, let the water hold me down.”
The water which holds him down is an obvious reference to what can happen to a software developer over time, caught under the weight of their own codebase. But what Byrne is singing about here isn’t strictly technical debt. Technical debt is the outcome. He’s singing about something else, about “letting the days go by,” about letting your codebase slip away from you.
Byrne, although he did not know it at the time (after all, even the phrase “technical debt” hadn’t been invented yet), was singing about “technical drift”. Technical drift is a phenomenon that impacts a lot of well-meaning software development teams. It occurs when a development team continuously fails to recognize and adapt to change, causing the concepts in the software’s domain and the concepts in code to slowly “drift” apart from one another, creating dissonance which ultimately leads to technical debt.

One of the goals of software development is to write software that most closely reflects the domain in which we’re working. This creates more understandable code that is easy to reason about and discuss, both within the tech team and with the rest of the organization. But it’s important to remember that the domain is continually shifting. We’re receiving new requirements based on new information.
If we don’t adapt to these changes, a gap will appear between our domain and software, and the gap becomes our friend, technical debt.
While the signs of technical drift may be hard to spot, technical debt, the result, starts to become readily apparent.
In some cases, a software development team, or a team leader, recognizing that the gap has become too large, comes together and decides “enough is enough”. What is needed is Design — Big Design. The team stops feature development, buckles down, and refactors their code and architecture to more cleanly map to their domain.

The problem with Big Design is that it’s a “stop the world” operation. Feature development stops, and an anxious queue begins to build up waiting for the development team to free up. Furthermore, it grants false hope that such a thing might not be necessary again, but if the pattern continues, it often becomes inevitable.
If you’re not taking an active role in refactoring your application every week, chances are you’re experiencing “technical drift”. Having a problem space that is continually changing necessitates continuous reworking of how your application is designed — what Agile practitioners call “iterative design”. When you’re able to do this effectively, you can round off the peaks and avoid some of the pitfalls of Big Design.

Note that the “Code” and “Domain” lines never intersect. We must accept that we do not yet have enough information to make some kinds of design decisions. But we can get close.
Understanding technical drift is more of a mindset than a particular practice, but it can help put into context why it’s important to remain vigilant, to call attention to code and domain mismatches, and to advocate for accurate and precise names wherever possible. Gaps which you’re creating may be well understood today, but as your team changes, and time progresses, that understandability suffers. Make the investment now, double down and pay attention.
Years later, you don’t want to look back in horror on your code and say to yourself “my god, what have I done?"

The Code Climate team relies on many Open Source tools to help our application give the best feedback to our customers. These tools often depend on ideas with fascinating histories, and investigating these histories can teach us how to use these tools properly. In this post we’d like to focus on the origins of one of the main features of Code Climate Quality – measuring code complexity. We’ll look at the original paper that introduced the idea, and discuss how we discovered that understanding the role of intuition in quantifying code complexity is crucial to correctly interpreting complexity measurements.
While Code Climate Quality doesn’t use this exact measurement, the history of quantifying the complexity of a computer program can be traced to an algorithm known as “cyclomatic complexity.” This concept was introduced in “A Complexity Measure,” a 1976 paper by Thomas J. McCabe, a United States Department of Defense employee who was involved in many large scale programming and programming management efforts during his career. As is the case with most enduring concepts in computer science and software engineering, the problems that motivated McCabe’s original work are still relevant today. The text of the paper begins:
“There is a critical question facing software engineering today: How to modularize a software system so the resulting modules are both testable and maintainable?”
While we have more ideas about modularity, testability, and maintainability than we did in 1976, this is still at the heart of what makes programming complex and challenging for modern programmers, product managers, stakeholders, and more. In order to answer this critical question, McCabe makes the claim that:
“What is needed is a mathematical technique that will provide a quantitative basis for modularization and allow us to identify software modules that will be difficult to test or maintain.”
Charged with identifying complex programs in order to reduce testing time and maintenance costs, McCabe takes a page from his experience with graph theory and provides a framework for determining the complexity of a computer program based on the idea of graph-theoretic complexity.

The details of the algorithm aren’t terribly important, but the basic idea is that the connectedness of a graph relates to its complexity, and that the notion of complexity is independent of size. Here’s the author’s description of the strategy:
“The overall strategy will be to measure the complexity of a program by computing the number of linearly independent paths, control the “size” of programs by setting an upper limit to these paths (instead of using just physical size), and use the cyclomatic complexity as the basis for a testing methodology.”
When McCabe speaks of “linearly independent paths,” he is essentially referring to the possible paths of execution that running a given piece of code can generate. In modern terms, this means that conditional statements and loops will lead to higher cyclomatic complexity scores, and that limiting the possible paths within methods will lead to lower scores. Let’s take a look at some JavaScript code that will illustrate this principle:
// Example 1
function myFunction(param) {
  var flags = [];
  if (param == 0) {
    flags.push(0);
  }
  if (param > 0) {
    flags.push(param);
  }
  return flags;
}

// Example 2 - simplified
function myFunction(param) {
  return [param];
}
In the first function we can see that there are unnecessary conditional statements that cloud the intent of the (admittedly trivial) function. By removing these if statements and compacting the function to its essentials, we intuitively have a less complex function. By that token, the cyclomatic complexity score would be lower.
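For a rough feel for where the number comes from, here’s a small sketch (not the algorithm Code Climate Quality actually uses, just an illustration of McCabe’s idea) that scores a snippet by counting its branching constructs. For a single structured routine, McCabe’s graph formula v(G) = e - n + 2p reduces to the number of simple decision points plus one:

// A hypothetical, regex-based approximation of cyclomatic complexity.
// Real tools build a control-flow graph; this keyword count is only
// illustrative and will miss or over-count plenty of real-world cases.
function approximateComplexity(source) {
  var branches = source.match(/\b(if|for|while|case|catch)\b|&&|\|\|/g) || [];
  return branches.length + 1;
}

var example1 = "function myFunction(param){ var flags = []; if(param == 0){ flags.push(0); } if(param > 0){ flags.push(param); } return flags; }";
var example2 = "function myFunction(param){ return [param]; }";

console.log(approximateComplexity(example1)); // 3 - two if statements plus one
console.log(approximateComplexity(example2)); // 1 - a single straight-line path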
While the concept of applying graph theory to the structure of computer programs is novel and would have alone made this paper a landmark, the true measure of its genius lies in the desire of the author to “illustrate the correlation between intuitive complexity and graph-theoretic complexity.” In other words, the author was aware that the intuition of a programmer with respect to their notion of complexity is a powerful one that is worth preserving, and instead of seeking an algorithm that programmers could lean on, he sought one that would confirm what they already believed.
Modern static analysis tools deal with a larger volume of code than McCabe probably ever imagined possible, and so their goals have to be somewhat aligned with modern practice. Tools like Code Climate Quality that analyze entire code bases are going to be more powerful in the historical view than in the moment – there is simply too much information to consume to be able to review a quality report every time new code is pushed to a repository. Instead, the onus is on modern tools to provide a glimpse into how things are changing. Essentially, for a tool to be applicable, it must be trustworthy, and to be trustworthy, it must conform to the underlying assumptions that engineers have about the material that they work with: code.
For Code Climate Quality and other code quality tools to measure their success, they should look to see if they are, for the most part, conforming to the intuition of developers. Where they’re not doing so, they should be opening interesting conversations that help programmers get to the heart of the designs they are implementing. Can you think of examples when static analysis has confirmed or contradicted your intuition? Leave us some examples in the comments below and we’ll share them.
Stay tuned for the next post in which we’ll explore some of the visualizations McCabe used to express code complexity, and look at some real world code examples to see how lowering cyclomatic complexity can make code intuitively less complex.
Works Cited: McCabe, T.J., “A Complexity Measure,” IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, December 1976. Department of Defense, National Security Agency.

We often have to work on code that doesn’t have good test coverage. This creates a number of problems. The first problem is that if you don’t have good test coverage, it’s hard to know whether your code changes will break other parts of the application, so you need to have a strategy for handling regressions.
The second problem is even more troublesome. Generally, code that doesn’t have good test coverage is also badly designed. One of the big benefits of test driving your code is that it moves you towards a range of good practices. Most of the time, when you test drive code you’ll write your code “outside in” – focusing on the interface that the test needs to validate before thinking about the implementation you’ll need to deliver. It also makes it more likely that you’ll create classes with narrow responsibilities that are loosely coupled, as the excessive setup required for testing tightly coupled code will quickly move you towards reducing your coupling. So if you’re working with code that doesn’t have good test coverage, most of the time it will be harder to write tests for and more tightly coupled than test driven code.
Finally, because of the first two issues, the chances are that when changes have been made to the project in the past, developers will have made the smallest possible changes consistent with getting their new feature working rather than refactoring and cleaning up the code every time they touched it. Because of this, it’s likely to have a high degree of technical debt, making it even harder to work with.

When confronted with code that doesn’t have good test coverage, it’s important not to try to “boil the ocean” with unit tests. It never makes sense to take a couple of weeks (or months) to try to get the code coverage up across the entire app. So, what is the answer? When you need to work with a big ball of mud, where should you start?
A good starting point is to take some time with the product owner/business analyst/business stakeholder to really clarify the key user journeys. Ask, “what are the most important things that your key audiences need to be able to do through the app?” Then create a handful of concrete scenarios for each user journey and write automated acceptance tests for them. For a web app you’d probably use a tool like Cucumber, RSpec and Capybara or Selenium to create these “smoke tests”. They don’t guarantee that your app is working correctly, but they should catch most of the large, systematic problems.
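For example, a smoke test for a hypothetical checkout journey, written with RSpec and Capybara (the URLs, labels and copy here are invented for illustration), might look something like this:

# spec/features/checkout_smoke_spec.rb
require "spec_helper"

describe "checkout", type: :feature do
  it "lets a customer complete an order" do
    visit "/products/example-widget"
    click_button "Add to cart"
    click_link "Checkout"
    fill_in "Email", with: "customer@example.com"
    click_button "Place order"
    expect(page).to have_content("Thank you for your order")
  end
end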
Next, test drive all of your new code. That way you have confidence in the new functionality that you are adding to the system. If necessary, write a thin anti-corruption layer to provide a clean interface to code against for integration-level testing.
Finally, whenever you find a bug, start by writing a failing test at an appropriate level (ideally a unit test). Then confirm that once the bug is fixed, the test passes.
If you’re working with a team that is not used to test driving code, take the time to make sure that they’re able to run the test suite locally and pair with them to get them used to test driving new functionality. Also make sure to hook continuous integration up to your version control system so you’ll quickly get notified if any of the tests break.
Working with legacy code with limited test coverage is hard, but by following the ideas above it should be easier to get to grips with the code base. And over time, you’ll notice that the test coverage in the areas you care about – where you make most of your changes – will start to become reasonable. Usually within 6-12 months you end up with pretty good coverage in the parts of the app that really matter.
Peter Bell is Founder and CTO of Speak Geek, a contract member of the GitHub training team, and trains and consults regularly on everything from JavaScript and Ruby development to devOps and NoSQL data stores.

Today we’re proud to announce that we’re bringing Code Climate Quality more in line with how many of you work — you can now see how each change affects the quality and security of your code before it’s merged into main. You no longer have to wait until a security vulnerability is in main before eradicating it from your code base. If you use feature branches, you can ensure the quality is up to your team’s standards before merging. When reviewing a refactoring, you can easily visualize the impact to the codebase on a single page.
In short, you can merge with confidence. Create a Code Climate account and start getting analysis of your Pull Requests right now.
There are three new, key features that work together to make this possible:
Let’s look at each of them.
We’re happy to announce that Code Climate will automatically analyze your GitHub Pull Requests. Simply open a Pull Request and Code Climate will inform GitHub when our analysis is ready for you (if you’ve used or seen Travis CI’s build status on GitHub, we use the same exact mechanism):

Branch testing and Pull Request support is included in all our currently offered plans at the Team level and above. We’ll be rolling this feature out incrementally over the next few days, and we will let you know when it’s activated for your account.
Note: Due to some technical considerations around analyzing cross-repo PRs, supporting Pull Requests for our free-for-OSS repos will take some extra time but is high on our list.
Once we’ve finished analyzing your pull request/branch, you’ll be able to see the results in our new Compare view. It’s a focused representation of the important changes in your branch. You’ll see the overall change to your repo’s GPA, how your classes’ or files’ grades have changed, and where code smells have been fixed, introduced or gotten worse:

Even if you don’t use GitHub Pull Requests, you can start getting feedback on your branches immediately. Start by clicking on the new “Branches” tab for each of your repositories:

Push the “Analyze” button for any branches you care about, briefly sit tight, and within a few minutes the “View Comparison” button will be available to send you to a Compare view.
First-class Pull Request support has been our most requested feature over the last year, and we’re thrilled to be delivering it to you. We’ve been using these new features while we’ve been building them (I know, very meta) and we’re really enjoying the quicker feedback loop.
We hope you enjoy it as much as we have. We’d love to hear what you think!

Today we’re thrilled to announce that Code Climate is taking a giant step forward and launching support for JavaScript. Writing maintainable, bug-free JavaScript presents all of the challenges (and then some) of writing high quality Ruby, and now you can take advantage of Code Climate’s automated code reviews for your client-side JS and Node.js projects.

Code Climate launched out of the Ruby community, with the goal of helping developers ship quality code faster, regardless of the language they are using. With advances to client-side programming (Ember.js, Backbone, etc.) and the fast growth of Node.js on the server-side, now more than ever JavaScript is used to express critical business logic. That code needs to be free from defects, and it needs to be maintainable over the long term. As a first class language in a developer’s modern toolkit, JavaScript should have the tooling to match.
Create a Code Climate account and add your first JavaScript project today.
JavaScript projects on Code Climate benefit from the extensive work we’ve done over the past few years to make a tool that provides the most actionable, timely feedback on code changes including:
In addition, because code linting is often important to ensure consistent, predictable execution of JavaScript across browsers, we’ve built in configurable JSHint checks. Here’s an example from Node.js:

(JSHint is configured by checking in a .jshintrc file into your repository. If you already have one, we’ll use it automatically.)
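If you don’t have a .jshintrc yet, it’s just a JSON file of the JSHint options you want enforced. A minimal example might look like this (the specific options are only a suggested starting point):

{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "unused": true,
  "node": true,
  "browser": true
}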
To add private projects to Code Climate, you’ll first need to create an account. As of today, new repositories added to your Code Climate dashboard will have a dropdown menu to choose which language you would like us to analyze:

Note: Right now we are able to support one language per repository, but this is something that we will be improving in the future. As many Rails projects leverage JavaScript extensively, we want you to be able to see the effects of all of the changes to your codebase on each commit, and we are working to make that easier.
As always, Code Climate is free for open-source projects on GitHub. Add your OSS JavaScript projects today.

As modern software developers we have many tools at our disposal to help us feel confident that the code we produce is high-quality and defect free. Among the most valuable of these are tools that aid in inspecting source code written by another developer, or what we commonly call code review.
Often developers think the main reason for code review is to find bugs, but a 2013 study produced by Alberto Bacchelli and Christian Bird of Microsoft Research concluded that other outcomes are more prevalent – and possibly more valuable!
Bacchelli and Bird surveyed 165 managers and 873 programmers at Microsoft, interviewed 17 developers with various degrees of experience and seniority across 16 separate product teams, and manually reviewed the content of 570 comments from Microsoft CodeFlow, an internal interactive tool that provides a central location where code and its subsequent fixes can be discussed either in real time or asynchronously.
What they found was that while the top motivation of developers, managers, and testers in performing code review is to find bugs, the outcome of most reviews is quite different.

The ‘Motivations’ chart: the ranked motivation categories from the developer segment of the interviews.

The ‘Outcomes’ chart: the ranked categories extracted from a sampling of code review data.
As can be seen from these charts of Motivations and Outcomes, the top motivation for the largest number of developers was “Finding defects,” yet the topic most commonly discussed in code reviews ostensibly pertains to “code improvements:” comments or changes about code in terms of readability, commenting, consistency, dead code removal, etc.
So if code review isn’t giving us what we want, why do we keep doing it? The response of one senior developer in the study sums it up well:
“[code review] also has several beneficial influences: (1) makes people less protective about their code, (2) gives another person insight into the code, so there is (3) better sharing of information across the team, (4) helps support coding conventions on the team, and…(5) helps improve the overall process and quality of code.”
Bacchelli and Bird conclude that, while there’s a significant difference between the expectations and outcomes of code review, this isn’t a bad thing. While we don’t always get the exact value from code reviews that we expected, we often end up getting quite a bit more.
Turns out the Stones were right: You can’t always get what you want, but you just might find you get what you need.
In addition to a modicum of somewhat superficial bug fixes, teams get benefits such as “knowledge transfer, increased team awareness, and improved solutions to problems” from participating in modern code reviews. Additionally, the idea that code review is good at “educating new developers about code writing” is a compelling point.
Bacchelli and Bird recommend embracing these unexpected outcomes, rather than trying to re-focus code reviews on finding bugs, and letting code review policies be guided by the explicit intent to improve code style, find alternative solutions, increase learning, and share ownership.
They also recommend automating enforcement of team style and code conventions to free reviewers to look for deeper more subtle defects – something we agree with wholeheartedly!
All quotes and figures are from Alberto Bacchelli and Christian Bird, “Expectations, Outcomes, and Challenges of Modern Code Review”, May 2013, Proceedings of the International Conference on Software Engineering.