
Code Reviews and Inspections 2018


Our objective with Inspections is to reduce the Cost of Quality by finding and removing defects earlier and at a lower cost. While some testing will always be necessary, we can reduce the cost of testing by reducing the volume of defects that propagate to test.

 

When you catch bugs early, you also get fewer compound bugs. Compound bugs are two separate bugs that interact: you trip going downstairs, and when you reach for the handrail it comes off in your hand.

 

Here’s a shocker: your main quality objective in software development is to get a working program to your user that meets all their requirements and has no defects. That’s right: your code should be perfect.

 

It meets all the user’s requirements and it has no errors in it when you deliver it. Impossible? Can’t be done? Well, software quality assurance is all about trying to get as close to perfection as you can—albeit within time and budget.

 
 
Software quality is usually discussed from two different perspectives: the user’s and the developer’s. From the user’s perspective, quality has a number of characteristics—things that your program must do in order to be accepted by the user—among which are the following:
  • Correctness: The software has to work, period.
  • Usability: It has to be easy to learn and easy to use.
  • Reliability: It has to stay up and be available when you need it.
  • Security: The software has to prevent unauthorized access and protect your data.
  • Adaptability: It should be easy to add new features.

 

From the developer’s perspective, things are a bit different. The developer wants to see the following:

  • Maintainability: It has to be easy to make changes to the software.
  • Portability: It has to be easy to move the software to a different platform.
  • Readability: Many developers won’t admit this, but you do need to be able to read the code.
  • Understandability: The code needs to be designed in such a way that a new developer can understand how it all hangs together.

 

  • Testability: Well, at least the testers think your code should be easy to test. Code that’s created in a modular fashion, with short functions that do only one thing, is much easier to understand and test than code that is all just one big main() function.
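
To make that point concrete, here is a minimal Python sketch; the function names and the tax example are invented purely for illustration, not taken from any particular codebase:

  # Hard to test: one big function that parses, computes, and prints.
  def main():
      raw = input("Enter comma-separated prices: ")
      prices = [float(p) for p in raw.split(",")]
      print(f"Total with tax: {sum(prices) * 1.07:.2f}")  # tax rate buried in the logic

  # Easier to test: small functions that each do one thing and return values.
  def parse_prices(raw: str) -> list[float]:
      return [float(p) for p in raw.split(",")]

  def total_with_tax(prices: list[float], tax_rate: float = 0.07) -> float:
      return sum(prices) * (1 + tax_rate)

  # A unit test can now exercise the logic without touching console I/O.
  assert total_with_tax([10.0, 20.0], tax_rate=0.0) == 30.0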

 

Software Quality Assurance (SQA) has three legs to it:

  • Testing: Finding the errors that surface while your program is executing, also known as dynamic analysis.
  • Debugging: Getting all the obvious errors out of your code—the ones that are found by testing it.
  • Reviews: Finding the errors that are inherently in your code as it sits there, also known as static analysis.
 
 
Many developers—and managers—think that you can test your way to quality. You can’t. As we saw in the last blog, tests are limited. You often can’t explore every code path, you can’t test every possible data combination, and often your tests themselves are flawed. Tests can only get you so far.

 

As Edsger Dijkstra famously said, “. . . program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.”

 

Reviewing your code—reading it and looking for errors on the page—provides another mechanism for making sure you’ve implemented the user’s requirements and the resulting design correctly.

 

In fact, most development organizations that use a plan-driven methodology will not only review code, they’ll also review the requirements document, architecture, design specification, test plan, the tests themselves, and user documentation—in short, all the work products produced by the software development organization.

 

Organizations that use an agile development methodology don’t necessarily have all the documents just mentioned, but they do have requirements, user stories, user documentation, and especially code to review. This blog focuses on reviewing your code.

 

Walkthroughs, Reviews, and Inspections


Testing alone is not a particularly effective way of finding errors in your code. In many cases, the combination of unit testing, integration testing, and system testing will only find about 50% or so of the errors in your program.

 

But if you add some type of code review (reading the code to find errors) to your testing regimen you can bring that percentage up to 93–99% of all the errors in your code. Now that’s an objective to shoot for.

 

Three types of reviews are typically done: walkthroughs, code reviews, and inspections. These three work their way up from very informal techniques to very formal methodologies. The reviews are typically done either right after you’ve got a clean compile of your code and before you unit test, or right after you finish your unit testing.

 

It’s better to do the reviews right after unit testing. Then you’ve got your changes made, you’ve got a clean compile, and you’ve done the first round of testing. That’s a great time to have someone else take a look at your code.

 

Walkthroughs


Walkthroughs, also known as desk checks or code reads, are the least formal type of review. Walkthroughs are normally used to confirm small changes to the code, say a line or two, that you’ve just made to fix an error.

 

If you’ve just added a new method to a class, or you’ve changed more than about 25 or 30 lines of code, don’t do a walkthrough. Do a code review instead (discussed next).

 

Walkthroughs involve two or at most three people: the author of the code and the reviewer. The author’s job in a walkthrough is to explain to the reviewer what the change is supposed to do and to point out where the change was made.

 

The reviewer’s job is to understand the change and then read the code. Once the reviewer reads the code, they make one of two judgments: either they agree that the change is correct, or they don’t. If not, the author has to go back, fix the code again, and then do another walkthrough.

 

If the reviewer thinks the change is correct, then the author can integrate the changed code back into the code base for integration testing.

 

If you’re using an agile methodology and you’re pair programming, a code walkthrough will happen naturally as you are implementing a task. The driver is writing the code, and the navigator is looking over their shoulder, checking for errors and thinking ahead.

 

In this case, it’s acceptable to use a walkthrough for a larger piece of code, but for a complete task—or better yet, for each user story that’s implemented—you should do a code review or an inspection. I talk more about reviews and agile methodologies in subsequent sections.

 

Code Reviews


Code reviews are somewhat more formal than a walkthrough. Code reviews are what most software developers do. You should always do a code review if you’ve changed a substantial amount of code, or if you’ve added more than just a few lines of new code to an existing program.

 

As mentioned, agile programmers may do code reviews when they finish a user story. Code reviews are real meetings.

 

There are usually between three and five attendees at a code review. The people who attend a code review should each bring a different perspective to the meeting. 

 

The moderator of the code review is usually the author. It’s the moderator’s job to call the meeting, send out the work to be reviewed well before the meeting time, and to run the code review meeting. The moderator may also take notes at the meeting.

 

There should be one or more developers at the meeting—someone who’s working on the same project as the author. This person will bring detailed knowledge of the project to the meeting and assume that perspective.

 

There should be a tester at the code review. This person brings the testing perspective and not only reads the code being reviewed but thinks about ways in which the code should be tested.

 
 
 

Finally, there should be an experienced developer present who’s not on the same project as the author. This person is the disinterested third party who represents the quality perspective.

 

Their job at the code review is to understand the code and get the author to explain the changes clearly. This person provides a more strategic vision about the code and how it fits into the project.

 

Oh, and no managers are allowed at code reviews. The presence of a manager changes the dynamics of the meeting and makes the code review less effective. People who might be willing to honestly critique a piece of code among peers will clam up in the presence of a manager; that doesn’t help find errors. No managers, please.

 

The objective of a code review is to find errors in the code. It’s not to fix them. Code reviews are informal enough that some discussion of fixes may occur, but that should be kept to a minimum.

 

Before the code review meeting, all the participants should go over the materials sent out by the moderator and prepare a list of errors they find. This step is critical to making the review meeting efficient and successful. Do your homework!

 

This list should be given to the moderator at the beginning of the meeting. The author (who may also be the moderator) goes through the code changes, explaining them and how they either fix the error they were intended to fix or add the new feature that was required.

 

If an error or a discussion leads the review meeting off into code that wasn’t in the scope of the original review—stop!

 

Be very careful about moving off into territory that hasn’t been pre-read. You should treat any code not in the scope of the review as a black box. Schedule another meeting instead. Remember, the focus of the code review is on a single piece of code and finding errors in that piece of code. Don’t be distracted.

 

A computer and projector are essential at the code review so that everyone can see what’s going on all the time. A second computer should be used so that someone (usually the author) can take notes about errors found in the code.

 

A code review should not last more than about two hours or review more than about 200–500 lines of code because everyone’s productivity will begin to suffer after that amount of time or reading.

 

After the code review, the notes are distributed to all the participants and the author is charged with fixing all the errors that were found during the review. If you run out of time, schedule another review.

 

Although metrics aren’t required for code reviews, the moderator should at least keep track of how many errors were found, how many lines of code were reviewed, and if appropriate, the severity of each of the errors. These metrics are very useful to gauge productivity and should be used in planning the next project.

 

Code Inspections


Code inspections are the most formal type of review meeting. The sole purpose of an inspection is to find defects in a work product. Inspections can be used to review planning documents, requirements, designs, or code—in short, any work product that a development team produces.

 

Code inspections have specific rules regarding how many lines of code to review at once, how long the review meeting must be, and how much preparation each member of the review team should do, among other things.

 

Inspections are typically used by larger organizations because they take more training, time, and effort than walkthroughs or code reviews. They’re also used for mission- and safety-critical software where defects can cause harm to users.

 

Michael Fagan invented the most widely known inspection methodology in 1976. Fagan’s process was the first formal software inspection process proposed and, as such, has been very influential.

 

Most organizations that use inspections use a variation of the original Fagan software code inspection process. Code inspections have several very important criteria, including the following:

 

  • Inspections use checklists of common error types to focus the inspectors (a sample checklist is sketched just after this list).
  • The focus of the inspection meeting is solely on finding errors; no solutions are permitted.
  • Reviewers are required to prepare beforehand; the inspection meeting will be canceled if everyone isn’t ready.
  • Each participant in the inspection has a distinct role.

 

  • All participants have had inspection training.
  • The moderator is not the author and has had special training in addition to the regular inspection training.
  • The author is always required to follow up on errors reported in the meeting with the moderator.
  • Metrics data is always collected at an inspection meeting.
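
As a hedged illustration of the first point, a team’s checklist might look something like the following sketch; the categories and questions are common examples of error types, not taken from any official Fagan checklist:

  # Hypothetical inspection checklist: categories and prompts are examples only.
  INSPECTION_CHECKLIST = {
      "Logic": [
          "Are loop bounds correct (no off-by-one errors)?",
          "Is every branch of each condition handled?",
      ],
      "Data": [
          "Are variables initialized before use?",
          "Can any reference be null/None when dereferenced?",
      ],
      "Interfaces": [
          "Do arguments match parameter types and order?",
          "Are return values and error codes checked by callers?",
      ],
      "Resources": [
          "Are files, sockets, and locks always released?",
      ],
  }

  for category, questions in INSPECTION_CHECKLIST.items():
      print(category)
      for question in questions:
          print("  -", question)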

 

Inspection Roles


The following are the roles used in code inspections:

 

Moderator: The moderator gets all the materials from the author, decides who the other participants in the inspection should be, and is responsible for sending out all the inspection materials and scheduling and coordinating the meeting. Moderators must be technically competent; they need to understand the inspection materials and keep the meeting on track.

 

The moderator schedules the inspection meeting and sends out the checklist of common errors for the reviewers to peruse. They also follow up with the author on any errors found in the inspection, so they must understand the errors and the corrections.

 

Moderators attend an additional inspection-training course to help them prepare for their role.

 

Author: The author distributes the inspection materials to the moderator. If an Overview meeting is required, the author chairs it and explains the overall design to the reviewers.

 

Overview meetings are discouraged in code inspections because they can “taint the evidence” by injecting the author’s opinions about the code and the design before the inspection meeting.

 

Sometimes, though, if many of the reviewers are unfamiliar with the project, an Overview meeting is necessary. The author is also responsible for all rework that’s created as a result of the inspection meeting. During the inspection, the author answers questions about the code from the reviewers, but does nothing else.

 

Reader: The reader’s role is to read the code. Actually, the reader is supposed to paraphrase the code, not read it. Paraphrasing implies that the reader has a good understanding of the project, its design, and the code in question.

 

The reader doesn’t explain the code, only paraphrases it. The author should answer any questions about the code. That said, if the author has to explain too much of the code, that’s usually considered a defect to be fixed; the code should be refactored to make it simpler.

 

Reviewers: The reviewers do the heavy lifting in the inspection. A reviewer can be anyone with an interest in the code who is not the author. Normally, reviewers are other developers from the same project.

 

As in code reviews, it’s usually a good idea to have a senior person who’s not on the project also be a reviewer. There are usually between two and four reviewers in an inspection meeting.

 

Reviewers must do their pre-reading of the inspection materials and are expected to come to the meeting with a list of errors that they have found. This list is given to the recorder.

 

Recorder: Every inspection meeting has a recorder. The recorder is one of the reviewers and is the one who takes notes at the inspection meeting. The recorder merges the defect lists of the reviewers and classifies and records errors found during the meeting. They then prepare the inspection report and distribute it to the meeting participants.

 

If the project is using a defect management system, then it’s up to the recorder to enter defect reports for all major defects from the meeting into the system.

 

Managers: As with code reviews, managers aren’t invited to code inspections.

 

Inspection Phases and Procedures


Fagan inspections have seven phases that must be followed for each inspection:

  1. Planning
  2. The Overview meeting
  3. Preparation
  4. The Inspection meeting
  5. The Inspection report
  6. Rework
  7. Follow-up

 

Planning

In the Planning phase, the moderator organizes and schedules the meeting and picks the participants. The moderator and the author get together to discuss the scope of the inspection materials—for code inspections, typically 200–500 uncommented lines of code will be reviewed. The author then distributes the code to be inspected to the participants.

 

The Overview Meeting


An Overview meeting is necessary if several of the participants are unfamiliar with the project or its design and they need to come up to speed before they can effectively read the code. If an Overview meeting is necessary, the author will call it and run the meeting.

 

The meeting itself is mostly a presentation by the author of the project architecture and design. As mentioned, Overview meetings are discouraged, because they have a tendency to taint the evidence. Like the Inspection meeting itself, Overview meetings should last no longer than two hours.

 

Preparation

In the Preparation phase, each reviewer reads the work to be inspected. Preparation should take no more than two or three hours. The amount of work to be inspected should be between 200–500 uncommented lines of code or 30–80 pages of text. A number of studies have shown that reviewers can typically review about 125–200 lines of code per hour.
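
As a rough worked example (assuming a rate in the middle of the 125–200 lines-per-hour range quoted above, which is just one plausible figure):

  # Estimate preparation time from the review rate quoted above.
  lines_to_review = 400   # within the 200-500 uncommented-line guideline
  review_rate = 150       # lines per hour, mid-range of 125-200
  prep_hours = lines_to_review / review_rate
  print(f"Expected preparation time: {prep_hours:.1f} hours")  # about 2.7 hours

That back-of-the-envelope arithmetic is why the upper bound on the material sits around 500 lines: much more than that and preparation blows past the two-to-three-hour limit.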

 

In Fagan inspections, the Preparation phase is required. The Inspection meeting can be canceled if the reviewers haven’t done their preparation. The amount of time each reviewer spent in preparation is one of the metrics gathered at the Inspection meeting.

 

The Inspection Meeting


The moderator is in charge of the Inspection meeting. Their job during the meeting is to keep the meeting on track and focused. The Inspection meeting should last no more than two hours.

 

If there is any material that has not been inspected at the end of that time, a new meeting is scheduled. At the beginning of the meeting, the reviewers turn in their list of previously discovered errors to the recorder.

 

During the meeting the reader paraphrases the code, and the reviewers follow along. The author is there to clarify any details and answer any questions about the code—and otherwise does nothing.

 

The recorder writes down all the defects reported, their severity, and their classification. Solutions to problems are strongly discouraged. Participants are encouraged to have a different meeting to discuss solutions.

 

We should look for a minute at defect types and severity as reported in a Fagan inspection. Fagan specifies only two types of defects: minor and major. Minor defects are typically typographic errors, errors in documentation, small user interface errors, and other miscellany that don’t cause the software to fail.

 

All other errors are major defects. This is a bit extreme. Two levels are usually not sufficient for most development organizations. Most organizations will have at least a five-level defect structure:

 

1. Fatal: Yes, your program dies; can you say core dump?

 

2. Severe: A major piece of functionality fails, and there is no workaround for the user. Say, in a first-person shooter game the software doesn’t allow you to re-load your main weapon and doesn’t let you switch weapons in the middle of a fight. That’s bad.

 

3. Serious: The error is severe, but there’s a workaround for the user. The software doesn’t let you re-load your main weapon, but if you switch weapons and then switch back you can re-load.

 

4. Trivial: A small error—incorrect documentation or something like a minor user interface problem. For example, a text box is 10 pixels too far from its prompt in a form.

 

5. Feature request: A brand new feature for the program is desired. This isn’t an error; it’s a request from the user (or marketing) for new functionality in the software. In a game, this could be new weapons, new character types, new maps or surroundings, and so on. This is material for the next version.

 

In most organizations, software is not allowed to ship to a user with known severity 1 and 2 errors still in it. But severity 3 errors really make users unhappy, so realistically, no known severity 1 through 3 errors are allowed to ship. Ideally, of course, no errors ship, right?
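
A minimal Python sketch of that five-level structure and the ship rule, assuming the numbering above; the blocks_shipping helper is an invented name for illustration:

  from enum import IntEnum

  class Severity(IntEnum):
      FATAL = 1            # the program dies
      SEVERE = 2           # major functionality fails, no workaround
      SERIOUS = 3          # severe, but a workaround exists
      TRIVIAL = 4          # documentation or minor UI problem
      FEATURE_REQUEST = 5  # not an error at all

  def blocks_shipping(defects: list[Severity]) -> bool:
      # Realistically, no known severity 1-3 errors are allowed to ship.
      return any(d <= Severity.SERIOUS for d in defects)

  print(blocks_shipping([Severity.TRIVIAL]))                   # False
  print(blocks_shipping([Severity.TRIVIAL, Severity.SEVERE]))  # True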

 

In a Fagan inspection meeting, it’s usually up to the recorder to correctly classify the severity of the major defects found in the code. This classification can be changed later. In the Fagan inspection process, all severity 1 through 3 defects must be fixed.

 

Inspection Report


Within a day of the meeting, the recorder distributes the Inspection report to all participants. The central part of the report is the defects that were found in the code at the meeting.

 

The report also includes metrics data, including the following:

  • Number of defects found
  • Number of each type of defect by severity and type
  • Time spent in preparation; total time in person-hours and time per participant
  • Time spent in the meeting; clock time and total person-hours
  • Number of uncommented lines of code or pages reviewed
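
A hedged sketch of what that report data might look like as a record; the field names are assumptions mirroring the list above, not a standard format:

  from dataclasses import dataclass, field

  @dataclass
  class InspectionReport:
      # Hypothetical fields mirroring the metrics listed above.
      defects_found: int
      defects_by_severity: dict[str, int] = field(default_factory=dict)
      prep_person_hours: float = 0.0
      meeting_clock_hours: float = 0.0
      meeting_person_hours: float = 0.0
      uncommented_loc_reviewed: int = 0

  report = InspectionReport(
      defects_found=12,
      defects_by_severity={"major": 4, "minor": 8},
      prep_person_hours=9.0,
      meeting_clock_hours=1.5,
      meeting_person_hours=7.5,
      uncommented_loc_reviewed=350,
  )
  print(report)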

 

Rework and Follow-up


The author fixes all the severity 1 through 3 defects found during the meeting. If enough defects were found, or if enough refactoring or code changes had to occur, then another Inspection is scheduled. How much is enough?

 

Amounts vary. McConnell says 5% of the code, but this author has typically used 10% of the code inspected. So, if you inspected 200 lines of code and had to change 20 or more of them in the rework, then you should have another Inspection meeting.

 

If it’s less than 10%, the author and the moderator can do a walkthrough. Regardless of how much code is changed, the moderator must check all the changes as part of the follow-up. As part of the rework, another metric should be reported—the amount of time required by the author to fix each of the defects reported.

 

This metric plus the number of defects found during the project are critical to doing accurate planning and scheduling for the next project. This metric is easier to keep track of if developers use a defect tracking system.

 

Reviews in Agile Projects


Let’s face it: the preceding sections on walkthroughs, code reviews, and inspections don’t seem to apply to agile projects at all. They seem like heavyweight processes that might be used in very large projects, but surely not in XP or Scrum or Lean Development, right?

 

The last thing we need during a Scrum sprint is a meeting every time we finish a task and want to integrate the code. Well, it turns out that doing reviews in agile projects is a pretty good idea and seems to work well.

 

Let’s remember what the Agile Manifesto says agile developers value:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

 

Over the last 40 years or so, quite a bit of research has shown that code reviews produce software with fewer defects, which aligns nicely with the agile emphasis on working software. What could be more interactive than software developers collaborating on the code and making real-time improvements?

 

Code reviews also fully support agile tenets by promoting the development of working software, collaboration, and interaction among teams, continuous attention to technical excellence, and the ability to respond to change—all while maintaining a high level of quality. The only question is: “How do you do code reviews in an agile project?”

 

First of all, let’s change the name. Instead of talking about walkthroughs or code reviews, let’s talk about peer code reviews instead. This emphasizes the fact that in our agile project peers do the reviewing of code.

 

Remember that a typical agile team has members with a wide variety of skills. There are developers, designers, testers, writers, architects, and, of course, the customer. Also remember that one of the hallmarks of an agile team is that they are self-organizing.

 

In this case what we want is for anyone on the team to be able to be in a peer code review. This spreads the knowledge of the code around, just as pair programming does, and it gives everyone on the team more skills and knowledge; remember that collective code ownership is a trait of agile methodologies.

 

Secondly, why do you need a meeting? You’ll hold the peer code review after the code has been written (or fixed) and after all the unit tests have been run. Whoever is to participate in the peer code review will need to read the code before the code review.

 

In addition, if your project is using pair programming, there have already been two sets of eyes on the code and the design and the requirements already. What are the chances that in a code review meeting you’ll uncover more major defects that would require fixing?

 

It turns out that the probability of finding more major defects is pretty low. According to a research study by Lawrence Votta, code inspection meetings add only about 4% more defects to the list than those already brought to the meeting by the participants. In other words, the meetings may not add much.

 

Also remember the ultimate purpose of a peer code review: producing working software. In all agile processes, anything that detracts from the goal of producing working software is to be shunned, and meetings take time away from producing working software.

 

So, where is the case for having a peer code review? Well, the case is the research showing that code reviews do find new defects in code, and if we can do a peer code review without slowing down the flow of a sprint, that’s a good thing. Also remember that one of the reasons for agile processes is that the earlier you find defects, the cheaper they are to fix.

 

A last idea here is that the team should allocate part of everyone’s time to doing peer code reviews when they’re doing the task estimations at the beginning of an iteration or a sprint. Making peer code reviews a part of the culture and the work effort will make it easier to convince developers to fit it into their day.

 

How to Do an Agile Peer Code Review

There are several ways that you can do a peer code review without having a long, drawn-out meeting and without requiring lots of heavyweight documentation and reporting. Here are a few suggestions:

 

Pair programming + 1: If your project is already doing pair programming, then you’re halfway to a peer code review already. The suggestion here is to just add one more person at the end and go over the code with that person one more time before you integrate it.

 

That’s all. You can give the new person a heads-up and have them read the code beforehand, or you can just drag them over to your computer and do it immediately.

 

Over the shoulder: This is like the walkthrough we visited at the beginning of this blog and is pretty much what you do in the pair programming + 1 activity. We put it here for those who aren’t using pair programming.

 

Once again, the suggestion is to just add one more person at the end and go over the code with them one more time before you integrate it. As before, you can give them a heads-up and have them read the code beforehand, or you can just drag them over to your computer and do it immediately.

 

E-mail review: In this case, you email one or more of your colleagues a link to the code and ask them to read it and provide comments. Assuming that your team has built code review time into all your task estimations, this should be something that everyone on the team is on board with.

 

Defect Tracking Systems


Most software development organizations and many open source development projects will use an automated defect tracking system to keep track of defects found in their software and to record requests for new features in the program.

 

Popular defect tracking systems, both open source and commercial, include Bugzilla (www.bugzilla.org), YouTrack (www.jetbrains.com/youtrack/), Jira (www.atlassian.com/software/jira), Mantis (www.mantisbt.org), and Trac (https://trac.edgewall.org).

 

Defect tracking systems keep track of a large amount of information about each defect found and entered. A typical defect tracking system will keep track of at least the following:

  • The number of the defect (assigned by the tracking system itself)
  • The current state of the defect in the system (Open, Assigned, Resolved, Integrated, Closed)
  • The fix that was made to correct the error
  • The files that were changed to make the fix
  • What baseline the fix was integrated into
  • What tests were written and where they’re stored (ideally, the tests are stored along with the fix)
  • The result of the code review or inspection
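
A minimal sketch of such a defect record; the field names and sample values are illustrative, not taken from any particular tracker:

  from dataclasses import dataclass, field

  @dataclass
  class DefectRecord:
      defect_id: int                 # assigned by the tracking system itself
      state: str                     # e.g. "Open", "Assigned", "Resolved", ...
      fix_description: str = ""
      files_changed: list[str] = field(default_factory=list)
      baseline: str = ""             # which baseline the fix was integrated into
      tests: list[str] = field(default_factory=list)
      review_result: str = ""

  bug = DefectRecord(defect_id=1327, state="Open")
  print(bug)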

 

Defect tracking systems assume that at any given time, a defect report is in some state that reflects where it is in the process of being fixed. A typical defect tracking system can have upwards of ten states for each defect report.

 

In brief, all defects start out as New. They are then assigned to a developer for Analysis. The developer decides whether the reported defect is

  • A duplicate of one already in the system.
  • Not a defect and so should be rejected.
  • A real defect that should be worked on by someone.
  • A real defect whose resolution can be postponed to a later date.

 

Defects that are worked on are eventually fixed and move to the Resolved state. The fix must then be subjected to a code review. If the code review is successful, the defect fix is then Approved.

 

From Approved, the fix is scheduled for integration into the next baseline of the product, and if the integration tests of that baseline are successful, the defect is Closed.
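
A hedged sketch of that lifecycle as a simple transition table, using the state names from the description above; the intermediate names (InWork, Rejected, Duplicate, Postponed) are assumptions for the analysis outcomes, and real trackers have more states with different names:

  # Allowed transitions in the simplified lifecycle described above.
  TRANSITIONS = {
      "New":      {"Analysis"},
      "Analysis": {"Rejected", "Duplicate", "Postponed", "InWork"},
      "InWork":   {"Resolved"},
      "Resolved": {"Approved", "InWork"},   # code review passes or fails
      "Approved": {"Closed"},               # integrated; integration tests pass
  }

  def move(state: str, new_state: str) -> str:
      if new_state not in TRANSITIONS.get(state, set()):
          raise ValueError(f"Illegal transition: {state} -> {new_state}")
      return new_state

  state = "New"
  for nxt in ("Analysis", "InWork", "Resolved", "Approved", "Closed"):
      state = move(state, nxt)
  print(state)  # Closed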

 

Defect Tracking in Agile Projects


Once again we’re at a place where we realize that a lot of what we’ve said so far about defect tracking is pretty heavyweight, and so we ask, “How does this apply to agile projects?” Well, first of all, we can ask ourselves which defects we want to track and when those defects occur.

 

When defects occur can be divided into before and after an iteration, and before and after the product release. Which defects matter can be divided into those the customer cares about and those the customer doesn’t. Let’s break these down.

 

Defects that are found before the end of an iteration or a sprint are ones we can easily fix. These will normally be found either via unit test failures, during peer code reviews, or by the customer when they’re testing an intermediate product build.

 

These defects are typically fixed immediately, or if they uncover some other problem, like in the requirements, they can be made into new tasks that are added to the product or sprint backlog.

 

Defects that are found after the end of an iteration or sprint, but before the final product release, should probably be made into new tasks that must be added to the backlog for the next iteration. These defects can also lead to refactoring or new tasks that reflect changing requirements.

 

Defects that are found after product release are all errors that customers find and report. Here the decision of whether to fix them depends on whether the customer cares about the error or not.

 

If the customer does care, then the error should be tagged and tracked, added to the product backlog, and fixed in a subsequent release of the product. If the customer doesn’t care, then just ignore it.

 

This leads us to the problem of who fixes defects found in the product code.

 

If the error is found during development (during an iteration or a sprint and before product release), then the development team, in consultation with the customer, should decide whether the error should be fixed.

 

If yes, then the development team should fix it by making it a task and adding it to the backlog for the next iteration or sprint. If no, then everyone just moves on.

 

If the defect is found after the product release, then it’s likely that the development team has moved on to another project and may even have dispersed into several projects.

 

This calls for the creation of a separate support team whose job it is to evaluate and fix errors in released code. Ideally, people on this support team will rotate in and out from the company’s development teams so that some institutional memory of the project is present on the support team.

 
