The Mechanics
I was not terribly surprised when I was asked (and later pressed to elaborate on) how to do a code review. As I alluded to earlier, this is supposedly a well-understood process, yet one that almost no one has actually done. It took a lot of brave effort, in the friendly gathering of Sputnik, for someone to actually admit they didn't really know how code reviews work.
Cool, so let’s talk about the mechanics of code review.
I have worked in a variety of software industries, ranging from games to military flight simulators to aircraft navigation systems, and on to boring stuff like bank statement report generators and configuration management tools. In each experience where code review was part of the development life-cycle, there was success, of varying degrees to be sure, but each project reaped benefits. And notice, I did say "success": each project gained through the use of code review and none suffered. More importantly, however, each team member grew in skill, in capacity, in proficiency, in creativity, in interpersonal skills, and in modesty.
For the description of the mechanics, I'm going to draw on my most successful work experience, where a large group of young engineers incorporated code (and design) review as part of the daily diet. Yep, that's daily. Imagine a process where you meet (eek!) nearly every day and it saves time!
Here's how it works in summary: someone (in our case, often two someones) writes a small pile of software according to some documented set of requirements (the output of a previous design review), compiles it and performs some initial testing, then prints out paper copies (this is very important) of the source code for each reviewer and gives the reviewers a day or two to review it. At a scheduled meeting, a small group of multi-talented individuals gather and point out problems, list things to be changed, ask and answer questions, and ultimately (by iteration of code-test-and-review) the implementation is approved according to a well-defined set of criteria. Sounds simple enough. It's repetitive, and it gets very mechanical (and that's a good thing).
The Players
The Implementor: This is the individual whose software is being reviewed. I'll set aside the obvious code-ownership question for the moment.
The Reviewer(s): Typically this is more than one person, the number varying depending on the software's content and its overall importance. The number of reviewers also varies with how many iterations a piece of code has been through; generally it drops as the software matures. Reviewers are programmers, designers, managers, senior and junior people alike.
The Software Manager: In game parlance, that would be either the Lead Programmer or the Technical Director, whose responsibility it is to track and schedule code development and the review process. Note that the Software Manager is not the sole reviewer, though they tend to be involved in most reviews.
The Materials
The Source Code: This is the source code, batch files, grammars, external references (books, technical papers), test cases, anything relevant to the application’s functioning, creation, or validation. And it needs to be distributed on paper. Yes, kill a few trees, save your project (that’s from a tree-loving Oregonian who knows a renewable resource when he sees one).
A Red Pen: Each reviewer must use a red pen to mark corrections on paper. Legible, pertinent comments are the goal since a meeting (ha!) is not always required for an effective code review, but good ink is.
The Design Requirements: More properly categorized as approval criteria, design documents that specify requirements are essential for the implementor and the reviewers. They define what the implementor is to create (or should have created). Without the goal well defined for each participant, there will be no consensus on whether a piece of software does all of what it needs to do.
The Approval Criteria: This consists of coding standards, a list of abbreviations, documented coding idioms, validation tests, and possibly a Software Quality Assurance Manual. As I'll point out below, how pedantic you want to be is up to each group, but regardless, there must be a well-understood set of criteria against which the reviewers can test the software.
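To make that concrete, here's a minimal sketch of what a single entry in such a criteria document might look like, expressed as C source. The idiom, the status codes, and the names (copy_buf, Status) are all invented for illustration; they aren't from any project mentioned here.

    /* Excerpt from a (hypothetical) documented coding idiom:
     * every public routine validates its arguments and returns an
     * explicit status code, and callers must check that code. */
    #include <stddef.h>

    typedef enum { STATUS_OK, STATUS_BAD_ARG } Status;

    /* From the abbreviation list: "buf" is the approved short form
     * of "buffer", "len" of "length". */
    Status copy_buf(char *dst_buf, const char *src_buf, size_t len)
    {
        size_t i;

        if (dst_buf == NULL || src_buf == NULL)
            return STATUS_BAD_ARG;   /* idiom: validate, don't crash */

        for (i = 0; i < len; i++)
            dst_buf[i] = src_buf[i];

        return STATUS_OK;            /* idiom: explicit success */
    }

With something like this written down, a reviewer with a red pen has an objective rule to check against instead of an argument about taste.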
When To Schedule A Code Review
Anyone can schedule a code review. Generally it's at the behest of the Software Manager. But when is that?
Typically, we found that 500, 1,000, or as much as 2,000 lines of new code marks a good time for a review. The initial review needs to be meaty enough to represent all the complexities of the software component under development. At the same time, the team needs to be careful that the approach used in the code is scrutinized before it becomes too large and difficult to change. This is an iterative process, so take it in small bites. How much code to review during each iteration will vary by group, by subject, and by the skill level of the implementor.
It was common during my most successful project for each programmer to have some subset of their software under review at least once a week. Two weeks between reviews wasn’t uncommon for any particular software unit. But at three weeks, the implementor themselves would be screaming for a review. Getting good, timely feedback is essential to the entire process and coding in isolation for too long is dangerous.
Another scheduling consideration is the completion of functionality. As pieces come on line, don't delay their review. Verify and approve software as complete as often as you can -- everyone gets a boost from small successes. We found, also, that people tended to stall out and fail to move on to the next task until they were reviewed and approved.
If an individual seems to have a particularly sticky algorithm, can’t solve a problem, can’t decide an implementation approach, or has a history of defect creation -- schedule them more often. The review can be used to solve problems as well as to identify them.
As a software unit matures, meaning that its functionality is nearing completion, test suites are firming up, and the software has been reviewed a number of times, we found that reviews occurred more often with the benefit that the review time dropped dramatically. Also, with each iteration, the number of reviewers necessary to get good coverage declined. A final review might consist of the implementor and the Software Manager putting a checkmark on a schedule sheet.
Who Should Participate In The Review
Certainly the implementor and often the Lead Programmer (a.k.a. Software Manager) should participate in the review, but also a selection of other programmers. Seldom does any piece of software require the entire programming team. It depends on the scope and importance of the unit being created.
The Software Manager should attempt to get enough skill and experience coverage to recognize and discover software defects. He should also schedule junior programmers as reviewers so they learn during the process. Senior and skilled individuals are encouraged to expand their knowledge, to pick up new methods, and to teach.
The designer or author of the requirements is an excellent participant in the earliest stages of review as the initial implementation often ferrets out design flaws or deficiencies. It is not uncommon that a design is rejected and sent back for refinement in the first review.
One of our underlying goals in scheduling review participants was to provide insurance, not just of software quality, but against the case where a programmer leaves the team. Hence, we often assigned a software unit to a primary implementor and a second programmer. The primary was responsible for the software and for partitioning the unit so two people could write components of the same unit. Thus, with constant review, there is no piece of software that only one person understands. If not a second programmer, then we would assign a primary reviewer who would participate in the unit's development until its completion. Either way, the second programmer or the primary reviewer could pick up where the dearly departed programmer left off with little delay.
No piece of software should be comprehended by only one person. There is no implementation mystery big enough that it cannot be described and comprehended by at least two people within an effective review process. Lights should start flashing red if the phrase “only Guru A knows how that works” is uttered during the project.
The Code Review Meeting
Schedule the code review meeting for one or two days after handing out the source code to each reviewer. Make sure each reviewer has a recent copy of the design requirements of the software unit being reviewed. They will need ample time to comprehend and inspect the work. And, as always, start the meeting on time.
Our meetings were always driven by one of the reviewers, the primary reviewer, and not the implementor. The primary reviewer was tasked with keeping the review focused, making sure that all issues were sufficiently covered and that each reviewer and the implementor had the opportunity to state opinions, facts, observations, what-have-you. The primary reviewer was also charged with the authority to resolve any disputes, should there be any.
The meeting would start with the primary reviewer asking the implementor if there was anything they wanted to point out or highlight about the unit under review. Then there was usually a request for general comments from any of the reviewers. Oftentimes there would be a change that needed to be made throughout the unit, so it would get dealt with up front and only once.
Eventually, the review participants would go through the software, page by page, routine by routine, and note any deficiencies or defects or concerns. The implementor would, on a clean copy of the source code, mark down any changes or comments in red. This was the master copy of the reviewed unit which served as the guide for any repairs.
Oftentimes only the first pass required a routine-by-routine inspection. In later iterations, the master copy of the source code would be used as a guide to check off and verify that all changes were successfully completed by the implementor. Verifying (and just recalling) the changes is extremely important. For each meeting, a new master copy listing required changes would be generated.
We seldom took meeting minutes and let the master copy serve that purpose. Be sure to date the master copy and note who the reviewers were. We kept the master copies in software folders (an Uncle Sam requirement) and used them as reference. All the reviewers' copies were shredded, but the recycle bin is probably just as good. Disposing of out-of-date material is simple but extremely important housecleaning.
If there were action items as a result of a review, either the implementor or the primary reviewer would forward them to the Software Manager. These might include changes needed to the Design Document, changes needed to other software units, or just about anything.
The meeting would typically conclude with the implementor suggesting how long it would take to make changes, whether new components would be considered in the next review, and whether any further reviews were necessary at all.
Mold The Method to the Talent
During my soapbox moments, I have been asked if junior programmers should review the code of a senior programmer. Yes. And the opposite? Yes. Should managers be involved? Yes. Besides being focused on bug hunting, the review process is also a concentrated learning and teaching session which has longer-lasting effects than a seminar or classroom course. Managers and senior people need training and reinforcement as well, so open the box and expose the software. Egos will fade, modesty will reign, and everyone will grow and benefit.
Avoid having the same people review the same person. Mix it up, use a round robin process, and crossbreed the skill sets.
As time goes by, the strengths and weaknesses of each reviewer (and each programmer) will become apparent. Some reviewers are good at picking on identifier names, others on initial-case problems, missing test cases, missing or misspelled words in comment blocks, what-have-you. Recognize these talents and blind spots and employ them, or address them by crafty scheduling.
So how picky do reviews get? You might have noticed I mentioned “identifier names” and misspelled comments. That sounds scary. This sounds like a lot of work. How could this possibly save time? Egads!
That’s where it becomes quite obvious that each development team or company must decide what is important to scrutinize in the review. It is utterly necessary that each programming team grow and apply their own standards. If you don’t care about function names, don’t. If you don’t care about copyright headers in source files, then don’t. If you do care about a “default” clause for every “switch” statement, then make that part of your coding standard. The point is that as a group, you define your own expectations, make them clear, document them, and then live up to them. As a group you can change and alter your level of quality and how much you can ask of your team. Tailor the expectations to the talent you have and conscientiously decide what your approval criteria will consist of.
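To pick on the one concrete rule above: a team that decides every "switch" needs a "default" might document the rule with a snippet like this. It's a sketch only; the enum and the handler functions are invented for illustration.

    /* Standard (illustrative): every switch carries a default, even
     * over an enum, so a newly added enum value can't slip through
     * silently. */
    enum Weapon { WEAPON_SWORD, WEAPON_BOW };

    void swing_sword(void);
    void loose_arrow(void);
    void log_error(const char *msg);

    void fire_weapon(enum Weapon w)
    {
        switch (w) {
        case WEAPON_SWORD:
            swing_sword();
            break;
        case WEAPON_BOW:
            loose_arrow();
            break;
        default:
            /* The reviewer checks that this case exists and does
             * something sensible, not merely that it's present. */
            log_error("fire_weapon: unknown weapon type");
            break;
        }
    }

Once a rule like this is on paper, checking it is mechanical -- which, again, is a good thing in a review.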
My recommendation (and druthers) is that everything in the source code be open to scrutiny. If you can't justify a 3,000-line function, its name, or the "goto" you used, then those problems need to be fixed. If your reviewers can't understand an algorithm, that's a good indicator that a different approach is needed, even if it seems to work.
One of our touchstone gauges was “Will a maintenance programmer, 5 years from now, someone you will never meet, be able to fix or enhance this module based on the source code and design documents?” If not, the source code is “broken” and needs to be inked in red and reviewed again during the next iteration.
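As a sketch of what tends to pass that five-year test, here's the sort of routine header we're talking about. The routine, clamp_heading, and its comment format are hypothetical, invented here for illustration; the point is that the "why" travels with the code.

    #include <math.h>

    /*
     * clamp_heading -- normalize a compass heading into [0, 360).
     *
     * Why: downstream navigation math assumes headings never go
     * negative; see the coordinate conventions in the design doc.
     * Input:   heading_deg -- heading in degrees, any finite value.
     * Returns: the equivalent heading in the range [0.0, 360.0).
     */
    double clamp_heading(double heading_deg)
    {
        double h = fmod(heading_deg, 360.0);   /* result may be negative */
        if (h < 0.0)
            h += 360.0;
        return h;
    }

If a reviewer can't reconstruct the intent from a header like that, the module gets inked in red -- and that's the touchstone doing its job.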
Another utterly important aspect of code review is the responsibility reviewers have toward the process. It requires commitment, but it also requires flexibility. The intent is to prevent defects and to recognize them, not to set up roadblocks.
Another adage we followed was "If you can't offer a better alternative, then don't offer one." In other words, be selective about what you can't live with -- just because you don't like the approach used doesn't mean it's not valid. Though you are responsible as a reviewer for noting problems, if the software functions sufficiently, that's good enough. Don't waste time making every piece of software perfect. We're writing games, not landing airplanes.
Here's another healthy realization: keep in mind that with five programmers, for example, there will be five or more excellent solutions to any given problem, none of them alike. In that case, 80% of the software in your game will not be written the way you would have done it. Lose four out of five arguments and have faith in your fellow programmers' ability to meet the design requirements. The code review is meant to guarantee successful completion of the coding task, not to define or write the code; that's the responsibility (and authority) of the implementor.
With time, every group I've been involved with has developed its own methods and standards of quality. This happens quickly. Each group also eventually tunes itself. After reviewing Programmer B's work for 6 months, you'll understand their coding method quite naturally, and your comprehension of their work will increase with reduced effort.
After more time, a group tends to home in on a "team method" or style of implementation. In other words, a solution written by one programmer will use the same style and structure any other programmer on the team would have used -- even down to the same identifier names. It is a bit eerie when the team consistently uses the same names for similar purposes in different software units, but that convergence is a mark of success. The defect count will drop dramatically.
________________________________________________________