
Coding Smart: People vs. Tools
DONN M. SEELEY, WIND RIVER SYSTEMS

Tools can help developers be more productive, but they’re no replacement for thinking.

Cool tools are seductive. When we think about software productivity, tools naturally come to mind. When we see pretty new tools, we tend to believe that their amazing features will help us get our work done much faster. Because every software engineer uses software productivity tools daily, and all team managers have to decide which tools their members will use, the latest and greatest look appealing.

Software engineers and managers often undervalue the “mental toolkit” we use every day. Your brain can have a far bigger impact on productivity than any tools you can buy or download. Programming languages, compilers, debuggers, integrated development environments (IDEs), static code analyzers, dynamic code analyzers, performance monitors, test frameworks, project management systems, and defect tracking systems: All are fine and useful, but without clear goals and effective work practices you might as well be pounding rocks together.

I have observed that our community spends more time and money on software tools than on training and developing skills. If I reflect on my own experiences, however, a lot of the most useful skills I picked up over the years were not taught in school or at corporate training sessions. Many of them are so simple that they should be obvious choices, but I am continually surprised at how many engineers and managers haven’t caught on to them. I wonder how many other “obvious” skills I’ve missed out on.

SETTING CLEAR GOALS

Early in my career, I wrote an accounting package in Icon, a wonderful new tool that I had just discovered and that had many features not available in Fortran, C, or the other languages familiar to me. My manager had asked me to write a program that pulled together billable connection time, CPU usage, disk consumption, and other numbers—and to summarize them with a charge-back value. He didn’t specify a programming language, so I decided to use Icon.

As you might imagine, the project took longer than I expected. I learned a lot about Icon, although that wasn’t a priority of my manager’s (in fact, he had no idea I was using it). Because Icon had an interpreter, building and testing my code was faster than it would have been with C, but I was so much slower at writing Icon than C that the improvement in build time didn’t make much difference. Also, Icon’s features for processing tokens were more advanced than those of C, so I spent a lot of time writing and rewriting code as I learned new Icon idioms. When I finished the project, I was the only person in the office who knew Icon—and I knew it only reasonably well. I was also the only person who could maintain the software package. When I moved on, the software couldn’t be fixed or upgraded, and someone else would have to write a new accounting package. Icon was a fine tool, well suited to the task I gave it, but the result was decreased productivity.

The problem? I had the wrong goal. My main goal should have been to write software that my group could maintain, not to learn Icon. Because I never used Icon for a subsequent project, in the end this accounting package was basically a big waste of time for both my manager and me.

There are a couple of lessons here. The obvious one is that if you have the wrong goal, it doesn’t matter how cool the tool is. The less obvious, but equally important, lesson is that you write code for other people—not just for compilers.

I spend approximately one percent of my coding time designing and writing original code from scratch and easily spend the remaining 99 percent fixing existing code. Code maintenance is the biggest chunk of my coding time; to increase my productivity substantially I need to fix bugs faster or deal with code that has fewer bugs. If the code I’m fixing wasn’t written so that other people could understand it, then I’m likely going to spend a lot of time on it. Thus, whether I write code from scratch or fix existing code, my main goal must be to reduce my maintenance load. If this is not a goal, then the best productivity tools in the world will only waste my time more efficiently.

Of course, poorly chosen or unclear goals can reduce productivity in other ways as well. Some wrong goals are more likely to trip up managers, whereas others have more impact on engineers, but the effect is the same: it isn’t really code productivity if the code will never be used or if it leads to maintenance nightmares.

USING EFFECTIVE WORK PRACTICES

As an undergraduate in computer science, I had an interesting style of programming. I would take my assignment over to a keypunch machine (an antique even in those days, though I didn’t know it) and start punching away somewhat aimlessly. The operator would take my deck and run it through the Algol or Pascal compiler and hand me a long list of errors. I would fix some of the errors and try again and again until the program compiled. When it more or less ran, I would try to figure out what it was doing wrong and fix it. This usually took a very long time and consumed many punch cards. A lot of trees died in the process.

I had rotten tools and a rotten programming style. Throughout my student career, however, my productivity tools gradually improved. I started coding by sitting in front of a “visual” editor and typing somewhat aimlessly into a source file. I would run the Pascal or C compiler and repeatedly fix the errors until the program again basically worked. Then I would hack around, adding features and fixing bugs, building and rebuilding. I would add lots of print statements to show me which parts of the program weren’t working.

As far as I knew, in those days all of my friends used exactly this coding technique, which I call “hack-it-until-it-works.” It was very time-consuming (i.e., low productivity). The finished code often contained nasty latent bugs. New tools let me hack faster, but my code was still atrocious.

As my skills changed, my code became less atrocious. Through trial and error, observation, and imitation, I somehow managed to pick up some good work practices. Most of them are rather simple and can be performed without cutting-edge tools; this has come in handy when I’ve been forced to use other people’s tools. Some were suggested to me in school, though I ignored those at the time. The majority were learned on the job while observing people whom I respected.

After a while, I discovered something remarkable. My programs had far fewer compilation errors. Small programs sometimes worked correctly the first time—this was a shock after all those years of hack-it-until-it-works. When I later returned to these programs, I actually understood how they worked. And when I tried fixing other people’s programs, the fixes were successful and the code was readable. I’m still not the best programmer in the world, but my productivity has skyrocketed since the early stages of my career.

Here is a list of some of the good work practices I picked up throughout my career, presented in a rough, decreasing order of importance. What works for me might not always work for you, but some of these practices are so easy and obvious that I’m often baffled when I discover that my colleagues don’t use them.

1. Keep full development notes and debugging logs. I use lab notes extensively. Now that this practice is ingrained, I can’t imagine how I survived before. My notes are all online, both because I can search them with pattern-matching tools and because I type faster than I can write. This practice gave me the single biggest boost to my productivity, reducing the amount of time spent trying to remember what I did a year ago/a month ago/a week ago/yesterday. Always record your breakthroughs and dead ends so you won’t have to reinvent them. Lab notes are an amazing productivity tool that many good programmers never use.

Closely related to lab notes is the debugging log. You can keep track of debugging notes in a variety of ways: Annotate your debugging experience by hand; cut and paste debugging material into a lab-notes file; run a line-oriented debugger inside a scripting environment; or do all three! With logs, you should never need to re-create a debugging situation just to see the same data, and you should be able to search the logs for particular names or values later.
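
One concrete way to capture such a log (the program and file names here are invented for illustration) is gdb’s built-in logging facility, which copies an entire debugging session into a file you can search later:

$ gdb ./myprog
(gdb) set logging file hang-debug.log
(gdb) set logging on

From that point on, every command you type and everything gdb prints is appended to hang-debug.log, ready to be searched with the same pattern-matching tools as your lab notes.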

2. Use outlines. High-level structure is not an emergent property of code. The probability of high-level structure appearing in your code if you don’t start with it is similar to the probability of monkeys banging on typewriters reproducing Shakespeare. Bitter experience has made me a top-down programmer; I now always sketch out high-level organization and data structures before attempting to attack any coding project. But I still frequently see code that was clearly hacked until it (sort of) worked.

A former colleague once attacked a significant operating system problem by directly editing each source file in turn, adding code to manipulate data structures and map data. He fixed all the compilation errors and started debugging. The new code smashed his disk contents repeatedly. After a month, he wasn’t getting any closer, so he threw out the old “prototype” and started over, using exactly the same programming technique but learning from experience. The newer code smashed his disk contents repeatedly, although less often than the old code. Debugging the newer code was still difficult and frustrating. After a while, he also set that code aside and went off to do something else. His code never became part of the system. His net productivity was zero for the entire time he was working on this project.

An outline would have helped in this situation. Here are the basics needed to create one:

For new code

  1. Settle upon algorithms for the new functionality.
  2. If helpful, prepare a high-level flowchart or a state machine.
  3. Give names and layouts to new data structures.
  4. Identify new programming interfaces (code).
  5. Collect new programming interfaces and put them into functional groups.
  6. Look for opportunities to use common code.
  7. Look for opportunities to simplify the programming interfaces.
  8. Produce a checklist of new source files.

For old code

  1. Identify functional units requiring modification.
  2. Skim through the old code and find key paths.
  3. Specify changes to old data structures and interfaces.
  4. Check for compatibility issues.
  5. Create a checklist of source files needing modification.

All of this work should take place before you add or modify a single line of code. With the outline in hand, you will have not only the high-level structure of the code in front of you as you begin coding, but also a concrete goal to work toward.

Notice how outlines produce a sort of fractal expansion into more outlines. Eventually these outlines will start to look like code. That brings up the subject of pseudocode.

3. Write pseudocode. I’m a pretty decent programmer; I understand a number of programming languages forward and backward. But programming languages are not natural languages—and I think in a natural language. Pseudocode is an intermediate step between my natural language and the target programming language. It helps me organize my thoughts so that they’re easy to read and convert to a programming language. When I’m working on someone else’s code and having some trouble understanding the logic, I convert the code into pseudocode to test my understanding and improve the readability. Even tiny projects benefit from pseudocoding—“Aha, so that’s how it works!”

I know very few other engineers who use pseudocode, so here are some basics:

High-level pseudocode

allocate a new element for the given priority
insert it into the global list in numeric order by priority

Low-level pseudocode

allocate a new element
if we fail,
   report an error
initialize the priority field
initialize the thread queue
loop over the global list
   if the priority of an element exceeds the new priority,
      stop
if we didn’t find a higher-ranking element,
   insert the new element at the end of the list
otherwise
   insert the element before the higher-ranking element

C code (using <sys/queue.h> macros)

if ((new_prioq = malloc(sizeof (*new_prioq))) == NULL)
   return (-1);
new_prioq->priority = new_priority;
STAILQ_INIT(&new_prioq->threads);
TAILQ_FOREACH(prioq, &prioq_head, chain)
   if (prioq->priority > new_priority)
      break;
if (prioq == NULL)
   TAILQ_INSERT_TAIL(&prioq_head, new_prioq, chain);
else
   TAILQ_INSERT_BEFORE(prioq, new_prioq, chain);
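
The fragment above is not self-contained; it assumes declarations roughly like the following (these structure names and layouts are my own sketch, for illustration):

#include <stdlib.h>
#include <sys/queue.h>

struct thread;                          /* defined elsewhere */
STAILQ_HEAD(thread_list, thread);       /* queue of waiting threads */

struct prioq {
   int priority;                        /* priority level of this element */
   struct thread_list threads;          /* threads queued at this priority */
   TAILQ_ENTRY(prioq) chain;            /* linkage in the global list */
};

static TAILQ_HEAD(, prioq) prioq_head = TAILQ_HEAD_INITIALIZER(prioq_head);

The variables prioq and new_prioq (struct prioq pointers) and new_priority (an int) would be declared in the inserting function itself.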

You don’t have to write complete sentences, but your pseudocode should let you read and understand the function you’re coding in a comfortable idiom, and it should map closely to statements in the target programming language.

4. Study prior to work. Someone else has probably worked on your problem before. Maybe even you.

This practice is the opposite of “not invented here.” If someone did it before and did it reasonably well, it’s a waste of time and productivity to do it again. The world doesn’t need another hashed database access routine, and your manager doesn’t want to see you writing it or debugging it. Even if you decide that you would have done the same job somewhat differently, you will still benefit by knowing what other implementations are out there and how they work.

Check local documentation. On my system, I can run:

$ man -k hash
[...]
hash (3) - hash database access method
[...]
$

Check the Web. Ask Google about extensible hash functions. Refine the search if you get too many hits. The Web can be an amazing productivity tool. Read manuals and use online help. For tough problems, read journals, proceedings, and papers. Many journals and proceedings are online, too. Don’t reinvent, reuse.

5. Use code review. Compilers can tell you whether code uses correct syntax and sometimes whether you’ve used constructs that are signs of common errors. What a compiler can’t tell you is whether the code does what you meant it to do. That’s what reviewers are for; they provide a second pair of eyes to read through your code. When your code has been in front of your face for a while, you lose perspective. Some organizations now use formal inspections, which is wonderful, but even a simple read-through by a friend can catch loads of goofs. Humility is a useful skill to develop when subjecting your code to any review, as you inevitably will be amazed at how many things you can screw up. It may be impractical to have absolutely every line of code reviewed, but it’s essential to submit at least your high-level design and critical paths.

I recently wrote some memory management unit (MMU) support for a new model of a supported processor. Formal inspection turned up 32 defects. Most of the defect reports requested comments or other cosmetic changes that would make the code easier to read. This feedback is really useful: until then, no one had laid eyes on this code except me. A compiler would never have come up with a defect list like this. Of course, I’ve never been embarrassed when a compiler found defects in my code, but I had to learn not to be embarrassed when a reviewer found problems. Not taking this personally really pays off. By the time my code is built and tested, it will have far fewer defects than if it had not been reviewed, and someone else will be able to understand and maintain the code.

6. Use revision control. Revision control is recognized as such a virtue that almost every project uses it today. It’s a great example of how good tools really can make a difference. Many people, however, still don’t use revision control effectively. In my experience, the worst problems are created by engineers who don’t want to check in their code until it’s perfect and by those who resist making informative log comments. The first instance is a version of the backup problem: People who don’t make frequent backups are putting their productivity at great risk because they may have to re-create their work. The second instance is almost as bad: If you can’t figure out why someone made a change that broke a program, you can’t fix it.

Here’s a bad revision-control log entry:

resolved SPR 78560

And here’s a better revision-control entry:

Fixed a problem in malloc() that caused it to return storage
that was not naturally aligned. Resolves SPR 78560.
Reviewed by Chris Torek.

7. Use good and consistent coding style. The best coding style is the one used in the rest of the project, because it ensures that the people who will have to fix the code can easily read it. It took me years to shake the conviction that my personal, idiosyncratic coding style for C was so wonderful that I could use it for everything. After all, the compiler couldn’t care less about my coding style. The problem was, other people tripped over my code. It was slow to read, which reduced their productivity. I now try to mimic precisely the coding style used by the program I’m fixing or—for original coding—the style used by my software team. When you code, you’re writing for an audience of people, not computers; otherwise, you might just as well write in binary.

No surprise, I still have some obsessions about coding style. High on my list is that people should do their best to write code everyone can read directly, rather than code that requires an explanatory comment on every line. Sometimes that is impossible: you’re writing PowerPC assembly language, say, and even lots of macros can’t make it readable. If you are writing in a typical high-level programming language, however, you can choose your own names for operations (functions) and data (variables). If possible, local variables that serve the same purpose in different functions should have the same name, not just within the same source file but across the entire project. Productivity goes down if people have to scratch their heads when they read your code.
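
As a tiny, contrived illustration (the code is mine, not from any project): in both functions below, resid means “bytes remaining” and buf means “the caller’s data,” so a reader who learns the names in one function can read the next one for free.

#include <stdio.h>

static void
dump_bytes(const unsigned char *buf, size_t len)
{
   size_t resid;                /* bytes remaining to print */

   for (resid = len; resid > 0; resid--)
      printf("%02x ", buf[len - resid]);
   printf("\n");
}

static unsigned int
sum_bytes(const unsigned char *buf, size_t len)
{
   size_t resid;                /* same name, same meaning */
   unsigned int sum = 0;

   for (resid = len; resid > 0; resid--)
      sum += buf[len - resid];
   return (sum);
}

int
main(void)
{
   static const unsigned char data[] = { 0x12, 0x34, 0x56 };

   dump_bytes(data, sizeof data);
   printf("sum = %u\n", sum_bytes(data, sizeof data));
   return (0);
}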

This list is somewhat abridged, as I’ve knocked a dozen items off it, but you get the idea. In every case, tools can help, but without a commitment to good practices from the people using the tools, the tools can be ineffective or even useless.

WHAT TOOLS CAN AND CAN’T DO FOR PRODUCTIVITY

After all this talk about what people can do to improve their own productivity, what can I say about the best way to use tools?

1. The best tools are those you use effectively. Regardless of how many tools you have or how very cool they are, the best tools are the ones you and your team use effectively. With a complex tool, most people use only a subset of its features, but it’s the effectiveness of that subset that counts. (Unused features are just a waste of the tool designer’s productivity.) Many tool sets are perfectly adequate for the tasks that programmers do, so adding features probably won’t change how programmers work.

Let’s look at one of the best and easiest-to-use tools: compiler warnings. The compiler that I currently use can produce a variety of warnings, some more relevant than others. Many programmers, however, are so annoyed by false positives, or by the code changes needed to make particular warnings go away, that they disable all warnings. I have purchased and downloaded various pieces of source code that produced voluminous warnings when compiled, but only a handful of those warnings ever pointed out serious problems in the code. Compiler warnings, however, are an effective tool only if you leave them turned on. My feeling, which has changed over the years, is that good compiler warnings are so valuable that it’s worth programming defensively to avoid the false positives. I changed a practice of mine and as a result increased the effectiveness of one of my principal tools.
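
To make “defensive programming” concrete, here is the sort of change I mean (the fragment is my own, not from any particular code base). With gcc’s -Wall, an assignment used as a truth value draws a warning because it is a classic typo for ==; rewriting the condition with an explicit comparison keeps the warning enabled and makes the intent plain:

#include <stdio.h>

int
main(void)
{
   int c;

   /*
    * "while (c = getchar())" would draw a -Wall warning, since an
    * assignment used as a condition is a classic typo for "==".
    * The explicit comparison states the intent, so the warning
    * stays turned on and this code still compiles cleanly.
    */
   while ((c = getchar()) != EOF)
      putchar(c);
   return (0);
}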

I was once asked to port an enormous program and encountered some obnoxious compilation difficulties in a module. That module was written in C++ and made heavy use of templates, which were a cutting-edge feature at the time. The code was almost unreadable. The programmer, clearly more sophisticated with C++ than I was, probably had experience with a number of its more peculiar features. My C++ compiler and debugger weren’t current enough to handle the new template syntax, however, so I was forced to upgrade both of them. I was then able to build the program, but the module failed. I spent a lot of time trying to figure out whether the compiler or the code was at fault. The code didn’t really need the cutting-edge features, but the features were cool and sophisticated. My productivity went way down because I was forced to live on the cutting edge. Although you can argue about which of the myriad evolving features of C++ are worthwhile, the fact remains that using cutting-edge features in a production program is always a really bad idea. The new features were simply not effective for me.

Sometimes you are forced to make effective use of tools that you didn’t choose. I’m a senior programmer, so I’m lucky to be able to specify a lot, but not all, of the tools that I use. Most of the time when you get involved with a new job or a new project, the team members already know which operating system, programming language, compiler, debugger, etc., they will use. Someone else has already chosen your tools for you, so you need to make those tools effective for you in your work. Of course, you can always recommend better tools to your manager, but if you are waiting for better tools to arrive, your productivity is nil.

It’s possible, however, to go too far the other way. A team or its management may constantly upgrade to the latest and greatest tools in the belief that new features increase productivity. If the programmers never become familiar enough with each new tool, the new tools may not be effective.

2. How much do bad tools really hurt? A few tool sets are actively evil. The classic Ed editor error message was one example: It printed ? for virtually every user error. A good tool should promote novices’ learning and yet also be extremely efficient for experts. I have seen many examples of bad tools that leaned too far in either of these directions. Some tools were so concerned about holding my hand that I was never able to use them efficiently. I find that word processors frequently offend in this way. Other tools are so focused on expert usage (or usage by the program’s author) that they are completely opaque to me. In either case, I am unlikely to end up adopting the tool, and if forced to use it, I will probably never be efficient with it. Such tools hurt productivity.

Some tools are simply broken in some technical way—they don’t work as advertised. Those tools waste productivity because of the time you spend writing bug reports (or shouting at the screen). When an editor crashes every day, leaving you without the last hour of work, your productivity goes way down (and you look for a new editor). A system that I worked with had a translucent filesystem feature, which was very convenient for building releases. (Using a translucent filesystem is like drawing on tracing paper: It provides a view of a read-only filesystem that you can add to and change without affecting the underlying data.) It worked most of the time, but very occasionally botched things badly. We had to re-create data to recover from the errors, which took a long time. The productivity improvement from the feature was outweighed by the productivity drop from its bugs, so we abandoned it for our release production system.

My general experience has been that truly bad tool sets are weeded out fairly quickly. Software engineering teams are pretty good about letting members know about bad tools. Sometimes a job requirement makes it impossible to avoid a bad tool. In that case, you must develop strategies for coping with the tool because you will lose even more productivity by (passive-aggressively) refusing to work with it.

Far more often I find myself dealing with mediocre tools rather than actively evil ones, and I can generally find ways to work reasonably efficiently with them. Therefore, my answer to whether bad tools hurt is, “Yes, but...”

3. Can good tools hurt as well as help? This is a tricky question. If a tool really is good, then in theory it shouldn’t hurt your productivity. Sometimes a tool is good at what it does, but it’s not good for the people who must use it. One tool may run fast but cause users much grief preparing data and interpreting errors, whereas another tool runs slower but is easier to use. Productivity is measured by how much work your team gets done, and that includes people time. People time is usually more expensive than computer time, and with an awkward tool the people time may dominate the tool runtime to such an extent that the tool’s speed is irrelevant.

Sometimes a good tool can be used ineffectively. Replacing an old mediocre tool with a fancy new one will cause productivity to drop while your team masters the new tool. If that productivity loss isn’t recovered over the project’s lifetime, the new tool may be a net productivity loss, even if it is a good tool. Emacs is better than Ed, but if you ask your team to switch editors in the middle of a project, you may miss your deadline.

Some features of new tools may look good but are actually ineffective. For example, a new IDE that reduces compile/debug/edit turnaround is optimizing the wrong problem. Programmers whose time is dominated by the compile/debug/edit cycle are using inefficient work practices—they need to use practices that reduce the number of defects before they reach the compiler. By the time you reach the debugger, the battle for productivity may have already been lost. Similarly, a visualization tool may produce really cool plots, but if the user doesn’t understand and internalize the view, the tool is ineffective. (See Edward R. Tufte’s famous The Visual Display of Quantitative Information, 2nd ed., Graphics Press, 2001.)

In summary, good tools can hurt if you don’t take into account how people use them.

4. Why are people always asking for more and fancier features in tools? In part, it’s a cultural issue. We like cool toys. That’s part of the reason why we do what we do. In part, it’s a marketing issue. Companies have to improve their products so they can continue making money on them. Similarly for open source software, contributors constantly have to “improve” the software to demonstrate their competence and coolness. The company or open source project then markets the cool new improved software to us, and we buy it or download it. Sometimes the market resists. Witness the fact that Microsoft Windows 3.1 continued to have an enormous market share well after Windows 95 and Windows 98 came along.

5. Why do people often seem to be resistant to good work practices? I believe that we often look for new features because we want a tool that solves our problems without our having to change our practices. Developing effective work practices can take a long time. If it looks like a new feature of a tool will allow us to avoid the effort of adopting new practices, then we’re all for it. Very occasionally, a breakthrough feature will improve our productivity, but I feel the payoff is bigger and better if we improve our work practices.

People resist change. Change requires mental effort. Change requires humility. Many people have a hard time with these things. Many good work practices are easy and obvious, but we avoid them because we avoid change. Ironically, adopting better work practices may be less difficult than learning fancy new tools, yet people still don’t welcome such changes, even when they might reduce their workload.

Over time, however, and with the right presentation, people can learn to use better practices. I’m a living example. Once you get past the fear of looking stupid, trying new practices becomes easy.

ENVOI

Choose your tools wisely, but spend more effort on your people than on their tools. People are the source of productivity.

(Some of the examples provided have been modified for didactic purposes—and to protect the innocent and the guilty.)

DONN M. SEELEY is a senior member of the technical staff at Wind River Systems, where he works on embedded systems technology. He was cofounder of Berkeley Software Design Inc., the first commercial vendor of 4BSD. Seeley wrote A Tour of the Worm, one of the original papers on the Morris Internet Worm incident of 1988 (originally published online Dec. 10, 1988; first appeared in print in the Usenix Technical Conference Proceedings [Winter 1989], 287-304).

 


Originally published in Queue vol. 1, no. 6