Kode Vicious: The Return

A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Koding problems driving you nuts, ko-workers making you krazy? Never fear, Kode Vicious is here—to answer your questions, solve your problems, and just basically make the world a better place.

Dear KV,

Whenever my team reviews my code, they always complain that I don’t check for return values from system calls. I can see having to check a regular function call, because I don’t trust my co-workers, but system calls are written by people who know what they’re doing—and, besides, if a system call fails, there isn’t much I can do to recover. Why bother?

Pointless Returns

Dear Pointless,

While you may think that those people who write operating systems are programming gods and goddesses who live on a higher plane, I can tell you from close experience that you’re dead wrong. One of the first rules of programming, one that they seem to leave out of most programming courses, is TRUST NO ONE! The person who wrote the system call you’re calling has long since collected a paycheck for the code you’re working with and is, if they’re lucky, working on something else.

The fact of the matter is that return values and error checking are there for a reason. No one can ever anticipate all the possible ways in which a piece of code could be used. So, if programmers are smart, they have written down and clearly documented (hint, hint) all the possible ways in which their routines could fail, and have made sure to return a reasonable error value to the caller.

It turns out that not checking return values is a very common error in software. You’re coding along and you think, “Well, if this fails the system is horked anyway, so I might as well ignore the return value, as it’s not going to matter.” What about when you want to debug a horked system? If you give the return value a place to live, then when you’re sitting there, too tired to think at 2 a.m., staring at your debugger, you can just look at what the return value was. Otherwise, you have to add that variable, recompile the system, and reproduce the failure all over again. On systems where getting to the error condition takes hours or days, putting proper error checking in ahead of time can save you a lot of headaches.
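
To make the habit concrete, here is a minimal C sketch of the pattern being preached (the file path is invented for illustration): every return value lands in a named variable and gets checked, so the evidence is sitting right there when you fire up the debugger.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    /* Hold each return value in a named variable so that, when the
       system is horked at 2 a.m., the debugger can show you exactly
       what came back. The path below is made up for this example. */
    int fd = open("/tmp/example.dat", O_RDONLY);
    if (fd < 0) {
        fprintf(stderr, "open: %s\n", strerror(errno));
        return 1;
    }

    char buf[128];
    ssize_t nread = read(fd, buf, sizeof(buf));
    if (nread < 0) {
        fprintf(stderr, "read: %s\n", strerror(errno));
        close(fd);
        return 1;
    }

    if (close(fd) < 0)          /* yes, even close(2) can fail */
        fprintf(stderr, "close: %s\n", strerror(errno));

    return 0;
}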

As a quick aside, I have to agree with those who say that excessive error checking makes code uglier. It’s clear this problem has not been solved, because every decade we get a new way to represent an error check. Once upon a time it was if/then clauses; then we added exceptions; now most languages have try/except clauses. None of these seems to satisfy. If someone can point me to something that doesn’t make my code run off the side of my screen, I’d like to know what it is.

Back to the main point. Think of checking return codes as a form of safety belt. Sure, you may not get in an accident today, but you’ll feel really stupid if you wind up flying through the windshield.

KV

Dear KV,

I just joined a company building a large Web services platform and I’m working with its QA group. My current job is to write unit tests for the system and hook them into our nightly regression suite. A lot of the kode jockeys on the team complain that my tests are pointless and that I’m just wasting my time. These koders don’t actually write tests themselves, so how do they know? I’m getting a bit tired of being hammered on by these guys. Do you write tests or do you just code? How do you know that the test you write is good?

Testy Tester

Dear Testy,

First of all, all good koders write tests. No koders worth their paychecks would just crank out code all day without bothering to see if it worked! So, in answer to your first question, yes, I write tests. Actually, I secretly enjoy writing tests for other people’s code, as well as my own. Writing tests for other people’s code is an interesting way to learn about a system and how a coder thinks. Creating tests for my own code is simply a way of making sure I don’t look like an idiot. I hate looking like an idiot.

The real meat of your question could take far more time to answer than I have here, so I’ll give a short lesson in what I personally consider to be good test creation. Perhaps the easiest tests to create, and the ones I think most koders are familiar with, are the tests you write after someone has discovered you’ve made a slight… well, let’s call it a mistake, to be nice. You simply go through the bug database, create a test for every bug in it, hook the tests into an automated harness of some sort, and away you go.
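
For instance, here is a self-contained C sketch of such a bug-derived test. The bug number and the function under test are invented for illustration, not pulled from any real database; the point is that the test pins down the fixed behavior so the bug can never sneak back in.

/* Hypothetical regression test for "bug #1234": suppose clamp()
   once had an off-by-one that made it return max + 1. This test
   locks in the corrected behavior. */
#include <assert.h>
#include <stdio.h>

static int
clamp(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

static void
test_bug_1234_clamp_upper_bound(void)
{
    assert(clamp(100, 0, 10) == 10);   /* used to return 11 */
}

int
main(void)
{
    test_bug_1234_clamp_upper_bound();
    printf("bug-1234 regression: PASS\n");
    return 0;
}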

This is a deceptively easy way to make yourself look good in front of your boss. You have a large body of work to point to, and you can show that the product doesn’t break in the ways it broke before. It’s my belief, and I know that others disagree, that these kinds of tests ought to be written by the koder fixing the bug, not by someone outside the koding team. You broke it? You fix it! You test it! You make sure it damn well doesn’t break again, period. Unfortunately, by giving that work to the koders, I’ve taken it away from the QA team. Sorry.

I’m not exactly sure, from your letter, what problem your co-workers see in your tests, but I can tell you the things that have irked me when working with test teams in the past. The first of these is silly test syndrome (STS, for short). STS is usually caused by managers who still believe in measuring work by lines of code rather than by the quality of the work. These managers demand that everything be tested, without ever looking at what should be tested or weighing the risks. So the QA team goes off and beavers away, writing a test for every possible knob in the system, starting from some arbitrary point A and working until the product ships. Eventually the tests become so cumbersome that they take all night to run, and they rarely turn up any nasty issues. The reason this is ineffective is that the QA team is never allowed to use its brains, an important asset, to write the tests that will find the bugs that hurt the users of the system the most.

Avoiding STS requires a few things. The first is a brain, which I believe you have, because you managed to write to me. The second is a working relationship with the people designing and implementing the system: you have to be able to ask questions so that you can direct your efforts at the riskiest areas. It makes little sense to test a library interface that does something simple and well understood when there are 10,000 lines of experimental, or just plain weird, code (the company’s secret sauce) that also need testing. The last is a manager, or management chain, that understands what it means to do good testing. If you’re working for one of these LOC (lines of code) folks, you’re at a disadvantage, and you’ll have to work around them to get good tests written. After a few months, though, writing good tests will pay off, because you’ll be the person in the group finding the most, and the nastiest, issues.

The other thing that irks me, and that might be the source of your co-workers’ complaints, is random tests scattered around with no coherent organization. I’m a strong believer in good test harnesses. Make it easy for koders to write tests and they’ll write them for you; they’ll even run them before they ship the code. If your tests are sitting in your home directory and require a lot of setup, you can forget about getting any help with them. Tests should not be one-offs; they should be part of a system, just like the software you’re testing.
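
What that can look like, in miniature: one table of named test functions and one loop that runs them all and reports. This is only a sketch of the idea, not any particular framework, and the two tests here are trivial placeholders.

/* A minimal test harness sketch: register tests in one table,
   run them all with one command, and let the exit status tell
   the nightly build whether anything failed. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct test {
    const char *name;
    int (*fn)(void);            /* returns 0 on success */
};

static int test_addition(void) { return (1 + 1 == 2) ? 0 : -1; }

static int
test_string_copy(void)
{
    char buf[8];
    strncpy(buf, "hi", sizeof(buf));
    buf[sizeof(buf) - 1] = '\0';
    return (strcmp(buf, "hi") == 0) ? 0 : -1;
}

static const struct test tests[] = {
    { "addition",    test_addition },
    { "string_copy", test_string_copy },
};

int
main(void)
{
    int failed = 0;
    for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
        int rc = tests[i].fn();
        printf("%-12s %s\n", tests[i].name, rc == 0 ? "PASS" : "FAIL");
        if (rc != 0)
            failed++;
    }
    return failed;              /* nonzero exit flags failure */
}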

So, if you have STS or your tests are just bits and pieces, you know why your co-workers are complaining. In my experience the skills required to write good tests are somewhat different from, but just as worthy as, those required to write good code. For the most part it takes brains, curiosity, and a penchant for breaking things to write good tests.

KV

KODE VICIOUS, known to mere mortals as George V. Neville-Neil, works on networking and operating system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are code spelunking, operating systems, and rewriting your bad code (OK, maybe not that last one). He earned his bachelor’s degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. He is an avid bicyclist and traveler who has made San Francisco his home since 1990.

© 2004 ACM 1542-7730/04/1200 $5.00


Originally published in Queue vol. 2, no. 9




