It’s not always size that matters.
I’ve been dealing with a large program written in Java that seems to spend most of its time asking me to restart it because it has run out of memory. I’m not sure if this is an issue in the JVM (Java Virtual Machine) I’m using or in the program itself, but during these frequent restarts, I keep wondering why this program is so incredibly bloated. I would have thought Java’s garbage collector would prevent programs from running out of memory, especially when my desktop has quite a lot of it. It seems that eight gigabytes just isn’t enough to handle a modern IDE anymore.
Lack of RAM
Bugs and Bragging Rights
Waste not memory, want not memory—unless it doesn’t matter
I’ve been reworking a device driver for a high-end, high-performance networking card and I have a resource allocation problem. The devices I’m working with have several network ports, but these are not always in use; in fact, many of our customers use only one of the four available ports. It would greatly simplify the logic in my driver if I could allocate the resources for all the ports—no matter how many there are—when the device driver is first loaded into the system, instead of dealing with allocation whenever an administrator brings up an interface. I should point out that this device has a good deal of complexity and the resource allocation isn’t as simple as a quick malloc of memory and pointer jiggling—there are a lot of moving parts inside this thing.
Also, the perils of premature rebooting
An argument recently broke out between two factions of our systems administration team concerning the naming of our next set of hosts. One faction wants to name machines after services, with each host having a numeric suffix, and the other wants to continue our current scheme of each host having a unique name, without a numeric string. We now have so many hosts that any unique name is getting quite long—and is annoying to type. A compromise was recently suggested whereby each host could have two names in our internal DNS (Domain Name System), but this seems overly complicated. How do you decide on a host-naming scheme?
The Naming of Hosts is a Difficult Matter
Have a question for Kode Vicious? E-mail him at email@example.com. If your question appears in his column, we’ll send you a rare piece of authentic Queue memorabilia. We edit e-mails for style, length, and clarity.
Software is supposed to be a part of computer science, and science demands proof.
I’ve spent the past three weeks trying to cherry-pick changes out of one branch into another. When do I just give up and merge?
In the Pits
I once rode home with a friend from a computer conference in Monterey. It just so happened that this friend is a huge fan of fresh cherries, and when he saw a small stand selling baskets of them he stopped to buy some. Another trait this friend possesses is that he can’t ever pass up a good deal. So while haggling with the cherry seller, it became obvious that buying a whole flat of cherries would be a better deal than buying a single basket, even though that was all we really wanted. Not wanting to pass up a deal, however, my friend bought the entire flat and off we went—eating and talking. It took another 45 minutes to get home, and during that time we had eaten more than half the flat of cherries. I couldn’t look at anything even remotely cherry-flavored for months; and today, when someone says “cherry-picking,” that doesn’t conjure up happy images of privileged kids playing farmer on Saturday mornings along the California coast—I just feel ill.
Cherry-picking and the Scientific Method
Whenever someone asks you to trust them, don’t.
As part of a recent push to automate everything from test builds to documentation updates, my group—at the request of one of our development groups—deployed a job-scheduling system. The idea behind the deployment is that anyone should be able to set up a periodic job to do work that takes a long time but isn’t absolutely critical to the company’s day-to-day operations. It’s a way of keeping people from running cron jobs on their desktops and of providing a centralized set of background processing services.
Swamped by Automation
Is there a “best used by” date for software?
Do you know of any rule of thumb for how often a piece of software should need maintenance? I’m not thinking about bug fixes, since bugs are there from the moment the code is written, but about the constant refactoring that seems to go on in code. Sometimes I feel as if programmers use refactoring as a way of keeping their jobs, rather than offering any real improvement. Is there a “best used by” date for software?
I’ve been upgrading some Python 2 code to Python 3 and ran across the following change in the language. It used to be that division (/) of two integers resulted in an integer, but to get that behavior in Python 3, I need to use //. There is still a /, but it does something different. Why would anyone in their right mind give two such similar operations notations that are so easy to confuse? Don’t they know this will lead to errors?
Divided by Division
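For anyone squinting at the two operators, here is a minimal sketch of what each one does in Python 3; the numbers are purely illustrative:

    # Python 3: "/" is true division and always produces a float;
    # "//" is floor division, which is what "/" did for two ints in Python 2.

    print(7 / 2)    # 3.5  -- true division, returns a float
    print(7 // 2)   # 3    -- floor division, returns an int
    print(-7 // 2)  # -4   -- floors toward negative infinity, not toward zero

    # "//" also exists in Python 2 and behaves the same way there, which is
    # why upgrade guides suggest it for integer division in code that has to
    # run under both versions.

The subtle part is the last print: floor division rounds toward negative infinity, so negative operands do not truncate toward zero the way a C programmer might expect.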
One programmer’s extension is another programmer’s abuse.
During some recent downtime at work, I’ve been cleaning up a set of libraries, removing dead code, updating documentation blocks, and fixing minor bugs that have been annoying but not critical. This bit of code spelunking has revealed how some of the libraries have been not only used, but also abused. The fact that everyone and their sister use the timing library for just about any event they can think of isn’t so bad, as it is a library that’s meant to call out to code periodically (although some of the events seem as if they don’t need to be events at all). It was when I realized that some programmers were using our socket classes to store strings—just because the classes happen to have a bit of variable storage attached, and some of them are globally visible throughout the system—that I nearly lost my lunch. We do have string classes that could easily be used, but instead these programmers just abused whatever was at hand. Why?
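To make the abuse concrete, here is a hypothetical Python sketch; the ManagedSocket class, its user_data slot, and the global instance are invented for illustration rather than taken from the letter writer’s code base:

    # A made-up socket wrapper that happens to expose a writable slot
    # and is visible from everywhere in the system.
    class ManagedSocket:
        def __init__(self, name):
            self.name = name
            self.user_data = None   # intended for protocol bookkeeping

    CONTROL_SOCKET = ManagedSocket("control")   # globally visible instance

    # The abuse: code that has nothing to do with networking parks a string
    # on the socket object simply because the storage is handy and global.
    CONTROL_SOCKET.user_data = "last-deploy-label"

    # The boring alternative: a type whose only job is to hold the value,
    # so readers are not misled about what the socket is being used for.
    class DeploymentInfo:
        def __init__(self, label):
            self.label = label

    DEPLOYMENT = DeploymentInfo("last-deploy-label")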
The bytes you save today may bite you tomorrow.
GEORGE V. NEVILLE-NEIL
One of the coders I work with keeps removing my calls to system() from my code, insisting that it’s better to write code that does the work I’m currently doing via the shell. He keeps saying that it’s far safer to code using the language we’re using than to call out to the shell to get this work done. I would believe that if he didn’t add 10 to 20 lines of code just to do what I do in one line with system(). How can increasing the number of lines of code decrease the number of bugs?
Happy with the One Liner
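A sketch of the trade-off the coworker is pointing at, written in Python rather than C, with an invented file-cleanup task standing in for whatever the real one-liner does: the short version hands an unescaped string to a shell, while the longer version stays in the language, reports failures, and treats the path as data rather than code.

    import os
    import shutil

    # The tempting one-liner: short, but it launches a shell, ignores the
    # exit status, and misbehaves if target contains spaces, quotes, or
    # shell metacharacters.
    def purge_build_dir_oneliner(target):
        os.system("rm -rf " + target)

    # The longer version: more lines, but no shell is involved, and
    # failures surface as exceptions instead of being silently swallowed.
    def purge_build_dir(target):
        try:
            shutil.rmtree(target)
        except FileNotFoundError:
            pass  # nothing to clean up
        except OSError as err:
            raise RuntimeError("failed to remove " + target) from err

Each extra line exists to close off a way for the program to fail silently, which is where the bug count actually goes down.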
Colorful metaphors and properly reusing functions
GEORGE V. NEVILLE-NEIL
In the last installment of Kode Vicious (A System is not a Product, ACM Queue 10 (4), April 2012), I mentioned that I had recently read two pieces of code that had actually lowered, rather than raised, my blood pressure. As promised, this edition’s KV covers that second piece of code.
Stopping to smell the code before wasting time reentering configuration data
GEORGE V. NEVILLE-NEIL, NEVILLE-NEIL CONSULTING
Every once in a while, I come across a piece of good code and like to take a moment to recognize this fact, if only to keep my blood pressure low before my yearly medical checkup.