Risk is a necessary consequence of dependence
What is critical? To what degree is "critical" defined as a matter of principle, and to what degree is it defined operationally? I am distinguishing what we say from what we do.
Mainstream media love to turn a spotlight on anything they can label “hypocrisy,” the Merriam-Webster unabridged dictionary meaning of which is:
the act or practice of pretending to be what one is not or to have principles or beliefs that one does not have, especially the false assumption of an appearance of virtue
> Resolved: The Internet Is No Place for Critical Infrastructure
Software is supposed to be a part of computer science, and science demands proof.
I’ve spent the past three weeks trying to cherry-pick changes out of one branch into another. When do I just give up and merge?
In the Pits
I once rode home with a friend from a computer conference in Monterey. It just so happens that this friend is a huge fan of fresh cherries, and when he saw a small stand selling baskets of them, he stopped to buy some. Another trait this friend possesses is that he can never pass up a good deal. While he haggled with the cherry seller, it became obvious that buying a whole flat of cherries would be a better deal than buying a single basket, even though a single basket was all we really wanted. Not wanting to pass up a deal, my friend bought the entire flat, and off we went, eating and talking. It took another 45 minutes to get home, and in that time we ate more than half the flat of cherries. I couldn't look at anything even remotely cherry-flavored for months; and today, when someone says "cherry-picking," it doesn't conjure up happy images of privileged kids playing farmer on Saturday mornings along the California coast. I just feel ill.
Cherry-picking and the Scientific Method
Whenever someone asks you to trust them, don’t.
As part of a recent push to automate everything from test builds to documentation updates, my group, at the request of one of our development groups, deployed a job-scheduling system. The idea is that anyone should be able to set up a periodic job to do work that takes a long time but isn't critical to the company's day-to-day operation. It's a way of keeping people from running cron jobs on their desktops and of providing a centralized set of background-processing services.
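A minimal stand-in for such a periodic job, sketched with Python's standard-library scheduler (the job body and interval are hypothetical; the real deployment was a shared service, not an in-process loop):

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def rebuild_docs():
    # Stand-in for long-running, non-critical work.
    print("regenerating documentation...")
    # Re-arm the job: run again in 24 hours.
    scheduler.enter(24 * 60 * 60, 1, rebuild_docs)

scheduler.enter(0, 1, rebuild_docs)  # first run immediately
scheduler.run()                      # blocks, dispatching jobs as they come due
```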
Swamped by Automation
Is there a “best used by” date for software?
Do you know of any rule of thumb for how often a piece of software should need maintenance? I’m not thinking about bug fixes, since bugs are there from the moment the code is written, but about the constant refactoring that seems to go on in code. Sometimes I feel as if programmers use refactoring as a way of keeping their jobs, rather than offering any real improvement. Is there a “best used by” date for software?
I’ve been upgrading some Python 2 code to Python 3 and ran across the following change in the language. It used to be that dividing two integers with / produced an integer, but to get that behavior in Python 3, I need to use //. There is still a /, but it does something different. Why would anyone in their right mind give two different operations such similar spellings? Don’t they know this will lead to errors?
Divided by Division
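To make the letter writer's complaint concrete, here is a short sketch of the Python 3 semantics, with the old Python 2 results noted in the comments:

```python
# Python 3: "/" is true division, "//" is floor division.
print(7 / 2)    # 3.5  (Python 2 printed 3 here when both operands were ints)
print(7 // 2)   # 3    (floor division: the old integer "/" behavior)
print(-7 // 2)  # -4   (floors toward negative infinity, not toward zero)
```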
One programmer’s extension is another programmer’s abuse.
During some recent downtime at work, I’ve been cleaning up a set of libraries: removing dead code, updating documentation blocks, and fixing minor bugs that have been annoying but not critical. This bit of code spelunking has revealed how some of the libraries have been not only used, but also abused. The fact that everyone and their sister uses the timing library for just about any event they can think of isn’t so bad, since it is a library that’s meant to call out to code periodically (although some of the events seem as if they don’t need to be events at all). It was when I realized that some programmers were using our socket classes to store strings, just because those classes happen to have a bit of variable storage attached and some of them are globally visible throughout the system, that I nearly lost my lunch. We do have string classes that could easily have been used, but instead these programmers just abused whatever was at hand. Why?
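A sketch of the abuse pattern being described (the class and attribute names here are hypothetical):

```python
class Socket:
    """Hypothetical socket wrapper that happens to carry spare storage."""
    def __init__(self):
        self.userdata = None  # intended for connection bookkeeping

# Some sockets are globally visible, which is all the temptation needed.
GLOBAL_SOCKETS = {"billing": Socket()}

# The abuse: smuggling an unrelated string through a socket object,
# simply because it is reachable from anywhere in the system.
GLOBAL_SOCKETS["billing"].userdata = "last_invoice=2012-04-17"

# What the string classes were for: an ordinary, purpose-built home.
last_invoice = "2012-04-17"
```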
Quality happens only when someone is responsible for it.
Thirteen years ago, Eric Raymond’s book The Cathedral and the Bazaar (O’Reilly Media, 2001) redefined our vocabulary and all but promised an end to the waterfall model and big software companies, thanks to the new grass-roots open source software development movement. I found the book thought-provoking, but it did not convince me. On the other hand, being deeply involved in open source, I couldn’t help but think that it would be nice if he were right.
Open vs. Closed: Which Source Is More Secure?
The Hyperdimensional Tar Pit
The bytes you save today may bite you tomorrow.
One of the coders I work with keeps removing my calls to system() from my code, insisting that it’s better to write code that does the work I’m currently doing via the shell. He keeps saying that it’s far safer to code in the language we’re using than to call out to the shell to get this work done. I would believe that if he didn’t add 10 to 20 lines of code just to do what I do in one line with system(). How can increasing the number of lines of code decrease the number of bugs?
Happy with the One Liner
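To make the disagreement concrete, here is a hedged sketch in Python (the filename is hypothetical and deliberately hostile): the one-liner hands its string to a shell, while the longer version never involves one.

```python
import os
import subprocess

# Hypothetical, attacker-influenced input.
path = "notes.txt; echo OWNED"

# The one-liner: the string passes through a shell, so the ";" in
# `path` starts a second command ("echo OWNED" actually runs).
os.system("ls -l " + path)

# The longer way: an argument vector involves no shell, so `path` is a
# single literal argument, and failures can be checked explicitly.
subprocess.run(["ls", "-l", path], check=False)
```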
If it does not take a full second to calculate the password hash, it is too weak.
Some 6.5 million unsalted, SHA1-hashed LinkedIn passwords have appeared in the criminal underground. There are two words in that sentence that should cause LinkedIn no end of concern: “unsalted” and “SHA1.”
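A minimal sketch of the difference, using Python’s standard library (the iteration count is illustrative, not a recommendation):

```python
import hashlib
import os

password = b"correct horse battery staple"

# What leaked: a single fast, unsalted SHA1 per password. Identical
# passwords hash identically, and GPUs try billions of guesses per second.
weak = hashlib.sha1(password).hexdigest()

# Salted and deliberately slow: a random per-user salt defeats
# precomputed tables, and the iteration count is tuned upward until
# one hash costs a noticeable fraction of a second.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
```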
Colorful metaphors and properly reusing functions
In the last installment of Kode Vicious (A System Is Not a Product, ACM Queue 10(4), April 2012), I mentioned that I had recently read two pieces of code that had actually lowered, rather than raised, my blood pressure. As promised, this edition’s KV covers that second piece of code.
Until our programming languages catch up, code will be full of horrors
Only lately, and after a long wait, have many smart people found audiences for sound arguments about what we code and how we code it. Various colleagues have been beating drums, and banging heads together, for ages to make certain that wise insights about programming stick to neurons. Articles on coding style in this and other publications have provided further examples of such advocacy.