
Kernel Solid, But Vulnerable

Debates about the relative robustness and security of various operating systems have raged for years. A few months ago the Linux community got some concrete numbers to back up its claims of superiority. A study conducted by Stanford researchers revealed that the Linux 2.6 kernel, which has 5.7 million lines of code, contains “only” 985 bugs. This number pales in comparison to the average number for commercial software, which a Carnegie Mellon University team determined to be 20 to 30 bugs per 1,000 lines of code. Based on this ratio, one would expect 114,000 to 171,000 bugs in the Linux kernel. What’s more—of the kernel’s 985 bugs, only about 10 percent were security related. Not bad, huh?
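The comparison is easy to verify. A quick back-of-the-envelope check in Python, using only the figures quoted above:

```python
# Sanity-check the bug projections above: apply the commercial defect
# rate (20-30 bugs per 1,000 lines) to the 5.7 million lines of the
# Linux 2.6 kernel, and compare with the 985 bugs actually found.
KERNEL_LINES = 5_700_000
kernel_kloc = KERNEL_LINES / 1_000          # thousands of lines of code

low = 20 * kernel_kloc                      # expected bugs, low end
high = 30 * kernel_kloc                     # expected bugs, high end
print(f"expected at commercial rates: {low:,.0f} to {high:,.0f} bugs")

actual_density = 985 / kernel_kloc          # bugs per KLOC measured
print(f"density actually measured: {actual_density:.2f} bugs per KLOC")
```

At roughly 0.17 bugs per KLOC, the measured density comes in about a hundredfold below even the low end of the commercial range.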

Unfortunately, these impressive numbers belie a common, yet preventable, security vulnerability. A columnist recently discovered that many popular Linux distributions ship with default settings that leave the Linux kernel vulnerable to a “fork bomb” attack. One of the oldest and crudest methods of attack, a fork bomb is a program or shell script that repeatedly copies itself until the system is brought to its knees. The vulnerability is easily remedied by changing the default settings to limit the number of concurrently running processes. Usability might suffer slightly, but that’s a small price to pay for protecting your box. Linux newbies take note: Check those default settings!
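For the curious, here is a minimal sketch of what such a process limit looks like in practice, using Python’s standard resource module on a Unix-like system (the cap of 1,024 processes is an illustrative value, not a recommendation):

```python
# Cap the per-user "number of processes" limit (RLIMIT_NPROC) for this
# process and its children. A fork bomb running under this limit stops
# forking with EAGAIN once the limit is exhausted.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"current NPROC limits: soft={soft}, hard={hard}")

# The soft limit may never exceed the hard limit; 1024 is illustrative.
new_soft = min(1024, hard) if hard != resource.RLIM_INFINITY else 1024
resource.setrlimit(resource.RLIMIT_NPROC, (new_soft, hard))
```

On most Linux distributions the persistent equivalent is an `nproc` entry in /etc/security/limits.conf, or a `ulimit -u` line in the shell startup files.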

WANT MORE?,1411,66022,00.html

IT Security Gets Physical

In offices of yore, security meant large men in uniforms behind large desks with large clusters of (analog) video displays. But in the modern IT world, security more commonly evokes notions of networks, routers, firewalls, and encryption. Well, it now seems these two security realms—physical and IT—are finally coming together. Many companies already run their physical security systems over IP networks, and the growing commercial interest in this sector will ensure that all aspects of physical security, even those (now digital) video screens, will soon be managed from the same centralized system that checks your network logon.

But is this really a good thing? While there are obvious efficiencies to be gained from this convergence, there are also risks. For example, attackers who penetrate the company network would have access not only to sensitive data, but also to the surveillance systems and physical barriers installed to thwart real, in-person security breaches. And what about that company safe with the bearer bonds in it? Will that be connected? The question must be asked: Is the convergence of physical and IT security a Nakatomi Plaza waiting to happen?

WANT MORE?,39020375,39191839,00.htm

A Warm Reception for Rejected Chips

One of the astounding achievements of modern science is the degree to which microchips enable lifesaving or mission-critical hardware devices. With such high stakes, it’s crucial that these chips be tested against strict quality-control standards. Errors of millimeters and milliseconds can mean the loss of a life or the failure of a mission to Mars. Accordingly, 20-50 percent of all chips produced are either discarded or recycled.

This statistic is not lost on USC professor Melvin Breuer, who’s trying to find ways of putting these millions of discarded chips to practical use. The key to his efforts is the fact that many applications simply do not require pinpoint computational accuracy. Think multimedia, where some distorted audio or flipped pixels here and there are never a real showstopper. The hope is that collecting and repackaging these “defective” chips will benefit both manufacturers, which will save on manufacturing costs, and consumers, who will have access to cheaper electronic devices. Maybe some of these chips will even end up at Queue—actually, we thonk a few may have already arroved...
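To make the “flipped pixels” point concrete, here is a toy Python illustration (ours, not from Breuer’s work) of how little a single-bit fault perturbs an 8-bit pixel value:

```python
# A single-bit fault in the least-significant bit of an 8-bit pixel
# shifts its brightness by at most 1/255 -- far below what the eye
# can notice, which is why multimedia can tolerate imperfect chips.
def flip_lsb(pixel: int) -> int:
    """Simulate a single-bit fault in an 8-bit pixel value."""
    return pixel ^ 0b1

original = 200
faulty = flip_lsb(original)
error = abs(faulty - original) / 255   # relative brightness error
print(f"pixel {original} -> {faulty}, relative error {error:.4%}")
```

The same fault in a chip steering a Mars lander is another story entirely, which is exactly the distinction Breuer’s work exploits.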

WANT MORE?,1367,66928,00.html


Originally published in Queue vol. 3, no. 4


© ACM, Inc. All Rights Reserved.