Articles

20 Obstacles to Scalability

Watch out for these pitfalls that can prevent Web application scaling.

by Sean Hull | August 5, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 9

0 comments

A Call to Arms

Long anticipated, the arrival of radically restructured database architectures is now finally at hand.

by Jim Gray, Mark Compton | April 21, 2005

Topic: Databases

1 comment

A Decade of OS Access-control Extensibility

Open source security foundations for mobile and embedded devices

by Robert N. M. Watson | January 18, 2013

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 2

2 comments

A File System All Its Own

Flash memory has come a long way. Now it's time for software to catch up.

by Adam H. Leventhal | April 13, 2013

Topic: File Systems and Storage

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 5

2 comments

A Guided Tour through Data-center Networking

A good user experience depends on predictable performance within the data-center network.

by Dennis Abts, Bob Felderman | May 3, 2012

Topic: Networks

0 comments

A High-Performance Team

You work in the product development group of a software company, where the product is often compared with the competition on performance grounds. Performance is an important part of your business, but so is adding new functionality, fixing bugs, and working on new projects. So how do you lead your team to develop high-performance software while still doing everything else? And how do you keep that performance high throughout cycles of maintenance and enhancement?

by Philip Beevers | February 23, 2006

Topic: Performance

0 comments

A New Objective-C Runtime: from Research to Production

Backward compatibility always trumps new features.

by David Chisnall | July 11, 2012

Topic: Programming Languages

0 comments

A Passage to India

Most American IT employees take a dim view of offshore outsourcing. It's considered unpatriotic and it drains valuable intellectual capital and jobs from the United States to destinations such as India or China. Online discussion forums on sites such as isyourjobgoingoffshore.com are headlined with titles such as "How will you cope?" and "Is your career in danger?" A cover story in BusinessWeek magazine a couple of years ago summed up the angst most people suffer when faced with offshoring: "Is your job next?"

by Mark Kobayashi-Hillary | February 16, 2005

Topic: Distributed Development

0 comments

A Pioneer's Flash of Insight

Jim Gray's vision of flash-based storage anchors this issue's theme. In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim's best works in the months to come. I'm embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim's seminal work on databases - showing only that I (unlike everyone else on the Queue editorial board) never knew Jim.

by Bryan Cantrill | September 24, 2008

Topic: File Systems and Storage

0 comments

A Plea to Software Vendors from Sysadmins - 10 Do's and Don'ts

What can software vendors do to make the lives of sysadmins a little easier?

by Thomas A. Limoncelli | December 22, 2010

Topic: System Administration

48 comments

A Primer on Provenance

Better understanding of data requires tracking its history and context.

by Lucian Carata, Sherif Akoush, Nikilesh Balakrishnan, Thomas Bytheway, Ripduman Sohan, Margo Seltzer, Andy Hopper | April 10, 2014

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 5

1 comment

A Requirements Primer

Many software engineers and architects are exposed to compliance through the growing number of rules, regulations, and standards with which their employers must comply. Some of these requirements, such as HIPAA (Health Insurance Portability and Accountability Act), focus primarily on one industry, whereas others, such as SOX (Sarbanes-Oxley Act), span many industries. Some apply to only one country, while others cross national boundaries. To help navigate this often confusing world, Queue has assembled a short primer that provides background on four of the most important compliance challenges that organizations face today.

by George W. Beeler, Dana Gardner | September 15, 2006

Topic: Compliance

0 comments

A Threat Analysis of RFID Passports

Do RFID passports make us vulnerable to identity theft?

by Jim Waldo, Alan Ramos, Weina Scott, William Scott, Doug Lloyd, Katherine O'Leary | October 1, 2009

Topic: Privacy and Rights

1 comment

A Time and a Place for Standards

History shows how abuses of the standards process have impeded progress. Over the next decade, we will encounter at least three major opportunities where success will hinge largely on our ability to define appropriate standards. That's because intelligently crafted standards that surface at just the right time can do much to nurture nascent industries and encourage product development simply by creating a trusted and reliable basis for interoperability. From where I stand, the three specific areas I see as particularly promising are: (1) all telecommunications and computing capabilities that work together to facilitate collaborative work; (2) hybrid computing/home entertainment products providing for the online distribution of audio and/or video content; and (3) wireless sensor and network platforms (the sort that some hope the 802.15.4 and ZigBee Alliance standards will ultimately enable).

by Gordon Bell | October 25, 2004

Topic: VoIP

0 comments

A Tour through the Visualization Zoo

A survey of powerful visualization techniques, from the obvious to the obscure

by Jeffrey Heer, Michael Bostock, Vadim Ogievetsky | May 13, 2010

Topic: Graphics

24 comments

A Tribute to Jim Gray

Computer science attracts many very smart people, but a few stand out above the others, somehow blessed with a kind of creativity that most of us are denied. Names such as Alan Turing, Edsger Dijkstra, and John Backus come to mind. Jim Gray is another.

by Eric Allman | July 28, 2008

Topic: Databases

0 comments

A closer look at GPUs

As the line between GPUs and CPUs begins to blur, it's important to understand what makes GPUs tick.

by Kayvon Fatahalian, Mike Houston | September 23, 2008

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 10

0 comments

A co-Relational Model of Data for Large Shared Data Banks

Contrary to popular belief, SQL and noSQL are really just two sides of the same coin.

by Erik Meijer, Gavin Bierman | March 18, 2011

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 4

23 comments

A conversation with David E. Shaw

Stanford professor Pat Hanrahan sits down with the noted hedge fund founder, computational biochemist, and (above all) computer scientist.

by CACM Staff | September 22, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 10

0 comments

A conversation with Ed Catmull

Pixar's president Ed Catmull sits down with Stanford professor (and former Pixar-ian) Pat Hanrahan to reflect on the blending of art and technology.

by CACM Staff | November 23, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 12

0 comments

A guided tour of data-center networking

A good user experience depends on predictable performance within the data-center network.

by Dennis Abts, Bob Felderman | May 23, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 6

0 comments

A new Objective-C runtime:
from research to production

Backward compatibility always trumps new features

by David Chisnall | August 23, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 9

0 comments

A plea from sysadmins to software vendors:
10 do's and don'ts

What can software vendors do to make the lives of system administrators a little easier?

by Thomas A. Limoncelli | January 28, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 2

0 comments

A threat analysis of RFID passports

Do RFID passports make us vulnerable to identity theft?

by Alan Ramos, Weina Scott, William Scott, Doug Lloyd, Katherine O'Leary, Jim Waldo | November 23, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 12

0 comments

A tour through the visualization zoo

A survey of powerful visualization techniques, from the obvious to the obscure.

by Jeffrey Heer, Michael Bostock, Vadim Ogievetsky | May 26, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 6

1 comment

A view of cloud computing

Clearing the clouds away from the true potential and obstacles posed by this computing capability.

by Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, Matei Zaharia | March 29, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 4

0 comments

AI Gets a Brain

In the 50 years since John McCarthy coined the term artificial intelligence, much progress has been made toward identifying, understanding, and automating many classes of symbolic and computational problems that were once the exclusive domain of human intelligence. Much work remains in the field because humans still significantly outperform the most powerful computers at completing such simple tasks as identifying objects in photographs—something children can do even before they learn to speak.

by Jeff Barr, Luis Felipe Cabrera | June 30, 2006

Topic: AI

0 comments

AI in Computer Games

If you've been following the game development scene, you've probably heard many remarks such as: "The main role of graphics in computer games will soon be over; artificial intelligence is the next big thing!" Although you should hardly buy into such statements, there is some truth in them. The quality of AI (artificial intelligence) is a high-ranking feature for game fans in making their purchase decisions and an area with incredible potential to increase players' immersion and fun.

by Alexander Nareyek | February 24, 2004

Topic: Game Development

0 comments

API Design Matters

Why changing APIs might become a criminal offense. After more than 25 years as a software engineer, I still find myself underestimating the time it will take to complete a particular programming task. Sometimes, the resulting schedule slip is caused by my own shortcomings: as I dig into a problem, I simply discover that it is a lot harder than I initially thought, so the problem takes longer to solve - such is life as a programmer. Just as often I know exactly what I want to achieve and how to achieve it, but it still takes far longer than anticipated. When that happens, it is usually because I am struggling with an API that seems to do its level best to throw rocks in my path and make my life difficult.

by Michi Henning | June 7, 2007

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 5

3 comments

ASPs:
The Integration Challenge

The promise of software as a service is becoming a reality with many ASPs (application service providers). Organizations using ASPs and third-party vendors that provide value-added products to ASPs need to integrate with them. ASPs enable this integration by providing Web service-based APIs. There are significant differences between integrating with ASPs over the Internet and integrating with a local application. When integrating with ASPs, users have to consider a number of issues, including latency, unavailability, upgrades, performance, load limiting, and lack of transaction support.
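
By way of illustration, here is a minimal sketch of one defensive pattern that list of concerns implies: treating a remote ASP call as unreliable and wrapping it with a timeout and exponential backoff. The endpoint, retry limits, and function name are illustrative assumptions, not details from the article.

```python
import time
import urllib.request

def call_asp(url: str, retries: int = 3, timeout: float = 2.0) -> bytes:
    """Call a remote web-service API, backing off between failed attempts."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:                 # covers socket timeouts and URLError
            if attempt == retries - 1:  # out of retries: surface the failure
                raise
            time.sleep(2 ** attempt)    # wait 1s, then 2s, before retrying
    raise RuntimeError("unreachable")
```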

by Len Takeuchi | June 30, 2006

Topic: Component Technologies

0 comments

Abstraction in Hardware System Design

Applying lessons from software languages to hardware languages using Bluespec SystemVerilog

by Rishiyur S. Nikhil | August 18, 2011

Topic: System Evolution

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 10

1 comment

Adopting DevOps Practices in Quality Assurance

Merging the art and science of software development

by James Roche | October 30, 2013

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 11

0 comments

Advances and Challenges in Log Analysis

Logs contain a wealth of information for help in managing systems.

by Adam Oliner, Archana Ganapathi, Wei Xu | December 20, 2011

Topic: System Administration

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 2

0 comments

Agile and SEMAT:
perfect partners

Combining agile and SEMAT yields more advantages than either one alone.

by Ivar Jacobson, Ian Spence, Pan-Wei Ng | October 23, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 11

0 comments

Agile and SEMAT - Perfect Partners

Combining agile and SEMAT yields more advantages than either one alone

by Ivar Jacobson, Ian Spence, Pan-Wei Ng | November 5, 2013

Topic: Development

0 comments

All Your Database Are Belong to Us

In the big open world of the cloud, highly available distributed objects will rule.

by Erik Meijer | July 23, 2012

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 9

5 comments

An Open Web Services Architecture

The name of the game is web services - sophisticated network software designed to bring us what we need, when we need it, through any device we choose. We are getting closer to this ideal, as in recent years the client/server model has evolved into web-based computing, which is now evolving into the web services model. In this article, I will discuss Sun Microsystems' take on web services, specifically Sun ONE: an open, standards-based web services framework. I'll share with you Sun's decision-making rationales regarding web services, and discuss directions we are moving in.

by Stan Kleijnen, Srikanth Raju | March 4, 2003

Topic: Web Services

0 comments

An overview of non-uniform memory access

NUMA is becoming more common because memory controllers are moving closer to the execution units on microprocessors.

by Christoph Lameter | August 23, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 9

0 comments

Anatomy of a Solid-state Drive

While the ubiquitous SSD shares many features with the hard-disk drive, under the surface they are completely different.

by Michael Cornwell | October 17, 2012

Topic: File Systems and Storage

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 12

8 comments

Another Day, Another Bug

As part of this issue on programmer tools, we at Queue decided to conduct an informal Web poll on the topic of debugging. We asked you to tell us about the tools that you use and how you use them. We also collected stories about those hard-to-track-down bugs that sometimes make us think of taking up another profession.

by Queue Readers | October 2, 2003

Topic: Development

0 comments

Arm Your Applications for Bulletproof Deployment: A Conversation with Tom Spalthoff

The deployment of applications, updates, and patches is one of the most common - and risky - functions of any IT department. Deploying any application that isn't properly configured for distribution can disrupt or crash critical applications and cost companies dearly in lost productivity and help-desk expenses - and companies do it every day. In fact, Gartner reports that even after 10 years of experience, most companies cannot automatically deploy software with a success rate of 90 percent or better.

July 14, 2008

Topic: SIP

0 comments

Arrogance in Business Planning

Technology business plans that assume no competition (ever)

by Paul Vixie | July 20, 2011

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 9

7 comments

Attack Trends:
2004 and 2005

Counterpane Internet Security Inc. monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security “tickets.” What follows is an overview of what’s happening on the Internet right now, and what we expect to happen in the coming months.

by Bruce Schneier | July 6, 2005

Topic: Security

0 comments

Automated QA Testing at EA: Driven by Events

A discussion with Michael Donat, Jafar Husain, and Terry Coatta

by Terry Coatta, Michael Donat, Jafar Husain | May 19, 2014

Topic: Quality Assurance

0 comments

Automated QA testing at Electronic Arts

A discussion with Michael Donat, Jafar Husain, and Terry Coatta

June 26, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 7

0 comments

Automating Software Failure Reporting

There are many ways to measure quality before and after software is released. For commercial and internal-use-only products, the most important measurement is the user's perception of product quality. Unfortunately, perception is difficult to measure, so companies attempt to quantify it through customer satisfaction surveys and failure/behavioral data collected from their customer base. This article focuses on the problems of capturing failure data from customer sites.

by Brendan Murphy | December 6, 2004

Topic: Failure and Recovery

0 comments

B.Y.O.C (1,342 times and counting)

Why can't we all use standard libraries for commonly needed algorithms?

by Poul-Henning Kamp | February 23, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 3

0 comments

BASE: An Acid Alternative

Web applications have grown in popularity over the past decade. Whether you are building an application for end users or application developers (i.e., services), your hope is most likely that your application will find broad adoption and with broad adoption will come transactional growth. If your application relies upon persistence, then data storage will probably become your bottleneck.

by Dan Pritchett | July 28, 2008

Topic: File Systems and Storage

12 comments

BPM: The Promise and the Challenge

Over the last decade, businesses and governments have been giving increasing attention to business processes - to their description, automation, and management. This interest grows out of the need to streamline business operations, consolidate organizations, and save costs, reflecting the fact that the process is the basic unit of business value within an organization.

by Laury Verner | April 16, 2004

Topic: Workflow Systems

0 comments

Barbarians at the Gateways

High-frequency Trading and Exchange Technology

by Jacob Loveless | October 16, 2013

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 10

26 comments

Best Practice (BPM)

Just as BPM (business process management) technology is markedly different from conventional approaches to application support, the methodology of BPM development is markedly different from traditional software implementation techniques. With CPI (continuous process improvement) as the core discipline of BPM, the models that drive work through the company evolve constantly. Indeed, recent studies suggest that companies fine-tune their BPM-based applications at least once a quarter (and sometimes as often as eight times per year). The point is that there is no such thing as a “finished” process; it takes multiple iterations to produce highly effective solutions.

by Derek Miers | March 29, 2006

Topic: Workflow Systems

1 comment

Best Practices on the Move: Building Web Apps for Mobile Devices

Which practices should be modified or avoided altogether by developers for the mobile Web?

by Alex Nicolaou | July 25, 2013

Topic: Web Development

0 comments

Best practices on the move:
building web apps for mobile devices

Which practices should be modified or avoided altogether by developers for the mobile Web?

by Alex Nicolaou | July 24, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 8

0 comments

Better Scripts, Better Games

The video game industry earned $8.85 billion in revenue in 2007, almost as much as movies made at the box office. Much of this revenue was generated by blockbuster titles created by large groups of people. Though large development teams are not unheard of in the software industry, game studios tend to have unique collections of developers. Software engineers make up a relatively small portion of the game development team, while the majority of the team consists of content creators such as artists, musicians, and designers.

by Walker White, Christoph Koch, Johannes Gehrke, Alan Demers | January 8, 2009

Topic: Game Development

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 3

2 comments

Better, Faster, More Secure

Since I started a stint as chair of the IETF (Internet Engineering Task Force) in March 2005, I have frequently been asked, “What’s coming next?” but I have usually declined to answer. Nobody is in charge of the Internet, which is a good thing, but it makes predictions difficult (and explains why this article starts with a disclaimer: It represents my views alone and not those of my colleagues at either IBM or the IETF).

by Brian Carpenter | December 28, 2006

Topic: Networks

0 comments

Beyond Beowulf Clusters

In the early ’90s, the Berkeley NOW (Network of Workstations) Project under David Culler posited that groups of less capable machines (running SunOS) could be used to solve scientific and other computing problems at a fraction of the cost of larger computers. In 1994, Donald Becker and Thomas Sterling worked to drive the costs even lower by adopting the then-fledgling Linux operating system to build Beowulf clusters at NASA’s Goddard Space Flight Center. By tying desktop machines together with open source tools such as PVM (Parallel Virtual Machine), MPI (Message Passing Interface), and PBS (Portable Batch System), early clusters—which were often PC towers stacked on metal shelves with a nest of wires interconnecting them—fundamentally altered the balance of scientific computing.

by Philip Papadopoulos, Greg Bruno, Mason Katz | May 4, 2007

Topic: Distributed Computing

0 comments

Beyond Instant Messaging

The recent rise in popularity of IM (instant messaging) has driven the development of platforms and the emergence of standards to support IM. Especially as the use of IM has migrated from online socializing at home to business settings, there is a need to provide robust platforms with the interfaces that business customers use to integrate with other work applications. Yet, in the rush to develop a mature IM infrastructure, it is also important to recognize that IM features and uses are still evolving. For example, popular press stories have raised the concern that IM interactions may be too distracting in the workplace.

by John C. Tang, James "Bo" Begole | January 28, 2004

Topic: Email and IM

0 comments

Beyond Relational Databases

There is more to data access than SQL.

by Margo Seltzer | April 21, 2005

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 7

1 comment

Beyond Server Consolidation

Virtualization technology was developed in the late 1960s to make more efficient use of hardware. Hardware was expensive, and there was not that much available.

by Werner Vogels | March 4, 2008

Topic: Virtualization

0 comments

Big Games, Small Screens

One thing that becomes immediately apparent when creating and distributing mobile 3D games is that there are fundamental differences between the cellphone market and the more traditional games markets, such as consoles and handheld gaming devices. The most striking of these are the number of delivery platforms; the severe constraints of the devices, including small screens whose orientation can be changed; limited input controls; the need to deal with other tasks; the nonphysical delivery mechanism; and the variations in handset performance and input capability.

by Mark Callow, Paul Beardow, David Brittain | January 17, 2008

Topic: Game Development

1 comment

Black Box Debugging

Modern software development practices build applications as a collection of collaborating components. Unlike older practices that linked compiled components into a single monolithic application, modern executables are made up of any number of executable components that exist as separate binary files.

by James A. Whittaker, Herbert H. Thompson | January 29, 2004

Topic: Quality Assurance

0 comments

Blaster Revisited

What lessons can we learn from the carnage the Blaster worm created? The following tale is based upon actual circumstances from corporate enterprises that were faced with confronting and eradicating the Blaster worm, which hit in August 2003. The story provides views from many perspectives, illustrating the complexity and sophistication needed to combat new blended threats.

by Jim Morrison | August 31, 2004

Topic: Web Security

0 comments

Blurring Lines Between Hardware and Software

Motivated by technology leading to the availability of many millions of gates on a chip, a new design paradigm is emerging. This new paradigm allows the integration and implementation of entire systems on one chip.

by Homayoun Shahri | April 1, 2003

Topic: Embedded Systems

0 comments

Box Their SOXes Off

Data is a precious resource for any large organization. The larger the organization, the more likely it will rely to some degree on third-party vendors and partners to help it manage and monitor its mission-critical data. In the wake of new regulations for public companies, such as Section 404 of SOX (Sarbanes-Oxley Act of 2002), the folks who run IT departments for Fortune 1000 companies have an ever-increasing need to know that when it comes to the 24/7/365 monitoring of their critical data transactions, they have business partners with well-planned and well-documented procedures.

by John Bostick | September 15, 2006

Topic: Compliance

0 comments

Breaking the Major Release Habit

Can agile development make your team more productive?

by Damon Poole | October 10, 2006

Topic: Development

1 comment

Bridging the Object-Relational Divide

Modern applications are built using two very different technologies: object-oriented programming for business logic; and relational databases for data storage. Object-oriented programming is a key technology for implementing complex systems, providing benefits of reusability, robustness, and maintainability. Relational databases are repositories for persistent data. ORM (object-relational mapping) is a bridge between the two that allows applications to access relational data in an object-oriented way.
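
As a minimal sketch of the mapping idea described above - business logic works with objects, storage stays relational, and a thin layer translates between the two - here is a hand-rolled miniature in Python. The table, class, and field names are hypothetical; a real ORM adds caching, identity management, relationships, and query translation.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:          # the object-oriented side: a domain object
    id: int
    name: str

conn = sqlite3.connect(":memory:")  # the relational side: a SQL table
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

def save(c: Customer) -> None:
    """Map an object to a row."""
    conn.execute("INSERT INTO customer VALUES (?, ?)", (c.id, c.name))

def load(cid: int) -> Customer:
    """Map a row back to an object."""
    row = conn.execute(
        "SELECT id, name FROM customer WHERE id = ?", (cid,)
    ).fetchone()
    return Customer(*row)

save(Customer(1, "Ada"))
print(load(1))           # Customer(id=1, name='Ada')
```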

by Craig Russell | July 28, 2008

Topic: Object-Relational Mapping

0 comments

Bringing Arbitrary Compute to Authoritative Data

Many disparate use cases can be satisfied with a single storage system.

by Mark Cavage, David Pacheco | July 13, 2014

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 8

0 comments

Broadcast Messaging:
Messaging to the Masses

“We want this and that. We demand a share in that and most of that. Some of this and - - - - in’ all of that. And their demands will all be changed then, so - - - - in’ stay awake.” Comedian Billy Connolly wasn’t talking about messaging when he said this, but I don’t think there is a more appropriate quote for the voracious hunger we have for information as a result of the messaging-enabled, network-connected world that we take for granted every day.

by Frank Jania | January 28, 2004

Topic: Email and IM

0 comments

Browser Security:
Lessons from Google Chrome

Google Chrome developers focused on three key problems to shield the browser from attacks.

by Charles Reis, Adam Barth, Carlos Pizano | June 18, 2009

Topic: Web Security

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 8

6 comments

Browser security:
appearances can be deceiving

A discussion with Jeremiah Grossman, Ben Livshits, Rebecca Bace, and George Neville-Neil

by CACM Staff | December 20, 2012

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 1

0 comments

BufferBloat:
what's wrong with the internet?

A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys.

by CACM Staff | January 23, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 2

0 comments

Bufferbloat:
dark buffers in the internet

Networks without effective AQM may again be vulnerable to congestion collapse.

by Jim Gettys, Kathleen Nichols | December 28, 2011

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 1

1 comment

Bufferbloat: Dark Buffers in the Internet

Networks without effective AQM may again be vulnerable to congestion collapse.

by Jim Gettys, Kathleen Nichols | November 29, 2011

Topic: Networks

17 comments

Building Collaboration into IDEs

Software development is rarely a solo coding effort. More often, it is a collaborative process, with teams of developers working together to design solutions and produce quality code. The members of these close-knit teams often look at one another's code, collectively make plans about how to proceed, and even fix each other's bugs when necessary. Teamwork does not stop there, however. An extended team may include project managers, testers, architects, designers, writers, and other specialists, as well as other programming teams.

by Li-Te Cheng, Cleidson R.B. de Souza, Susanne Hupfer, John Patterson, Steven Ross | January 29, 2004

Topic: Distributed Development

0 comments

Building Nutch:
Open Source Search

Search engines are as critical to Internet use as any other part of the network infrastructure, but they differ from other components in two important ways. First, their internal workings are secret, unlike, say, the workings of the DNS (domain name system). Second, they hold political and cultural power, as users increasingly rely on them to navigate online content.

by Mike Cafarella, Doug Cutting | May 5, 2004

Topic: Search Engines

0 comments

Building Scalable Web Services

In the early days of the Web we severely lacked tools and frameworks, and in retrospect it seems noteworthy that those early Web services scaled at all. Nowadays, while the tools have progressed, so too have expectations with respect to richness of interaction, performance, and scalability. In view of these raised expectations it is advisable to build only what you really need, relying on other people's work where possible. Above all, be cautious in choosing when, what, and how to optimize.

by Tom Killalea | December 4, 2008

Topic: Web Services

2 comments

Building Secure Web Applications

In these days of phishing and near-daily announcements of identity theft via large-scale data losses, it seems almost ridiculous to talk about securing the Web. At this point most people seem ready to throw up their hands at the idea or to lock down one small component that they can control in order to keep the perceived chaos at bay. 

by George V. Neville-Neil | August 16, 2007

Topic: Web Development

0 comments

Building Systems to Be Shared, Securely

The history of computing has been characterized by continuous transformation resulting from the dramatic increases in performance and drops in price described by Moore's law. Computing "power" has migrated from centralized mainframes/servers to distributed systems and the commodity desktop. Despite these changes, system sharing remains an important tool for computing. From the multitasking, file-sharing, and virtual machines of the desktop environment to the large-scale sharing of server-class ISP hardware in collocation centers, safely sharing hardware between mutually untrusting parties requires addressing critical concerns of accidental and malicious damage.

by Poul-Henning Kamp, Robert Watson | August 31, 2004

Topic: Virtual Machines

2 comments

CPU DB:
recording microprocessor history

With this open database, you can mine microprocessor trends over the past 40 years.

by Andrew Danowitz, Kyle Kelley, James Mao, John P. Stevenson, Mark Horowitz | March 22, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 4

0 comments

CPU DB: Recording Microprocessor History

With this open database, you can mine microprocessor trends over the past 40 years.

by Andrew Danowitz, Kyle Kelley, James Mao, John P. Stevenson, Mark Horowitz | April 6, 2012

Topic: Processors

12 comments

CTO Roundtable:
Cloud Computing

The age of cloud computing has begun. How can companies take advantage of the new opportunities it provides?

by Mache Creeger | July 20, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 8

0 comments

CTO roundtable:
malware defense

The battle is bigger than most of us realize.

by Mache Creeger | March 29, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 4

0 comments

Caching XML Web Services for Mobility

Web services are emerging as the dominant application on the Internet. The Web is no longer just a repository of information but has evolved into an active medium for providers and consumers of services: Individuals provide peer-to-peer services to access personal contact information or photo albums for other individuals; individuals provide services to businesses for accessing personal preferences or tax information; Web-based businesses provide consumer services such as travel arrangement (Orbitz), shopping (eBay), and e-mail (Hotmail); and several business-to-business (B2B) services such as supply chain management form important applications of the Internet.

by Douglas D. Terry, Venugopalan Ramasubramanian | July 30, 2003

Topic: Web Services

1 comment

Closed Source Fights Back

In May 2003, the SCO Group, a vendor of the Linux operating system, sent a letter to its customers. Among other things, it stated, "We believe that Linux is, in material part, an unauthorized derivative of Unix." What would make SCO do that?

by Greg Lehey | October 1, 2003

Topic: Open Source

0 comments

Code Spelunking:
Exploring Cavernous Code Bases

Try to remember your first day at your first software job. Do you recall what you were asked to do, after the human resources people were done with you? Were you asked to write a piece of fresh code? Probably not. It is far more likely that you were asked to fix a bug, or several, and to try to understand a large, poorly documented collection of source code.

by George V. Neville-Neil | October 1, 2003

Topic: Quality Assurance

1 comment

Code Spelunking Redux

It has been five years since I first wrote about code spelunking, and though systems continue to grow in size and scope, the tools we use to understand those systems are not growing at the same rate. In fact, I believe we are steadily losing ground. So why should we go over the same ground again? Is this subject important enough to warrant two articles in five years? I believe it is.

by George V. Neville-Neil | January 8, 2009

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 10

0 comments

Coding Guidelines:
Finding the Art in the Science

What separates good code from great code?

by Robert Green, Henry Ledgard | November 2, 2011

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 12

26 comments

Coding Smart: People vs. Tools

Cool tools are seductive. When we think about software productivity, tools naturally come to mind. When we see pretty new tools, we tend to believe that their amazing features will help us get our work done much faster. Because every software engineer uses software productivity tools daily, and all team managers have to decide which tools their members will use, the latest and greatest look appealing.

by Donn M. Seeley | October 1, 2003

Topic: Development

0 comments

Coding for the Code

Despite the considerable effort invested by industry and academia in modeling standards such as UML (Unified Modeling Language), software modeling has long played a subordinate role in commercial software development. Although modeling is generally perceived as state of the art and thus as something that ought to be done, its appreciation seems to pale along with the progression from the early, more conceptual phases of a software project to those where the actual handcrafting is done.

by Friedrich Steimann, Thomas Kühne | January 31, 2006

Topic: Development

2 comments

Collaboration in System Administration

For sysadmins, solving problems usually involves collaborating with others. How can we make it more effective?

by Eben M. Haber, Eser Kandogan, Paul Maglio | December 6, 2010

Topic: System Administration

1 comment

Collaboration in system administration

For sysadmins, solving problems usually involves collaborating with others. How can we make it more effective?

by Eben M. Haber, Eser Kandogan, Paul P. Maglio | December 22, 2010

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 1

0 comments

Commercializing Open Source Software

The use of open source software has become increasingly popular in production environments, as well as in research and software development. One obvious attraction is the low cost of acquisition. Commercial software has a higher initial cost, though it usually has advantages such as support and training. A number of business models designed by users and vendors combine open source and commercial software; they use open source as much as possible, adding commercial software as needed.

by Michael J. Karels | October 1, 2003

Topic: Open Source

1 comment

Communications Surveillance:
Privacy and Security at Risk

As the sophistication of wiretapping technology grows, so too do the risks it poses to our privacy and security.

by Whitfield Diffie, Susan Landau | September 11, 2009

Topic: Privacy and Rights

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 11

6 comments

Compliance Deconstructed

The topic of compliance becomes increasingly complex each year. Dozens of regulatory requirements can affect a company’s business processes. Moreover, these requirements are often vague and confusing. When those in charge of compliance are asked if their business processes are in compliance, it is understandably difficult for them to respond succinctly and with confidence. This article looks at how companies can deconstruct compliance, dealing with it in a systematic fashion and applying technology to automate compliance-related business processes. It also looks specifically at how Microsoft approaches compliance to SOX (Sarbanes-Oxley Act of 2002).

by J. C. Cannon, Marilee Byers | September 15, 2006

Topic: Compliance

0 comments

Complying with Compliance

“Hey, compliance is boring. Really, really boring. And besides, I work neither in the financial industry nor in health care. Why should I care about SOX and HIPAA?” Yep, you’re absolutely right. You write payroll applications, or operating systems, or user interfaces, or (heaven forbid) e-mail servers. Why should you worry about compliance issues?

by Eric Allman | September 15, 2006

Topic: Compliance

0 comments

Computers in Patient Care: The Promise and the Challenge

Information technology has the potential to radically transform health care. Why has progress been so slow?

by Stephen V. Cantrill | August 12, 2010

Topic: Bioscience

2 comments

Computers in patient care:
the promise and the challenge

Information technology has the potential to radically transform health care. Why has progress been so slow?

by Stephen V. Cantrill | August 24, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 9

0 comments

Computing without Processors

Heterogeneous systems allow us to target our programming to the appropriate environment.

by Satnam Singh | June 27, 2011

Topic: Computer Architecture

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 8

5 comments

Condos and Clouds

Constraints in an environment empower the services.

by Pat Helland | November 14, 2012

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 1

0 comments

Controlling Queue Delay

A modern AQM is just one piece of the solution to bufferbloat.

by Kathleen Nichols, Van Jacobson | May 6, 2012

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 7

13 comments

Cooling the Data Center

What can be done to make cooling systems in data centers more energy efficient?

by Andy Woods | March 10, 2010

Topic: Power Management

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 4

3 comments

Creating Languages in Racket

Sometimes you just have to make a better mousetrap.

by Matthew Flatt | November 9, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 1

0 comments

Criminal Code:
The Making of a Cybercriminal

NOTE: This is a fictional account of malware creators and their experiences. Although the characters are made up, the techniques and events are patterned on real activities of many different groups developing malicious software.

by Thomas Wadlow, Vlad Gorelik | November 10, 2006

Topic: Web Security

0 comments

Culture Surprises in Remote Software Development Teams

Technology has made it possible for organizations to construct teams of people who are not in the same location, adopting what one company calls "virtual collocation." Worldwide groups of software developers, financial analysts, automobile designers, consultants, pricing analysts, and researchers are examples of teams that work together from disparate locations, using a variety of collaboration technologies that allow communication across space and time.

by Judith S. Olson, Gary M. Olson | January 29, 2004

Topic: Distributed Development

0 comments

Cybercrime:
An Epidemic

Painted in the broadest of strokes, cybercrime essentially is the leveraging of information systems and technology to commit larceny, extortion, identity theft, fraud, and, in some cases, corporate espionage. Who are the miscreants who commit these crimes, and what are their motivations? One might imagine they are not the same individuals committing crimes in the physical world. Bank robbers and scam artists garner a certain public notoriety after only a few occurrences of their crimes, yet cybercriminals largely remain invisible and unheralded. Based on sketchy news accounts and a few public arrests, such as Mafiaboy, accused of paralyzing Amazon, CNN, and other Web sites, the public may infer these miscreants are merely a subculture of teenagers.

by Team Cymru | November 10, 2006

Topic: Web Security

1 comment

Cybercrime 2.0:
when the cloud turns dark

Web-based malware attacks are more insidious than ever. What can be done to stem the tide?

by Niels Provos, Moheeb Abu Rajab, Panayiotis Mavrommatis | March 24, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 4

0 comments

Cybercrime 2.0: When the Cloud Turns Dark

Web-based malware attacks are more insidious than ever. What can be done to stem the tide?

by Niels Provos, Moheeb Abu Rajab, Panayiotis Mavrommatis | March 20, 2009

Topic: Web Security

0 comments

DAFS:
A New High-Performance Networked File System

This emerging file-access protocol dramatically enhances the flow of data over a network, making life easier in the data center.

by Steve Kleiman | July 14, 2008

Topic: File Systems and Storage

0 comments

DNS Complexity

DNS (domain name system) is a distributed, coherent, reliable, autonomous, hierarchical database, the first and only one of its kind. Created in the 1980s when the Internet was still young but overrunning its original system for translating host names into IP addresses, DNS is one of the foundation technologies that made the worldwide Internet (and the World Wide Web) possible. Yet this did not all happen smoothly, and DNS technology has been periodically refreshed and refined. Though it’s still possible to describe DNS in simple terms, the underlying details are by now quite sublime.

by Paul Vixie | May 4, 2007

Topic: Networks

2 comments

DSL for the Uninitiated

Domain-specific languages bridge the semantic gap in programming

by Debasish Ghosh | June 1, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 7

2 comments

DSPs: Back to the Future

From the dawn of the DSP (digital signal processor), an old quote still echoes: "Oh, no! We'll have to use state-of-the-art 5µm NMOS!" The speaker's name is lost in the fog of history, as are many things from the ancient days of 5µm chip design. This quote refers to the first Bell Labs DSP whose mask set in fact underwent a 10 percent linear lithographic shrink to 4.5µm NMOS (N-channel metal oxide semiconductor) channel length and taped out in late 1979 with an aggressive full-custom circuit design.

by W. Patrick Hays | April 16, 2004

Topic: DSPs

0 comments

Data in Flight

How streaming SQL technology can help solve the Web 2.0 data crunch.

by Julian Hyde | December 10, 2009

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 1

1 comment

Data-Parallel Computing

Users always care about performance. Although often it's just a matter of making sure the software is doing only what it should, there are many cases where it is vital to get down to the metal and leverage the fundamental characteristics of the processor.

by Chas. Boyd | April 28, 2008

Topic: Graphics

0 comments

Databases of Discovery

Open-ended database ecosystems promote new discoveries in biotech. Can they help your organization, too?

by James Ostell | April 21, 2005

Topic: Databases

0 comments

Death by UML Fever

A potentially deadly illness, clinically referred to as UML (Unified Modeling Language) fever, is plaguing many software-engineering efforts today. This fever has many different strains that vary in levels of lethality and contagion. A number of these strains are symptomatically related, however. Rigorous laboratory analysis has revealed that each is unique in origin and makeup. A particularly insidious characteristic of UML fever, common to most of its assorted strains, is the difficulty individuals and organizations have in self-diagnosing the affliction. A consequence is that many cases of the fever go untreated and often evolve into more complex and lethal strains.

by Alex E. Bell | April 16, 2004

Topic: Development

5 comments

Debugging in an Asynchronous World

Pagers, cellular phones, smart appliances, and Web services - these products and services are almost omnipresent in our world, and are stimulating the creation of a new breed of software: applications that must deal with inputs from a variety of sources, provide real-time responses, deliver strong security - and do all this while providing a positive user experience. In response, a new style of application programming is taking hold, one that is based on multiple threads of control and the asynchronous exchange of data, and results in fundamentally more complex applications.

by Michael Donat | October 1, 2003

Topic: Quality Assurance

0 comments

Decentralizing SIP

If you're looking for a low-maintenance IP communications network, peer-to-peer SIP might be just the thing. SIP (Session Initiation Protocol) is the most popular protocol for VoIP in use today. It is widely used by enterprises, consumers, and even carriers in the core of their networks. Since SIP is designed for establishing media sessions of any kind, it is also used for a variety of multimedia applications beyond VoIP, including IPTV, videoconferencing, and even collaborative video gaming.

by David A. Bryan, Bruce B. Lowekamp | March 9, 2007

Topic: SIP

0 comments

Describing the Elephant:
The Different Faces of IT as Service

In a well-known fable, a group of blind men are asked to describe an elephant. Each encounters a different part of the animal and, not surprisingly, provides a different description. We see a similar degree of confusion in the IT industry today, as terms such as service-oriented architecture, grid, utility computing, on-demand, adaptive enterprise, data center automation, and virtualization are bandied about. As when listening to the blind men, it can be difficult to know what reality lies behind the words, whether and how the different pieces fit together, and what we should be doing about the animal(s) that are being described.

by Ian Foster, Steven Tuecke | August 18, 2005

Topic: Distributed Computing

0 comments

Design Exploration through Code-generating DSLs

High-level DSLs for low-level programming

by Bo Joel Svensson, Mary Sheeran, Ryan Newton | May 15, 2014

Topic: Programming Languages

0 comments

Design exploration through code-generating DSLs

High-level DSLs for low-level programming.

by Bo Joel Svensson, Mary Sheeran, Ryan R. Newton | May 22, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 6

0 comments

Designing Portable Collaborative Networks

Peer-to-peer technology and wireless networking offer great potential for working together away from the desk - but they also introduce unique software and infrastructure challenges. The traditional idea of the work environment is anchored to a central location - the desk and office - where the resources needed for the job are located.

by Lyn Bartram, Michael Blackstock | July 30, 2003

Topic: Mobile Computing

0 comments

Desktop Linux: Where Art Thou?

Linux on the desktop has come a long way - and it's been a roller-coaster ride. At the height of the dot-com boom, around the time of Red Hat's initial public offering, people expected Linux to take off on the desktop in short order. A few years later, after the stock market crash and the failure of a couple of high-profile Linux companies, pundits were quick to proclaim the stillborn death of Linux on the desktop.

by Bart Decrem | June 14, 2004

Topic: Open Source

0 comments

Digitally Assisted Analog Integrated Circuits

In past decades, "Moore's law" has governed the revolution in microelectronics. Through continuous advancements in device and fabrication technology, the industry has maintained exponential progress rates in transistor miniaturization and integration density. As a result, microchips have become cheaper, faster, more complex, and more power efficient.

by Boris Murmann, Bernhard Boser | April 16, 2004

Topic: Processors

0 comments

Discrimination in Online Ad Delivery

Google ads, black names and white names, racial discrimination, and click advertising

by Latanya Sweeney | April 2, 2013

Topic: Search Engines

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 5

0 comments

Disks from the Perspective of a File System

Disks lie. And the controllers that run them are partners in crime.

by Marshall Kirk McKusick | September 6, 2012

Topic: File Systems and Storage

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 11

14 comments

Distributed Computing Economics

Computing economics are changing. Today there is rough price parity between: (1) one database access; (2) 10 bytes of network traffic; (3) 100,000 instructions; (4) 10 bytes of disk storage; and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic.
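
A back-of-the-envelope sketch of the break-even arithmetic behind that rule of thumb, using Gray's parity figures: 10 bytes of network traffic costs about the same as 100,000 instructions, so moving one byte "costs" roughly 10,000 instructions' worth of money. The workload numbers below are illustrative assumptions.

```python
COST_PER_NET_BYTE = 1 / 10          # parity: 10 bytes of traffic = 1 unit
COST_PER_INSTRUCTION = 1 / 100_000  # parity: 100,000 instructions = 1 unit

def cheaper_to_ship(data_bytes: int, instructions: int) -> bool:
    """True when shipping the data to remote compute beats staying local,
    i.e. when the job exceeds ~10,000 instructions per byte moved."""
    return instructions * COST_PER_INSTRUCTION > data_bytes * COST_PER_NET_BYTE

# 1 MB of input needing 10^9 instructions is ~1,000 instructions/byte:
# keep the computation next to the data.
print(cheaper_to_ship(1_000_000, 10**9))    # False
# The same input needing 10^11 instructions (100,000 per byte) is
# CPU-bound enough that shipping it to cheap remote cycles can pay off.
print(cheaper_to_ship(1_000_000, 10**11))   # True
```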

by Jim Gray | July 28, 2008

Topic: Distributed Computing

0 comments

Distributed Development:
Lessons Learned

Delivery of a technology-based project is challenging, even under well-contained, familiar circumstances. And a tight-knit team can be a major factor in success. It is no mystery, therefore, why most small, new technology teams opt to work in a garage (at times literally). Keeping the focus of everyone's energy on the development task at hand means a minimum of non-engineering overhead.

by Michael Turnlund | January 29, 2004

Topic: Distributed Development

0 comments

Division of Labor in Embedded Systems

Increasingly, embedded applications require more processing power than can be supplied by a single processor, even a heavily pipelined one that uses a high-performance architecture such as very long instruction word (VLIW) or superscalar. Simply driving up the clock is often prohibitive in the embedded world because higher clocks require proportionally more power, a commodity often scarce in embedded systems. Multiprocessing, where the application is run on two or more processors concurrently, is the natural route to ever more processor cycles within a fixed power budget.

by Ivan Goddard | April 1, 2003

Topic: Embedded Systems

0 comments

Document & Media Exploitation

A computer used by Al Qaeda ends up in the hands of a Wall Street Journal reporter. A laptop from Iran is discovered that contains details of that country's nuclear weapons program. Photographs and videos are downloaded from terrorist Web sites.

by Simson L. Garfinkel | January 17, 2008

Topic: Security

0 comments

Document design matters

How do we apply the concept of resource orientation by designing representations to support interactions?

by Erik Wilde, Robert J. Glushko | September 23, 2008

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 10

0 comments

Does deterrence work in reducing information security policy abuse by employees?

Methods for evaluating and effectively managing the security behavior of employees.

by Qing Hu, Zhengchuan Xu, Tamara Dinev, Hong Ling | May 25, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 6

0 comments

Domain-specific Languages and Code Synthesis Using Haskell

Looking at embedded DSLs

by Andy Gill | May 6, 2014

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 6

2 comments

Don't Settle for Eventual Consistency

Stronger properties for low-latency geo-replicated storage

by Wyatt Lloyd, Michael J. Freedman, Michael Kaminsky, David G. Andersen | April 21, 2014

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 5

2 comments

E-mail Authentication:
What, Why, How?

Internet e-mail was conceived in a different world than we live in today. It was a small, tightly knit community, and we didn’t really have to worry too much about miscreants. Generally, if someone did something wrong, the problem could be dealt with through social means; “shunning” is very effective in small communities.

by Eric Allman | November 10, 2006

Topic: Email and IM

0 comments

Energy Management on Handheld Devices

Handheld devices are becoming ubiquitous, and as their capabilities increase they are starting to displace laptop computers - much as laptop computers have displaced desktop computers in many roles. Handheld devices are evolving from today's PDAs, organizers, cellular phones, and game machines into a variety of new forms. First, although partially offset by improvements in low-power electronics, this increased functionality carries a corresponding increase in energy consumption. Second, as a consequence of displacing other pieces of equipment, handheld devices are seeing more use between battery charges. Finally, battery technology is not improving at the same pace as the energy requirements of handheld electronics.

by Marc A Viredaz, Lawrence S Brakmo, William R Hamburgen | December 5, 2003

Topic: Power Management

0 comments

Enhanced Debugging with Traces

An essential technique used in emulator development is a useful addition to any programmer's toolbox.

by Peter Phillips | March 31, 2010

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 5

0 comments

Enterprise Grid Computing

I have to admit a great measure of sympathy for the IT populace at large, when it is confronted by the barrage of hype around grid technology, particularly within the enterprise. Individual vendors have attempted to plant their flags in the notionally virgin technological territory and proclaim it as their own, using terms such as grid, autonomic, self-healing, self-managing, adaptive, utility, and so forth. Analysts, well, analyze and try to make sense of it all, and in the process each independently creates his or her own map of this terra incognita, naming it policy-based computing, organic computing, and so on.

by Paul Strong | August 18, 2005

Topic: Distributed Computing

2 comments

Enterprise SSDs

Solid-state drives are finally ready for the enterprise. But beware, not all SSDs are created alike. For designers of enterprise systems, ensuring that hardware performance keeps pace with application demands is a mind-boggling exercise. The most troubling performance challenge is storage I/O. Spinning media, while exceptional in scaling areal density, will unfortunately never keep pace with I/O requirements. The most cost-effective way to break through these storage I/O limitations is by incorporating high-performance SSDs (solid-state drives) into the systems.

by Mark Moshayedi, Patrick Wilkison | September 24, 2008

Topic: File Systems and Storage

0 comments

Enterprise Search: Tough Stuff

The last decade has witnessed the growth of information retrieval from a boutique discipline in information and library science to an everyday experience for billions of people around the world. This revolution has been driven in large measure by the Internet, with vendors focused on search and navigation of Web resources and Web content management. Simultaneously, enterprises have invested in networking all of their information together to the point where it is increasingly possible for employees to have a single window into the enterprise.

by Rajat Mukherjee, Jianchang Mao | May 5, 2004

Topic: Search Engines

0 comments

Enterprise Software as Service

While the practice of outsourcing business functions such as payroll has been around for decades, its realization as online software services has only recently become popular. In the online service model, a provider develops an application and operates the servers that host it. Customers access the application over the Internet using industry-standard browsers or Web services clients. A wide range of online applications, including e-mail, human resources, business analytics, CRM (customer relationship management), and ERP (enterprise resource planning), are available.

by Dean Jacobs | August 18, 2005

Topic: Distributed Computing

0 comments

Enterprise-Grade Wireless

We have been working in the wireless space in one form or another for more than 10 years and have participated in every phase of its maturation process. We saw wireless progress from a toy technology before the dot-com boom, to something truly promising during the boom, only to be left wanting after the bubble when the technology was found to be not ready for prime time. Fortunately, it appears that we have finally reached the point where the technology and the enterprise's expectations have converged.

by Bruce Zenel | June 7, 2005

Topic: Mobile Computing

0 comments

Erlang for Concurrent Programming

What role can programming languages play in dealing with concurrency? One answer can be found in Erlang, a language designed for concurrency from the ground up. (A small sketch of its message-passing style follows this entry.)

by Jim Larson | October 24, 2008

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 3

0 comments
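
Erlang models concurrency as isolated processes that share nothing and communicate only by message passing. As a rough flavor of that style (a toy sketch in Python, not Erlang semantics), consider:

    import queue
    import threading

    def counter(mailbox):
        # An "actor": owns its state and reacts only to messages.
        total = 0
        while True:
            tag, payload = mailbox.get()
            if tag == "add":
                total += payload
            elif tag == "get":
                payload.put(total)  # reply on the sender's own queue
            elif tag == "stop":
                return

    mailbox = queue.Queue()
    threading.Thread(target=counter, args=(mailbox,), daemon=True).start()

    mailbox.put(("add", 40))
    mailbox.put(("add", 2))
    reply = queue.Queue()
    mailbox.put(("get", reply))
    print(reply.get())  # 42, with no shared state and no user-level locks
    mailbox.put(("stop", None))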

Error Messages:
What's the Problem?

Computer users spend a lot of time chasing down errors - following the trail of clues that starts with an error message and that sometimes leads to a solution and sometimes to frustration. Problems with error messages are particularly acute for system administrators (sysadmins) - those who configure, install, manage, and maintain the computational infrastructure of the modern world - as they spend a lot of effort to keep computers running amid errors and failures.

by Paul P. Maglio, Eser Kandogan | December 6, 2004

Topic: Failure and Recovery

0 comments

Eventual Consistency Today: Limitations, Extensions, and Beyond

How can applications be built on eventually consistent infrastructure given no guarantee of safety? (A toy convergence sketch follows this entry.)

by Peter Bailis, Ali Ghodsi | April 9, 2013

Topic: Databases

1 comments
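
As a toy illustration of the setting (not the authors' proposal), here is a last-writer-wins register in Python: replicas accept writes independently, diverge, and converge once they exchange state:

    class Replica:
        def __init__(self):
            self.value, self.stamp = None, 0

        def write(self, value, stamp):
            self.value, self.stamp = value, stamp

        def merge(self, other):
            # Anti-entropy rule: the higher timestamp wins.
            if other.stamp > self.stamp:
                self.value, self.stamp = other.value, other.stamp

    a, b = Replica(), Replica()
    a.write("x=1", stamp=1)  # concurrent writes land on different replicas
    b.write("x=2", stamp=2)
    print(a.value, b.value)  # diverged: x=1 x=2
    a.merge(b); b.merge(a)   # replicas gossip
    print(a.value, b.value)  # converged: x=2 x=2

The article's point is everything this toy hides: what applications can safely assume while the replicas still disagree.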

Eventual consistency today:
limitations, extensions, and beyond

How can applications be built on eventually consistent infrastructure given no guarantee of safety?

by Peter Bailis, Ali Ghodsi | April 24, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 5

0 comments

Eventually Consistent

At the foundation of Amazon's cloud computing are infrastructure services such as Amazon's S3 (Simple Storage Service), SimpleDB, and EC2 (Elastic Compute Cloud) that provide the resources for constructing Internet-scale computing platforms and a great variety of applications. The requirements placed on these infrastructure services are very strict; they need to score high marks in the areas of security, scalability, availability, performance, and cost effectiveness, and they need to meet these requirements while serving millions of customers around the globe, continuously.

by Werner Vogels | December 4, 2008

Topic: Web Services

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 1

4 comments

Eventually Consistent: Not What You Were Expecting?

Methods of quantifying consistency (or lack thereof) in eventually consistent storage systems

by Wojciech Golab, Muntasir R. Rahman, Alvin AuYoung, Kimberly Keeton, Xiaozhou (Steve) Li | February 18, 2014

Topic: Databases

0 comments

Eventually consistent:
not what you were expecting?

Methods of quantifying consistency (or lack thereof) in eventually consistent storage systems.

by Wojciech Golab, Muntasir R. Rahman, Alvin AuYoung, Kimberly Keeton, Xiaozhou (Steve) Li | February 25, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 3

0 comments

Exposing the ORM Cache

In the early 1990s, when object-oriented languages emerged into the mainstream of software development, a noticeable surge in productivity occurred as developers saw new and better ways to create software programs. Although the new and efficient object programming paradigm was hailed and accepted by a growing number of organizations, relational database management systems remained the preferred technology for managing enterprise data. Thus was born ORM (object-relational mapping), out of necessity, and the complex challenge of saving the persistent state of an object environment in a relational database subsequently became known as the object-relational impedance mismatch.

by Michael Keith, Randy Stafford | July 28, 2008

Topic: Databases

0 comments

Extending the Semantics of Scheduling Priorities

Increasing parallelism demands new paradigms.

by Rafael Vanoni Polanczyk | June 14, 2012

Topic: Performance

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 8

0 comments

Extensible Programming for the 21st Century

Is an open, more flexible programming environment just around the corner?

by Gregory V. Wilson | December 27, 2004

Topic: Programming Languages

2 comments

Extreme Software Scaling

The advent of SMP (symmetric multiprocessing) added a new degree of scalability to computer systems. Rather than deriving additional performance from an incrementally faster microprocessor, an SMP system leverages multiple processors to obtain large gains in total system performance. Parallelism in software allows multiple jobs to execute concurrently on the system, increasing system throughput accordingly. Given sufficient software parallelism, these systems have proved to scale to several hundred processors. (A simple scaling model is sketched after this entry.)

by Richard McDougall | October 18, 2005

Topic: Processors

0 comments
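
A standard (if simplified) way to reason about how far such parallelism scales is Amdahl's law; this generic sketch is not from the article itself:

    def speedup(p, n):
        # Amdahl's law: p = parallel fraction of the work, n = processors.
        return 1.0 / ((1.0 - p) + p / n)

    for n in (8, 64, 512):
        print(n, round(speedup(0.95, n), 1))  # 5.9, 15.4, 19.3

Even with 95 percent of the work parallelized, 512 processors yield only about a 19x speedup, which is why "sufficient software parallelism" is the operative phrase above.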

FPGA Programming for the Masses

The programmability of FPGAs must improve if they are to be part of mainstream computing.

by David Bacon, Rodric Rabbah, Sunil Shukla | February 23, 2013

Topic: Processors

7 comments

FPGA programming for the masses

The programmability of FPGAs must improve if they are to be part of mainstream computing.

by David F. Bacon, Rodric Rabbah, Sunil Shukla | March 25, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 4

0 comments

Fault Injection in Production

Making the case for resilience testing

by John Allspaw | August 24, 2012

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 10

1 comments

Fighting Physics: A Tough Battle

Thinking of doing IPC over the long haul? Think again. The laws of physics say you're hosed. (A back-of-the-envelope calculation follows this entry.)

by Jonathan M. Smith | April 15, 2009

Topic: Networks

1 comments
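
The physical floor is easy to compute: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, so distance alone bounds round-trip time. A back-of-the-envelope sketch in Python (route distances are approximate):

    C_FIBER_KM_S = 299_792 * 2 / 3  # light in fiber is ~2/3 of c

    def min_rtt_ms(distance_km):
        # Theoretical minimum round-trip time, ignoring routing detours.
        return 2 * distance_km / C_FIBER_KM_S * 1000

    for route, km in [("New York - London", 5_570),
                      ("San Francisco - Sydney", 11_940)]:
        print(f"{route}: >= {min_rtt_ms(km):.0f} ms")
    # ~56 ms and ~119 ms before a single byte of protocol overhead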

Fighting Spam with Reputation Systems

Spam is everywhere, clogging the inboxes of e-mail users worldwide. Not only is it an annoyance, it erodes the productivity gains afforded by the advent of information technology. Workers plowing through hours of legitimate e-mail every day also must contend with removing a significant amount of illegitimate e-mail. Automated spam filters have dramatically reduced the amount of spam seen by the end users who employ them, but the amount of training required rivals the amount of time needed simply to delete the spam without the assistance of a filter.

by Vipul Ved Prakash, Adam O'Donnell | December 16, 2005

Topic: Email and IM

0 comments

Fighting physics:
a tough battle

The laws of physics and the Internet's routing infrastructure affect performance in a big way.

by Jonathan M. Smith | June 29, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 7

0 comments

Finding More Than One Worm in the Apple

If you see something, say something.

by Mike Bland | May 12, 2014

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 7

13 comments

Finding Usability Bugs with Automated Tests

Automated usability tests can be valuable companions to in-person tests.

by Julian Harty | January 12, 2011

Topic: HCI

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 2

3 comments

Flash Disk Opportunity for Server Applications

Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks. NAND flash densities have been doubling each year since 1996. Samsung announced that its 32-gigabit NAND flash chips would be available in 2007. This is consistent with Chang-gyu Hwang's flash memory growth model, which predicts that NAND flash densities will double each year until 2010. Hwang recently extended that 2003 prediction to 2012, suggesting 64 times the current density (250 GB per chip). This is hard to credit, but Hwang and Samsung have delivered 16 times since his 2003 article, when 2-GB chips were just emerging. (The doubling arithmetic is sketched after this entry.)

by Jim Gray, Bob Fitzgerald | September 24, 2008

Topic: File Systems and Storage

0 comments
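
The arithmetic behind the projection is a simple annual-doubling model; this sketch only restates the numbers quoted in the summary above:

    from math import log2

    base_gb = 4   # a 32-gigabit chip holds 4 GB
    factor = 64   # "64 times the current density"

    print(log2(factor))      # 6.0 -> six annual doublings
    print(base_gb * factor)  # 256 GB, close to the quoted 250 GB per chip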

Flash Storage Today

Can flash memory become the foundation for a new tier in the storage hierarchy? The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded and continues to expand, especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.

by Adam Leventhal | September 24, 2008

Topic: File Systems and Storage

0 comments

Flash storage memory

Can flash memory become the foundation for a new tier in the storage hierarchy?

by Adam Leventhal | June 23, 2008

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 7

0 comments

Four Billion Little Brothers?:
Privacy, mobile phones, and ubiquitous data collection

Participatory sensing technologies could improve our lives and our communities, but at what cost to our privacy?

by Katie Shilton | August 27, 2009

Topic: Privacy and Rights

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 11

9 comments

From COM to Common

Ten years ago, the term component software meant something relatively specific and concrete. A small number of software component frameworks more or less defined the concept for most people. Today, few terms in the software industry are less precise than component software. There are now many different forms of software componentry for many different purposes. The technologies and methodologies of 10 years ago have evolved in fundamental ways and have been joined by an explosion of new technologies and approaches that have redefined our previously held notions of component software.

by Greg Olsen | June 30, 2006

Topic: Component Technologies

0 comments

From IR to Search, and Beyond

It's been nearly 60 years since Vannevar Bush's seminal Atlantic Monthly article, "As We May Think," portrayed the image of a scholar aided by a machine, "a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility." Unmistakably in this is the technology now known as search by millions and known as information retrieval (IR) by tens of thousands. From that point in 1945 to now, when some 25 million Web searches an hour are served, a lot has happened.

by Ramana Rao | June 14, 2004

Topic: Search Engines

0 comments

From Liability to Advantage: A Conversation with John Graham-Cumming and John Ousterhout

Software production (the back-end of software development, including tasks such as build, test, package and deploy) has become a bottleneck in many development organizations. In this interview Electric Cloud founder John Ousterhout explains how you can turn software production from a liability to a competitive advantage.

July 14, 2008

Topic: SIP

0 comments

From Server Room to Living Room

The open source movement, exemplified by the growing acceptance of Linux, is finding its way not only into corporate environments but also into a home near you. For some time now, high-end applications such as software development, computer-aided design and manufacturing, and heavy computational applications have been implemented using Linux and generic PC hardware.

by Jim Barton | October 1, 2003

Topic: Open Source

0 comments

Fun and Games:
Multi-Language Development

Computer games (or "electronic games" if you encompass those games played on console-class hardware) comprise one of the fastest-growing application markets in the world. Within the development community that creates these entertaining marvels, multi-language development is becoming more commonplace as games become more and more complex. Today, asking a development team to construct a database-enabled Web site with the requirement that it be written entirely in C++ would earn scornful looks and rolled eyes, but not long ago the idea that multiple languages were needed to accomplish a given task was scoffed at.

by Andrew M. Phelps, David M. Parks | February 24, 2004

Topic: Game Development

0 comments

Future Graphics Architectures

Graphics architectures are in the midst of a major transition. In the past, these were specialized architectures designed to support a single rendering algorithm: the standard Z buffer. Realtime 3D graphics has now advanced to the point where the Z-buffer algorithm has serious shortcomings for generating the next generation of higher-quality visual effects demanded by games and other interactive 3D applications. There is also a desire to use the high computational capability of graphics architectures to support collision detection, approximate physics simulations, scene management, and simple artificial intelligence.

by William Mark | April 28, 2008

Topic: Graphics

0 comments

Fuzzy Boundaries:
Objects, Components, and Web Services

It's easy to transform objects into components and Web services, but how do we know which is right for the job?

by Roger Sessions | December 27, 2004

Topic: Programming Languages

1 comments

GFS:
evolution on fast-forward

Kirk McKusick and Sean Quinlan discuss the origin and evolution of the Google File System.

by Kirk McKusick, Sean Quinlan | February 24, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 3

0 comments

GPUs: A Closer Look

A gamer wanders through a virtual world rendered in near-cinematic detail. Seconds later, the screen fills with a 3D explosion, the result of unseen enemies hiding in physically accurate shadows. Disappointed, the user exits the game and returns to a computer desktop that exhibits the stylish 3D look-and-feel of a modern window manager. Both of these visual experiences require hundreds of gigaflops of computing performance, a demand met by the GPU (graphics processing unit) present in every consumer PC.

by Kayvon Fatahalian, Mike Houston | April 28, 2008

Topic: Graphics

0 comments

Game Development:
Harder Than You Think

The hardest part of making a game has always been the engineering. In times past, game engineering was mainly about low-level optimization - writing code that would run quickly on the target computer, leveraging clever little tricks whenever possible.

by Jonathan Blow | February 24, 2004

Topic: Game Development

4 comments

Gaming Graphics:
The Road to Revolution

It has been a long journey from the days of multicolored sprites on tiled block backgrounds to the immersive 3D environments of modern games. What used to be a job for a single game creator is now a multifaceted production involving staff from every creative discipline. The next generation of console and home computer hardware is going to bring a revolutionary leap in available computing power; a teraflop (trillion floating-point operations per second) or more will be on tap from commodity hardware.

by Nick Porcino | May 5, 2004

Topic: Game Development

0 comments

Getting Bigger Reach Through Speech

Mark Ericson, vice president of product strategy for BlueNote Networks, argues that in order to take advantage of new voice technologies you have to have a plan for integrating that capability directly into the applications that drive your existing business processes.

July 14, 2008

Topic: VoIP

0 comments

Getting Gigascale Chips:
Challenges and Opportunities in Continuing Moore's Law

Processor performance has increased by five orders of magnitude in the last three decades, made possible by following Moore's law - that is, continued technology scaling, improved transistor performance to increase frequency, additional integration capacity to realize complex architectures, and reduced energy consumed per logic operation to keep power dissipation within limits. Advances in software technology, such as rich multimedia applications and runtime systems, exploited this performance explosion, delivering to end users higher productivity, seamless Internet connectivity, and even multimedia and entertainment.

by Shekhar Borkar | December 5, 2003

Topic: Processors

0 comments

Getting What You Measure

Four common pitfalls in using software metrics for project management

by Eric Bouwers, Joost Visser, Arie Van Deursen | May 29, 2012

Topic: Workflow Systems

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 7

1 comments

Global IT management:
structuring for scale, responsiveness, and innovation

To succeed on a global scale, businesses should focus on a trio of key elements.

by Siew Kien Sia, Christina Soh, Peter Weill | February 24, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 3

0 comments

Going with the Flow

An organization consists of two worlds. The real world contains the organization’s structure, physical goods, employees, and other organizations. The virtual world contains the organization’s computerized infrastructure, including its applications and databases. Workflow systems bridge the gap between these two worlds. They provide both a model of the organization’s design and a runtime to execute the model.

by Peter De Jong | March 29, 2006

Topic: Workflow Systems

0 comments

Hard Disk Drives:
The Good, the Bad and the Ugly!

HDDs (hard-disk drives) are like the bread in a peanut butter and jelly sandwich—sort of an unexciting piece of hardware necessary to hold the “software.” They are simply a means to an end. HDD reliability, however, has always been a significant weak link, perhaps the weak link, in data storage. In the late 1980s people recognized that HDD reliability was inadequate for large data storage systems, so redundancy was added at the system level with some brilliant software algorithms, and RAID (redundant array of inexpensive disks) became a reality. RAID moved the reliability requirements from the HDD itself to the system of data disks.

by Jon Elerath | November 15, 2007

Topic: File Systems and Storage

4 comments

Hard-disk drives:
the good, the bad, and the ugly

New drive technologies and increased capacities create new categories of failure modes that will influence system designs.

by Jon Elerath | May 15, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 6

0 comments

Hazy:
making it easier to build and maintain big-data analytics

Racing to unleash the full potential of big data with the latest statistical and machine-learning techniques.

by Arun Kumar, Feng Niu, Christopher Ré | February 21, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 3

0 comments

Hazy: Making it Easier to Build and Maintain Big-data Analytics

Racing to unleash the full potential of big data with the latest statistical and machine-learning techniques.

by Arun Kumar, Feng Niu, Christopher Ré | January 23, 2013

Topic: Databases

0 comments

Hidden in Plain Sight

In December 1997, Sun Microsystems had just announced its new flagship machine: a 64-processor symmetric multiprocessor supporting up to 64 gigabytes of memory and thousands of I/O devices. As with any new machine launch, Sun was working feverishly on benchmarks to prove the machine’s performance. While the benchmarks were generally impressive, there was one in particular—an especially complicated benchmark involving several machines—that was exhibiting unexpectedly low performance. The benchmark machine—a fully racked-out behemoth with the maximum configuration of 64 processors—would occasionally become mysteriously distracted: Benchmark activity would practically cease, but the operating system kernel remained furiously busy.

by Bryan Cantrill | February 23, 2006

Topic: Performance

5 comments

High Performance Web Sites

Google Maps, Yahoo! Mail, Facebook, MySpace, YouTube, and Amazon are examples of Web sites built to scale. They access petabytes of data, sending terabits per second to millions of users worldwide. The magnitude is awe-inspiring. Users view these large-scale Web sites from a narrower perspective. The typical user has megabytes of data that are downloaded at a few hundred kilobits per second. Users are not so interested in the massive number of requests per second being served; they care more about their individual requests. As they use these Web applications, they inevitably ask the same question: "Why is this site so slow?"

by Steve Souders | December 4, 2008

Topic: Web Services

2 comments

High-performance web sites

Want to make your Web site fly? Focus on frontend performance.

by Steve Souders | November 25, 2008

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 12

0 comments

How Do I Model State? Let Me Count the Ways

A study of the technology and sociology of Web services specifications

by Ian Foster, Savas Parastatidis, Paul Watson, Mark McKeown | March 17, 2009

Topic: Web Services

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 9

0 comments

How Fast is Your Web Site?

Web site performance data has never been more readily available.

by Patrick Meenan | March 4, 2013

Topic: Performance

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 4

2 comments

How Not to Write Fortran in Any Language

There are characteristics of good coding that transcend all programming languages.

by Donn Seeley | December 27, 2004

Topic: Programming Languages

8 comments

How OSGi Changed My Life

In the early 1980s I discovered OOP (object-oriented programming) and fell in love with it, head over heels. As usual, this kind of love meant convincing management to invest in this new technology, and most important of all, send me to cool conferences. So I pitched the technology to my manager. I sketched him the rosy future, how one day we would create applications from ready-made classes. We would get those classes from a repository, put them together, and voila, a new application would be born.

by Peter Kriens | March 4, 2008

Topic: Component Technologies

2 comments

How Will Astronomy Archives Survive the Data Tsunami?

Astronomers are collecting more data than ever. What practices can keep them ahead of the flood?

by G. Bruce Berriman, Steven L. Groom | October 18, 2011

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 12

1 comments

I/O Virtualization

Decoupling a logical device from its physical implementation offers many compelling advantages.

by Mendel Rosenblum, Carl Waldspurger | November 22, 2011

Topic: Virtualization

0 comments

I/O virtualization

Decoupling a logical device from its physical implementation offers many compelling advantages.

by Carl Waldspurger, Mendel Rosenblum | December 28, 2011

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 1

0 comments

Idempotence Is Not a Medical Condition

An essential property for reliable systems (a retry-safe sketch follows this entry)

by Pat Helland | April 14, 2012

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 5

0 comments
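
A minimal sketch of the property: tag each request with a deduplication key so that retries after a timeout have the same effect as a single delivery. The in-memory ledger and the names below are illustrative, not from the article:

    processed = {}  # request_id -> result of the first successful attempt

    def apply_payment(request_id, account, amount):
        if request_id in processed:
            return processed[request_id]  # duplicate delivery: no new effect
        result = f"debited {amount} from {account}"  # the real side effect
        processed[request_id] = result
        return result

    print(apply_payment("req-42", "alice", 10))
    print(apply_payment("req-42", "alice", 10))  # retry: same effect as once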

If You Have Too Much Data, then “Good Enough” Is Good Enough

In today's humongous database systems, clarity may be relaxed, but business needs can still be met.

by Pat Helland | May 23, 2011

Topic: Databases

5 comments

If you have too much data, then 'good enough' is good enough

In today's humongous database systems, clarity may be relaxed, but business needs can still be met.

by Pat Helland | May 25, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 6

0 comments

Improving Performance on the Internet

When it comes to achieving performance, reliability, and scalability for commercial-grade Web applications, where is the biggest bottleneck? In many cases today, we see that the limiting bottleneck is the middle mile, or the time data spends traveling back and forth across the Internet, between origin server and end user.

by Tom Leighton | December 4, 2008

Topic: Web Services

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 2

0 comments

Information Extraction:
Distilling Structured Data from Unstructured Text

In 2001 the U.S. Department of Labor was tasked with building a Web site that would help people find continuing education opportunities at community colleges, universities, and organizations across the country. The department wanted its Web site to support fielded Boolean searches over locations, dates, times, prerequisites, instructors, topic areas, and course descriptions. Ultimately it was also interested in mining its new database for patterns and educational trends. This was a major data-integration project, aiming to automatically gather detailed, structured information from tens of thousands of individual institutions every three months.

by Andrew McCallum | December 16, 2005

Topic: Semi-structured Data

0 comments

Injecting Errors for Fun and Profit

Error-detection and correction features are only as good as our ability to test them. (A toy fault-injection sketch follows this entry.)

by Steve Chessin | August 6, 2010

Topic: Failure and Recovery

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 9

0 comments
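
In the spirit of the title, a toy fault-injection sketch: corrupt one bit of a payload and confirm that the error-detection code notices. The CRC below stands in for whatever ECC the real hardware uses:

    import random
    import zlib

    payload = bytearray(b"critical sector contents")
    checksum = zlib.crc32(payload)

    # Inject the fault: flip one random bit, as a failing DIMM might.
    i = random.randrange(len(payload))
    payload[i] ^= 1 << random.randrange(8)

    print("corruption detected:", zlib.crc32(payload) != checksum)  # True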

Instant Messaging or Instant Headache?

It's a reality. You have IM (instant messaging) clients in your environment. You have already recognized that it is eating up more and more of your network bandwidth and with Microsoft building IM capability into its XP operating system and applications, you know this will only get worse. Management is also voicing concerns over the lost user productivity caused by personal conversations over this medium. You have tried blocking these conduits for conversation, but it is a constant battle.

by John Stone, Sarah Merrion | May 5, 2004

Topic: Email and IM

0 comments

Integrating RFID

RFID (radio frequency identification) has received a great deal of attention in the commercial world over the past couple of years. The excitement stems from a confluence of events. First, through the efforts of the former Auto-ID Center and its sponsor companies, the prospects of low-cost RFID tags and a networked supply chain have come within reach of a number of companies. Second, several commercial companies and government bodies, such as Wal-Mart and Target in the United States, Tesco in Europe, and the U.S. Department of Defense, have announced RFID initiatives in response to technology improvements.

by Sanjay Sarma | November 30, 2004

Topic: RFID

0 comments

Intellectual Property and Software Piracy:
The Power of IP Protection and Software Licensing, an interview with Aladdin vice president Gregg Gronowski

Intellectual Property (IP) - which includes ideas, inventions, technologies, and patented, trademarked, or copyrighted works and products - can account for as much as 80% of a software company's total market value. Since IP is considered a financial asset in today's business climate, threats to it are a real concern. In an interview with ACM Queuecast host Michael Vizard, Aladdin vice president Gregg Gronowski explains how Software Digital Rights Management solutions are the de facto standard today for protecting software IP, preventing software piracy, and enabling software licensing and compliance.

July 14, 2008

Topic: Security

1 comments

Interactive Dynamics for Visual Analysis

A taxonomy of tools that support the fluent and flexible use of visualizations

by Jeffrey Heer, Ben Shneiderman | February 20, 2012

Topic: Graphics

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 4

3 comments

Intermediate Representation

The increasing significance of intermediate representations in compilers

by Fred Chow | November 22, 2013

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 12

1 comments

Is Open Source Right for You?:
A Fictional Case Study of Open Source in a Commercial Software Shop

The media often present open source software as a direct competitor to commercial software. This depiction, usually pitting David (Linux) against Goliath (Microsoft), makes for fun reading in the weekend paper. However, it mostly misses the point of what open source means to a development organization. In this article, I use the experiences of GizmoSoft (a fictitious software company) to present some perspectives on the impact of open source software usage in a software development shop.

by David Ascher | June 14, 2004

Topic: Open Source

0 comments

Java Security Architecture Revisited

Hard technical problems and tough business challenges

by Li Gong | September 15, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 11

0 comments

Java in a Teacup

Few technology sectors evolve as fast as the wireless industry. As the market and devices mature, the need (and potential) for mobile applications grows. More and more mobile devices are delivered with the Java platform installed, enabling a large base of Java programmers to try their hand at embedded programming. Unfortunately, not all Java mobile devices are created equal, presenting many challenges to the new J2ME (Java 2 Platform, Micro Edition) programmer. Using a sample game application, this article illustrates some of the challenges associated with J2ME and Bluetooth programming.

by Stephen Johnson | May 2, 2006

Topic: Mobile Computing

0 comments

Keeping Bits Safe:
How Hard Can It Be?

As storage systems grow larger and larger, protecting their data for long-term storage is becoming more and more challenging.

by David S. H. Rosenthal | October 1, 2010

Topic: File Systems and Storage

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 11

4 comments

Keeping Score in the IT Compliance Game

Achieving developer acceptance of standardized procedures for managing applications from development to release is one of the largest hurdles facing organizations today. Establishing a standardized development-to-release workflow, often referred to as the ALM (application lifecycle management) process, is particularly critical for organizations in their efforts to meet tough IT compliance mandates. This is much easier said than done, as different development teams have created their own unique procedures that are undocumented, unclear, and nontraceable.

by Tracy Ragan | September 15, 2006

Topic: Workflow Systems

0 comments

Lack of Priority Queuing Considered Harmful

Most modern routers consist of several line cards that perform packet lookup and forwarding, all controlled by a control plane that acts as the brain of the router, performing essential tasks such as management functions, error reporting, control functions including route calculations, and adjacency maintenance. This control plane has many names; in this article it is the route processor, or RP. The route processor calculates the forwarding table and downloads it to the line cards using a control-plane bus. The line cards perform the actual packet lookup and forwarding. (A strict-priority queuing sketch follows this entry.)

by Vijay Gill | December 6, 2004

Topic: Web Security

0 comments
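
The remedy the title implies can be sketched in a few lines: serve control-plane packets strictly before best-effort data so that a flood of data traffic cannot starve the route processor. Priority values and packet names here are illustrative:

    import heapq
    import itertools

    CONTROL, DATA = 0, 1  # lower value = served first
    q, seq = [], itertools.count()

    def enqueue(priority, packet):
        # The sequence number keeps FIFO order within a priority class.
        heapq.heappush(q, (priority, next(seq), packet))

    enqueue(DATA, "data-1")
    enqueue(DATA, "data-2")
    enqueue(CONTROL, "ospf-hello")

    while q:
        _, _, packet = heapq.heappop(q)
        print(packet)  # ospf-hello comes out first, then data-1, data-2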

Languages, Levels, Libraries, and Longevity

New programming languages are born every day. Why do some succeed and some fail? In 50 years, we've already seen numerous programming systems come and (mostly) go, although some have remained a long time and will probably do so for decades, centuries, or even millennia. The questions about language designs, levels of abstraction, libraries, and resulting longevity are numerous. Why do new languages arise? Why is it sometimes easier to write new software than to adapt old software that works? How many different levels of languages make sense? Why do some languages last in the face of "better" ones?

by John R. Mashey | December 27, 2004

Topic: Programming Languages

0 comments

Leaking Space

Eliminating memory hogs

by Neil Mitchell | October 23, 2013

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 11

0 comments

Learning from the Web

In the past decade we have seen a revolution in computing that transcends anything seen to date, not only in terms of scope and reach but also in terms of how we think about what makes up “good” and “bad” computing. The Web taught us several unintuitive lessons:

by Adam Bosworth | December 8, 2005

Topic: Semi-structured Data

0 comments

Lessons from the Floor

The January monthly service quality meeting started normally—around the table were representatives from development, operations, marketing, and product management, and the agenda focused on the prior month’s performance. As usual, customer-impacting incidents and quality of service were key topics, and I was armed with the numbers showing the average uptime for the part of the service that I represent: MSN, the Microsoft family of services that includes e-mail, Instant Messenger, news, weather and sports, etc.

by Daniel Rogers | January 31, 2006

Topic: Distributed Computing

0 comments

Lessons from the Letter

Security flaws in a large organization

by George V. Neville-Neil | July 22, 2010

Topic: Security

1 comments

Leveraging Application Frameworks

In today's competitive, fast-paced computing industry, successful software must increasingly be: (1) extensible to support successions of quick updates and additions to address new requirements and take advantage of emerging markets; (2) flexible to support a growing range of multimedia data types, traffic flows, and end-to-end QoS (quality of service) requirements; (3) portable to reduce the effort required to support applications on heterogeneous operating-system platforms and compilers; (4) reliable to ensure that applications are robust and tolerant to faults; (5) scalable to enable applications to handle larger numbers of clients simultaneously; and (6) affordable to ensure that the total ownership costs of software acquisition and evolution are not prohibitively high.

by Douglas C Schmidt, Aniruddha Gokhale, Balachandran Natarajan | August 31, 2004

Topic: Component Technologies

0 comments

Major-league SEMAT:
why should an executive care?

Becoming better, faster, cheaper, and happier.

by Ivar Jacobson, Pan-Wei Ng, Ian Spence, Paul E. McMahon | March 24, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 4

0 comments

Major-league SEMAT: Why Should an Executive Care?

Becoming better, faster, cheaper, and happier

by Ivar Jacobson, Pan-Wei Ng, Ian Spence, Paul E. McMahon | February 27, 2014

Topic: Development

0 comments

Making SIP Make Cents

P2P payments using SIP could enable new classes of applications and business models. The Session Initiation Protocol (SIP) is used to set up realtime sessions in IP-based networks. These sessions might be for audio, video, or IM communications, or they might be used to relay presence information. SIP service providers are mainly focused on replicating the service provided by the PSTN (public switched telephone network) or the PLMN (public land mobile network) in an Internet-based environment.

by Jason Fischl, Hannes Tschofenig | March 9, 2007

Topic: SIP

0 comments

Making Sense of Revision-control Systems

Whether distributed or centralized, all revision-control systems come with complicated sets of tradeoffs. How do you find the best match between tool and team?

by Bryan O'Sullivan | August 21, 2009

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 9

7 comments

Making a Case for Efficient Supercomputing

A supercomputer evokes images of "big iron" and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.

by Wu-chun Feng | December 5, 2003

Topic: Power Management

0 comments

Making the Mobile Web Faster

Mobile performance issues? Fix the back end, not just the client.

by Kate Matsudaira | January 31, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 3

1 comments

Making the Web Faster with HTTP 2.0

HTTP continues to evolve

by Ilya Grigorik | December 3, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 12

0 comments

Managing Collaboration

Jeff Johnstone of TechExcel explains why there is a need for a new approach to application lifecycle management that better reflects the business requirements and challenges facing development teams.

July 14, 2008

Topic: Development

0 comments

Managing Contention for Shared Resources on Multicore Processors

Contention for caches, memory controllers, and interconnects can be alleviated by contention-aware scheduling algorithms.

by Alexandra Fedorova, Sergey Blagodurov, Sergey Zhuravlev | January 20, 2010

Topic: Processors

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 2

1 comments

Managing Semi-Structured Data

I vividly remember during my first college class my fascination with the relational database—an information oasis that guaranteed a constant flow of correct, complete, and consistent information at our disposal. In that class I learned how to build a schema for my information, and I learned that to obtain an accurate schema there must be a priori knowledge of the structure and properties of the information to be modeled.

by Daniela Florescu | December 8, 2005

Topic: Semi-structured Data

1 comments

Managing Technical Debt

Shortcuts that save money and time today can cost you down the road.

by Eric Allman | March 23, 2012

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 5

2 comments

Massively Multiplayer Middleware

Wish is a multiplayer, online, fantasy role-playing game being developed by Mutable Realms. It differs from similar online games in that it allows tens of thousands of players to participate in a single game world (instead of the few hundred players supported by other games). Allowing such a large number of players requires distributing the processing load over a number of machines and raises the problem of choosing an appropriate distribution technology.

by Michi Henning | February 24, 2004

Topic: Game Development

0 comments

Maximizing Power Efficiency with Asymmetric Multicore Systems

Asymmetric multicore systems promise to use a lot less energy than conventional symmetric processors. How can we develop software that makes the most out of this potential?

by Alexandra Fedorova, Juan Carlos Saez, Daniel Shelepov, Manuel Prieto | November 20, 2009

Topic: Power Management

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 12

1 comments

Meet the Virts

When you dig into the details of supposedly overnight success stories, you frequently discover that they've actually been years in the making. Virtualization has been around for more than 30 years, since the days when some of you were feeding stacks of punch cards into very physical machines, yet in 2007 it tipped. VMware was the IPO sensation of the year; in November 2007 no fewer than four major operating system vendors (Microsoft, Oracle, Red Hat, and Sun) announced significant new virtualization capabilities; and among fashionable technologists it seems virtual has become the new black.

by Tom Killalea | March 4, 2008

Topic: Virtualization

0 comments

Metamorphosis: the Coming Transformation of Translational Systems Biology

In the future computers will mine patient data to deliver faster, cheaper healthcare, but how will we design them to give informative causal explanations? Ideas from philosophy, model checking, and statistical testing can pave the way for the needed translational systems biology.

by Samantha Kleinberg, Bud Mishra | October 12, 2009

Topic: Bioscience

0 comments

Microsoft's protocol documentation program:
interoperability testing at scale

A discussion with Nico Kicillof, Wolfgang Grieskamp, and Bob Binder.

by CACM Staff | June 22, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 7

0 comments

Mobile Application Development: Web vs. Native

Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?

by Andre Charland, Brian LeRoux | April 12, 2011

Topic: Mobile Computing

5 comments

Mobile Media:
Making It a Reality

Many future mobile applications are predicated on the existence of rich, interactive media services. The promise and challenge of such services is to provide applications under the most hostile conditions - and at low cost to a user community that has high expectations. Context-aware services require information about who, where, when, and what a user is doing and must be delivered in a timely manner with minimum latency. This article reveals some of the current state-of-the-art "magic" and the research challenges.

by Fred Kitson | June 7, 2005

Topic: Mobile Computing

0 comments

Mobile application development:
web vs. native

Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?

by Andre Charland, Brian Leroux | April 21, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 5

0 comments

Modeling People and Places with Internet Photo Collections

Understanding the world from the sea of online photos

by David Crandall, Noah Snavely | May 11, 2012

Topic: Graphics

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 6

6 comments

Modern Performance Monitoring

The modern Unix server floor can be a diverse universe of hardware from several vendors and software from several sources. Often, the personnel needed to resolve server floor performance issues are not available or, for security reasons, not allowed to be present at the very moment of occurrence. Even when, as luck might have it, the right personnel are actually present to witness a performance “event,” the tools to measure and analyze the performance of the hardware and software have traditionally been sparse and vendor-specific.

by Mark Purdy | February 23, 2006

Topic: Performance

0 comments

Modern System Power Management

The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS architecture changes. The effort that ACPI's definition and implementation has entailed is worth the trouble because of how much flexibility it gives to the OS (and ultimately the user) to control power management policy and implementation.

by Andrew Grover | December 5, 2003

Topic: Power Management

0 comments

Monitoring and Control of Large Systems with MonALISA

MonALISA developers describe how it works, the key design principles behind it, and the biggest technical challenges in building it.

by Iosif Legrand, Ramiro Voicu, Catalin Cirstoiu, Costin Grigoras, Latchezar Betev, Alexandru Costan | July 30, 2009

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 9

0 comments

Monitoring, at Your Service

Internet services are becoming more and more a part of our daily lives. We derive value from them, depend on them, and are now beginning to assume their ubiquity as we do the phone system and electricity grid. The implementation of Internet services, though, is an unsolved problem, and Internet services remain far from fulfilling their potential in our world.

by Bill Hoffman | January 31, 2006

Topic: Distributed Computing

0 comments

Moving to the edge:
a CTO roundtable on network virtualization

Leading experts debate how virtualization and clouds impact network service architectures.

by Mache Creeger | July 26, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 8

0 comments

Multipath TCP

Decoupled from IP, TCP is at last able to support multihomed hosts.

by Christoph Paasch, Olivier Bonaventure | March 4, 2014

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 4

0 comments

Multitier Programming in Hop

A first step toward programming 21st-century applications

by Manuel Serrano, Gérard Berry | July 9, 2012

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 8

0 comments

NUMA (Non-Uniform Memory Access): An Overview

NUMA is becoming more common as memory controllers move closer to the execution units on microprocessors.

by Christoph Lameter | August 9, 2013

Topic: Processors

1 comments

National Internet Defense - Small States on the Skirmish Line

Attacks in Estonia and Georgia highlight key vulnerabilities in national Internet infrastructure.

by Ross Stapleton-Gray, Bill Woodcock | January 19, 2011

Topic: Security

0 comments

National internet defense - small states on the skirmish line

by Ross Stapleton-Gray, William Woodcock | February 23, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 3

0 comments

Network Forensics

The dictionary defines forensics as "the use of science and technology to investigate and establish facts in criminal or civil courts of law." I am more interested, however, in the usage common in the computer world: using evidence remaining after an attack on a computer to determine how the attack was carried out and what the attacker did.

by Ben Laurie | August 31, 2004

Topic: Web Security

0 comments

Network Front-end Processors, Yet Again

The history of NFE processors sheds light on the tradeoffs involved in designing network stack software.

by Mike O'Dell | April 17, 2009

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 6

4 comments

Network Virtualization:
Breaking the Performance Barrier

The recent resurgence in popularity of virtualization has led to its use in a growing number of contexts, many of which require high-performance networking. Consider server consolidation, for example. The efficiency of network virtualization directly impacts the number of network servers that can effectively be consolidated onto a single physical machine. Unfortunately, modern network virtualization techniques incur significant overhead, which limits the achievable network performance. We need new network virtualization techniques to realize the full benefits of virtualization in network-intensive domains.

by Scot Rixner | March 4, 2008

Topic: Virtualization

0 comments

Nine IM Accounts and Counting

Instant messaging (IM) has become nearly as ubiquitous as e-mail, in some cases—on your teenager’s computer, for example—far surpassing e-mail in popularity. But it has gone far beyond teenagers’ insular world to business, where it is becoming a useful communication tool.

by Joe Hildebrand | January 28, 2004

Topic: Email and IM

0 comments

No Source Code? No Problem!

Typical software development involves one of two processes: the creation of new software to fit particular requirements or the modification (maintenance) of old software to fix problems or fit new requirements. These transformations happen at the source-code level. But what if the problem is not the maintenance of old software but the need to create a functional duplicate of the original? And what if the source code is no longer available?

by Peter Phillips, George Phillips | October 2, 2003

Topic: Development

0 comments

Node at LinkedIn:
the pursuit of thinner, lighter, faster

A discussion with Kiran Prasad, Kelly Norton, and Terry Coatta.

January 24, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 2

0 comments

Nonblocking Algorithms and Scalable Multicore Programming

Exploring some alternatives to lock-based synchronization

by Samy Al Bahra | June 11, 2013

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 7

2 comments

Not Your Father's PBX?

Perhaps no piece of office equipment is more taken for granted than the common business telephone. The technology behind this basic communication device, however, is in the midst of a major transformation. Businesses are now converging their voice and data networks in order to simplify their network operations and take advantage of the new functional benefits and capabilities that a converged network delivers, from greater productivity and cost savings to enhanced mobility.

by James E. Coffman | October 25, 2004

Topic: VoIP

0 comments

OCaml for the Masses

Why the next language you learn should be functional

by Yaron Minsky | September 27, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 11

37 comments

ORM in Dynamic Languages

A major component of most enterprise applications is the code that transfers objects in and out of a relational database. The easiest solution is often to use an ORM (object-relational mapping) framework, which allows the developer to declaratively define the mapping between the object model and database schema and express database-access operations in terms of objects. This high-level approach significantly reduces the amount of database-access code that needs to be written and boosts developer productivity. (A minimal mapping sketch follows this entry.)

by Chris Richardson | July 28, 2008

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 4

0 comments
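
The declarative idea reduces to: state the mapping once and derive the SQL generically. A minimal sketch with Python's sqlite3 module, far short of a real ORM (class and helper names are illustrative):

    import sqlite3
    from dataclasses import astuple, dataclass, fields

    @dataclass
    class Account:
        __table__ = "accounts"  # class attribute, not a mapped column
        id: int
        owner: str
        balance: int

    def save(db, obj):
        # Derive the INSERT statement from the declared fields.
        cols = [f.name for f in fields(obj)]
        sql = (f"INSERT INTO {obj.__table__} ({', '.join(cols)}) "
               f"VALUES ({', '.join('?' * len(cols))})")
        db.execute(sql, astuple(obj))

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance INTEGER)")
    save(db, Account(1, "alice", 100))
    print(db.execute("SELECT * FROM accounts").fetchall())  # [(1, 'alice', 100)]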

Ode to a Sailor

sailor, fleeting mood image of you; all sailor in bear grace, rough hands and poetic dream;

by Donna Carnes | July 28, 2008

0 comments

Of Processors and Processing

Digital signal processing is a stealth technology. It is the core enabling technology in everything from your cellphone to the Mars Rover. It goes much further than just enabling a one-time breakthrough product. It provides ever-increasing capability; compare the performance gains made by dial-up modems with the recent performance gains of DSL and cable modems. Remarkably, digital signal processing has become ubiquitous with little fanfare, and most of its users are not even aware of what it is.

by Gene Frantz, Ray Simar | April 16, 2004

Topic: DSPs

0 comments

On Mapping Algorithms to DSP Architectures

Our complex world is characterized by representation, transmission, and storage of information - and information is mostly processed in digital form. With the advent of DSPs (digital signal processors), engineers are able to implement complex algorithms with relative ease. Today we find DSPs all around us - in cars, digital cameras, MP3 and DVD players, modems, and so forth. Their widespread use and deployment in complex systems has triggered a revolution in DSP architectures, which in turn has enabled engineers to implement algorithms of ever-increasing complexity.

by Homayoun Shahri | April 16, 2004

Topic: DSPs

0 comments

On Plug-ins and Extensible Architectures

Extensible application architectures such as Eclipse offer many advantages, but one must be careful to avoid "plug-in hell."

by Dorian Birsan | March 18, 2005

Topic: Computer Architecture

0 comments

One Step Ahead

Every day IT departments are involved in an ongoing struggle against hackers trying to break into corporate networks. A break-in can carry a hefty price: loss of valuable information, tarnishing of the corporate image and brand, service interruption, and hundreds of resource hours of recovery time. Unlike other aspects of information technology, security is adversarial; it pits IT departments against hackers.

by Vlad Gorelik | February 2, 2007

Topic: Security

0 comments

Online Algorithms in High-frequency Trading

The challenges faced by competing HFT algorithms (an online-estimator sketch follows this entry)

by Jacob Loveless, Sasha Stoikov, Rolf Waeber | October 7, 2013

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 10

1 comments
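
"Online" here means each tick is processed once, in constant space. As a generic illustration of that regime (not taken from the article), consider an exponentially weighted moving average:

    def ewma_stream(alpha):
        # Online estimator: O(1) work and memory per incoming price.
        est = None
        def update(price):
            nonlocal est
            est = price if est is None else alpha * price + (1 - alpha) * est
            return est
        return update

    update = ewma_stream(alpha=0.2)
    for tick in [100.0, 100.5, 99.8, 101.2]:
        print(round(update(tick), 3))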

Oops! Coping with Human Error in IT Systems

Human operator error is one of the most insidious sources of failure and data loss in today's IT environments. In early 2001, Microsoft suffered a nearly 24-hour outage in its Web properties as a result of a human error made while configuring a name resolution system. Later that year, an hour of trading on the Nasdaq stock exchange was disrupted because of a technician's mistake while testing a development system. More recently, human error has been blamed for outages in instant messaging networks, for security and privacy breaches, and for banking system failures.

by Aaron B. Brown | December 6, 2004

Topic: Failure and Recovery

0 comments

Open Source to the Core

The open source development model is not exactly new. Individual engineers have been using open source as a collaborative development methodology for decades. Now that it has come to the attention of upper and middle management, however, it's finally being openly acknowledged as a commercial engineering force-multiplier and important option for avoiding significant software development costs.

by Jordan Hubbard | June 14, 2004

Topic: Open Source

0 comments

Open Spectrum:
A Path to Ubiquitous Connectivity

Just as open standards and open software rocked the networking and computing industry, open spectrum is poised to be a disruptive force in the use of radio spectrum for communications. At the same time, open spectrum will be a major element that helps continue the Internet's march to integrate and facilitate all electronic communications with open standards and commodity hardware.

by Robert J. Berger | July 30, 2003

Topic: Mobile Computing

0 comments

Open vs. Closed:
Which Source is More Secure?

There is no better way to start an argument among a group of developers than proclaiming Operating System A to be "more secure" than Operating System B. I know this from first-hand experience, as previous papers I have published on this topic have led to reams of heated e-mails directed at me, including some that were, quite literally, physically threatening. Despite the heat (not light!) generated from attempting to investigate the relative security of different software projects, investigate we must.

by Richard Ford | February 2, 2007

Topic: Security

0 comments

OpenFlow: A Radical New Idea in Networking

An open standard that enables software-defined networking

by Thomas A. Limoncelli | June 20, 2012

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 8

5 comments

Orchestrating an Automated Test Lab

Networking and the Internet are encouraging increasing levels of interaction and collaboration between people and their software. Whether users are playing games or composing legal documents, their applications need to manage the complex interleaving of actions from multiple machines over potentially unreliable connections. As an example, Silicon Chalk is a distributed application designed to enhance the in-class experience of instructors and students. Its distributed nature requires that we test with multiple machines. Manual testing is too tedious, expensive, and inconsistent to be effective. While automating our testing, however, we have found it very labor intensive to maintain a set of scripts describing each machine's portion of a given test.

by Michael Donat | February 16, 2005

Topic: Quality Assurance

0 comments

Order from Chaos

There is probably little argument that the past decade has brought the “big bang” in the amount of online information available for processing by humans and machines. Two of the trends that it spurred (among many others) are: first, there has been a move to more flexible and fluid (semi-structured) models than the traditional centralized relational databases that stored most of the electronic data before; second, today there is simply too much information available to be processed by humans, and we really need help from machines.

by Natalya Noy | December 8, 2005

Topic: Semi-structured Data

0 comments

Other People's Data

Companies have access to more types of external data than ever before. How can they integrate it most effectively?

by Stephen Petschulat | November 13, 2009

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 1

0 comments

Outsourcing: Devising a Game Plan

Your CIO just summoned you to duty by handing off the decision-making power about whether to outsource next year's big development project to rewrite the internal billing system. That's quite a daunting task! How can you possibly begin to decide if outsourcing is the right option for your company? There are a few strategies that you can follow to help you avoid the pitfalls of outsourcing and make informed decisions. Outsourcing is not exclusively a technical issue, but it is a decision that architects or development managers are often best qualified to make because they are in the best position to know what technologies make sense to keep in-house.

by Adam Kolawa | December 6, 2004

Topic: Distributed Development

1 comments

Parallel Programming with Transactional Memory

While sometimes even writing regular, single-threaded programs can be quite challenging, trying to split a program into multiple pieces that can be executed in parallel adds a whole dimension of additional problems. Drawing upon the transaction concept familiar to most programmers, transactional memory was designed to solve some of these problems and make parallel programming easier. Ulrich Drepper from Red Hat shows us how it's done.
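
For readers meeting the idea cold, here is a minimal sketch of what the programming model looks like, built on GCC's experimental -fgnu-tm support; the account variables and function names are illustrative assumptions, not code from Drepper's article:

    // Build with: g++ -fgnu-tm transfer_sketch.cpp
    #include <iostream>

    static long checking = 1000;
    static long savings  = 0;

    void transfer(long amount) {
        // The TM runtime makes this block appear atomic to other
        // transactions; no mutex is declared or acquired by hand.
        __transaction_atomic {
            checking -= amount;
            savings  += amount;
        }
    }

    int main() {
        transfer(250);
        std::cout << checking << " " << savings << "\n";  // prints: 750 250
        return 0;
    }

The appeal is exactly the one described above: the programmer states what must be atomic and leaves the how (locking, retry, conflict detection) to the compiler and runtime.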

by Ulrich Drepper | October 24, 2008

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 2

1 comments

Passing a Language through the Eye of a Needle

How the embeddability of Lua impacted its design

by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, Waldemar Celes | May 12, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 7

3 comments

Passively Measuring TCP Round-trip Times

A close look at RTT measurements with TCP

by Stephen D. Strowes | October 28, 2013

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 10

2 comments

Patching the Enterprise

Organizations of all sizes are spending considerable efforts on getting patch management right - their businesses depend on it.

by George Brandman | March 18, 2005

Topic: Patching and Deployment

0 comments

People and Process

When Mike Hammer and I published Reengineering the Corporation in 1992, we understood the impact that real business process change would have on people. I say “real” process change, because managers have used the term reengineering to describe any and all corporate change programs—even downsizings. One misguided executive told me that his company did not know how to do real reengineering; so it just downsized large departments and business units, and expected that the people who were left would figure out how to get their work done. Sadly, this is how some companies still practice process redesign—leaving people overworked and demoralized, while customers experience bad service and poor quality.

by James Champy | March 29, 2006

Topic: Workflow Systems

0 comments

People in Our Software

People are not well represented in today's software. With the exception of IM (instant messaging) clients, today's applications offer few clues that people are actually living beings. Static strings depict things associated with people like e-mail addresses, phone numbers, and home-page URLs. Applications also tend to show the same information about a person, no matter who is viewing it.

by John Richards, Jim Christensen | February 24, 2004

Topic: Social Computing

0 comments

Perfect Storm:
The Insider, Naivety, and Hostility

Every year corporations and government installations spend millions of dollars fortifying their network infrastructures. Firewalls, intrusion detection systems, and antivirus products stand guard at network boundaries, and individuals monitor countless logs and sensors for even the subtlest hints of network penetration. Vendors and IT managers have focused on keeping the wily hacker outside the network perimeter, but very few technological measures exist to guard against insiders - those entities that operate inside the fortified network boundary. The 2002 CSI/FBI survey estimates that 70 percent of successful attacks come from the inside. Several other estimates place those numbers even higher.

by Herbert H Thompson, Richard Ford | August 31, 2004

Topic: Security

0 comments

Performance Anti-Patterns

Performance pathologies can be found in almost any software, from user to kernel, applications, drivers, etc. At Sun we’ve spent the last several years applying state-of-the-art tools to a Unix kernel, system libraries, and user applications, and have found that many apparently disparate performance problems in fact have the same underlying causes. Since software patterns are considered abstractions of positive experience, we can talk about the various approaches that led to these performance problems as anti-patterns—something to be avoided rather than emulated.

by Bart Smaalders | February 23, 2006

Topic: Performance

0 comments

Phishing Forbidden

Phishing is a significant risk facing Internet users today.1,2 Through e-mails or instant messages, users are led to counterfeit Web sites designed to trick them into divulging usernames, passwords, account numbers, and personal information. It is up to the user to ensure the authenticity of the Web site.

by Naveen Agarwal, Scott Renfro, Arturo Bejar | August 16, 2007

Topic: Web Development

1 comments

Photoshop scalability:
keeping it simple

Clem Cole and Russell Williams discuss Photoshop's long history with parallelism, and what is now seen as the chief challenge.

by ACM Case Study | September 30, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 10

0 comments

Playing for Keeps

Inflection points come at you without warning and quickly recede out of reach. We may be nearing one now. If so, we are now about to play for keeps, and “we” doesn’t mean just us security geeks. If anything, it’s because we security geeks have not worked the necessary miracles already that an inflection point seems to be approaching at high velocity.

by Daniel E. Geer | November 10, 2006

Topic: Web Security

0 comments

Postmortem Debugging in Dynamic Environments

Modern dynamic languages lack tools for understanding software failures.

by David Pacheco | October 3, 2011

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 12

0 comments

Power-Efficient Software

Power-manageable hardware can help save energy, but what can software developers do to address the problem?

by Eric Saxe | January 8, 2010

Topic: Power Management

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 2

2 comments

Powering Down

Power management - from laptops to rooms full of servers - is a topic of interest to everyone. In the beginning there was the desktop computer. It ran at a fixed speed and consumed less power than the monitor it was plugged into. Where computers were portable, their sheer size and weight meant that you were more likely to be limited by physical strength than battery life. It was not a great time for power management.

by Matthew Garrett | January 17, 2008

Topic: Power Management

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 9

0 comments

Principles of Robust Timing over the Internet

The key to synchronizing clocks over networks is taming delay variability.

by Julien Ridoux, Darryl Veitch | April 21, 2010

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 5

4 comments

Probing Biomolecular Machines with Graphics Processors

The evolution of GPU processors and programming tools is making advanced simulation and analysis techniques accessible to a growing community of biomedical scientists.

by James C. Phillips, John E. Stone | October 6, 2009

Topic: Bioscience

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 10

0 comments

Programmers Are People, too

I would like to start out this article with an odd, yet surprisingly uncontroversial assertion, which is this: programmers are human. I wish to use this as a premise to explore how to improve the programmer’s lot. So, please, no matter your opinion on the subject, grant me this assumption for the sake of argument.

by Ken Arnold | July 6, 2005

Topic: HCI

4 comments

Programming Without a Net

What if your programs didn't exit when they accidentally accessed a NULL pointer? What if all their global variables were seen by all the other applications in the system? Do you check how much memory your programs use? Unlike more traditional software platforms, embedded systems provide programmers with little protection against these and many other types of problems. This is not done capriciously, just to make working with them more difficult. Traditional software platforms, those that support a process model, exact a large price in terms of total system complexity, program response time, memory requirements, and execution speed.

by George Neville-Neil | April 1, 2003

Topic: Embedded Systems

0 comments

Provenance in Sensor Data Management

A cohesive, independent solution for bringing provenance to scientific research

by Zachary Hensley, Jibonananda Sanyal, Joshua New | January 23, 2014

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 2

0 comments

Proving the Correctness of Nonblocking Data Structures

So you've decided to use a nonblocking data structure, and now you need to be certain of its correctness. How can this be achieved? When a multithreaded program is too slow because of a frequently acquired mutex, the programmer's typical reaction is to question whether this mutual exclusion is indeed required. This doubt becomes even more pronounced if the mutex protects accesses to only a single variable performed using a single instruction at every site. Removing synchronization improves performance, but can it be done without impairing program correctness?
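
As a concrete (and deliberately tiny) instance of the single-variable case described above, consider a hit counter whose mutex is replaced by an atomic; this sketch and its names are mine, not Desnoyers's, and the memory-order choice is precisely the sort of decision that needs the correctness argument the article develops:

    // Build with: g++ -std=c++11 -pthread counter_sketch.cpp
    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<long> hits{0};   // replaces: a mutex guarding a plain long

    void worker() {
        for (int i = 0; i < 100000; ++i)
            hits.fetch_add(1, std::memory_order_relaxed);  // no lock taken
    }

    int main() {
        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        // Safe only if the counter was the sole state the old mutex
        // protected; if the lock also ordered accesses to other data,
        // relaxed atomics would not preserve that ordering.
        std::cout << hits.load() << "\n";  // prints: 200000
        return 0;
    }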

by Mathieu Desnoyers | June 2, 2013

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 7

0 comments

Purpose-Built Languages

While often breaking the rules of traditional language design, the growing ecosystem of purpose-built "little" languages is an essential part of systems development.

by Mike Shapiro | February 23, 2009

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 4

3 comments

Putting It All Together

With the growing complexity of embedded systems, more and more parts of a system are reused or supplied, often from external sources. These parts range from single hardware components or software processes to hardware-software (HW-SW) subsystems. They must cooperate and share resources with newly developed parts such that all of the design constraints are met. This, simply speaking, is the integration task, which ideally should be a plug-and-play procedure. This does not happen in practice, however, not only because of incompatible interfaces and communication standards but also because of specialization.

by Rolf Ernst | April 1, 2003

Topic: Embedded Systems

0 comments

Quality Assurance:
Much More than Testing

Quality assurance isn't just testing, or analysis, or wishful thinking. Although it can be boring, difficult, and tedious, QA is nonetheless essential.

by Stuart Feldman | February 16, 2005

Topic: Quality Assurance

0 comments

Quality software costs money—Heartbleed was free

How to generate funding for free and open source software.

by Poul-Henning Kamp | July 23, 2014

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 8

0 comments

Rate-limiting State

The edge of the Internet is an unruly place

by Paul Vixie | February 4, 2014

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 4

7 comments

Reading, Writing, and Code

Forty years ago, when computer programming was an individual experience, the need for easily readable code wasn't on any priority list. Today, however, programming usually is a team-based activity, and writing code that others can easily decipher has become a necessity. Creating and developing readable code is not as easy as it sounds.

by Diomidis Spinellis | December 5, 2003

Topic: Development

1 comments

Real-World Concurrency

In this look at how concurrency affects practitioners in the real world, Cantrill and Bonwick argue that much of the anxiety over concurrency is unwarranted.

by Bryan Cantrill, Jeff Bonwick | October 24, 2008

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 11

0 comments

Realtime Computer Vision with OpenCV

Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.

by Kari Pulli, Anatoly Baksheev, Kirill Kornyakov, Victor Eruhimov | April 22, 2012

Topic: HCI

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 6

9 comments

Realtime GPU Audio

Finite difference-based sound synthesis using graphics processors

by Bill Hsu, Marc Sosnick-Pérez | May 8, 2013

Topic: Processors

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 6

3 comments

Realtime Garbage Collection

Traditional computer science deals with the computation of correct results. Realtime systems interact with the physical world, so they have a second correctness criterion: they have to compute the correct result within a bounded amount of time. Simply building functionally correct software is hard enough. When timing is added to the requirements, the cost and complexity of building the software increase enormously.

by David F. Bacon | February 2, 2007

Topic: Programming Languages

1 comments

Reconfigurable Future

The ability to produce cheaper, more compact chips is a double-edged sword.

by Mark Horowitz | July 14, 2008

Topic: Processors

0 comments

Resilience engineering:
learning to embrace failure

A discussion with Jesse Robbins, Kripa Krishnan, John Allspaw, and Tom Limoncelli.

October 24, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 11

0 comments

Resolved:
the internet is no place for critical infrastructure

Risk is a necessary consequence of dependence.

by Dan Geer | May 23, 2013

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 6

0 comments

Rethinking Passwords

Our authentication system is lacking. Is improvement possible?

by William Cheswick | December 31, 2012

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 2

6 comments

Returning Control to the Programmer:
SIMD Intrinsics for Virtual Machines

Exposing SIMD units within interpreted languages could simplify programs and unleash floods of untapped processor power.

by Jonathan Parri, Daniel Shapiro, Miodrag Bolic, Voicu Groza | February 24, 2011

Topic: Virtual Machines

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 4

3 comments

Revisiting Network I/O APIs: The netmap Framework

It is possible to achieve huge performance improvements in the way packet processing is done on modern operating systems.

by Luigi Rizzo | January 17, 2012

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 3

17 comments

Rules for Mobile Performance Optimization

An overview of techniques to speed page loading

by Tammy Everts | August 1, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 8

0 comments

SAGE: Whitebox Fuzzing for Security Testing

SAGE has had a remarkable impact at Microsoft.

by Patrice Godefroid, Michael Y. Levin, David Molnar | January 11, 2012

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 3

0 comments

SIP:
Basics and Beyond

More than just a simple telephony application protocol, SIP is a framework for developing communications systems. Chances are you're already using SIP (Session Initiation Protocol). It is one of the key innovations driving the current evolution of communications systems. Its first major use has been signaling in Internet telephony. Large carriers have been using SIP inside their networks for interconnect and trunking across long distances for several years. If you've made a long-distance call, part of that call probably used SIP.

by Robert Sparks | March 9, 2007

Topic: SIP

0 comments

Scalable Parallel Programming with CUDA

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop mainstream application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.

by John Nickolls, Ian Buck, Michael Garland, Kevin Skadron | April 28, 2008

Topic: Graphics

1 comments

Scalable SQL

How do large-scale sites and applications remain SQL-based?

by Michael Rys | April 19, 2011

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 6

3 comments

Scaling Existing Lock-based Applications with Lock Elision

Lock elision enables existing lock-based programs to achieve the performance benefits of nonblocking synchronization and fine-grain locking with minor software engineering effort.

by Andi Kleen | February 8, 2014

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 3

1 comments

Scaling in Games & Virtual Worlds

I used to be a systems programmer, working on infrastructure used by banks, telecom companies, and other engineers. I worked on operating systems. I worked on distributed middleware. I worked on programming languages. I wrote tools. I did all of the things that hard-core systems programmers do.

by Jim Waldo | January 8, 2009

Topic: Game Development

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 8

3 comments

Search Considered Integral

Most corporations must leverage their data for competitive advantage. The volume of data available to a knowledge worker has grown dramatically over the past few years, and, while a good amount lives in large databases, an important subset exists only as unstructured or semi-structured data. Without the right systems, this leads to a continuously deteriorating signal-to-noise ratio, creating an obstacle for busy users trying to locate information quickly. Three flavors of enterprise search solutions help improve knowledge discovery:

by Ryan Barrows, Jim Traverso | June 30, 2006

Topic: Search Engines

0 comments

Searching vs. Finding

Finding information and organizing it so that it can be found are two key aspects of any company's knowledge management strategy. Nearly everyone is familiar with the experience of searching with a Web search engine and using a search interface to search a particular Web site once you get there. (You may have even noticed that the latter often doesn't work as well as the former.) After you have a list of hits, you typically spend a significant amount of time following links, waiting for pages to download, reading through a page to see if it has what you want, deciding that it doesn't, backing up to try another link, deciding to try another way to phrase your request, et cetera.

by William A Woods | May 5, 2004

Topic: Search Engines

0 comments

Securing Elasticity in the Cloud

Elastic computing has great potential, but many security challenges remain.

by Dustin Owens | May 6, 2010

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 6

0 comments

Security - Problem Solved?

There are plenty of security problems that have solutions. Yet, our security problems don’t seem to be going away. What’s wrong here? Are consumers being offered snake oil and rejecting it? Are they not adopting solutions they should be adopting? Or is there something else at work entirely? We’ll look at a few places where the world could easily be a better place, but isn’t, and build some insight as to why.

by John Viega | July 6, 2005

Topic: Security

1 comments

Security in the Browser

Web browsers leave users vulnerable to an ever-growing number of attacks. Can we make them secure while preserving their usability?

by Thomas Wadlow, Vlad Gorelik | March 16, 2009

Topic: Web Security

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 5

0 comments

Security is Harder than You Think

Many developers see buffer overflows as the biggest security threat to software and believe that there is a simple two-step process to secure software: switch from C or C++ to Java, then start using SSL (Secure Sockets Layer) to protect data communications. It turns out that this naïve tactic isn't sufficient. In this article, we explore why software security is harder than people expect, focusing on the example of SSL.

by John Viega, Matt Messier | August 31, 2004

Topic: Security

0 comments

Security: The Root of the Problem

Security bug? My programming language made me do it! It doesn't seem that a day goes by without someone announcing a critical flaw in some crucial piece of software or other. Is software that bad? Are programmers so inept? What the heck is going on, and why is the problem getting worse instead of better? One distressing aspect of software security is that we fundamentally don't seem to "get it."

by Marcus J Ranum | August 31, 2004

Topic: Security

0 comments

Self-Healing Networks

The obvious advantage to wireless communication over wired is, as they say in the real estate business, location, location, location. Individuals and industries choose wireless because it allows flexibility of location—whether that means mobility, portability, or just ease of installation at a fixed point. The challenge of wireless communication is that, unlike the mostly error-free transmission environments provided by cables, the environment that wireless communications travel through is unpredictable. Environmental radio-frequency (RF) "noise" produced by powerful motors, other wireless devices, microwaves—and even the moisture content in the air—can make wireless communication unreliable.

by Robert Poor, Cliff Bowman, Charlotte Burgess Auburn | July 30, 2003

Topic: Networks

1 comments

Self-Healing in Modern Operating Systems

A few early steps show there's a long (and bumpy) road ahead.

by Michael W. Shapiro | December 27, 2004

Topic: Failure and Recovery

0 comments

Sender-side Buffers and the Case for Multimedia Adaptation

A proposal to improve the performance and availability of streaming video and other time-sensitive media.

by Aiman Erbad, Charles Krasic | October 11, 2012

Topic: Web Services

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 12

0 comments

Sensible Authentication

The problem with securing assets and their functionality is that, by definition, you don't want to protect them from everybody. It makes no sense to protect assets from their owners, or from other authorized individuals (including the trusted personnel who maintain the security system). In effect, then, all security systems need to allow people in, even as they keep people out. Designing a security system that accurately identifies, authenticates, and authorizes trusted individuals is highly complex and filled with nuance, but critical to security.

by Bruce Schneier | February 24, 2004

Topic: Security

0 comments

Sentient Data Access via a Diverse Society of Devices

It has been more than ten years since such “information appliances” as ATMs and grocery store UPC checkout counters were introduced. For the office environment, Mark Weiser began to articulate the notion of UbiComp (ubiquitous computing) and identified some of the salient features of the trends in 1991.1, 2 Embedded computation is also becoming widespread.

by George W. Fitzmaurice, Azam Khan, William Buxton, Gordon Kurtenbach, Ravin Balakrishnan | January 28, 2004

Topic: Embedded Systems

0 comments

Seven principles for selecting software packages

Everything you always wanted to know but were afraid to ask about the decision-making process.

by Jan Damsgaard, Jan Karlsbjerg | July 26, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 8

0 comments

Sifting Through the Software Sandbox:
SCM Meets QA

Thanks to modern SCM (software configuration management) systems, when developers work on a codeline they leave behind a trail of clues that can reveal what parts of the code have been modified, when, how, and by whom. From the perspective of QA (quality assurance) and test engineers, is this all just "data," or is there useful information that can improve the test coverage and overall quality of a product?

by William W. White | February 16, 2005

Topic: Quality Assurance

0 comments

Simplicity Betrayed

Emulating a video system shows how even a simple interface can be more complex—and capable—than it appears.

by George Phillips | April 8, 2010

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 6

2 comments

Simulators:
Virtual Machines of the Past (and Future)

Simulators are a form of "virtual machine" intended to address a simple problem: the absence of real hardware. Simulators for past systems address the loss of real hardware and preserve the usability of software after real hardware has vanished. Simulators for future systems address the variability of future hardware designs and facilitate the development of software before real hardware exists.

by Bob Supnik | August 31, 2004

Topic: Virtual Machines

1 comments

Sink or Swim:
Know When It's Time to Bail

There are endless survival challenges for newly created businesses. The degree to which a business successfully meets these challenges depends largely on the nature of the organization and the culture that evolves within it. That is to say, while market size, technical quality, and product design are obviously crucial factors, company failures are typically rooted in some form of organizational dysfunction.

by Gordon Bell | January 29, 2004

Topic: Distributed Development

0 comments

SoC: Software, Hardware, Nightmare, Bliss

System-on-a-chip (SoC) design methodology allows a designer to create complex silicon systems from smaller working blocks, or systems. By providing a method for easily supporting proprietary functionality in a larger context that includes many existing design pieces, SoC design opens the craft of silicon design to a much broader audience.

by George Neville-Neil, Telle Whitney | April 1, 2003

Topic: Embedded Systems

0 comments

Social Bookmarking in the Enterprise

One of the greatest challenges facing people who use large information spaces is to remember and retrieve items that they have previously found and thought to be interesting. One approach to this problem is to allow individuals to save particular search strings to re-create the search in the future. Another approach has been to allow people to create personal collections of material—for example, the use of electronic citation bundles (called binders) in the ACM Digital Library.

by David Millen, Jonathan Feinberg, Bernard Kerr | December 16, 2005

Topic: Social Computing

3 comments

Social Perception

Modeling human interaction for the next generation of communication services

by James L. Crowley | July 27, 2006

Topic: HCI

0 comments

Software Development with Code Maps

Could those ubiquitous hand-drawn code diagrams become a thing of the past?

by Robert DeLine, Gina Venolia, Kael Rowan | July 4, 2010

Topic: Graphics

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 8

1 comments

Software Needs Seatbelts and Airbags

Finding and fixing bugs in deployed software is difficult and time-consuming. Here are some alternatives.

by Emery D. Berger | July 16, 2012

Topic: Patching and Deployment

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 9

1 comments

Software Transactional Memory: Why Is It Only a Research Toy?

The promise of STM may well be undermined by its overheads and limited workload applicability.

by Calin Cascaval, Colin Blundell, Maged Michael, Harold W. Cain, Peng Wu, Stefanie Chiras, Siddhartha Chatterjee | October 24, 2008

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 11

1 comments

Software and the Concurrency Revolution

Leveraging the full power of multicore processors demands new tools and new thinking from the software industry. Concurrency has long been touted as the "next big thing" and "the way of the future," but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software. The introductory article in this issue ("The Future of Microprocessors" by Kunle Olukotun and Lance Hammond) describes the hardware imperatives behind this shift in computer architecture from uniprocessors to multicore processors, also known as CMPs (chip multiprocessors).

by Herb Sutter, James Larus | October 18, 2005

Topic: Concurrency

0 comments

Software engineering and formal methods

The answer to software reliability concerns may lie in formal methods.

by Mike Hinchey, Michael Jackson, Patrick Cousot, Byron Cook, Jonathan P. Bowen, Tiziana Margaria | August 22, 2008

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 9

0 comments

Software model checking takes off

A translator framework enables the use of model checking in complex avionics systems and other industrial settings.

by Steven P. Miller, Michael W. Whalen, Darren D. Cofer | January 26, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 2

0 comments

Spam, Spam, Spam, Spam, Spam, the FTC, and Spam

The Federal Trade Commission (FTC) held a forum on spam in Washington, D.C., April 30 to May 2. Rather to my surprise, it was a really good, content-full event. The FTC folks had done their homework and had assembled panelists that ran the gamut from ardent anti-spammers all the way to hard-core spammers and everyone in between: lawyers, legitimate marketers, and representatives from vendor groups.

by Eric Allman | October 2, 2003

Topic: Email and IM

0 comments

Splinternet Behind the Great Firewall of China

Once China opened its door to the world, it could not close it again.

by Daniel Anderson | November 30, 2012

Topic: Web Security

4 comments

Standardizing Storage Clusters

Data-intensive applications such as data mining, movie animation, oil and gas exploration, and weather modeling generate and process huge amounts of data. File-data access throughput is critical for good performance. To scale well, these HPC (high-performance computing) applications distribute their computation among numerous client machines. HPC clusters can range from hundreds to thousands of clients with aggregate I/O demands ranging into the tens of gigabytes per second.

by Garth Goodson, Sai Susharla, Rahul Iyer | November 15, 2007

Topic: File Systems and Storage

0 comments

Storage Systems:
Not Just a Bunch of Disks Anymore

The concept of a storage device has changed dramatically from the first magnetic disk drive introduced by the IBM RAMAC in 1956 to today's server rooms with detached and fully networked storage servers. Storage has expanded in both large and small directions - up to multi-terabyte server appliances and down to multi-gigabyte MP3 players that fit in a pocket. All use the same underlying technology - the rotating magnetic disk drive - but they quickly diverge from there.

by Erik Riedel | July 31, 2003

Topic: File Systems and Storage

0 comments

Storage Virtualization Gets Smart

Over the past 20 years we have seen the transformation of storage from a dumb resource with fixed reliability, performance, and capacity to a much smarter resource that can actually play a role in how data is managed. In spite of the increasing capabilities of storage systems, however, traditional storage management models have made it hard to leverage these data management capabilities effectively. The net result has been overprovisioning and underutilization. In short, although the promise was that smart shared storage would simplify data management, the reality has been different.

by Kostadis Roussos | November 15, 2007

Topic: File Systems and Storage

0 comments

Stream Processors: Programmability and Efficiency

Many signal processing applications require both efficiency and programmability. Baseband signal processing in 3G cellular base stations, for example, requires hundreds of GOPS (giga, or billions, of operations per second) with a power budget of a few watts, an efficiency of about 100 GOPS/W (GOPS per watt), or 10 pJ/op (picoJoules per operation). At the same time programmability is needed to follow evolving standards, to support multiple air interfaces, and to dynamically provision processing resources over different air interfaces. Digital television, surveillance video processing, automated optical inspection, and mobile cameras, camcorders, and 3G cellular handsets have similar needs.
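
A quick sanity check on those figures, using nothing but unit arithmetic on the numbers quoted above:

\[ 100\ \tfrac{\text{GOPS}}{\text{W}} \;=\; \frac{100 \times 10^{9}\ \text{ops/s}}{1\ \text{J/s}} \;=\; 10^{11}\ \tfrac{\text{ops}}{\text{J}} \quad\Longrightarrow\quad \tfrac{1}{10^{11}}\ \tfrac{\text{J}}{\text{op}} \;=\; 10\ \tfrac{\text{pJ}}{\text{op}} \]

so the two efficiency figures are the same budget, stated once per watt and once per operation.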

by William J. Dally, Ujval J. Kapasi, Brucek Khailany, Jung Ho Ahn, Abhishek Das | April 16, 2004

Topic: DSPs

0 comments

Streams and Standards:
Delivering Mobile Video

Don’t believe me? Follow along… Mobile phones are everywhere. Everybody has one. Think about the last time you were on an airplane and the flight was delayed on the ground. Immediately after the dreaded announcement, you heard everyone reach for their phones and start dialing.

June 7, 2005

Topic: Mobile Computing

0 comments

Structured Deferral: Synchronization via Procrastination

We simply do not have a synchronization mechanism that can enforce mutual exclusion.

by Paul E. McKenney | May 23, 2013

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 7

0 comments

Successful Strategies for IPv6 Rollouts. Really.

Knowing where to begin is half the battle.

by Thomas A. Limoncelli, Vinton G. Cerf | March 10, 2011

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 4

5 comments

System Administration Soft Skills

How can system administrators reduce stress and conflict in the workplace?

by Christina Lear | January 4, 2011

Topic: System Administration

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 2

3 comments

TCP Offload to the Rescue

In recent years, TCP/IP offload engines, known as TOEs, have attracted a good deal of industry attention and a sizable share of venture capital dollars. A TOE is a specialized network device that implements a significant portion of the TCP/IP protocol in hardware, thereby offloading TCP/IP processing from software running on a general-purpose CPU. This article examines the reasons behind the interest in TOEs and looks at challenges involved in their implementation and deployment.

by Andy Currid | June 14, 2004

Topic: Networks

1 comments

Tackling Architectural Complexity with Modeling

Component models can help diagnose architectural problems in both new and existing systems.

by Kevin Montagne | September 17, 2010

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 10

0 comments

Testable System Administration

Models of indeterminism are changing IT management.

by Mark Burgess | January 31, 2011

Topic: System Administration

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 3

1 comments

The (not so) Hidden Computer

Ubiquitous computing may not have arrived yet, but ubiquitous computers certainly have. The sustained improvements wrought by the fulfillment of Moore’s law have led to the use of microprocessors in a vast array of consumer products. A typical car contains 50 to 100 processors. Your microwave has one or maybe more. They’re in your TV, your phone, your refrigerator, your kids’ toys, and in some cases, your toothbrush.

by Terry Coatta | May 2, 2006

Topic: Purpose-built Systems

0 comments

The API Performance Contract

How can the expected interactions between caller and implementation be guaranteed?

by Robert F. Sproull, Jim Waldo | January 30, 2014

Topic: Performance

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 3

3 comments

The Age of Corporate Open Source Enlightenment

It's a bad idea, mixing politics and religion. Conventional wisdom tells us to keep them separate - and to discuss neither at a dinner party. The same has been said about the world of software. When it comes to mixing the open source church with the proprietary state (or is it the other way around?), only one rule applies: Don't do it.

by Paul Ferris | October 1, 2003

Topic: Open Source

1 comments

The Answer is 42 of Course

Why is security so hard? As a security consultant, I’m glad that people feel that way, because that perception pays my mortgage. But is it really so difficult to build systems that are impenetrable to the bad guys?

by Thomas Wadlow | July 6, 2005

Topic: Security

0 comments

The Antifragile Organization

Embracing failure to improve resilience and maximize availability

by Ariel Tseitlin | June 27, 2013

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 8

3 comments

The Balancing Act of Choosing Nonblocking Features

Design requirements of nonblocking systems

by Maged M. Michael | August 12, 2013

Topic: Concurrency

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 9

0 comments

The Big Bang Theory of IDEs

Remember the halcyon days when development required only a text editor, a compiler, and some sort of debugger (in cases where the odd printf() or two alone didn't serve)? During the early days of computing, these were independent tools used iteratively in development's golden circle. Somewhere along the way we realized that a closer integration of these tools could expedite the development process. Thus was born the integrated development environment (IDE), a framework and user environment for software development that's actually a toolkit of instruments essential to software creation. At first, IDEs simply connected the big three (editor, compiler, and debugger), but nowadays most go well beyond those minimum requirements.

by Caspar Boekhoudt | December 5, 2003

Topic: Development

0 comments

The Case Against Data Lock-in

Want to keep your users? Just make it easy for them to leave.

by Brian W Fitzpatrick, JJ Lueck | October 8, 2010

Topic: Databases

4 comments

The Challenge of Cross-language Interoperability

Interfacing between languages is increasingly important.

by David Chisnall | November 19, 2013

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 12

8 comments

The Cost of Virtualization

Virtualization can be implemented in many different ways. It can be done with and without hardware support. The virtualized operating system can be expected to be changed in preparation for virtualization, or it can be expected to work unchanged. Regardless, software developers must strive to meet the three goals of virtualization spelled out by Gerald Popek and Robert Goldberg: fidelity, performance, and safety.

by Ulrich Drepper | March 4, 2008

Topic: Virtualization

0 comments

The Curse of the Excluded Middle

"Mostly functional" programming does not work.

by Erik Meijer | April 26, 2014

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 6

29 comments

The Deliberate Revolution

While detractors snub XML web services as CORBA with a weight problem, industry cheerleaders say these services are ushering in a new age of seamless integrated computing. But for those of us whose jobs don't involve building industry excitement, what do web services offer?

by Mike Burner | March 4, 2003

Topic: Web Services

0 comments

The Emergence of iSCSI

When most IT pros think of SCSI, images of fat cables with many fragile pins come to mind. Certainly, that's one manifestation - the oldest one. But modern SCSI, as defined by the SCSI-3 Architecture Model, or SAM, really considers the cable and physical interconnections to storage as only one level in a larger hierarchy. By separating the instructions or commands sent to and from devices from the physical layers and their protocols, you arrive at a more generic approach to storage communication.

by Jeffrey S. Goldner | July 14, 2008

Topic: File Systems and Storage

0 comments

The Essence of Software Engineering: The SEMAT Kernel

A thinking framework in the form of an actionable kernel

by Ivar Jacobson, Pan-Wei Ng, Paul McMahon, Ian Spence, Svante Lidman | October 24, 2012

Topic: Development

7 comments

The Evolution of Security

Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is a means, not an end. This is as it should be. Since means are about trade-offs, security is about trade-offs, but you knew all that.

by Daniel E. Geer | May 4, 2007

Topic: Security

0 comments

The Evolution of Web Development for Mobile Devices

Building Web sites that perform well on mobile devices remains a challenge.

by Nicholas C. Zakas | February 17, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 4

0 comments

The Family Dynamics of 802.11

Three trends are driving the rapid growth of wireless LAN (WLAN): The increased use of laptops and personal digital assistants (PDAs); rapid advances in WLAN data rates (from 2 megabits per second to 108 Mbps in the past four years); and precipitous drops in WLAN prices (currently under $50 for a client and under $100 for an access point).

July 30, 2003

Topic: Mobile Computing

0 comments

The Five-Minute Rule 20 Years Later:
and How Flash Memory Changes the Rules

The old rule continues to evolve, while flash memory adds two new rules. In 1987, Jim Gray and Gianfranco Putzolu published their now-famous five-minute rule for trading off memory and I/O capacity. Their calculation compares the cost of holding a record (or page) permanently in memory with the cost of performing disk I/O each time the record (or page) is accessed, using appropriate fractions of prices for RAM chips and disk drives. The name of their rule refers to the break-even interval between accesses.
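
For readers who want the shape of that calculation, the break-even interval is usually written as follows (a sketch of the rule's standard formulation, not a quotation from the paper):

\[ \text{break-even interval} \;=\; \frac{\text{pages per MB of RAM}}{\text{accesses per second per disk}} \times \frac{\text{price per disk drive}}{\text{price per MB of RAM}} \]

With 1987 prices and access rates this came out at roughly five minutes for 1KB records, hence the name; as memory and disk prices diverge, the interval shifts, which is why the rule "continues to evolve."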

by Goetz Graefe | September 24, 2008

Topic: File Systems and Storage

0 comments

The Future of Human-Computer Interaction

Personal computing launched with the IBM PC. But popular computing launched with the modern WIMP (windows, icons, mouse, pointer) interface, which made computers usable by ordinary people.

by John Canny | July 27, 2006

Topic: HCI

3 comments

The Future of Microprocessors

The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law,1 and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict,2 because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software.

by Kunle Olukotun, Lance Hammond | October 18, 2005

Topic: Processors

0 comments

The Future of WLAN

Since James Clerk Maxwell first mathematically described electromagnetic waves almost a century and a half ago, the world has seen steady progress toward using them in better and more varied ways. Voice has been the killer application for wireless for the past century. As performance in all areas of engineering has improved, wireless voice has migrated from a mass broadcast medium to a peer-to-peer medium. The ability to talk to anyone on the planet from anywhere on the planet has fundamentally altered the way society works and the speed with which it changes.

by Michael W. Ritter | July 9, 2003

Topic: Networks

0 comments

The Heart of Eclipse

A look inside an extensible plug-in architecture. ECLIPSE is both an open, extensible development environment for building software and an open, extensible application framework upon which software can be built. Considered the most popular Java IDE, it provides a common UI model for working with tools and promotes rapid development of modular features based on a plug-in component model. The Eclipse Foundation designed the platform to run natively on multiple operating systems, including Macintosh, Windows, and Linux, providing robust integration with each and providing rich clients that support the GUI interactions everyone is familiar with: drag and drop, cut and paste (clipboard), navigation, and customization.

by Dan Rubel | October 10, 2006

Topic: Development

0 comments

The Hitchhiker's Guide to Biomorphic Software

The natural world may be the inspiration we need for solving our computer problems. While it is certainly true that "the map is not the territory," most visitors to a foreign country do prefer to take with them at least a guidebook to help locate themselves as they begin their explorations. That is the intent of this article. Although there will not be enough time to visit all the major tourist sites, with a little effort and using the information in the article as signposts, the intrepid explorer can easily find numerous other, interesting paths to explore.

by Kenneth N Lodding | August 31, 2004

Topic: Bioscience

0 comments

The Ideal HPC Programming Language

Maybe it's Fortran. Or maybe it just doesn't matter.

by Eugene Loh | June 18, 2010

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 7

3 comments

The Inevitability of Reconfigurable Systems

The introduction of the microprocessor in 1971 marked the beginning of a 30-year stall in design methods for electronic systems. The industry is coming out of the stall by shifting from programmed to reconfigurable systems. In programmed systems, a linear sequence of configuration bits, organized into blocks called instructions, configures fixed hardware to mimic custom hardware. In reconfigurable systems, the physical connections among logic elements change with time to mimic custom hardware. The transition to reconfigurable systems will be wrenching, but this is inevitable as the design emphasis shifts from cost performance to cost performance per watt. Here's the story.

by Nick Tredennick, Brion Shimamoto | December 5, 2003

Topic: Power Management

0 comments

The Invisible Assistant

One lab's experiment with ubiquitous computing

by Gaetano Borriello | July 27, 2006

Topic: HCI

0 comments

The Long Road to 64 Bits

"Double, double, toil and trouble"... Shakespeare's words (Macbeth, Act 4, Scene 1) often cover circumstances beyond his wildest dreams. Toil and trouble accompany major computing transitions, even when people plan ahead. To calibrate "tomorrow's legacy today," we should study "tomorrow's legacy yesterday." Much of tomorrow's software will still be driven by decades-old decisions. Past decisions have unanticipated side effects that last decades and can be difficult to undo.

by John R. Mashey | October 10, 2006

Topic: System Evolution

0 comments

The Magic of RFID

Many modern technologies give the impression they work by magic, particularly when they operate automatically and their mechanisms are invisible. A technology called RFID (radio frequency identification), which is relatively new to the mass market, has exactly this characteristic and for many people seems a lot like magic. RFID is an electronic tagging technology that allows an object, place, or person to be automatically identified at a distance without a direct line-of-sight, using an electromagnetic challenge/response exchange.

by Roy Want | November 30, 2004

Topic: RFID

1 comments

The NSA and Snowden: Securing the All-Seeing Eye

How good security at the NSA could have stopped him.

by Bob Toxen | April 24, 2014

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 5

4 comments

The Network is Reliable

An informal survey of real-world communications failures

by Peter Bailis, Kyle Kingsbury | July 23, 2014

Topic: Networks

0 comments

The Network's New Role

Companies have always been challenged with integrating systems across organizational boundaries. With the advent of Internet-native systems, this integration has become essential for modern organizations, but it has also become more and more complex, especially as next-generation business systems depend on agile, flexible, interoperable, reliable, and secure cross-enterprise systems.

by Taf Anthias, Krishna Sankar | June 30, 2006

Topic: Networks

0 comments

The Obama Campaign:
A Programmer's Perspective

The Obama campaign has been praised for its innovative use of technology. What was the key to its success?

by Benjamin Boer | February 23, 2009

Topic: Web Development

1 comments

The Pain of Implementing LINQ Providers

It's no easy task for NoSQL

by Oren Eini | July 6, 2011

Topic: Object-Relational Mapping

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 8

3 comments

The Pathologies of Big Data

Scale up your datasets enough and all your apps will come undone. What are the typical problems and where do the bottlenecks generally surface?

by Adam Jacobs | July 6, 2009

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 8

2 comments

The Price of Performance

In the late 1990s, our research group at DEC was one of a growing number of teams advocating the CMP (chip multiprocessor) as an alternative to highly complex single-threaded CPUs. We were designing the Piranha system,1 which was a radical point in the CMP design space in that we used very simple cores (similar to the early RISC designs of the late ’80s) to provide a higher level of thread-level parallelism. Our main goal was to achieve the best commercial workload performance for a given silicon budget.

by Luiz André Barroso | October 18, 2005

Topic: Processors

0 comments

The Reincarnation of Virtual Machines

The term "virtual machine" initially described a 1960s operating system concept: a software abstraction with the looks of a computer system's hardware (real machine). Forty years later, the term encompasses a large range of abstractions - for example, Java virtual machines that don't match an existing real machine. Despite the variations, in all definitions the virtual machine is a target for a programmer or compilation system. In other words, software is written to run on the virtual machine.

by Mendel Rosenblum | August 31, 2004

Topic: Virtual Machines

0 comments

The Rise and Fall of CORBA

Depending on exactly when one starts counting, CORBA is about 10-15 years old. During its lifetime, CORBA has moved from being a bleeding-edge technology for early adopters, to being a popular middleware, to being a niche technology that exists in relative obscurity. It is instructive to examine why CORBA—despite once being heralded as the “next-generation technology for e-commerce”—suffered this fate. CORBA’s history is one that the computing industry has seen many times, and it seems likely that current middleware efforts, specifically Web services, will reenact a similar history.

by Michi Henning | June 30, 2006

Topic: Component Technologies

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 8

17 comments

The Road to SDN

An intellectual history of programmable networks

by Nick Feamster, Jennifer Rexford, Ellen Zegura | December 30, 2013

Topic: Networks

2 comments

The Robustness Principle Reconsidered

Seeking a middle ground

by Eric Allman | June 22, 2011

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 8

0 comments

The Scalability Problem

Back in the mid-1990s, I worked for a company that developed multimedia kiosk demos. Our biggest client was Intel, and we often created demos that appeared in new PCs on the end-caps of major computer retailers such as CompUSA. At that time, performance was in demand for all application classes from business to consumer. We created demos that showed, for example, how much faster a spreadsheet would recalculate (you had to do that manually back then) on a new processor as compared with the previous year's processor. The differences were immediately noticeable to even a casual observer - and it mattered.

by Dean Macri | February 24, 2004

Topic: Game Development

0 comments

The Seven Deadly Sins of Linux Security

Avoid these common security risks like the devil.

by Bob Toxen | June 7, 2007

Topic: Security

0 comments

The Software Inferno

Dante's tale, as experienced by a software architect

by Alex E. Bell | December 16, 2013

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 1

7 comments

The Story of the Teapot in DHTML

It's easy to do amazing things, such as rendering the classic teapot in HTML and CSS.

by Brian Beckman, Erik Meijer | February 11, 2013

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 3

2 comments

The Sun Never Sits on Distributed Development

More and more software development is being distributed across greater and greater distances. The motives are varied, but one of the most predominant is the effort to keep costs down. As talent is where you find it, why not use it where you find it, rather than spending the money to relocate it to some ostensibly more "central" location? The increasing ubiquity of the Internet is making far-flung talent ever-more accessible.

by Ken Coar | January 29, 2004

Topic: Distributed Development

0 comments

The Virtualization Reality

A number of important challenges are associated with the deployment and configuration of contemporary computing infrastructure. Given the variety of operating systems and their many versions—including the often-specific configurations required to accommodate the wide range of popular applications—it has become quite a conundrum to establish and manage such systems.

by Simon Crosby, David Brown | December 28, 2006

Topic: Virtualization

3 comments

The Web Won't Be Safe or Secure until We Break It

Unless you've taken very particular precautions, assume every Web site you visit knows exactly who you are.

by Jeremiah Grossman | November 6, 2012

Topic: Web Security

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 1

14 comments

The World According to LINQ

Big data is about more than size, and LINQ is more than up to the task.

by Erik Meijer | August 30, 2011

Topic: Databases

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 10

5 comments

The Yin and Yang of Software Development

The C/C++ Solution Manager at Parasoft explains how infrastructure elements allow development teams to increase productivity without restricting creativity.

July 14, 2008

Topic: Development

1 comments

The Case Against Data Lock-in

Want to keep your users? Just make it easy for them to leave.

by Brian W. Fitzpatrick, JJ Lueck | October 26, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 11

0 comments

The Essence of Software Engineering: The SEMAT Kernel

A thinking framework in the form of an actionable kernel.

by Ivar Jacobson, Pan-Wei Ng, Paul E. McMahon, Ian Spence, Svante Lidman | November 29, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 12

0 comments

The Five-Minute Rule 20 Years Later (and How Flash Memory Changes the Rules)

Revisiting Gray and Putzolu's famous rule in the age of Flash.

by Goetz Graefe | June 29, 2009

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 7

0 comments

The Long Road to 64 Bits

"Double, double toil and trouble." (Shakespeare, Macbeth, Act 4, Scene 1)

by John Mashey | December 22, 2008

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 1

0 comments

The One-Second War

Finding a lasting solution to the leap seconds problem has become increasingly urgent.

by Poul-Henning Kamp | April 21, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 5

0 comments

The Software Industry Is the Problem

The time has come for software liability laws.

by Poul-Henning Kamp | October 24, 2011

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 11

0 comments

There's Just No Getting around It: You're Building a Distributed System

Building a distributed system requires a methodical approach to requirements.

by Mark Cavage | May 3, 2013

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 6

2 comments

There's No Such Thing as a Free (Software) Lunch

"The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software to make sure the software is free for all its users." So begins the GNU General Public License, or GPL, which has become the most widely used of open source software licenses. Freedom is the watchword; it's no coincidence that the organization that wrote the GPL is called the Free Software Foundation and that open source developers everywhere proclaim, "Information wants to be free."

by Jay Michaelson | June 14, 2004

Topic: Open Source

1 comments

Thinking Clearly about Performance

Improving the performance of complex software is difficult, but understanding some fundamental principles can make it easier.

by Cary Millsap | September 1, 2010

Topic: Performance

1 comments

Thinking Methodically about Performance

The USE method addresses shortcomings in other commonly used methodologies.

by Brendan Gregg | December 11, 2012

Topic: Performance

CACM This article appears in print in Communications of the ACM, Volume 56 Issue 2

0 comments

Thinking clearly about performance, part 1

Improving the performance of complex software is difficult, but understanding some fundamental principles can make it easier.

by Cary Millsap | August 24, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 9

0 comments

Thinking clearly about performance, part 2

More important principles to keep in mind when designing high-performance software.

by Cary Millsap | September 30, 2010

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 10

0 comments

Thread Scheduling in FreeBSD 5.2

A busy system makes thousands of scheduling decisions per second, so the speed with which scheduling decisions are made is critical to the performance of the system as a whole. This article - excerpted from the forthcoming book, "The Design and Implementation of the FreeBSD Operating System" - uses the example of the open source FreeBSD system to help us understand thread scheduling. The original FreeBSD scheduler was designed in the 1980s for large uniprocessor systems. Although it continues to work well in that environment today, the new ULE scheduler was designed specifically to optimize multiprocessor and multithread environments. This article first studies the original FreeBSD scheduler, then describes the new ULE scheduler.

by Marshall Kirk McKusick, George V. Neville-Neil | November 30, 2004

Topic: Open Source

0 comments

Threads without the Pain

Much of today’s software deals with multiple concurrent tasks. Web browsers support multiple concurrent HTTP connections, graphical user interfaces deal with multiple windows and input devices, and Web and DNS servers handle concurrent connections or transactions from large numbers of clients.

by Andreas Gustafsson | December 16, 2005

Topic: Concurrency

0 comments

TiVo-lution

One of the greatest challenges of designing a computer system is in making sure the system itself is “invisible” to the user. The system should simply be a conduit to the desired result. There are many examples of such purpose-built systems, ranging from modern automobiles to mobile phones.

by Jim Barton | May 2, 2006

Topic: Purpose-built Systems

0 comments

Too Darned Big to Test

The increasing size and complexity of software, coupled with concurrency and distributed systems, has made apparent the ineffectiveness of using only handcrafted tests. The misuse of code coverage and the avoidance of random testing have exacerbated the problem. We must start again, beginning with good design (including dependency analysis), good static checking (including model property checking), and good unit testing (including good input selection). Code coverage can help select and prioritize tests to make you more efficient, as can the all-pairs technique for controlling the number of configurations (a minimal sketch of which follows this entry).

by Keith Stobie | February 16, 2005

Topic: Quality Assurance

1 comments
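
The all-pairs technique named in the entry above can be sketched in a few lines; what follows is a naive greedy version with hypothetical parameters, a sketch of the idea rather than a production tool:

    from itertools import combinations, product

    def all_pairs(params: dict) -> list[dict]:
        """Greedily pick full configurations until every pair of values
        across every two parameters has appeared in some test."""
        keys = list(params)
        uncovered = {((i, j), (a, b))
                     for i, j in combinations(range(len(keys)), 2)
                     for a in params[keys[i]]
                     for b in params[keys[j]]}
        tests = []
        while uncovered:
            # Choose the configuration covering the most still-uncovered pairs.
            best, best_covered = None, set()
            for combo in product(*params.values()):
                covered = {((i, j), (combo[i], combo[j]))
                           for i, j in combinations(range(len(keys)), 2)} & uncovered
                if len(covered) > len(best_covered):
                    best, best_covered = combo, covered
            tests.append(dict(zip(keys, best)))
            uncovered -= best_covered
        return tests

    configs = all_pairs({"os": ["linux", "windows"],
                         "db": ["mysql", "postgres", "oracle"],
                         "browser": ["firefox", "chrome"]})
    print(len(configs), "tests instead of", 2 * 3 * 2)

Even on this tiny example the pairwise set is smaller than the full cartesian product, and the savings grow rapidly as parameters are added.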

Too Much Information

Two applications reveal the key challenges in making context-aware computing a reality. As mobile computing devices and a variety of sensors become ubiquitous, new resources for applications and services - often collectively referred to under the rubric of context-aware computing - are becoming available to designers and developers. In this article, we consider the potential benefits and issues that arise from leveraging context awareness in new communication services that include the convergence of VoIP (voice over IP) and traditional information technology.

by Jim Christensen, Jeremy Sussman, Stephen Levy, William E. Bennett, Tracee Vetting Wolf, Wendy A. Kellogg | July 27, 2006

Topic: HCI

0 comments

Toward Energy-Efficient Computing

What will it take to make server-side computing more energy efficient?

by David J. Brown, Charles Reams | February 17, 2010

Topic: Power Management

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 3

1 comments

Toward Higher Precision

An introduction to PTP and its significance to NTP practitioners

by Rick Ratzel, Rodney Greenstreet | August 27, 2012

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 10

1 comments

Toward Software-defined SLAs

Enterprise computing in the public cloud

by Jason Lango | January 6, 2014

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 1

0 comments

Toward a Commodity Enterprise Middleware

Can AMQP enable a new era in messaging middleware? AMQP (Advanced Message Queuing Protocol) was born out of my own experience and frustrations in developing front- and back-office processing systems at investment banks. It seemed to me that we were living in integration Groundhog Day - the same problems of connecting systems together would crop up with depressing regularity. Each time the same discussions about which products to use would happen, and each time the architecture of some system would be curtailed to allow for the fact that the chosen middleware was reassuringly expensive.

by John O'Hara | June 7, 2007

Topic: Web Services

0 comments

Trials and Tribulations of Debugging Concurrency

We now sit firmly in the 21st century, where the grand challenge for the modern-day programmer is neither memory leaks nor type issues (both of those problems are now effectively solved), but rather concurrency. How does one write increasingly complex programs in which concurrency is a first-class concern? Or, even more treacherous, how does one debug such a beast? These questions bring fear into the hearts of even the best programmers.

by Kang Su Gatlin | November 30, 2004

Topic: Concurrency

1 comments

Triple-Parity RAID and Beyond

As hard-drive capacities continue to outpace their throughput, the time has come for a new level of RAID.

by Adam Leventhal | December 17, 2009

Topic: File Systems and Storage

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 1

5 comments

UML Fever: Diagnosis and Recovery

Acknowledgment is only the first step toward recovery from this potentially devastating affliction. The Institute of Infectious Diseases has recently published research confirming that the many and varied strains of UML Fever continue to spread worldwide, indiscriminately infecting software analysts, engineers, and managers alike. One of the fever's most serious side effects has been observed to be a significant increase in both the cost and duration of developing software products. This increase is largely attributable to a decrease in productivity resulting from fever-stricken individuals investing time and effort in activities that are of little or no value to producing deliverable products.

by Alex E. Bell | March 18, 2005

Topic: Patching and Deployment

1 comments

UX Design and Agile: A Natural Fit?

Talking with Julian Gosper, Jean-Luc Agathos, Richard Rutter, and Terry Coatta.

December 22, 2010

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 1

0 comments

Under New Management

In an increasingly competitive global environment, enterprises are under extreme pressure to reduce operating costs. At the same time they must have the agility to respond to business opportunities offered by volatile markets.

by Duncan Johnston-Watt | March 29, 2006

Topic: Workflow Systems

0 comments

Undergraduate Software Engineering

Addressing the needs of professional software development.

by Michael J. Lutz, J. Fernando Naveda, James R. Vallino | July 21, 2014

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 8

0 comments

Understanding DRM

The explosive growth of the Internet and digital media has created both tremendous opportunities and new threats for content creators. Advances in digital technology offer new ways of marketing, disseminating, interacting with, and monetizing creative works, giving rise to expanding markets that did not exist just a few years ago. At the same time, however, the technologies have created major challenges for copyright holders seeking to control the distribution of their works and protect against piracy.

by David Sohn | January 17, 2008

Topic: Privacy and Rights

1 comments

Understanding Software Patching

Developing and deploying patches is an increasingly important part of the software development process.

by Joseph Dadzie | March 18, 2005

Topic: Patching and Deployment

0 comments

Unified Communications with SIP

SIP can provide realtime communications as a network service. Communications systems based on the SIP (Session Initiation Protocol) standard have come a long way over the past several years. SIP is now largely complete and covers even advanced telephony and multimedia features and feature interactions. Interoperability between solutions from different vendors is repeatedly demonstrated at events such as the SIPit (interoperability test) meetings organized by the SIP Forum, and several manufacturers have proven that proprietary extensions to the standard are no longer driven by technical needs but rather by commercial considerations.

by Martin J. Steinmann | March 9, 2007

Topic: SIP

0 comments

Unifying Biological Image Formats with HDF5

The biosciences need an image format capable of high performance and long-term maintenance. Is HDF5 the answer?

by Matthew T. Dougherty, Michael J. Folk, Erez Zadok, Herbert J. Bernstein, Frances C. Bernstein, Kevin W. Eliceiri, Werner Benger, Christoph Best | October 4, 2009

Topic: Bioscience

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 10

2 comments

Unikernels: Rise of the Virtual Library Operating System

What if all the software layers in a virtual appliance were compiled within the same safe, high-level language framework?

by Anil Madhavapeddy, David J. Scott | January 12, 2014

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 1

0 comments

Unlocking Concurrency

Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, “The concurrency revolution is primarily a software revolution.”

by Ali-Reza Adl-Tabatabai, Christos Kozyrakis, Bratin Saha | December 28, 2006

Topic: Concurrency

0 comments

Untangling Enterprise Java

Separation of concerns is one of the oldest concepts in computer science. The term was coined by Dijkstra in 1974.1 It is important because it simplifies software, making it easier to develop and maintain. Separation of concerns is commonly achieved by decomposing an application into components. There are, however, crosscutting concerns, which span (or cut across) multiple components. These kinds of concerns cannot be handled by traditional forms of modularization and can make the application more complex and difficult to maintain. (A minimal sketch of factoring out a crosscutting concern follows this entry.)

by Chris Richardson | June 30, 2006

Topic: Component Technologies

0 comments
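
The crosscutting-concerns idea in the entry above is language-neutral; the article discusses it in enterprise Java terms, but a plain Python decorator is enough to sketch how a concern such as logging can be applied uniformly instead of being scattered through every component (all names here are hypothetical):

    import functools, logging

    logging.basicConfig(level=logging.INFO)

    def logged(fn):
        """The logging 'aspect': wraps any component method uniformly."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.info("entering %s", fn.__name__)
            try:
                return fn(*args, **kwargs)
            finally:
                logging.info("leaving %s", fn.__name__)
        return wrapper

    @logged
    def place_order(item: str, qty: int) -> str:
        # Hypothetical business logic, free of any logging code itself.
        return f"ordered {qty} x {item}"

    print(place_order("widget", 3))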

Uprooting Software Defects at the Source

Source code analysis is an emerging technology in the software industry that allows critical source code defects to be detected before a program runs. Although the concept of detecting programming errors at compile time is not new, the technology to build effective tools that can process millions of lines of code and report substantive defects with only a small amount of noise has long eluded the market. At the same time, a different type of solution is needed to combat current trends in the software industry that are steadily diminishing the effectiveness of conventional software testing and quality assurance.

by Seth Hallem, David Park, Dawson Engler | January 28, 2004

Topic: Quality Assurance

0 comments

Usability Testing for the Web

Today’s Internet user has more choices than ever before, with many competing sites offering similar services. This proliferation of options provides ample opportunity for users to explore different sites and find out which one best suits their needs for any particular service. Users are further served by the latest generation of Web technologies and services, commonly dubbed Web 2.0, which enables a better, more personalized user experience and encourages user-generated content.

by Vikram V. Ingleshwar | August 16, 2007

Topic: Web Development

1 comments

Verification of Safety-critical Software

Avionics software safety certification is achieved through objective-based standards.

by B. Scott Andersen, George Romanski | August 29, 2011

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 10

3 comments

Virtualization: Blessing or Curse?

Managing virtualization at a large scale is fraught with hidden challenges.

by Evangelos Kotsovinos | November 22, 2010

Topic: System Administration

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 1

0 comments

Visualizing System Latency

Heat maps are a unique and powerful way to visualize latency data. Explaining the results, however, is an ongoing challenge.

by Brendan Gregg | May 28, 2010

Topic: Graphics

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 7

16 comments

VoIP Security: Not an Afterthought

Voice over IP (VoIP) promises to up-end a century-old model of voice telephony by breaking the traditional monolithic service model of the public switched telephone network (PSTN) and changing the point of control and provision from the central office switch to the end user's device.

by Douglas C. Sicker, Tom Lookabaugh | October 25, 2004

Topic: VoIP

0 comments

VoIP: What is it Good for?

VoIP (voice over IP) technology is a rapidly expanding field. More and more VoIP components are being developed, while existing VoIP technology is being deployed at a rapid and still increasing pace. This growth is fueled by two goals: decreasing costs and increasing revenues.

by Sudhir R. Ahuja, Robert Ensor | October 25, 2004

Topic: VoIP

0 comments

Voyage in the Agile Memeplex

Agile processes are not a technology, not a science, not a product. They constitute a space somewhat hard to define. Agile methods, or more precisely 'agile software development methods or processes', are a family of approaches and practices for developing software systems. Any attempt to define them runs into egos and marketing posturing.

by Philippe Kruchten | August 16, 2007

Topic: Web Development

0 comments

Weapons of Mass Assignment

A Ruby on Rails app highlights some serious, yet easily avoided, security vulnerabilities.

by Patrick McKenzie | March 30, 2011

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 54 Issue 5

3 comments

Weathering the Unexpected

Failures happen, and resilience drills help organizations prepare for them.

by Kripa Krishnan | September 16, 2012

Topic: Quality Assurance

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 11

0 comments

Web Services and IT Management

Web services aren't just for application integration anymore. Platform and programming language independence, coupled with industry momentum, has made Web services the technology of choice for most enterprise integration projects. Their close relationship with SOA (service-oriented architecture) has also helped them gain mindshare. Consider this definition of SOA: "An architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by software agents on behalf of their owners."

by Pankaj Kumar | August 18, 2005

Topic: Distributed Computing

0 comments

Web Services: Promises and Compromises

Much of web services' initial promise will be realized via integration within the enterprise, either with legacy applications or new business processes that span organizational silos. Enterprises need organizational structures that support this new paradigm.

by Joanne Martin, Ali Arsanjani, Peri Tarr, Brent Hailpern | March 12, 2003

Topic: Web Services

0 comments

What DNS Is Not

DNS is many things to many people - perhaps too many things to too many people.

by Paul Vixie | November 5, 2009

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 12

41 comments

Whither Sockets?

High bandwidth, low latency, and multihoming challenge the sockets API.

by George V. Neville-Neil | May 11, 2009

Topic: Networks

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 6

20 comments

Who Must You Trust?

You must have some trust if you want to get anything done.

by Thomas Wadlow | May 30, 2014

Topic: Security

CACM This article appears in print in Communications of the ACM, Volume 57 Issue 7

5 comments

Why Cloud Computing Will Never Be Free

The competition among cloud providers may drive prices downward, but at what cost?

by Dave Durkee | April 16, 2010

Topic: Distributed Computing

CACM This article appears in print in Communications of the ACM, Volume 53 Issue 5

1 comments

Why LINQ Matters: Cloud Composability Guaranteed

The benefits of composability are becoming clear in software engineering.

by Brian Beckman | February 14, 2012

Topic: Programming Languages

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 4

1 comments

Why Writing Your Own Search Engine Is Hard

There must be 4,000 programmers typing away in their basements trying to build the next "world's most scalable" search engine. It has been done only a few times. It has never been done by a big group; always one to four people did the core work, and the big team came on to build the elaborations and the production infrastructure. Why is it so hard? We are going to delve a bit into the various issues to consider when writing a search engine. This article is aimed at those individuals or small groups that are considering this endeavor for their Web site or intranet.

by Anna Patterson | May 5, 2004

Topic: Search Engines

15 comments

Why Your Data Won't Mix

When independent parties develop database schemas for the same domain, they will almost always be quite different from each other. These differences are referred to as semantic heterogeneity, which also appears in the presence of multiple XML documents, Web services, and ontologies—or more broadly, whenever there is more than one way to structure a body of data. The presence of semi-structured data exacerbates semantic heterogeneity, because semi-structured schemas are much more flexible to start with. For multiple data systems to cooperate with each other, they must understand each other’s schemas.

by Alon Halevy | December 8, 2005

Topic: Semi-structured Data

0 comments

XML

XML, as defined by the World Wide Web Consortium in 1998, is a method of marking up a document or character stream to identify structural or other units within the data. XML makes several contributions to solving the problem of semi-structured data, the term database theorists use to denote data that exhibits any of the following characteristics:

by C. M. Sperberg-McQueen | December 8, 2005

Topic: Semi-structured Data

3 comments

XML Fever

Don't let delusions about XML develop into a virulent strain of XML fever.

by Erik Wilde, Robert J. Glushko | December 4, 2008

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 51 Issue 7

0 comments

You Don't Know Jack About Software Maintenance

Long considered an afterthought, software maintenance is easiest and most effective when built into a system from the ground up.

by Paul Stachour, David Collier-Brown | October 23, 2009

Topic: Development

CACM This article appears in print in Communications of the ACM, Volume 52 Issue 11

0 comments

You Don't Know Jack About VoIP

Telecommunications worldwide has experienced a significant revolution over recent years. The long-held promise of network convergence is occurring at an increasing pace. This convergence of data, voice, and video using IP-based networks is delivering advanced services at lower cost across the spectrum, including residential users, business customers of varying sizes, and service providers.

by Phil Sherburne, Cary Fitzgerald | October 25, 2004

Topic: VoIP

0 comments

You Don't Know Jack about Disks

Magnetic disk drives have been at the heart of computer systems since the early 1960s. They brought not only a significant advantage in processing performance, but also a new level of complexity for programmers. The three-dimensional geometry of a disk drive replaced the simple, linear address space of the tape-based programming model. (A sketch of the classic mapping that later flattened that geometry back into linear block addresses follows this entry.)

by Dave Anderson | July 31, 2003

Topic: File Systems and Storage

3 comments
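
The geometry-versus-linear-addressing contrast in the entry above is easiest to see in the classic CHS-to-LBA mapping, which flattens a (cylinder, head, sector) triple into one block number and back; the geometry constants below are illustrative, not any particular drive's:

    HEADS_PER_CYL = 16      # tracks per cylinder
    SECTORS_PER_TRACK = 63  # sectors are numbered from 1 by convention

    def chs_to_lba(c: int, h: int, s: int) -> int:
        return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRACK + (s - 1)

    def lba_to_chs(lba: int) -> tuple[int, int, int]:
        c, rem = divmod(lba, HEADS_PER_CYL * SECTORS_PER_TRACK)
        h, s = divmod(rem, SECTORS_PER_TRACK)
        return c, h, s + 1

    lba = chs_to_lba(2, 3, 4)
    print(lba, lba_to_chs(lba))  # 2208 (2, 3, 4)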

You Don't Know Jack about Network Performance

Why does an application that works just fine over a LAN come to a grinding halt across the wide-area network? You may have experienced this firsthand when trying to open a document from a remote file share or remotely logging in over a VPN to an application running in headquarters. Why is it that an application that works fine in your office can become virtually useless over the WAN? If you think it's simply because there's not enough bandwidth in the WAN, then you don't know jack about network performance. (A back-of-the-envelope sketch of the window/RTT bound follows this entry.)

by Kevin Fall, Steve McCanne | June 7, 2005

Topic: Networks

0 comments
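
One back-of-the-envelope way to see the point of the entry above: a protocol that keeps at most one window of data in flight can never move bytes faster than window size divided by round-trip time, no matter how fat the link. The numbers below are hypothetical:

    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        """Window-limited throughput ceiling: window / RTT, in Mbps."""
        return window_bytes * 8 / (rtt_ms / 1000) / 1e6

    # A 64 KB window on a 1 ms LAN vs. an 80 ms cross-country WAN.
    print(round(max_throughput_mbps(64 * 1024, 1)))   # ~524 Mbps
    print(round(max_throughput_mbps(64 * 1024, 80)))  # ~7 Mbps

Same bandwidth on both paths; the 80-fold throughput drop comes entirely from latency.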

You Don't Know Jack about Shared Variables or Memory Models

Data races are evil.

by Hans-J Boehm, Sarita V. Adve | December 28, 2011

Topic: Computer Architecture

1 comments

You don't know jack about shared variables or memory models

Data races are evil.

by Hans-J. Boehm, Sarita V. Adve | January 23, 2012

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 2

0 comments

Your Mouse is a Database

Web and mobile applications are increasingly composed of asynchronous and realtime streaming services and push notifications.

by Erik Meijer | March 27, 2012

Topic: Web Development

CACM This article appears in print in Communications of the ACM, Volume 55 Issue 5

1 comments