Failure and Recovery


Oops! Coping with Human Error in IT Systems:
Errors Happen. How to Deal.

Human operator error is one of the most insidious sources of failure and data loss in today’s IT environments. In early 2001, Microsoft suffered a nearly 24-hour outage in its Web properties as a result of a human error made while configuring a name resolution system. Later that year, an hour of trading on the Nasdaq stock exchange was disrupted because of a technician’s mistake while testing a development system. More recently, human error has been blamed for outages in instant messaging networks, for security and privacy breaches, and for banking system failures.

by Aaron B. Brown | December 6, 2004


Automating Software Failure Reporting:
We can only fix those bugs we know about.

There are many ways to measure quality before and after software is released. For commercial and internal-use-only products, the most important measurement is the user’s perception of product quality. Unfortunately, perception is difficult to measure, so companies attempt to quantify it through customer satisfaction surveys and failure/behavioral data collected from their customer base. This article focuses on the problems of capturing failure data from customer sites.

by Brendan Murphy | December 6, 2004


Error Messages:
What’s the Problem?

Computer users spend a lot of time chasing down errors - following the trail of clues that starts with an error message and that sometimes leads to a solution and sometimes to frustration. Problems with error messages are particularly acute for system administrators (sysadmins) - those who configure, install, manage, and maintain the computational infrastructure of the modern world - as they expend a great deal of effort keeping computers running amid errors and failures.

by Paul P. Maglio, Eser Kandogan | December 6, 2004


Self-Healing in Modern Operating Systems:
A few early steps show there’s a long (and bumpy) road ahead.

As I drive the stretch of Route 101 that connects San Francisco to Menlo Park each day, billboard faces smilingly reassure me that all is well in computerdom in 2004. Networks and servers, they tell me, can self-defend, self-diagnose, self-heal, and even have enough computing power left over from all this introspection to perform their owner-assigned tasks.

by Michael W. Shapiro | December 27, 2004


Injecting Errors for Fun and Profit:
Error-detection and correction features are only as good as our ability to test them.

It is an unfortunate fact of life that anything with moving parts eventually wears out and malfunctions, and electronic circuitry is no exception. In this case, of course, the moving parts are electrons. In addition to the wear-out mechanisms of electromigration (the moving electrons gradually push the metal atoms out of position, causing wires to thin, thus increasing their resistance and eventually producing open circuits) and dendritic growth (the voltage difference between adjacent wires causes the displaced metal atoms to migrate toward each other, just as magnets will attract each other, eventually causing shorts), electronic circuits are also vulnerable to background radiation.

by Steve Chessin | August 6, 2010


Forced Exception-Handling:
You can never discount the human element in programming.

Yes, KV also reads "The Morning Paper," although he has to admit that he does not read everything that arrives in his inbox from that list. Of course, the paper you mention piqued my interest, and one of the things you don’t point out is that it’s actually a study of distributed systems failures. Now, how can we make programming harder? I know! Let’s take a problem on a single system and distribute it. Someday I would like to see a paper that tells us if problems in distributed systems increase along with the number of nodes, or the number of interconnections.

by George Neville-Neil | March 14, 2017


Too Big NOT to Fail:
Embrace failure so it doesn’t embrace you.

Web-scale infrastructure implies LOTS of servers working together, often tens or hundreds of thousands of servers all working toward the same goal. How can the complexity of these environments be managed? How can commonality and simplicity be introduced?

by Pat Helland, Simon Weaver, Ed Harris | April 5, 2017


Abstracting the Geniuses Away from Failure Testing:
Ordinary users need tools that automate the selection of custom-tailored faults to inject.

This article presents a call to arms for the distributed systems research community to improve the state of the art in fault tolerance testing. Ordinary users need tools that automate the selection of custom-tailored faults to inject. We conjecture that the process by which superusers select experiments can be effectively modeled in software. The article describes a prototype validating this conjecture, presents early results from the lab and the field, and identifies new research directions that can make this vision a reality.

by Peter Alvaro, Severine Tymon | October 26, 2017
