De-identification

Vol. 13 No. 8 – September-October 2015

Automation Should Be Like Iron Man, Not Ultron:
The "Leftover Principle" Requires Increasingly More Highly-skilled Humans.

A few years ago we automated a major process in our system administration team. Now the system is impossible to debug. Nobody remembers the old manual process and the automation is beyond what any of us can understand. We feel like we’ve painted ourselves into a corner. Is all operations automation doomed to be this way?

by Thomas A. Limoncelli

How to De-identify Your Data:
Balancing statistical accuracy and subject privacy in large social-science data sets

Big data is all the rage; using large data sets promises to give us new insights into questions that have been difficult or impossible to answer in the past. This is especially true in fields such as medicine and the social sciences, where large amounts of data can be gathered and mined to find insightful relationships among variables. Data in such fields involves humans, however, and thus raises issues of privacy that are not faced by fields such as physics or astronomy.

by Olivia Angiuli, Joe Blitzstein, Jim Waldo

A Purpose-built Global Network: Google’s Move to SDN:
A discussion with Amin Vahdat, David Clark, and Jennifer Rexford

Everything about Google is at scale, of course -- a market cap of legendary proportions, an unrivaled talent pool, enough intellectual property to keep armies of attorneys in Guccis for life, and, oh yeah, a private WAN (wide area network) bigger than you can possibly imagine that also happens to be growing substantially faster than the Internet as a whole.

by Amin Vahdat, David Clark, Jennifer Rexford

Fail at Scale:
Reliability in the face of rapid change

Failure is part of engineering any large-scale system. One of Facebook’s cultural values is embracing failure. This can be seen in the posters hung around the walls of our Menlo Park headquarters: "What Would You Do If You Weren’t Afraid?" and "Fortune Favors the Bold."

by Ben Maurer

Pickled Patches:
On repositories of patches and tension between security professionals and in-house developers

I recently came upon a software repository that was not a repo of code, but a repo of patches. The project seemed to build itself out of several other components and then had complicated scripts that applied the patches in a particular order. I had to look at this repo because I wanted to fix a bug in the system, but trying to figure out what the code actually looked like at any particular point in time was baffling. Are there tools that would help in working like this? I’ve never come across this type of system before, where there were over a hundred patches, some of which contained thousands of lines of code.

by George Neville-Neil

Challenges of Memory Management on Modern NUMA Systems:
Optimizing NUMA systems applications with Carrefour

Modern server-class systems are typically built as several multicore chips put together in a single system. Each chip has a local DRAM (dynamic random-access memory) module; together they are referred to as a node. Nodes are connected via a high-speed interconnect, and the system is fully coherent. This means that, transparently to the programmer, a core can issue requests to its node’s local memory as well as to the memories of other nodes. The key distinction is that remote requests will take longer, because they are subject to longer wire delays and may have to jump several hops as they traverse the interconnect. The latency of memory-access times is hence non-uniform, because it depends on where the request originates and where it is destined to go. Such systems are referred to as NUMA (non-uniform memory access).
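
To make the local-versus-remote asymmetry concrete, here is a minimal C sketch (an illustration under stated assumptions, not code from the article). It assumes a Linux machine with at least two NUMA nodes and libnuma installed (compile with gcc numa_demo.c -lnuma): it pins the calling thread to node 0, then times a strided pass over a buffer allocated on node 0 against one allocated on node 1.

    /* Sketch: compare access times to local vs. remote NUMA memory.
       Assumes Linux + libnuma; link with -lnuma. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (64UL * 1024 * 1024)   /* 64 MB, larger than any cache */

    /* Increment one byte per cache line and return elapsed seconds. */
    static double touch(volatile char *buf, size_t size) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < size; i += 64)
            buf[i]++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA system with at least two nodes\n");
            return 1;
        }
        numa_run_on_node(0);                /* keep this thread on node 0 */

        char *local  = numa_alloc_onnode(BUF_SIZE, 0);  /* node 0's DRAM */
        char *remote = numa_alloc_onnode(BUF_SIZE, 1);  /* one hop away */
        if (!local || !remote) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        memset(local, 1, BUF_SIZE);         /* fault pages in before timing */
        memset(remote, 1, BUF_SIZE);

        printf("local:  %.3f s\n", touch(local, BUF_SIZE));
        printf("remote: %.3f s\n", touch(remote, BUF_SIZE));

        numa_free(local, BUF_SIZE);
        numa_free(remote, BUF_SIZE);
        return 0;
    }

On a two-node machine the remote pass typically runs measurably slower; that non-uniformity is exactly what tools such as Carrefour try to manage by migrating or replicating pages.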

by Fabien Gaud, Baptiste Lepers, Justin Funston, Mohammad Dashti, Alexandra Fedorova, Vivien Quéma, Renaud Lachaize, Mark Roth

Lean Software Development - Building and Shipping Two Versions:
Catering to developers’ strengths while still meeting team objectives

Once I was managing a software team and we were working on several initiatives. Projects were assigned based on availability, skill sets, and development goals. This resulted in two developers, Mary and Melissa, being assigned to the same project.

by Kate Matsudaira

It Probably Works:
Probabilistic algorithms are all around us--not only are they acceptable, but some programmers actually seek out chances to use them.

Probabilistic algorithms exist to solve problems that are either impossible or unrealistic (too expensive, too time-consuming, etc.) to solve precisely. In an ideal world, you would never actually need to use probabilistic algorithms. To programmers who are not familiar with them, the idea can be positively nerve-wracking: "How do I know that it will actually work? What if it’s inexplicably wrong? How can I debug it? Maybe we should just punt on this problem, or buy a whole lot more servers..."
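
As one concrete taste of the trade-off, here is a minimal Bloom filter sketch in C (a classic probabilistic set, offered purely as an illustration; names such as bloom_add and bloom_maybe_contains are made up, not from the article). It answers membership queries in constant time using a fixed 1 KB bit array; the price is a small false-positive rate -- it may say "probably present" for a key that was never added, but never "absent" for one that was.

    /* Sketch: a tiny Bloom filter using k = 4 seeded FNV-1a hashes. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BLOOM_BITS 8192                 /* 1 KB bit array */
    static uint8_t bloom[BLOOM_BITS / 8];

    /* FNV-1a, seeded so one function yields k pseudo-independent hashes. */
    static uint64_t fnv1a(const char *s, uint64_t seed) {
        uint64_t h = 1469598103934665603ULL ^ seed;
        while (*s) { h ^= (uint8_t)*s++; h *= 1099511628211ULL; }
        return h;
    }

    static void bloom_add(const char *key) {
        for (uint64_t k = 0; k < 4; k++) {
            uint64_t bit = fnv1a(key, k) % BLOOM_BITS;
            bloom[bit / 8] |= (uint8_t)(1u << (bit % 8));
        }
    }

    /* false => definitely absent; true => present with high probability */
    static bool bloom_maybe_contains(const char *key) {
        for (uint64_t k = 0; k < 4; k++) {
            uint64_t bit = fnv1a(key, k) % BLOOM_BITS;
            if (!(bloom[bit / 8] & (1u << (bit % 8)))) return false;
        }
        return true;
    }

    int main(void) {
        bloom_add("alice");
        bloom_add("bob");
        printf("alice? %d\n", bloom_maybe_contains("alice")); /* 1 */
        printf("carol? %d\n", bloom_maybe_contains("carol")); /* almost surely 0 */
        return 0;
    }

The error rate is tuned via the bit-array size and hash count rather than eliminated, which is precisely the mental shift the article's anxious programmer has to make.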

by Tyler McMullen

Componentizing the Web:
We may be on the cusp of a new revolution in web development.

There is no task in software engineering today quite as herculean as web development. A typical specification for a web application might read: The app must work across a wide variety of browsers. It must run animations at 60 fps. It must be immediately responsive to touch. It must conform to a specific set of design principles and specs. It must work on just about every screen size imaginable, from TVs and 30-inch monitors to mobile phones and watch faces. It must be well-engineered and maintainable in the long term.

by Taylor Savage