ACM's annual Applicative conference is June 1-2 in New York City.

Watch presentations from Applicative 2015:

Keynote: JSON Graph: Reactive REST at Netflix
Keynote: Systems at Facebook Scale
Utilizing the other 80% of your system's performance
Flux: A Unidirectional Dataflow Architecture for React Apps
Exploring the Reactive Extensions in JavaScript



March/April 2016


Everything Sysadmin:
The Small Batches Principle


  Thomas A. Limoncelli

Reducing waste, encouraging experimentation, and making everyone happy

Q: What do DevOps people mean when they talk about small batches?

A: To answer that, let's take a look at an unpublished chapter from the upcoming book The Practice of System and Network Administration, third edition, due out in October 2016.

The small batches principle is part of the DevOps methodology. It comes from the lean manufacturing movement, which is often called just-in-time manufacturing. It can be applied to just about any kind of process. It also enables the MVP (minimum viable product) methodology, which involves launching a small version of a service to get early feedback that informs the decisions made later in the project.

System Administration, EverythingSysadmin



Debugging Distributed Systems

  Ivan Beschastnikh
  Patty Wang
  Yuriy Brun
  Michael D. Ernst

ShiViz is a new distributed system debugging visualization tool.

Distributed systems pose unique challenges for software developers. Reasoning about concurrent activities of system nodes and even understanding the system's communication topology can be difficult. A standard approach to gaining insight into system activity is to analyze system logs. Unfortunately, this can be a tedious and complex process. This article looks at several key features and debugging challenges that differentiate distributed systems from other kinds of software. The article presents several promising tools and ongoing research to help resolve these challenges.

Distributed Development, Distributed Computing


The Soft Side of Software:
Nine Things I Didn't Know I Would Learn
Being an Engineer Manager


  Kate Matsudaira

Many of the skills aren't technical at all.

When I moved from being an engineer to being a dev lead, I knew I had a lot to learn. My initial thinking was that I had to be able to do thorough code reviews, design and architect websites, see problems before they happened, and ask insightful technical questions. To me that meant learning the technology and becoming a better engineer. When I actually got into the role (and after doing it for almost 15 years), the things I have learned, and that have mattered the most, weren't those technical details. In fact, many of the skills I have built that made me a good engineer manager weren't technical at all and, while unexpected lessons, have helped me in many other areas of my life.

What follows are some of these lessons, along with ideas for applying them in your life—whether you are a manager, want to be a manager, or just want to be a better person and employee.

The Soft Side of Software



Should You Upload or Ship Big Data to the Cloud?

  Sachin Date

The accepted wisdom does not always hold true.

It is accepted wisdom that when the data you wish to move into the cloud is at terabyte scale and beyond, you are better off shipping it to the cloud provider, rather than uploading it. This article takes an analytical look at how shipping and uploading strategies compare, the various factors on which they depend, and under what circumstances you are better off shipping rather than uploading data, and vice versa. Such an analytical determination is important to make, given the increasing availability of gigabit-speed Internet connections, along with the explosive growth in data-transfer speeds supported by newer editions of drive interfaces such as SAS and PCI Express. As this article reveals, the aforementioned "accepted wisdom" does not always hold true, and there are well-reasoned, practical recommendations for uploading versus shipping data to the cloud.

Data and Databases, Distributed Computing, Networks
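The tradeoff the abstract describes can be illustrated with a back-of-envelope calculation. The link speed, utilization, and courier transit time below are our own illustrative assumptions, not figures from the article:

```python
# Back-of-envelope comparison of uploading vs. shipping data to the cloud.
# All numbers here are illustrative assumptions, not figures from the article.

def upload_days(terabytes, link_gbps, utilization=0.8):
    """Days to upload at a given link speed and effective utilization."""
    bits = terabytes * 1e12 * 8
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

SHIPPING_DAYS = 2  # assumed courier transit time; drive copy time ignored

for tb in (1, 10, 100):
    up = upload_days(tb, link_gbps=1)
    better = "upload" if up < SHIPPING_DAYS else "ship"
    print(f"{tb:>4} TB: upload ~{up:.1f} days vs ship ~{SHIPPING_DAYS} days -> {better}")
```

At a well-utilized gigabit link, the crossover in this toy model sits in the tens of terabytes, which is why faster links and faster drive interfaces can flip the conventional advice.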


Kode Vicious: What Are You Trying to Pull?

A single cache miss is more expensive than many instructions.

Saving instructions—how very 1990s of him. It's always nice when people pay attention to details, but sometimes they simply don't pay attention to the right ones. While KV would never encourage developers to waste instructions, given the state of modern software, it does seem like someone already has. KV would, as you did, come out on the side of legibility over the saving of a few instructions.

Kode Vicious



The Flame Graph

  Brendan Gregg

This visualization of software execution is a new necessity for performance profiling and debugging.

An everyday problem in our industry is understanding how software is consuming resources, particularly CPUs. What exactly is consuming how much, and how did this change since the last software version? These questions can be answered using software profilers, tools that help direct developers to optimize their code and operators to tune their environment. The output of profilers can be verbose, however, making it laborious to study and comprehend. The flame graph provides a new visualization for profiler output and can make for much faster comprehension, reducing the time for root cause analysis.

Development
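The aggregation step behind a flame graph can be sketched in a few lines: identical call paths from profiler stack samples are merged and counted, producing the "folded stacks" text format that flame graph tooling consumes. The sample data below is made up for illustration:

```python
# Minimal sketch of folded-stack aggregation: merge identical call paths
# from profiler samples into one line per path with a sample count.
# The stack samples here are invented for illustration.
from collections import Counter

samples = [
    ("main", "parse", "read"),
    ("main", "parse", "read"),
    ("main", "render", "draw"),
    ("main", "parse", "lex"),
]

folded = Counter(";".join(stack) for stack in samples)

for path, count in sorted(folded.items()):
    print(f"{path} {count}")  # e.g. "main;parse;read 2"
```

Each output line becomes one tower in the flame graph, with width proportional to its sample count.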



January/February 2016


The Soft Side of Software:
Delegation as Art


  Kate Matsudaira

Be someone who makes everyone else better.

When I started my career as a junior engineer, I couldn't wait to be senior. I would regularly review our promotion guidelines and assess my progress and contributions against them. Of course, at the time I didn't really understand what being senior meant. Being a senior engineer means having strong technical skills, the ability to communicate well and navigate ambiguous situations, and most important of all, the ability to grow and lead other people. Leadership isn't just for managers anymore.

The Soft Side of Software



Why Logical Clocks are Easy

  Carlos Baquero and Nuno Preguiça

Sometimes all you need is the right language.

Any computing system can be described as executing sequences of actions, with an action being any relevant change in the state of the system. For example, reading a file to memory, modifying the contents of the file in memory, or writing the new contents to the file are relevant actions for a text editor. In a distributed system, actions execute in multiple locations; in this context, actions are often called events. Examples of events in distributed systems include sending or receiving messages, or changing some state in a node. Not all events are related, but some events can cause and influence how other, later events occur. For example, a reply to a received mail message is influenced by that message, and maybe by prior messages received.

Languages
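The causal ordering the abstract describes is what a Lamport logical clock captures. Here is a minimal sketch (the class and names are ours, not the authors'): increment on each local event, attach the counter to outgoing messages, and on receipt take the maximum of the local and received values plus one.

```python
# Minimal Lamport logical clock: a sketch of how "happened-before"
# ordering is tracked. Structure and names are illustrative.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time  # timestamp attached to the outgoing message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()        # a: 1
b.local_event()     # b: 1
t2 = b.receive(t)   # b: max(1, 1) + 1 = 2
print(a.time, b.time)  # the receive is ordered after the send
```

If event x can influence event y, x's timestamp is strictly smaller than y's; that one guarantee is enough for many distributed algorithms.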



Use-Case 2.0

  Ivar Jacobson, Ian Spence, Brian Kerr

The Hub of Software Development

Use cases have been around for almost 30 years as a requirements approach and have been part of the inspiration for more-recent techniques such as user stories. Now the inspiration has flown in the other direction. Use-Case 2.0 is the new generation of use-case-driven development: light, agile, and lean, inspired by user stories and the agile methodologies Scrum and Kanban.

Development



Kode Vicious: GNL is Not Linux

What's in a name?

I keep seeing the terms Linux and GNU/Linux online when I'm reading about open-source software. The terms seem to be mixed up or confused a lot and generate a lot of angry mail and forum threads. When I use a Linux distro am I using Linux or GNU? Does it matter?

What, indeed, is in a name? As you've already seen, this quasi-technical topic continues to cause a bit of heat in the software community, particularly in the open-source world. You can find the narrative from the GNU side by clicking on the link provided in the postscript to this article, but KV finds that narrative lacking, and so, against my better judgment about pigs and dancing, I will weigh in with a few comments.

Kode Vicious



The Bikeshed: More Encryption Means Less Privacy

  Poul-Henning Kamp

Retaining electronic privacy requires more political engagement.

When Edward Snowden made it known to the world that pretty much all traffic on the Internet was collected and searched by the NSA, GCHQ (the UK Government Communications Headquarters) and various other countries' secret services as well, the IT and networking communities were furious and felt betrayed.

The Bikeshed, Privacy



Statistics for Engineers

  Heinrich Hartmann

Applying statistical techniques to operations data

Modern IT systems collect an increasing wealth of data from network gear, operating systems, applications, and other components. This data needs to be analyzed to derive vital information about the user experience and business performance. For instance, faults need to be detected, service quality needs to be measured, and resource usage for the coming days and months needs to be forecast.

Data
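One of the simplest forecasting techniques the article's topic suggests is fitting a least-squares trend line to a metric and extrapolating it. The daily disk-usage numbers below are invented for illustration:

```python
# Toy capacity forecast: fit a least-squares line to daily disk usage
# and extrapolate. The usage samples are invented for illustration.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [0, 1, 2, 3, 4, 5, 6]
used_gb = [100, 104, 109, 113, 118, 121, 126]  # hypothetical daily samples

slope, intercept = linear_fit(days, used_gb)
print(f"growth ~{slope:.1f} GB/day; projected day-30 usage "
      f"~{intercept + slope * 30:.0f} GB")
```

Real operations data is rarely this linear, of course; the point is that even a basic statistical model turns raw samples into an actionable capacity estimate.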



Borg, Omega, and Kubernetes

  Brendan Burns, Brian Grant, David Oppenheimer, Eric Brewer, and John Wilkes

Lessons learned from three container-management systems over a decade

Though widespread interest in software containers is a relatively recent phenomenon, at Google we have been managing Linux containers at scale for more than ten years and built three different container-management systems in that time. Each system was heavily influenced by its predecessors, even though they were developed for different reasons. This article describes the lessons we've learned from developing and operating them.

System Evolution



November/December 2015


Kode Vicious:
Code Hoarding


Committing to commits, and the beauty of summarizing graphs

There are as many reasons for exposing or hiding code as there are coders in the firmament of programming. Put another way, there is more code hidden in source repos than are dreamt of in your... well, you get the idea.
One of the most common forms of code hiding that I come across when working with any code, not just open source, is the code that is committed but to which the developers are not fully committed themselves. Sometimes this is code that supports a feature demanded by sales or marketing, but which the developers either do not believe in or which they consider to be actively harmful to the system.

Kode Vicious



The Soft Side of Software:
The Paradox of Autonomy and Recognition


  Kate Matsudaira

Thoughts on trust and merit in software team culture

Who doesn't want recognition for their hard work and contributions? Early in my career I wanted to believe that if you worked hard, and added value, you would be rewarded. I wanted to believe in the utopian ideal that hard work, discipline, and contributions were the fuel that propelled you up the corporate ladder. Boy, was I wrong.

The Soft Side of Software



Everything Sysadmin:
How Sysadmins Devalue Themselves


  Thomas A. Limoncelli

And how to track on-call coverage

Q: Dear Tom, How can I devalue my work? Lately I've felt like everyone appreciates me, and, in fact, I'm overpaid and underutilized. Could you help me devalue myself at work?

A: Dear Reader, Absolutely! I know what a pain it is to lug home those big paychecks. It's so distracting to have people constantly patting you on the back. Ouch! Plus, popularity leads to dates with famous musicians and movie stars. (Just ask someone like Taylor Swift or Leonardo DiCaprio.) Who wants that kind of distraction when there's a perfectly good video game to be played?

System Administration, EverythingSysadmin



The Verification of a Distributed System

  Caitie McCaffrey

A practitioner's guide to increasing confidence in system correctness

Leslie Lamport, known for his seminal work in distributed systems, famously said, "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." Given this bleak outlook and the large set of possible failures, how do you even begin to verify and validate that the distributed systems you build are doing the right thing?

Distributed Development, Distributed Computing



Accountability in Algorithmic Decision-making

  Nicholas Diakopoulos

A view from computational journalism

Every fiscal quarter automated writing algorithms churn out thousands of corporate earnings articles for the AP (Associated Press) based on little more than structured data. Companies such as Automated Insights, which produces the articles for AP, and Narrative Science can now write straight news articles in almost any domain that has clean and well-structured data: finance, sure, but also sports, weather, and education, among others. The articles aren't cardboard either; they have variability, tone, and style, and in some cases readers even have difficulty distinguishing the machine-produced articles from human-written ones.

Privacy and Rights, HCI, AI



Immutability Changes Everything

  Pat Helland

We need it, we can afford it, and the time is now.

There is an inexorable trend toward storing and sending immutable data. We need immutability to coordinate at a distance, and we can afford immutability as storage gets cheaper. This article is an amuse-bouche sampling the repeated patterns of computing that leverage immutability. Climbing up and down the compute stack really does yield a sense of déjà vu all over again.

Data
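One of the repeated patterns the abstract alludes to is the append-only log: state is never updated in place; new versions are appended, and reads resolve the latest entry. A minimal sketch (structure and names are our illustration, not the author's):

```python
# Minimal append-only log: entries are never mutated or deleted;
# "updates" are new appends, and reads scan for the latest version.
# Structure here is an illustration of the pattern, not a real system.

class AppendOnlyLog:
    def __init__(self):
        self._entries = []  # append-only; existing entries never change

    def append(self, key, value):
        self._entries.append((key, value))

    def latest(self, key):
        for k, v in reversed(self._entries):
            if k == key:
                return v
        return None

log = AppendOnlyLog()
log.append("config", "v1")
log.append("config", "v2")  # supersedes v1 without mutating it
print(log.latest("config"))
```

Because old entries are immutable, they can be cached, replicated, and shipped between nodes without coordination, which is exactly why the pattern recurs up and down the stack.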



Time is an Illusion

  George Neville-Neil

Lunchtime doubly so.
- Ford Prefect to Arthur Dent in "The Hitchhiker's Guide to the Galaxy", by Douglas Adams

One of the more surprising things about digital systems is how poorly they keep time. When most programs ran on a single system this was not a significant issue for the majority of software developers, but once software moved into the distributed-systems realm this inaccuracy became a significant challenge. Few programmers have read the most important paper in this area, Leslie Lamport's "Time, Clocks, and the Ordering of Events in a Distributed System" (1978), and only a few more have come to appreciate the problems they face once they move into the world of distributed systems.

Distributed Computing



Non-volatile Storage

  Mihir Nanavati
  Malte Schwarzkopf
  Jake Wires
  Andrew Warfield

Implications of the Datacenter's Shifting Center

For the entire careers of most practicing computer scientists, a fundamental observation has consistently held true: CPUs are significantly more performant and more expensive than I/O devices. The fact that CPUs can process data at extremely high rates, while simultaneously servicing multiple I/O devices, has had a sweeping impact on the design of both hardware and software for systems of all sizes, for pretty much as long as we've been building them. This assumption, however, is in the process of being completely invalidated.

Data and Databases



Schema.org: Evolution of Structured Data on the Web

  R.V. Guha, Google
  Dan Brickley, Google
  Steve Macbeth, Microsoft

Big data makes common schemas even more necessary.

Separation between content and presentation has always been one of the important design aspects of the Web. Historically, however, even though most Web sites were driven off structured databases, they published their content purely in HTML. Services such as Web search, price comparison, reservation engines, etc., that operated on this content had access only to HTML. Applications requiring access to the structured data underlying these Web pages had to build custom extractors to convert plain HTML into structured data. These efforts were often laborious, and the scrapers were fragile and error-prone, breaking every time a site changed its layout.

Data and Databases
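The alternative to scraping that Schema.org enables is embedding machine-readable markup directly in the page. A sketch of what that looks like, using JSON-LD with a hypothetical product (the item and its fields are invented for illustration):

```python
# Illustrative Schema.org structured data: instead of scraping HTML,
# a page embeds a machine-readable description (here as JSON-LD) that
# search engines and other services can consume directly.
# The product and its values are invented for illustration.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",  # hypothetical item
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
    },
}

script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(script_tag)
```

A consumer of the page parses this block directly, with no layout-dependent extraction to break when the HTML changes.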


