The talks from the 2015 Applicative conference are now online

April 2015

From the EDVAC to WEBVACs

  Daniel C. Wang

Cloud computing for computer scientists

By now everyone has heard of cloud computing and realized that it is changing how both traditional enterprise IT and emerging startups are building solutions for the future. Is this trend toward the cloud just a shift in the complicated economics of the hardware and software industry, or is it a fundamentally different way of thinking about computing? Having worked in the industry, I can confidently say it is both.

Distributed Computing

March 2015

Spicing Up Dart with Side Effects

  Erik Meijer, Applied Duality; Kevin Millikin, Google; Gilad Bracha, Google

A set of extensions to the Dart programming language, designed to support asynchrony and generator functions

The Dart programming language has recently incorporated a set of extensions designed to support asynchrony and generator functions. Because Dart is a language for Web programming, latency is an important concern. To avoid blocking, developers must make methods asynchronous when computing their results requires nontrivial time. Generator functions ease the task of computing iterable sequences.
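The pattern the abstract describes can be sketched outside Dart as well. Below is a hedged illustration in TypeScript rather than Dart; `fetchScore` is an invented stand-in for any operation whose result takes nontrivial time, and `naturals` shows a generator function producing an iterable sequence lazily.

```typescript
// Marking a function async lets callers await it without blocking.
async function fetchScore(player: string): Promise<number> {
  // Simulate a slow computation with a timer.
  return new Promise(resolve =>
    setTimeout(() => resolve(player.length * 10), 10));
}

// A generator function eases producing an iterable sequence on demand.
function* naturals(limit: number): Generator<number> {
  for (let n = 0; n < limit; n++) yield n;
}

async function main(): Promise<void> {
  // await suspends only this function, not the event loop.
  const score = await fetchScore("alice");
  console.log(score, [...naturals(3)]);
}
main();
```

Dart's own syntax differs (async* generators, yield), but the division of labor is the same: asynchrony keeps latency-sensitive code from blocking, and generators keep sequence-producing code direct.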

Programming Languages

Reliable Cron across the Planet

  Štěpán Davidovič, Kavita Guliani, Google

...or How I stopped worrying and learned to love time

This article describes Google's implementation of a distributed Cron service, serving the vast majority of internal teams that need periodic scheduling of compute jobs. During its existence, we have learned many lessons on how to design and implement what might seem like a basic service. Here, we discuss the problems that distributed Crons face and outline some potential solutions.

Distributed Computing

There is No Now

  Justin Sheehy

Problems with simultaneity in distributed systems

"Now." The time elapsed between when I wrote that word and when you read it was at least a couple of weeks. That kind of delay is one that we take for granted and don't even think about in written media. "Now." If we were in the same room and instead I spoke aloud, you might have a greater sense of immediacy. You might intuitively feel as if you were hearing the word at exactly the same time that I spoke it. That intuition would be wrong. If, instead of trusting your intuition, you thought about the physics of sound, you would know that time must have elapsed between my speaking and your hearing. The motion of the air, carrying my word, would take time to get from my mouth to your ear.

Distributed Computing

Parallel Processing with Promises

  Spencer Rathbun

A simple method of writing a collaborative system

In today's world, there are many reasons to write concurrent software. The desire to improve performance and increase throughput has led to many different asynchronous techniques. The techniques involved, however, are generally complex and the source of many subtle bugs, especially if they require shared mutable state. If shared state is not required, then these problems can be solved with a better abstraction called promises. These allow programmers to hook asynchronous function calls together, waiting for each to return success or failure before running the next appropriate function in the chain.
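The chaining the abstract describes can be sketched briefly. This is a hedged TypeScript illustration with invented names (`double` stands in for any asynchronous step); each step settles with success or failure, and the next function in the chain runs only after the previous one resolves, with no shared mutable state.

```typescript
function double(value: number): Promise<number> {
  return new Promise((resolve, reject) => {
    // Simulate asynchronous work; reject to model a failing step.
    setTimeout(() => {
      if (value < 0) reject(new Error("negative input"));
      else resolve(value * 2);
    }, 5);
  });
}

double(1)
  .then(v => double(v))                  // runs only after the first step succeeds
  .then(v => console.log("result:", v))  // prints "result: 4"
  .catch(err => console.error("a step failed:", err.message));
```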


February 2015

Raw Networking

  George Neville-Neil

Relevance and repeatability

Dear KV, The company I work for has decided to use a wireless network link to reduce latency, at least when the weather between the stations is good. It seems to me that for transmission over lossy wireless links we'll want our own transport protocol that sits directly on top of whatever the radio provides, instead of wasting bits on IP and TCP or UDP headers, which, for a point-to-point network, aren't really useful.

Networks, Kode Vicious, Performance

Go Static or Go Home

  Paul Vixie

In the end, dynamic systems are simply less secure.

Most current and historic problems in computer and network security boil down to a single observation: letting other people control our devices is bad for us. At another time, I'll explain what I mean by "other people" and "bad." For the purpose of this article, I'll focus entirely on what I mean by control. One way we lose control of our devices is to external distributed denial of service (DDoS) attacks, which fill a network with unwanted traffic, leaving no room for real ("wanted") traffic. Other forms of DDoS are similar—an attack by the Low Orbit Ion Cannon (LOIC), for example, might not totally fill up a network, but it can keep a web server so busy answering useless attack requests that the server can't answer any useful customer requests. Either way, DDoS means outsiders are controlling our devices, and that's bad for us.

Networks, Security, Web Security

HTTP/2.0 - The IETF is Phoning It In

  Poul-Henning Kamp

Bad protocol, bad politics

In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the "World Wide Web." Tim's HTTP protocol ran over 10-Mbit/s Ethernet on coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop's CPU is a hundred times faster and has a thousand times as much RAM as Tim's machine had, but the HTTP protocol is still the same. A few days ago the IESG (Internet Engineering Steering Group) asked for "Last Call" comments on the new "HTTP/2.0" protocol before blessing it as a "Proposed Standard."

The Bikeshed, Networks, Security, Web Development, Web Security, Web Services

December 2014 / January 2015

META II: Digital Vellum in the Digital Scriptorium

  Dave Long

Revisiting Schorre's 1962 compiler-compiler

Some people do living history—reviving older skills and material culture by reenacting Waterloo or knapping flint knives. One pleasant rainy weekend in 2012, I set my sights a little more recently and settled in for a little meditative retro-computing, ca. 1962, following the ancient mode of transmission of knowledge: lecture and recitation—or rather, grace of living in historical times, lecture (here, in the French sense, reading) and transcription (or even more specifically, grace of living post-Post, lecture and reimplementation). Fortunately, for my purposes, Dewey Val Schorre's paper on META II was, unlike many more recent digital artifacts, readily available as a digital scan.

Programming Languages

Model-based Testing: Where Does It Stand?

  Robert V. Binder, Bruno Legeard, and Anne Kramer

MBT has positive effects on efficiency and effectiveness, even if it only partially fulfills high expectations.

From mid-June 2014 to early August 2014, we conducted a survey to learn how MBT users view its efficiency and effectiveness. The 2014 MBT User Survey, a follow-up to a similar 2012 survey, was open to all those who have evaluated or used any MBT approach. Its 32 questions included some from a survey distributed at the 2013 User Conference on Advanced Automated Testing. Some questions focused on the efficiency and effectiveness of MBT, providing the figures that managers are most interested in. Other questions were more technical and sought to validate a common MBT classification scheme. A common classification scheme could help users understand both the general diversity of MBT and the specific approaches available.

Quality Assurance

Securing the Network Time Protocol

  Harlan Stenn

Crackers discover how to use NTP as a weapon for abuse.

In the late 1970s David L. Mills began working on the problem of synchronizing time on networked computers, and NTP (Network Time Protocol) version 1 made its debut in 1980. This was at a time when the net was a much friendlier place—the ARPANET days. NTP version 2 appeared approximately a year later, about the same time as CSNET (Computer Science Network). NSFNET (National Science Foundation Network) launched in 1986. NTP version 3 showed up in 1993.

Networks, Security

November 2014

Scalability Techniques for Practical Synchronization Primitives

  Davidlohr Bueso

Designing locking primitives with performance in mind

In an ideal world, applications are expected to scale automatically when executed on increasingly larger systems. In practice, however, not only does this scaling not occur, but it is common to see performance actually worsen on those larger systems.


Internal Access Controls

  Geetanjali Sampemane

Trust But Verify.

Every day seems to bring news of another dramatic and high-profile security incident, whether it is the discovery of longstanding vulnerabilities in widely used software such as OpenSSL or Bash, or celebrity photographs stolen and publicized. There seems to be an infinite supply of zero-day vulnerabilities and powerful state-sponsored attackers. In the face of such threats, is it even worth trying to protect your systems and data? What can systems security designers and administrators do?


Disambiguating Databases

  Rick Richardson

Use the database built for your access model.

The topic of data storage is one that doesn't need to be well understood until something goes wrong (data disappears) or something goes really right (too many customers). Because databases can be treated as black boxes with an API, their inner workings are often overlooked. They're often treated as magic things that just take data when offered and supply it when asked. Since these two operations are the only understood activities of the technology, they are often the only features presented when comparing different technologies.


Kode Vicious: Too Big to Fail

  George Neville-Neil

Visibility leads to debuggability.

Our project has been rolling out a well-known distributed key/value store onto our infrastructure, and we've been surprised, more than once, when a simple increase in the number of clients has not only slowed things down but brought them to a complete halt. Each time, this results in a rollback while several of us scour the online forums to figure out whether anyone else has seen the same problem. The entire reason for using this project's software is to increase the scale of a large system, so I have been surprised at how many times a small increase in load has led to a complete failure. Is there something about scaling systems that's so difficult that these systems become fragile, even at a modest scale?

Kode Vicious, Development

October 2014

The Responsive Enterprise: Embracing the Hacker Way

  Erik Meijer and Vikram Kapoor

Soon every company will be a software company.

As of July 2014, Facebook, founded in 2004, is in the top 20 of the most valuable companies in the S&P 500, putting the 10-year-old software company in the same league as IBM, Oracle, and Coca-Cola. Of the top five fastest-growing companies with regard to market capitalization in 2014 (table 1), three are software companies: Apple, Google, and Microsoft (in fact, one could argue that Intel is also driven by software, making it four out of five).


There's No Such Thing as a General-purpose Processor

  David Chisnall

And the belief in such a device is harmful.

There is an increasing trend in computer architecture to categorize processors and accelerators as "general purpose." Of the papers published at this year's International Symposium on Computer Architecture (ISCA 2014), nine out of 45 explicitly referred to general-purpose processors; one additionally referred to general-purpose FPGAs (field-programmable gate arrays), and another referred to general-purpose MIMD (multiple instruction, multiple data) supercomputers, stretching the definition to the breaking point. This article presents the argument that there is no such thing as a truly general-purpose processor and that the belief in such a device is harmful.

Computer Architecture

A New Software Engineering

  Ivar Jacobson, Ed Seidewitz

What happened to the promise of rigorous, disciplined, professional practices for software development?

What has been adopted under the rubric of "software engineering" is a set of practices largely adapted from other engineering disciplines: project management, design and blueprinting, process control, and so forth. The basic analogy was to treat software as a manufactured product, with all the real "engineering" going on upstream of that—in requirements analysis, design, modeling, etc.


September 2014

JavaScript and the Netflix User Interface

  Alex Liu

Conditional dependency resolution

In the two decades since its introduction, JavaScript has become the de facto official language of the Web. JavaScript trumps every other language when it comes to the number of runtime environments in the wild. Nearly every consumer hardware device on the market today supports the language in some way. While this is done most commonly through the integration of a Web browser application, many devices now also support Web views natively as part of the operating system UI (user interface). Across most platforms (phones, tablets, TVs, and game consoles), the Netflix UI, for example, is written almost entirely in JavaScript.

Web Development

Productivity in Parallel Programming: A Decade of Progress

  John T. Richards, Jonathan Brezin, Calvin B. Swart, Christine A. Halverson

Looking at the design and benefits of X10

In 2002 DARPA (Defense Advanced Research Projects Agency) launched a major initiative in HPCS (high-productivity computing systems). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.


Evolution of the Product Manager

  Ellen Chisa

Better education needed to develop the discipline

Software practitioners know that product management is a key piece of software development. Product managers talk to users to help figure out what to build, define requirements, and write functional specifications. They work closely with engineers throughout the process of building software. They serve as a sounding board for ideas, help balance the schedule when technical challenges occur, and push back to executive teams when technical revisions are needed. Product managers are involved from before the first code is written until after it goes out the door.


Kode Vicious: Port Squatting

  George Neville-Neil

Don't irk your local sysadmin.

A few years ago you upbraided some developers for not following the correct process when requesting a reserved network port from IETF (Internet Engineering Task Force). While I get that squatting a used port is poor practice, I wonder if you, yourself, have ever tried to get IETF to allocate a port. We recently went through this with a new protocol on an open-source project, and it was a nontrivial and frustrating exercise. While I wouldn't encourage your readers to squat ports, I can see why they might just look for unallocated ports on their own and simply start using those, with the expectation that if their protocols proved popular, they would be granted the allocations later.

Kode Vicious, Networks
