ACM's new Applicative conference runs from February 25th through the 27th in New York.

Keynotes from Jafar Husain at Netflix and Ben Maurer at Facebook. Other speakers include Maged Michael, Theo Schlossnagle, Alexandra Fedorova, Ulrich Drepper, and Queue's own Kode Vicious, George Neville-Neil.

Early registration discounts through January 28th.
Conference site.
Register here.


December 2014 / January 2015


META II: Digital Vellum in the Digital Scriptorium

  Dave Long

Revisiting Schorre's 1962 compiler-compiler

Some people do living history—reviving older skills and material culture by reenacting Waterloo or knapping flint knives. One pleasant rainy weekend in 2012, I set my sights a little more recently and settled in for a little meditative retro-computing, ca. 1962, following the ancient mode of transmission of knowledge: lecture and recitation—or rather, grace of living in historical times, lecture (here, in the French sense, reading) and transcription (or even more specifically, grace of living post-Post, lecture and reimplementation). Fortunately, for my purposes, Dewey Val Schorre's paper on META II was, unlike many more recent digital artifacts, readily available as a digital scan.

Programming Languages



Model-based Testing: Where Does It Stand?

  Robert V. Binder, Bruno Legeard, and Anne Kramer

MBT has positive effects on efficiency and effectiveness, even if it only partially fulfills high expectations.

From mid-June 2014 to early August 2014, we conducted a survey to learn how MBT users view its efficiency and effectiveness. The 2014 MBT User Survey, a follow-up to a similar 2012 survey (http://robertvbinder.com/real-users-of-model-based-testing/), was open to all those who have evaluated or used any MBT approach. Its 32 questions included some from a survey distributed at the 2013 User Conference on Advanced Automated Testing. Some questions focused on the efficiency and effectiveness of MBT, providing the figures that managers are most interested in. Other questions were more technical and sought to validate a common MBT classification scheme, which could help users understand both the overall diversity of MBT approaches and the specifics of each.

Quality Assurance



Go Static or Go Home

  Paul Vixie

In the end, dynamic systems are simply less secure.

Most current and historic problems in computer and network security boil down to a single observation: letting other people control our devices is bad for us. At another time, I'll explain what I mean by "other people" and "bad." For the purpose of this article, I'll focus entirely on what I mean by control. One way we lose control of our devices is to external distributed denial of service (DDoS) attacks, which fill a network with unwanted traffic, leaving no room for real ("wanted") traffic. Other forms of DDoS are similar—an attack by the Low Orbit Ion Cannon (LOIC), for example, might not totally fill up a network, but it can keep a web server so busy answering useless attack requests that the server can't answer any useful customer requests. Either way, DDoS means outsiders are controlling our devices, and that's bad for us.

Networks, Security, Web Security



Securing the Network Time Protocol

  Harlan Stenn

Crackers discover how to use NTP as a weapon for abuse.

In the late 1970s David L. Mills began working on the problem of synchronizing time on networked computers, and NTP (Network Time Protocol) version 1 made its debut in 1980. This was at a time when the net was a much friendlier place—the ARPANET days. NTP version 2 appeared approximately a year later, about the same time as CSNET (Computer Science Network). NSFNET (National Science Foundation Network) launched in 1986. NTP version 3 showed up in 1993.

Networks, Security



HTTP/2.0 - The IETF is Phoning It In

  Poul-Henning Kamp

Bad protocol, bad politics

In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the "World Wide Web." Tim's HTTP protocol ran on 10-Mbit/s Ethernet and coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop's CPU is a hundred times faster and the machine has a thousand times as much RAM as Tim's computer had, but the HTTP protocol is still the same. A few days ago the IESG (Internet Engineering Steering Group) asked for "Last Call" comments on the new "HTTP/2.0" protocol before blessing it as a "Proposed Standard."

The Bikeshed, Networks, Security, Web Development, Web Security, Web Services




November 2014


Scalability Techniques for Practical Synchronization Primitives

  Davidlohr Bueso

Designing locking primitives with performance in mind

In an ideal world, applications are expected to scale automatically when executed on increasingly larger systems. In practice, however, not only does this scaling not occur, but it is common to see performance actually worsen on those larger systems.
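
As a rough illustration of why contention worsens as core counts grow (a minimal sketch of my own, not code from the article), compare a naive test-and-set spinlock, where every waiting thread keeps writing to the same cache line, with a variant that backs off between attempts:

    /* Illustrative sketch only (C11 atomics): a naive test-and-set spinlock
       and a variant with bounded exponential backoff. */
    #include <stdatomic.h>

    typedef struct { atomic_flag locked; } spinlock_t;

    static void spin_lock_naive(spinlock_t *l)
    {
        /* Every failed attempt performs a write on the shared cache line. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;
    }

    static void spin_lock_backoff(spinlock_t *l)
    {
        unsigned delay = 1;
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
            /* Spin locally before retrying, so waiters stop hammering the line. */
            for (volatile unsigned i = 0; i < delay; i++)
                ;                        /* a CPU "pause" hint would go here */
            if (delay < 1024)
                delay <<= 1;
        }
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

On a handful of cores the two behave almost identically; on a large machine the naive version can spend most of its time bouncing the lock's cache line between sockets, which is one way throughput can fall as the system gets bigger.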

Concurrency



Internal Access Controls

  Geetanjali Sampemane

Trust, but verify.

Every day seems to bring news of another dramatic and high-profile security incident, whether it is the discovery of longstanding vulnerabilities in widely used software such as OpenSSL or Bash, or celebrity photographs stolen and publicized. There seems to be an infinite supply of zero-day vulnerabilities and powerful state-sponsored attackers. In the face of such threats, is it even worth trying to protect your systems and data? What can systems security designers and administrators do?

Security



Disambiguating Databases

  Rick Richardson

Use the database built for your access model.

The topic of data storage is one that doesn't need to be well understood until something goes wrong (data disappears) or something goes really right (too many customers). Because databases can be treated as black boxes with an API, their inner workings are often overlooked. They're often treated as magic things that just take data when offered and supply it when asked. Since these two operations are the only understood activities of the technology, they are often the only features presented when comparing different technologies.

Data



Kode Vicious: Too Big to Fail

  George Neville-Neil

Visibility leads to debuggability.

Our project has been rolling out a well-known, distributed key/value store onto our infrastructure, and we've been surprised—more than once—when a simple increase in the number of clients has not only slowed things, but brought them to a complete halt. This then results in rollback while several of us scour the online forums to figure out if anyone else has seen the same problem. The entire reason for using this project's software is to increase the scale of a large system, so I have been surprised at how many times a small increase in load has led to a complete failure. Is there something about scaling systems that's so difficult that these systems become fragile, even at a modest scale?

Kode Vicious, Development




October 2014


The Responsive Enterprise: Embracing the Hacker Way

  Erik Meijer and Vikram Kapoor

Soon every company will be a software company.

As of July 2014, Facebook, founded in 2004, is in the top 20 of the most valuable companies in the S&P 500, putting the 10-year-old software company in the same league as IBM, Oracle, and Coca-Cola. Of the top five fastest-growing companies with regard to market capitalization in 2014 (table 1), three are software companies: Apple, Google, and Microsoft (in fact, one could argue that Intel is also driven by software, making it four out of five).

Development



There's No Such Thing as a General-purpose Processor

  David Chisnall

And the belief in such a device is harmful.

There is an increasing trend in computer architecture to categorize processors and accelerators as "general purpose." Of the papers published at this year's International Symposium on Computer Architecture (ISCA 2014), nine out of 45 explicitly referred to general-purpose processors; one additionally referred to general-purpose FPGAs (field-programmable gate arrays), and another referred to general-purpose MIMD (multiple instruction, multiple data) supercomputers, stretching the definition to the breaking point. This article presents the argument that there is no such thing as a truly general-purpose processor and that the belief in such a device is harmful.

Computer Architecture



A New Software Engineering

  Ivar Jacobson and Ed Seidewitz

What happened to the promise of rigorous, disciplined, professional practices for software development?

What has been adopted under the rubric of "software engineering" is a set of practices largely adapted from other engineering disciplines: project management, design and blueprinting, process control, and so forth. The basic analogy was to treat software as a manufactured product, with all the real "engineering" going on upstream of that—in requirements analysis, design, modeling, etc.

Development




September 2014


JavaScript and the Netflix User Interface

  Alex Liu

Conditional dependency resolution

In the two decades since its introduction, JavaScript has become the de facto official language of the Web. JavaScript trumps every other language when it comes to the number of runtime environments in the wild. Nearly every consumer hardware device on the market today supports the language in some way. While this is done most commonly through the integration of a Web browser application, many devices now also support Web views natively as part of the operating system UI (user interface). Across most platforms (phones, tablets, TVs, and game consoles), the Netflix UI, for example, is written almost entirely in JavaScript.

Web Development



Productivity in Parallel Programming: A Decade of Progress

  John T. Richards, Jonathan Brezin, Calvin B. Swart, and Christine A. Halverson

Looking at the design and benefits of X10

In 2002 DARPA (Defense Advanced Research Projects Agency) launched a major initiative in HPCS (high-productivity computing systems). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.

Concurrency



Evolution of the Product Manager

  Ellen Chisa

Better education needed to develop the discipline

Software practitioners know that product management is a key piece of software development. Product managers talk to users to help figure out what to build, define requirements, and write functional specifications. They work closely with engineers throughout the process of building software. They serve as a sounding board for ideas, help balance the schedule when technical challenges occur, and push back to executive teams when technical revisions are needed. Product managers are involved from before the first code is written until after it goes out the door.

Education



Kode Vicious: Port Squatting

  George Neville-Neil

Don't irk your local sysadmin.

A few years ago you upbraided some developers for not following the correct process when requesting a reserved network port from IETF (Internet Engineering Task Force). While I get that squatting a used port is poor practice, I wonder if you, yourself, have ever tried to get IETF to allocate a port. We recently went through this with a new protocol on an open-source project, and it was a nontrivial and frustrating exercise. While I wouldn't encourage your readers to squat ports, I can see why they might just look for unallocated ports on their own and simply start using those, with the expectation that if their protocols proved popular, they would be granted the allocations later.

Kode Vicious, Networks



Older Issues