
Latest Queue Content     

Concurrency

Productivity in Parallel Programming: A Decade of Progress

  John T. Richards, Jonathan Brezin, Calvin B. Swart, Christine A. Halverson

Looking at the design and benefits of X10


In 2002 DARPA (Defense Advanced Research Projects Agency) launched a major initiative in HPCS (high-productivity computing systems). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.


Related:
Unlocking Concurrency
The Ideal HPC Programming Language
Software Transactional Memory: Why is it only a Research Toy?



Web Development

JavaScript and the Netflix User Interface

  Alex Liu

Conditional dependency resolution


In the two decades since its introduction, JavaScript has become the de facto official language of the Web. JavaScript trumps every other language when it comes to the number of runtime environments in the wild. Nearly every consumer hardware device on the market today supports the language in some way. While this is done most commonly through the integration of a Web browser application, many devices now also support Web views natively as part of the operating system UI (user interface). Across most platforms (phones, tablets, TVs, and game consoles), the Netflix UI, for example, is written almost entirely in JavaScript.


Related:
Reveling in Constraints
Multitier Programming in Hop
The Antifragile Organization



Columns: Kode Vicious


Port Squatting

  George V. Neville-Neil

Don't irk your local sysadmin


Dear KV, A few years ago you upbraided some developers for not following the correct process when requesting a reserved network port from IETF (Internet Engineering Task Force). While I get that squatting on a used port is poor practice, I wonder if you, yourself, have ever tried to get IETF to allocate a port. We recently went through this with a new protocol on an open-source project, and it was a nontrivial and frustrating exercise. While I wouldn't encourage your readers to squat ports, I can see why they might just look for unallocated ports on their own and simply start using those, with the expectation that if their protocols proved popular, they would be granted the allocations later.



Web Security

Security Collapse in the HTTPS Market

  Axel Arnbak, Hadi Asghari, Michel van Eeten, Nico van Eijk

Assessing legal and technical solutions to secure HTTPS


HTTPS (Hypertext Transfer Protocol Secure) has evolved into the de facto standard for secure Web browsing. Through the certificate-based authentication protocol, Web services and Internet users first authenticate one another ("shake hands") using a TLS/SSL certificate, encrypt Web communications end-to-end, and show a padlock in the browser to signal that a communication is secure. In recent years, HTTPS has become an essential technology to protect social, political, and economic activities online.
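
As a minimal sketch of the handshake the abstract describes, the snippet below uses Python's standard ssl module to validate a server's certificate against the system's trusted CAs and then report the negotiated TLS version; the hostname is a placeholder.

    import socket
    import ssl

    # Minimal sketch of the HTTPS handshake: the client validates the server's
    # TLS/SSL certificate against trusted CA roots, after which traffic on the
    # socket is encrypted end to end.
    hostname = "example.org"  # placeholder host
    context = ssl.create_default_context()  # loads the system's trusted CA roots

    with socket.create_connection((hostname, 443)) as tcp_sock:
        with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
            cert = tls_sock.getpeercert()           # certificate presented in the handshake
            print(tls_sock.version())               # negotiated protocol, e.g. TLSv1.3
            print(cert["subject"], cert["issuer"])  # who the certificate names, and which CA signed it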


Related:
Securing the Edge
The Seven Deadly Sins of Linux Security
Security in the Browser



Why Is It Taking So Long to Secure Internet Routing?

  Sharon Goldberg

Routing security incidents can still slip past deployed security defenses.


BGP (Border Gateway Protocol) is the glue that sticks the Internet together, enabling data communications between large networks operated by different organizations. BGP makes Internet communications global by setting up routes for traffic between organizations—for example, from Boston University's network, through larger ISPs (Internet service providers) such as Level3, Pakistan Telecom, and China Telecom, then on to residential networks such as Comcast or enterprise networks such as Bank of America.
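
To make the path-vector idea concrete, here is a toy sketch (not a real BGP implementation): each announcement for a prefix carries the list of ASes it traversed, and a router that hears several announcements prefers the shortest AS path. The prefix and AS numbers are illustrative, and real BGP applies many more tie-breakers (local preference, MED, and so on).

    announcements = {
        # prefix -> AS paths heard from different neighbors (values are illustrative)
        "128.197.0.0/16": [
            [3356, 111],        # a short path through one large ISP to the origin AS
            [4134, 3356, 111],  # a longer detour through another provider
        ],
    }

    def best_path(paths):
        """Prefer the shortest AS path, one of BGP's core tie-breakers."""
        return min(paths, key=len)

    for prefix, paths in announcements.items():
        print(prefix, "->", best_path(paths))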


Related:
What DNS Is Not
The Network is Reliable
DNS Complexity by Paul Vixie



Certificate Transparency

  Ben Laurie

Public, verifiable, append-only logs


On August 28, 2011, a mis-issued wildcard HTTPS certificate for google.com was used to conduct a man-in-the-middle attack against multiple users in Iran. The certificate had been issued by a Dutch CA (certificate authority) known as DigiNotar, a subsidiary of VASCO Data Security International. Later analysis showed that DigiNotar had been aware of the breach in its systems for more than a month—since at least July 19. It also showed that at least 531 fraudulent certificates had been issued. The final count may never be known, since DigiNotar did not have records of all the mis-issued certificates. On September 20, 2011, DigiNotar was declared bankrupt.
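
The column's subject is public, verifiable, append-only logs. Certificate Transparency itself uses a Merkle hash tree; the simpler hash-chain sketch below (entries and names are illustrative) conveys only the append-only property: changing an earlier entry changes every later head, so anyone holding an old head can detect tampering.

    import hashlib

    def chain_head(entries):
        """Head of a simple hash chain over the log entries."""
        head = b""
        for entry in entries:
            head = hashlib.sha256(head + entry.encode()).digest()
        return head.hex()

    log = ["cert: example.org issued by SomeCA"]   # illustrative entries
    head_1 = chain_head(log)
    log.append("cert: mail.example.org issued by SomeCA")
    head_2 = chain_head(log)

    # Appending preserves old heads; rewriting history would invalidate them.
    assert chain_head(log[:1]) == head_1
    assert chain_head(log) == head_2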


Related:
Network Forensics
The Case Against Data Lock-in
A Decade of OS Access-control Extensibility



Securing the Tangled Web

  Christoph Kern

Preventing script injection vulnerabilities through software design


Script injection vulnerabilities are a bane of Web application development: deceptively simple in cause and remedy, they are nevertheless surprisingly difficult to prevent in large-scale Web development.
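
A tiny sketch of that "simple cause and remedy": building HTML by string concatenation lets attacker-controlled text become markup, while context-aware escaping renders it as inert text. Python's html.escape stands in here for the escape-by-default templating APIs the article advocates.

    import html

    user_input = '<script>steal(document.cookie)</script>'  # attacker-controlled value

    # Cause: naive concatenation puts untrusted data straight into markup.
    unsafe = "<div>Hello, " + user_input + "</div>"

    # Remedy: escape (better, use templating that escapes by default) so the
    # input is displayed as text rather than executed as script.
    safe = "<div>Hello, " + html.escape(user_input) + "</div>"

    print(unsafe)  # the embedded <script> would run in a browser
    print(safe)    # renders as &lt;script&gt;...&lt;/script&gt; -- harmless text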


Related:
Fault Injection in Production
High Performance Web Sites
Vicious XSS



Education

Privacy, Anonymity, and Big Data in the Social Sciences

  Jon P. Daries, Justin Reich, Jim Waldo, Elise M. Young, Jonathan Whittinghill, Daniel Thomas Seaton, Andrew Dean Ho, Isaac Chuang

Quality social science research and the privacy of human subjects require trust.


Open data has tremendous potential for science, but, in human subjects research, there is a tension between privacy and releasing high-quality open data. Federal law governing student privacy and the release of student records suggests that anonymizing student data protects student privacy. Guided by this standard, we de-identified and released a data set from 16 MOOCs (massive open online courses) from MITx and HarvardX on the edX platform. In this article, we show that these and other de-identification procedures necessitate changes to data sets that threaten replication and extension of baseline analyses. To balance student privacy and the benefits of open data, we suggest focusing on protecting privacy without anonymizing data by instead expanding policies that compel researchers to uphold the privacy of the subjects in open data sets. If we want to have high-quality social science research and also protect the privacy of human subjects, we must eventually have trust in researchers. Otherwise, we'll always have the strict tradeoff between anonymity and science illustrated here.
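
One common de-identification recipe, sketched below, generalizes quasi-identifiers into coarse bands and suppresses groups smaller than k; the records, fields, and threshold are illustrative, and the MITx/HarvardX procedure may differ. The point is that the released rows no longer match the raw ones, which is exactly what threatens replication of baseline analyses.

    from collections import Counter

    # Illustrative records: (country, age) are quasi-identifiers; the last field is an outcome.
    records = [
        ("US", 22, 0.91), ("US", 24, 0.40), ("US", 23, 0.75),
        ("BR", 31, 0.66), ("BR", 35, 0.12),
    ]

    def generalize(records, k=3):
        """Coarsen ages into decade bands, then drop any group smaller than k."""
        banded = [(country, f"{age // 10 * 10}s", grade) for country, age, grade in records]
        counts = Counter((country, band) for country, band, _ in banded)
        return [row for row in banded if counts[(row[0], row[1])] >= k]

    released = generalize(records)
    # Coarsened and suppressed rows mean statistics computed on the release
    # can differ from those computed on the raw data.
    print(released)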


Related:
Four Billion Little Brothers?: Privacy, mobile phones, and ubiquitous data collection
Communications Surveillance: Privacy and Security at Risk
Modeling People and Places with Internet Photo Collections



Undergraduate Software Engineering: Addressing the Needs of Professional Software Development

  Michael J. Lutz, J. Fernando Naveda, James R. Vallino



In the fall semester of 1996 RIT (Rochester Institute of Technology) launched the first undergraduate software engineering program in the United States. The culmination of five years of planning, development, and review, the program was designed from the outset to prepare graduates for professional positions in commercial and industrial software development.


Related:
Fun and Games: Multi-Language Development
Pride and Prejudice: (The Vasa)
A Conversation with John Hennessy and David Patterson



Networks

The Network is Reliable

  Peter Bailis, Kyle Kingsbury

An informal survey of real-world communications failures


"The network is reliable" tops Peter Deutsch's classic list, "Eight fallacies of distributed computing" (https://blogs.oracle.com/jag/resource/Fallacies.html), "all [of which] prove to be false in the long run and all [of which] cause big trouble and painful learning experiences." Accounting for and understanding the implications of network behavior is key to designing robust distributed programs—in fact, six of Deutsch's "fallacies" directly pertain to limitations on networked communications. This should be unsurprising: the ability (and often requirement) to communicate over a shared channel is a defining characteristic of distributed programs, and many of the key results in the field pertain to the possibility and impossibility of performing distributed computations under particular sets of network conditions.

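A minimal sketch of what not assuming a reliable network looks like in code: bound every remote call with a timeout and retry a limited number of times with jittered backoff. The rpc callable and its parameters are placeholders, and this pattern is only safe as written for idempotent requests.

    import random
    import time

    def call_with_retry(rpc, attempts=4, timeout=2.0, base_delay=0.1):
        """Issue a remote call without assuming the network is reliable."""
        for attempt in range(attempts):
            try:
                return rpc(timeout=timeout)          # rpc is a placeholder callable
            except (TimeoutError, ConnectionError):
                if attempt == attempts - 1:
                    raise                             # give up after the last attempt
                # jittered exponential backoff before the next try
                time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
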

Related:
Eventual Consistency Today: Limitations, Extensions, and Beyond
The Antifragile Organization
Self-Healing Networks


Multipath TCP

  Christoph Paasch, Olivier Bonaventure

Decoupled from IP, TCP is at last able to support multihomed hosts.


The Internet relies heavily on two protocols. In the network layer, IP (Internet Protocol) provides an unreliable datagram service and ensures that any host can exchange packets with any other host. Since its creation in the 1970s, IP has seen the addition of several features, including multicast, IPsec (IP security), and QoS (quality of service). The latest revision, IPv6 (IP version 6), supports 16-byte addresses.
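
As a rough sketch of what using Multipath TCP looks like on Linux: a socket is opened with the MPTCP protocol, and the kernel transparently falls back to plain TCP if the peer does not support it. This assumes a kernel with MPTCP support (5.6 or later); recent Python versions expose socket.IPPROTO_MPTCP, and 262 is the raw Linux protocol number otherwise. The host is a placeholder.

    import socket

    IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

    # Open a Multipath TCP connection; falls back to TCP if the peer lacks MPTCP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    sock.connect(("example.org", 80))   # placeholder host
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    print(sock.recv(200))
    sock.close()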


Related:
Passively Measuring TCP Round-trip Times
You Don't Know Jack about Network Performance
TCP Offload to the Rescue


Rate-limiting State

  Paul Vixie

The edge of the Internet is an unruly place


By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we're at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills—and the quality assurance budgets—that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means that a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working "well enough" in spite of its failings.
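
Keeping rate-limiting state at the edge commonly takes the form of a per-client token bucket, sketched below as an illustration rather than the specific mechanism the column describes: each response spends a token, tokens refill at a steady rate, and bursts beyond the budget are refused.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Per-client token bucket: steady refill rate, bounded burst."""
        def __init__(self, rate=10.0, burst=20.0):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = defaultdict(TokenBucket)   # keyed by client address (illustrative)

    def should_respond(client_addr):
        return buckets[client_addr].allow()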


Related:
DNS Complexity
Broadcast Messaging: Messaging to the Masses
Lessons from the Letter



Columns: Cerf's Up


ACM and the Professional Programmer

  Vinton G. Cerf

How do you, the reader, stay informed about research that influences your work?


In the very early days of computing, professional programming was nearly synonymous with academic research because computers tended to be devices that existed only or largely in academic settings. As computers became commercially available, they began to be found in private-sector, business environments. The 1950s and 1960s brought computing in the form of automation and data processing to the private sector, and along with this came a growing community of professionals whose focus on computing was pragmatic and production-oriented. Computing was (and still is) evolving, and the academic community continued to explore new software and hardware concepts and constructs. New languages were invented (and are still being invented) to try new ideas in the formulation of programs. The introduction of time sharing created new territory to explore. In today's world cloud computing is the new time sharing, more or less.



Columns: The Bikeshed


Quality Software Costs Money - Heartbleed Was Free

  Poul-Henning Kamp

How to generate funding for FOSS


The world runs on free and open-source software, FOSS for short, and, to some degree, it has predictably infiltrated just about every software-based product in the world.



Case Study

Quality Assurance


Automated QA Testing at EA: Driven by Events

A discussion with Michael Donat, Jafar Husain, and Terry Coatta


To millions of game geeks, the position of QA (quality assurance) tester at Electronic Arts must seem like a dream job. But from the company's perspective, the overhead associated with QA can look downright frightening, particularly in an era of massively multiplayer games.


Related:
Orchestrating an Automated Test Lab
Finding Usability Bugs with Automated Tests
Adopting DevOps Practices in Quality Assurance
