The Bike Shed

Privacy and Rights



Originally published in Queue vol. 9, no. 9
see this item in the ACM Digital Library



Meng-Day (Mandel) Yu, Srinivas Devadas - Pervasive, Dynamic Authentication of Physical Items
The use of silicon PUF circuits

Nicholas Diakopoulos - Accountability in Algorithmic Decision-making
A view from computational journalism

Olivia Angiuli, Joe Blitzstein, Jim Waldo - How to De-identify Your Data
Balancing statistical accuracy and subject privacy in large social-science data sets

Jim Waldo, Alan Ramos, Weina Scott, William Scott, Doug Lloyd, Katherine O'Leary - A Threat Analysis of RFID Passports
Do RFID passports make us vulnerable to identity theft?


(newest first)

Displaying 10 most recent comments. Read the full list here

Mathias Holmgren | Tue, 28 Feb 2012 15:10:56 UTC

You are either a troll or insane.

"I am trying to impose a civil liability only for unintentionally caused damage, whether a result of sloppy coding, insufficient testing, cost cutting, incomplete documentation, or just plain incompetence."

There are so many false assumptions behind this statement, I can't even begin...

Bottom line: what would happen?

- No customer would be willing to pay for the explosion in prices needed to cover hypothetical, unintended applications of the software in situations the customer was not willing to pay to test for in the first place

=> no software would get built, because nobody could afford it

you are a complete fool

Paul E. Bennett | Sat, 24 Dec 2011 12:00:22 UTC

As one who is entirely in the bespoke embedded-systems realm, I know how much effort has to be invested in getting as near-perfect a product as is possible to achieve. Many of the systems I have dealt with could, by their failure, lead to fatal outcomes. That they haven't in my 40+ years in the industry shows that the "good enough" on my part, and my colleagues', has held true. Some of the systems have run for 25 years or more without needing patches or updates of any kind (a consequence of the industries served).

In the UK, the Consumer Protection Act has far-reaching teeth for companies that are proven to be negligent in their practices. That negligence would extend to not using "Current Best Practice" or not being "Cognisant of Current and Emerging Standards". What the article proposes is, therefore, already covered in the UK as far as embedded systems are concerned.

Of course, embedded-systems people have good knowledge of the hardware platform, the environment of deployment, and the interface possibilities that would be incorporated in their remit. We know our systems' relationship to standards (such as IEC 61508) and how well they measure up to the client requirements.

I don't know how many in the desktop-software world will have done a safety or security risk assessment for their applications before they begin their coding. So perhaps the legislation should demand that all software (other than open source) have some assessment of its risk of failure, in a standardised format, written into its guarantees. Open source is excluded only because it is, by nature, open to modification and adaptation by anyone who feels capable of doing so (whether they are or not).

Stephan Wehner | Mon, 12 Dec 2011 21:22:13 UTC

The "definition problem" has been described in earlier comments: it is difficult to describe what is proper use of software, what is improper use, and which unwanted outcomes can in fact be blamed on the software rather than the user; even more so when you look at how many different environments software runs under, and how many different people use it.

The definition of the functionality of many software packages would often be a huge and unwieldy document, no doubt having its own mistakes and therefore maintenance and update schedule.

A very small step, and one that looks more doable, may be to require vendors to offer a guarantee for some aspect of the software, however small, that they choose themselves ("This software will not reboot your PC," "This software will not access your sound system"). Once some blurb is required, meaningful guarantees may be made in time. When the blurb looks weak, one knows that the product is immature.

Oh well,


Art | Thu, 03 Nov 2011 20:18:37 UTC

Here's a more painful way to improve software quality: Immediate disclosure of vulnerability details, including proof-of-concept/working exploits. The situation now is bad, but not quite bad enough for users (customers) to demand change.

Also +1 for the comment about being liable for not having a rapid and competent response to vulnerabilities. That's a nice refinement.

And recall that users (customers) share some (perhaps a lot of) responsibility, e.g. page 14 of the Microsoft SIR vol 11.

tifkap | Sat, 15 Oct 2011 21:30:48 UTC

Software liability is long overdue. The quality of most code is terrible, and the end of outsourcing to the cheapest code-monkey is not yet in sight. Limited-liability clauses have no value if they are not legal.

The only way to structurally change security is to let liability in, and let the market sort it out.

Kirk | Sat, 15 Oct 2011 13:49:02 UTC

"Your argument is totally bogus: FOSS software comes with source code, so I can actually look at it and decide if it is competently written, and I have a license to modify it as I wish. "

While you "can" look at the code and decide, the reality is you don't. There is no way I would believe that you have examined and understood the entirety of the OS your computer is running. In fact, I have to revise that to say: "No, you can't. There is simply too much there."

The simple reality is that you don't like closed-source software and, not content with simply not using it, you believe you should prevent me from using it as well. Which also leads me to believe that what you really hope is that such a law would force open software that you have not been smart enough to mimic. Which then also shows why the company should have the right not to show you the source.

Your argument serves quite well at exposing your motives. Thank you.

Poul-Henning Kamp | Sat, 01 Oct 2011 12:37:51 UTC

@Marcus Chen (and others): First of all, you seem to think that this is about software sold in Best Buy to consumers. It is not. It is about all software sold, and therefore also about your government buying databases or Alcometers. And no, I don't expect everybody to be able to read the source code, but I know capitalist society well enough to know that a new market for software reviews would spring into life, and that certification companies like UL, TÜV, and Det Norske Veritas would love to sell certifications for sensibly written software. We need either accountability or transparency; today we have neither.

Gary Benner | Sat, 01 Oct 2011 01:58:23 UTC

We have evolved into an environment where software is shipped when it is "good enough": a state where its use brings more benefits than downsides. This lets users contribute to the final product by providing feedback and bug reports.

This affects the end-user cost of software. If any software development company were working under the liability terms discussed here, then the cost of (proprietary) software would be astronomical, and the options limited.

As developers we work in a specific layer, mostly the application level; however, the functioning of our application can be affected by updates or upgrades of compiled libraries, the OS, hardware drivers, and perhaps even the hardware and BIOS.

Trying to determine liability here is fraught with legal intricacies that would make a lawyer salivate with joy.

It is not a simple world we live in (the software world, that is), and trying to overlay simplistic legal solutions on it will not work easily. End users require simple solutions, and that is perhaps better achieved in other ways, such as engaging service providers who can, using reasonable endeavours, manage their customers' use of, and difficulties with, technology as a whole.

Marcus Chen | Fri, 30 Sep 2011 23:00:28 UTC

In one breath Mr. Kamp decries software vendors' immunity from liability, and in the next he attempts to grab that same immunity for open-source vendors. Apparently he believes that a developer is free to compromise quality as far as he likes and risk the data of the user in any way he likes, in short, to perpetuate the ills of the current software industry, so long as he provides the source code. And he believes this despite full knowledge that the end user is virtually guaranteed not to have the ability, or to be able to afford the expertise, to evaluate said source code for hazards (let alone to modify it). Now, he may perhaps be forgiven for this belief, since we in the industry all know that a developer who releases their source code magically has conferred upon them the sublime wisdom of the gods and an iron-hard moral rectitude that puts the saints to shame, but I cannot help but imagine that legislators may be the teeniest, tiniest bit skeptical of this loophole.

There is one point on which we agree: the time has indeed come for software liability. I respectfully invite Mr. Kamp to join us when he's serious about it instead of using it as a cloak to push his own agenda.

James Phillips | Fri, 30 Sep 2011 21:29:22 UTC

Revised version of my comment removing hyperlink, quoting relevant text, just over 2000 chars.

After the voting-machine hack a few years ago, which made use of return-oriented programming, I came to the conclusion that the computer revolution has not happened yet. Currently, with ACTA being signed within days, the computer industry is actually regressing: users are allowed less and less information about the hardware they are running. To ignore unproven, untrusted hardware is to ignore the elephant in the room.

Quoting myself: I have an alternate vision of a "trusted computer": one that can be completely trusted and verified by software development houses. Things like keyboard controllers and card readers would have socketed ROM. The ROM would be burned using a ROM burner programmed with hard-wired toggle switches and other discrete components (including a CRC checksum to guard against data-entry error). The "known good", formally proven correct code would be stored by analog means such as microfiche. All serious compilers would either be deterministic, or support a "--deterministic" switch, so that binaries compiled on different machines could be compared. The average person would not be expected to burn their own ROM chips, but they would expect the majority of their software and hardware to be formally proven correct. If an error were later found, the hardware manufacturer or software company would be sued, by a class-action lawsuit if necessary. The lawsuit would succeed because, if the software was formally proven correct, any errors must be deliberate sabotage.
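The verification steps this vision rests on, comparing deterministic builds bit for bit and guarding a ROM image with a checksum, can be sketched in a few lines. This is only an illustration of the idea; the function names and sample bytes are hypothetical, not part of any real toolchain:

```python
import hashlib
import zlib

def artifact_digest(data: bytes) -> str:
    # SHA-256 of a compiled artifact. Two independent builds of the
    # same source should yield identical digests if the toolchain
    # is deterministic.
    return hashlib.sha256(data).hexdigest()

def rom_crc_ok(image: bytes, expected_crc: int) -> bool:
    # CRC-32 of a ROM image, checked against a checksum entered by
    # hand, to guard against data-entry error when burning the ROM.
    return zlib.crc32(image) == expected_crc

# Hypothetical outputs of the same source built on two machines:
build_a = b"\x7fELF...contents of binary, machine A"
build_b = b"\x7fELF...contents of binary, machine A"  # identical bytes

# The build is reproducible iff the digests match bit for bit.
reproducible = artifact_digest(build_a) == artifact_digest(build_b)
```

The point of hashing rather than diffing is that the two machines never need to exchange the binaries themselves, only their short digests.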

We are nowhere near that yet. The computer industry is still in its infancy. CPU time is seen as cheaper than programmers' time, so software bloat is tolerated. Even if you release a secure OS based on the L4 microkernel, you still have to trust all the hardware that goes into your machine. Modern computers are so complex that nobody knows them from top to bottom. This means it is impossible to secure a computer system without proving that each abstraction layer implements its advertised interface. Any unpublished quirks can lead to abstraction leakage, which can lead to bugs, which can lead to security exploits.


© 2018 ACM, Inc. All Rights Reserved.