(newest first)

  • Mathias Holmgren | Tue, 28 Feb 2012 15:10:56 UTC

    You are either a troll or insane.
    "I am trying to impose a civil liability only for unintentionally caused damage, whether a result of sloppy coding, insufficient testing, cost cutting, incomplete documentation, or just plain incompetence."
    There are so many false assumptions behind this statement, I can't even begin...
    bottom line, what would happen?
    - No customer would be willing to pay for the explosion of prices to cover hypothetical, unintended applications of software in situations the customer was not willing to pay to test in the first place
    => no software would get built because nobody would afford it
    you are a complete fool
  • Paul E. Bennett | Sat, 24 Dec 2011 12:00:22 UTC

    As one who is entirely in the bespoke embedded systems realm, I know how much effort has to be invested in getting as near perfect a product as is possible to achieve. Many of the systems I have dealt with could, by their failure, lead to fatal outcomes. That they haven't in my 40+ years of being in the industry shows that the "good enough" on my part, and my colleagues', has held true. Some of the systems have run for 25 years or more without needing patches or updates of any kind (nature of the industries served).
    In the UK, the "Consumer Protection Act" has far-reaching teeth for companies that are proven to be negligent in their practices. That negligence would extend to not using "Current Best Practice" or not being "Cognizant of Current and Emerging Standards". What the article proposes is, therefore, already covered in the UK as far as embedded systems are concerned.
    Of course, the embedded systems people have good knowledge of the hardware platform, environment of deployment and interface possibilities that would be incorporated in their remit. We will know our systems' relationship to standards (such as IEC 61508) and how well they measure up to the client requirements.
    I don't know how many in the desktop software world will have done a Safety or Security Risk Assessment for their applications before they begin their coding. So perhaps the legislation should demand that all software (other than Open Source) have some risk-of-failure assessment, in a standardised format, written into the guarantees. Open Source is only excluded from this by the nature that such is open to modification and adaptation by anyone who feels they are capable of doing so (whether they are or not).
  • Stephan Wehner | Mon, 12 Dec 2011 21:22:13 UTC

    The "definition-problem" has been described in earlier comments: it's difficult to describe what is proper use of software, what is improper use, and which unwanted outcomes can in fact be blamed on the software and not the user; even more so when you look at how many different environments software runs under, by how many different people. 
    The definition of the functionality of many software packages would often be a huge and unwieldy document, no doubt having its own mistakes and therefore maintenance and update schedule.
    A very small step, and one that looks more doable, may be to require vendors to offer a guarantee for some aspect of the software, however small, that they choose themselves ("This software will not reboot your PC," "This software will not access your sound system"). Once some blurb is required, meaningful guarantees may be made in time. When the blurb looks weak, one knows that the product is immature.
    Oh well,
  • Art | Thu, 03 Nov 2011 20:18:37 UTC

    Here's a more painful way to improve software quality: Immediate disclosure of vulnerability details, including proof-of-concept/working exploits. The situation now is bad, but not quite bad enough for users (customers) to demand change.
    Also +1 for the comment about being liable for not having a rapid and competent response to vulnerabilities. That's a nice refinement.
    And recall that users (customers) share some (perhaps a lot of) responsibility, e.g. page 14 of the Microsoft SIR vol 11.
  • tifkap | Sat, 15 Oct 2011 21:30:48 UTC

    Software liability is long overdue. The quality of most code is terrible, and the end of out-sourcing to the cheapest code-monkey is not yet in sight. Limited-liability clauses have no value if they are not legal.
    The only way to structurally change security is to let liability in, and let the market sort it out.
  • Kirk | Sat, 15 Oct 2011 13:49:02 UTC

    "Your argument is totally bogus:  FOSS software comes with source code, so I can actually look at it and decide if it is competently written, and I have a license to modify it as I wish. "
    While you "can" look at the code and decide, the reality is you don't. There is no way I would believe that you have examined and understood the entirety of the OS that your computer is running. In fact, I have to revise again to say, "No you can't. There is simply too much there."
    The simple reality is that you don't like closed-source software and, not content with simply not using it, you believe you should prevent me from using it as well. Which leads me to believe that you really hope such a law would force open software that you have not been smart enough to mimic. Which then also shows why the company should have the right not to show you the source.
    Your argument serves quite well at exposing your motives. Thank you.
  • Poul-Henning Kamp | Sat, 01 Oct 2011 12:37:51 UTC

    @Marcus Chen (and others): First of all, you seem to think that this is about software sold in Best Buy to consumers. It is not. It is about all software sold, and therefore also about your Government buying databases or Alcometers. And no, I don't expect everybody to be able to read the source code, but I know capitalistic society well enough to know that a new market for software reviews would spring into life, and that certification companies like UL, TÜV and Det Norske Veritas would love to sell certifications to sensibly written software. We need either accountability or transparency; today we have neither.
  • Gary Benner | Sat, 01 Oct 2011 01:58:23 UTC

    We have evolved into an environment where software is distributed when "good enough". This is a state where its use brings more benefits than downsides. This allows users to contribute to the final product by providing feedback and bug reports.
    This affects the end user cost of software. If any software development company was working under the liability terms discussed herein, then the cost of (proprietary) software would be astronomical, and the options limited.
    As developers we work in a specific layer, mostly application level, however the functioning of our application can be affected by updates / upgrades of compiled libraries, the OS, hardware drivers, and perhaps even the hardware and bios. 
    Trying to determine liability here is fraught with legal intricacies that would make a lawyer salivate with joy.
    It is not a simple world we live in (the software world, that is), and trying to overlay simplistic legal solutions will not work easily. End users require simple solutions, and that is perhaps better achieved in other ways, such as engaging service providers who can, using reasonable endeavours, manage their customers' use of, and difficulties experienced with, technology as a whole.
  • Marcus Chen | Fri, 30 Sep 2011 23:00:28 UTC

    In one breath Mr. Kamp decries the immunity that software vendors have for their lack of liability and in the next he attempts to grab that same immunity for open source vendors.  Apparently he believes that a developer is free to compromise the quality as far as he likes and risk the data of the user in any way he likes, in short, to perpetuate the ills of the current software industry, so long as he provides them the source code.  And he believes this despite the full knowledge that the end-user is virtually guaranteed not to have the ability or be able to afford the expertise to evaluate said source code for hazards (let alone to modify the source code).  Now, he may perhaps be forgiven for this belief since we in the industry all know that a developer who releases their source code magically has conferred upon him/her the sublime wisdom of the gods and an iron-hard moral rectitude that puts the saints to shame, but I cannot help but imagine that legislators may be the teeniest, tiniest bit skeptical of this loophole.
    There is one point on which we agree: the time has indeed come for software liability.  I respectfully invite Mr. Kamp to join us when he's serious about it instead of using it as a cloak  to push his own agenda.
  • James Phillips | Fri, 30 Sep 2011 21:29:22 UTC

    Revised version of my comment removing hyperlink, quoting relevant text, just over 2000 chars.
    After that voting machine hack a few years ago, making use of return-oriented programming, I came to the conclusion that the computer revolution has not happened yet. Currently with ACTA being signed in days, the computer industry is actually regressing: users are allowed less and less information about the hardware they are running. To ignore unproven, untrusted hardware is to ignore the elephant in the room.
    Quoting myself:
    I have an alternate vision of a "trusted computer": one that can be completely trusted and verified by software development houses. Things like keyboard controllers and card readers would have socketed ROM.  The ROM would be burned using a ROM burner programmed using hard-wired toggle switches and other discrete components (including a CRC checksum to guard against data-entry error). The "known good", formally proven correct, code would be stored using analog means such as microfiche. All serious compilers would either be deterministic, or support a "--deterministic" switch to compare binaries compiled on different machines. The average person would not be expected to burn their own ROM chips, but they would expect the majority of their software and hardware to be formally proven correct. If an error is later found, the hardware manufacturer or software company would be sued, by a class-action lawsuit if necessary. The lawsuit would succeed because, if the software was formally proven correct, any errors must be deliberate sabotage.
    We are nowhere near that yet. The computer industry is still in its infancy. CPU time is seen as cheaper than programmers' time, so software bloat is tolerated. Even if you release a secure OS based on the L4 microkernel, you still have to trust all the hardware that goes into your machine. Modern computers are so complex that nobody knows them from top to bottom. This means it is impossible to secure a computer system without proving each abstraction layer implements its advertised interface. Any unpublished quirks can lead to abstraction leakage, which can lead to bugs, which can lead to security exploits.
  • Gabriel Fineman | Fri, 30 Sep 2011 19:46:18 UTC

    This is a cost problem and not a liability problem. Most companies can write bug-free code, but it is very expensive - at least ten times as expensive. Some companies have to spend that money to write code where lives are at stake, such as for anti-lock brakes or elevators. The rest of us write 'good enough' code that is adequate to get the job done for most people but not perfect. Thus, most code only substantially conforms to specifications and has constant bug fixes. The real problem is that most users want inexpensive 'good enough' code and not very expensive perfect code. Such legislation, while appealing, would never be enacted because it would force all code to be expensive, perfect code.
  • Dave Oldcorn | Fri, 30 Sep 2011 19:44:31 UTC

    But even a very highly computer literate user cannot possibly audit any software project of anything above trivial complexity in a reasonable amount of time.
    What's your estimate of how long it would take you to audit even a reasonably lightweight Linux distro?
    Given that, why should Linux be exempt from liability, but Windows or OSX or iOS be liable for bugs?
  • Poul-Henning Kamp | Fri, 30 Sep 2011 18:38:26 UTC

    Lizard: Nobody forces a computer-illiterate user to buy the clause 1 version of the software.
  • Ben Miller | Fri, 30 Sep 2011 16:40:49 UTC

    In this scenario, who would be at fault when open source components are reused in new applications?  For instance, company A builds a gem (a shared / open source / free Ruby library, for those that don't know) for Rails that handles authentication to a web site. Company B uses said gem in a web application they in turn market to the public at large.  If that gem is flawed, all applications built using the gem are flawed.  Company B did not pay for the gem, but they commanded a fee for their product or service.  So who gets sued?
  • Lizard | Fri, 30 Sep 2011 14:58:01 UTC

    I just wanted to note that the "open source exemption" makes precious little sense. Indeed, it basically punishes any computer user who isn't a security expert. The fact that any engineer could disassemble a Ford Pinto they bought and see how the gas tank was positioned did not exempt Ford from liability. Furthermore, I've got far better things to do with my time than go over the code to my word processor to see if it contains any security flaws (and know that if I happen to fail to find them, since it's not my area of expertise, it doesn't matter; the developers have no liability because I had the chance). The only reason this "exemption" exists is that the author wants open source software to thrive, and if it had to follow the same liability laws as he proposes for closed source software, it would cease to exist at all. Who would contribute to an open source project if they knew they'd be liable for damage if they happened to make an error? Without a "corporate person" to sue, the contributing programmer would be PERSONALLY liable for any flaw he introduced, as would whoever maintains the software and allowed the flawed code to go into release. (PS: Would "beta" code be exempt from liability? It seems it must -- but, congrats, you've just created the Infinite Beta. "Paid betas" are common in games already, and gmail was in beta for many years while being used by millions. So, now, you need to define when software is "not in beta". Don't say "when you charge for it", because there's a million loopholes in that, too. ("I pay 200 dollars a year to Microsoft to join their Head Start program. I'm not BUYING any software. All that does is give me ACCESS to their Beta Software, which is 100% free for all Head Start members. Oh, they haven't RELEASED anything in 10 years. The last three versions of Windows and Office were never officially released or sold.") And with every "patch" you make to the laws to catch tricks like this, people just become more cunning in finding exploits. Sound familiar? :)
    Did you really think about this proposal? Or was it "Hey, I've got this clever trick I've thought of to get everyone to switch to open source! No one will catch on to my cunning scheme!"
    PS: In Indiana, we found out the hard way that if an owner lies -- actually LIES -- on their statement about damages to the house they're selling -- you have no standing to sue if you were allowed to inspect the house, even if the condition the owner lied about could not be detected by inspection. Specifically, we were told there was "no history of flooding". After we moved in, we found that not only did the basement leak, but it did so when the prior owners had it, within 6 months of them selling the house to us. The leak could not be seen, nor was damage from it in any way visible, when we inspected the house, but Indiana law would not let us sue. Do you think that's fair and just? If so, I have doubts about your moral character, frankly. If not... well, that's your open source proposal in the real world. This is not a hypothetical example made up to try to find a flaw in your case; this is actual fact, and I'll be happy to provide links and documentation as needed. (To use an actually hypothetical example, your proposed liability for non-open source software would let us sue the people who sold us this house if, months after we bought it, our neighbors drilled a hole into our basement. The people who sold the house should have told us the basement walls could be drilled into, and the people who built it should have used titanium plating to prevent it.)
  • Lizard | Fri, 30 Sep 2011 14:38:30 UTC

    Liability should be allowed only when the software fails when used appropriately -- when a database corrupts key files due to a programming flaw in the software, not due to a UserType=1D10T error. Liability for security flaws is a hideous nightmare. a) Most "flaws" are actually caused by social engineering -- no program can stop a moron from executing a trojan, and no program can tell the difference between "I am patching this file because I want to" and "I am an evil program patching this file but I'm running with the same privileges and executing the same commands as a legitimate patch." Even simpler, how do you tell if the person logged in remotely is the actual user, or someone who got his uid and password via a thousand schemes that have nothing to do with the software and which cannot be coded against? b) How do you deal with the problem of combinatorials? Company A makes a safe program. Company B makes a safe program. But when A and B are used together, they create an exploitable breach. You can't ask A to check that their software is safe with every other program on the market.
    If we must, a compromise position is possible -- require rapid patching of known exploitable flaws. A company is liable if they do not inform all registered customers of their software within a reasonable timeframe and offer a patch.
  • Dag-Erling Smørgrav | Fri, 30 Sep 2011 12:36:45 UTC

    @Paul Crowely: this is not an argument against software liability, it's an argument for tort reform in the US. This is what Poul-Henning's comment about hot coffee was about, in case you didn't catch it. In the civilized world, your hypothetical lawsuit against Intuit would be laughed out of court.
    @Douglas James: that's not an argument against software liability, it's an argument against venue shopping. BTW, the SCO lawsuits have shown that judges are perfectly capable of understanding software if someone takes the time to boil it down and explain it to them. That's what court-appointed experts are for.
    @Daniel Dvorkin: this is not an argument against software liability, it's an argument for legislative reform in the US. In the civilized world, laws aren't thousands of pages written by lobbyists, they're a few (or a few dozen) pages written by politicians and civil servants and accompanied by a rationale.
    @Kurt Guntheroth: the software doesn't have to be completely bug-free. It just has to be reasonably free of bugs that could cause the user damage. The definition of "reasonably" would be up to the vendor's liability insurance. The same applies today to houses and builders.
  • Robert Howe | Fri, 30 Sep 2011 11:27:05 UTC

    I agree with Jesper. What follows is an extract from a Microsoft license agreement:
    Read and weep. This is by no means an exception. These license agreements place no obligation on software vendors, and by extension software developers, to make any effort to ensure fundamental quality in their products. These contracts are the root cause of why software development has remained a craft for too long. If software vendors/developers were held liable, you can be sure that software would by now be a full-blooded engineering discipline, the difference being that engineers VERIFY their designs before building anything. E.g., electronic engineers verify their designs using circuit engineering mathematics embodied in EDA tooling. If our sister discipline does it, why haven't software developers made the effort to be so rigorous? The answer is simple: economics. Software developers are allowed to let their customers pay for their mistakes.
  • Jesper Sommer | Fri, 30 Sep 2011 09:41:41 UTC

    Herby, your argument is flawed. I do not see PHK arguing for "perfect software". I see him arguing for a better balance between quality and productivity than the current state of affairs. The software business is no different from any other business out there. Software companies who can deliver products with no liability of any kind (as opposed to most other products, in fact) will do very little to ensure quality.
    I see PHK arguing for a better balance, or for delivering open source code. Even though I am a vendor of closed-sourced software I think the argument is both valid and sound - it just needs a bit more work defining clear parameters for the parties involved. 
  • James Phillips | Fri, 30 Sep 2011 09:36:38 UTC

    After that voting machine hack a few years ago, making use of return-oriented programming, I came to the conclusion that the computer revolution has not happened yet. Currently, with ACTA being signed in days, the computer industry is actually regressing: users are allowed less and less information about the hardware they are running. To ignore unproven, untrusted hardware is to ignore the elephant in the room.
    I describe how I think a mature computer industry would look in this post:
    Essentially, I believe that for software development houses: peripherals will use socketed ROM, ROM burners will use toggle switches, and "known good" source code will be stored using optical means such as microfiche.
  • Herby Sagues | Fri, 30 Sep 2011 09:25:27 UTC

    Also, define "when used normally". Is "being attacked by malicious software carefully crafted to specifically make your software fail" normal usage? 
    If so, does any other type of product on the planet resist being "used normally"?
    Does your car keep working just fine after I attack it with a rocket? Does it even resist a lock picker?
    Does your home keep you safe if I attack it with a flame thrower?
    Do your medicines keep you healthy if I use a laser beam to heat them to 100 degrees celsius?
    No. You wouldn't expect that. But under the claim that "software producers must be as liable as the manufacturers of any other product" you are actually attempting to create an exception where software is the only product that carries that sort of liability, while everyone else is exempted.
    That makes absolutely no sense.
  • Dave Oldcorn | Fri, 30 Sep 2011 09:21:05 UTC

    It's an interesting argument, but one important point strikes me as obvious: why such a dramatic exception for open-source projects?
    If you buy a car, you've complete freedom to examine and modify it as you wish, but that doesn't exclude the company that made it from liability for design or build flaws.
    That's not even taking into account the idea that the average user - indeed, even the average corporation - certainly will not have the requisite skills to audit any substantial project (even a well-written and documented project, e.g. Apache) for security flaws. This seems like delusion of the highest magnitude.
    If liability laws are to be put in place they must surely apply to all products equally.
  • Herby Sagues | Fri, 30 Sep 2011 09:17:09 UTC

    So, why don't you create a software company that makes flawless software? I'm pretty sure everyone would buy from you instead of from other companies, and you would be a billionaire.
    Except they won't. Because there's a tradeoff between productivity and the desire for perfection. If you make flawless software (if that is even possible), it will be so far behind what other, less picky developers do that no one will want it.
    And that's the sort of software you want to mandate for all.
  • Douglas Held | Fri, 30 Sep 2011 08:29:21 UTC

    Software liability is a great idea as long as it is specific, measurable and detectable. The best way to do this is by analyzing the source code.  An example would be "All data not created by the developer will be validated in a library specifically designed to process input data validation." Or "All calls to realloc() will be followed immediately by a check for success."
    With about a dozen such rules, you could catch or prevent about 80% of all vulnerabilities.  Vendors could charge more to implement/guarantee the standards; it's a mutual benefit.
    Easily more than half of the commenters know nothing of software application security.
  • Jesper Sommer | Fri, 30 Sep 2011 08:19:10 UTC

    I agree in general terms with the construct proposed here. But in order to work in real life there has to be a number of additional clauses. For example, how is the amount of damage calculated? And where does the responsibility stop?
    If the salesman infects his laptop with malware using the USB-key-scenario described above, it seems reasonable that one or more software vendors can be held liable. I'll admit to that even though I make my living selling closed-source software. But what about collateral damage? If the salesperson brings the laptop back to the corp. network, which coincidentally has no internal security (since software vendors are all accountable for their sloppy code, right?), is the damage to the entire corporate IT infrastructure also the responsibility of the aforementioned software vendor? Do we blame the infection of 300 clients and 25 servers on the single software vendor who screwed up some security aspect related to USB keys? 
    The chain of damage goes on. The corp. network is down. Orders are lost. Deliveries are delayed. Employees have to put in overtime to get things done. Is this damage also something which the original USB-something-software vendor should cover?
    What if management in the company decides not to start fixing the problem? The responsibility for the problem clearly falls on the software vendor, so why fix it? Why not just wait until the company collapses, and have the software company pay for all of it?
    For your scenario to work - and for someone like me to accept the kind of responsibility you are proposing - we need a clearer definition of the boundaries involved. A general set of rules to describe where the responsibility stops being the software vendor's and shifts back to the customer's.
    I'll take responsibility for the code I deliver. Hell, I'll even pay damages when I screw up. In reality I expect nothing less from my own suppliers anyway. But I won't be hung out to dry by other people's lack of damage control, or by collateral damage not directly related to the error I have caused.
    You need to describe where the liability starts and stops or this construct will fly worse than a rock that hasn't even left the ground yet.
    - Jesper
  • Matt Mellon | Fri, 30 Sep 2011 04:49:07 UTC

    Sufficiently complex software often cannot be tested against every data set it will be required to handle. That's why QA analysts do risk analysis and figure out what the most important things to test are.
    Bugs happen, even when sound development practices and QA methodologies are followed. While I see the author's point, there should be an additional way to obtain the exemption: the developer should be able to simply disclose the test plan, test articles and test results. The real problem with buggy software is that the end user isn't able to judge the risk of malfunction.
    Furthermore, most software is written on top of libraries, runs on top of OSes, etc... whose liability is it when something goes wrong in one of these underlayers? It's often difficult to pinpoint the cause of a bug as being company X's fault.
  • C Wilson | Fri, 30 Sep 2011 03:18:20 UTC

    Don't actually disagree. However, there is a community of folks that do live with the consequences of their software action. 
    I have done both 'certified' (financial) software - back in the 'dark days', when oversight/audits actually happened, and I can tell you that legal action was/is on the table - and emissions-related software. Litigation with software engineers is certainly a reality today; I have been 'deposed' three times.
    True, there is no separate formal body that 'haunts' software practitioners. Still, if your name is associated with a major loss, trust me... you will be talking to 'the man' (yes, the one with the robe and gavel, or (probably worse) a senator's/congress-person's garb).
    My point is that, to some extent, this topic is already in place. 
    Our failing: as software practitioners living in a republic, we have failed to band together (for, imho, the pettiest of reasons) to define and protect ourselves.
    Please; get more aggressive about who really gets involved in some of the corporate cases.
    I sincerely don't want you to be the 'doormat' for some corporate flack who may have ignored your best recommendations, then put your name on a memo and found a lawyer who can 'spin the whole thing'....
  • Kyle Jones | Fri, 30 Sep 2011 02:39:09 UTC

    If there's software liability as you describe it, then closed-source software will cease to be sold on general purpose computers.  We'll still have computers, but they will be integrated into mail reading appliances, web surfing appliances, word processing appliances, etc. so the manufacturer can set limits on what the device is supposed to do.  This will let them limit their liability exposure.  E.g. I don't have to worry about my refrigerator DDOS'ing the Pentagon or serving up kiddie-porn to the net at large.  To get that kind of assurance in broader classes of appliances, software and hardware integration will be total, device functionality will be rigid, and general purpose computers will exist only in the hands of experts.  I'm not sure whether I'm for this or not.  There would be a lot more software engineering jobs out there, that's for sure.
  • Jim Lux | Fri, 30 Sep 2011 02:14:17 UTC

    There already is liability for faulty software in the "non-shrink-wrap" world. A lot of software produced under contract has explicit liability clauses (and, perhaps, limitations on how far the consequential liability tail extends). One might also have a liquidated damages clause to avoid arguing about the amount of damage (i.e., we agree in advance how much a bug is worth).
    So what you're really talking about is "mass produced" software (i.e. the automobile, bike, ladder, toaster). The challenge is that for automobiles, there's a huge body of literature and common knowledge about what to expect from a car. And that changes from time to time (e.g. nobody is selling cars any more with gas tanks behind the rear axle placed so that a rear end crash will cause the tank to rupture).  For consumer products there is a fighting chance of a seller figuring out what a "reasonable man" would do, particularly when the product has been around a while (hammers, tongs, frying pans).  And there are regulatory agencies to help clarify the issue as well, especially when what a seller thinks is reasonable isn't what the general public thinks.
    I don't know that there is a sufficient body of experience, law, and precedent to establish what a particular software product should do (or, more importantly, NOT do). There's already an implied warranty of merchantability and fitness for purpose that the seller can't disclaim (at least in the U.S.), so if I sell a program to view DVDs and it doesn't do that, I can get my money back (in theory).  If I sell a program to view DVDs and it erases your hard disk drive, one might actually be able to sue (successfully) for some amount of damages (notwithstanding the shrink wrap agreement) under this age-old commercial law principle.  Would it cost a lot more to file suit than my disk data was worth?  Would the defendant allege that I had failed to take backups as a reasonable man would? All sorts of things might crop up.
    But when it gets to something complex (an operating system), I think they haven't been around long enough to establish that precedent.  These things evolve on the time scale of human generations.
    Still, it's a good idea, and worth thinking about how it might change in the future.
    (BTW, the hot coffee thing is a bad example: the coffee in the famous case was substantially hotter (85&deg;C) than a "reasonable person" would expect, and was sold in flimsy cups; the seller was aware of the hazard this created, had had hundreds of claims and paid settlements, but kept it that hot, so liability attached.  The stupid warning is just that: it's well established that those sorts of signs and warnings have little or no effect on the success of a lawsuit.)
  • G Johnson | Fri, 30 Sep 2011 02:08:42 UTC

    The real question here isn't whether software quality would go up.  The real question is how many programmers will be willing to release software to the public if they're going to be held liable for errors in it. 
    You say 'the only code you can trust is code you wrote yourself'.  However, you can't even trust code you write yourself.  Regardless of how hard you work to make code reliable and safe, end users are remarkably good at screwing the entire thing up, using software in unintended ways, and finding every possible way to make things go wrong (and they're not even trying to break it...).
    I would never sell, nor release as open source, any program, ever, for fear of some idiot using my program and managing to lose millions of dollars' worth of data. It's impossible to plan or test for every possible use, on every possible hardware configuration, with every possible OS.
    I would have one thing to say: "Congratulations on killing the software industry."
  • C Wilson | Fri, 30 Sep 2011 01:54:01 UTC

    I started out in a 'maintenance' group during the 1980s, then turned to contract work. After a few years I found a 'steady home', started a development group, and have stayed with it for 16 years.
    I noticed in my career that I would stereotype "developers" as folks who really didn't have the stomach for living with their mistakes.
    It definitely is a very different mind-set when you know you are going to have to live with the software you create.
    This is very much a condition of having to deal with the consequences of what you do.
    I can state that those who understand they may have to live with the consequences naturally produce a much higher quality product than those who know they will 'just move on'.
  • Meh | Fri, 30 Sep 2011 01:53:34 UTC

    This is nothing but a thinly-veiled paean for FOSS.
    Effectively, the proposed rules do nothing but create a higher barrier to entry for closed-source software vendors. Given that the average user is about as capable of finding security holes in source code as in object code (or Sanskrit, for that matter), this will do nothing to improve the quality of software.
    Users already have plenty of choices between FOSS and proprietary software. If they choose proprietary software even when the FOSS is more secure, it seems that the FOSS has the problem.
  • Paul Crowley | Fri, 30 Sep 2011 00:38:29 UTC

    The idea that a software vendor can be sued for consequential damages has been bandied about for a long time, but every single software license I have ever seen disclaims that right.  This isn't going to change.
    Today, if your teenager puts their little sister into the refrigerator and forgets about her, the manufacturer of the refrigerator will be sued.  Where we are today is that a user who doesn't back up their data and suffers some kind of physical catastrophe isn't going to be able to sue Intuit, because they ignored the suggestion to back up their data periodically.  A liability law that made it possible to do so, notwithstanding any disclaimers by Intuit, would be a mistake, and we would discover a whole new world of "defensive products" much like what we see as "defensive medicine" today.  Not a good idea, in my opinion.
    The problem today with computer security isn't the published products with real-world links.  It is users who want to get something for nothing, and the idea that a general-purpose computer with a complex operating system does not need a trained administrator.  Most of the trouble comes from sources that are untraceable and would not be subject to any liability whatsoever.
    Should Microsoft be sued for continuing to suppress the display of file extensions, thereby encouraging the spread of email attachments like HotChick.jpg.pif?  Maybe.  But would the user not bear a large portion of the responsibility for clicking on it in the first place?  In today's environment the user likely would, making any liability claim irrelevant.
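The masquerade behind names like HotChick.jpg.pif can be sketched in a few lines. This is a simplified model of the "hide extensions for known file types" behavior, not Windows' actual implementation; the function names are illustrative:

```python
import os

def displayed_name(filename: str, hide_known_extensions: bool = True) -> str:
    """Return the name a user would see in a file listing.

    When extension hiding is on, only the *last* extension is stripped,
    so a double-extension file keeps its decoy extension on screen.
    """
    if hide_known_extensions:
        stem, _ext = os.path.splitext(filename)
        return stem
    return filename

def looks_executable(filename: str) -> bool:
    """Flag names whose real final extension is an executable type."""
    executable_exts = {".pif", ".exe", ".scr", ".com", ".bat"}
    return os.path.splitext(filename)[1].lower() in executable_exts

name = "HotChick.jpg.pif"
print(displayed_name(name))    # "HotChick.jpg" - looks like an image
print(looks_executable(name))  # True - the real extension is .pif
```

The point of the sketch: the user sees an apparent `.jpg`, while the shell dispatches on the hidden `.pif`.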
    Any real effort that wasn't easily bypassed by pushing the blame back on the user (or at least 51% of it) would simply transform much of today's "personal computer" market into tomorrow's "Internet appliance" market.  Where, for example, are the iPad viruses? There are none, because the iPad isn't a general-purpose computer but an appliance that is not really all that flexible.  This is where the personal-computer market is headed, and it eliminates the problems with administration.  While a Windows 7 notebook needs an administrator, an iPad does not and never will.
  • Leonard Davis | Fri, 30 Sep 2011 00:19:05 UTC

    The problem is that the scenario will have to play out in court. In order to prove that the application crash and subsequent data loss was a result of something not in the application, but rather somewhere else in the stack, the application author will have to defend against the suit in court. Depending on the way the law works, two things can happen here: 
    A) The author will have to accept the liability and then turn and sue the supplier of the other component to recoup his losses.
    B) The author will have to prove to the court itself that the provider of the other component is the rightful target of the suit.
    In scenario A), all small software vendors lose because they cannot afford to take on the behemoths. Under B), everyone loses because the courts do not have the expertise to make a decision based on the technical merits of the problem. Teaching a judge in East Texas to read and understand a backtrace just isn't going to happen.
    Anyone who has done "computer support" for a relative knows that the layman cannot understand application crashes, so in most cases the computer hardware itself is blamed - in which case the secondary suit comes from Apple, or Dell, or HP. Deep pockets.
  • Steven Kuck | Thu, 29 Sep 2011 23:34:39 UTC

    What software would be liable in the case of malware? Would Microsoft be responsible for a trojan on a thumb drive? Microsoft did not author the malware. Most security risks are the result of software systems doing exactly what they are asked to do by users who have been tricked.
    While a car company may be liable for having poor locks, I can't see any way to hold them liable for you giving your keys to someone impersonating a valet.
    While this proposal MAY result in more quality testing to make sure software doesn't inadvertently cause corruption (and I don't know of many cases in which this happens anyway), I do not see how it will impact security. Even security software that fails to detect malware would just say that it was designed to prevent problems known to it at the time it was released or updated.
  • Douglas James | Thu, 29 Sep 2011 23:29:49 UTC

    I agree with the idea of software liability.  It's been far too long that we in the software community have allowed anyone and everyone to join our fold, no training required, and write software to be sold to the trusting consumer who knows no better, and more than likely will then be immediately subjected to a flood of patches, emergency bug fixes, and downloads.
    Yes, nothing is perfect, especially software.  But it should simply be illegal to have TOS agreements that the consumer accepts merely by opening the software, and that effectively give the consumer no way to sue when they lose months or years of work, or worse, thousands of dollars, when software that is used normally blows up in their face.
    We're all professionals; we should be proud of our work.  We should have no problem getting the necessary certifications (think doctors or lawyers) and passing the bar to join our profession.  And when we screw up, intentionally or through incompetence, we should face the penalties, whether that be the removal of our ability to work, fines, or even jail time if necessary.
    I'm tired of my profession being practiced by the wunderkinds of the world, and I'm tired of having no legal recourse when the software I depend on has a bug in it that should never have occurred.  I point to SQL-injection and cross-site-scripting attacks as perfect examples of where companies have used talent that should likely be fired.  And on the other side, I can point to many examples of good, nearly bug-free software, written by professionals, that is kept at a disadvantage by market forces that take advantage of the current industry practice of making the consumer the guinea pig.
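The SQL-injection class of bug cited above is easy to demonstrate. A minimal sketch using Python's bundled sqlite3 module; the table, column, and input values are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
vulnerable = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())  # leaks every secret

# Safe: a parameterized query treats the input as data, not SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # []
```

The fix has been standard practice for years, which is the commenter's point: these bugs persist through negligence, not necessity.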
    It's time we form the necessary certification bodies, and insist that anyone wanting to work in the field become certified.  We need to become the equivalent of the lawyer, doctor, or engineer, and assume the mantle and responsibility that comes with it.
  • Nick | Thu, 29 Sep 2011 23:29:12 UTC

    If I were a software vendor (I'm not, but I write software for a living), I might be inclined to include a clause in my license stating that my software may cause damage when used normally, and that by using it you agree not to hold the manufacturer liable for any damage caused, actual or perceived. Sure, if I were competing with software that did the same stuff but lacked this provision, I might be in trouble; however, I think all commercial software would adopt a similar clause, so I wouldn't be too worried.
    Might I offer that instead of a top-heavy legislative regulation approach, perhaps a free-market one would work better. The government could allow people to sell software with an explicit liability guarantee, and charge accordingly for it. That way people who want this can pay for it, and software vendors could make both versions, depending on demand. The market could decide the value of the additional protection, and if it was high enough, that type of software would become pervasive.
    ... oh, right: you can already do this, no additional laws/regulations required. Hm... I guess that implies something about the value proposition for users; perhaps you can extrapolate.
  • Daniel Dvorkin | Thu, 29 Sep 2011 23:06:32 UTC

    Ah, idealism! The proposed law, with Clause 1 in place, and enforced, doesn't sound too bad. Do you really think that's the way it would work? In the real world, any software liability law would be written by lobbyists working for Microsoft, Oracle, Adobe, EA, et al., and there is no way in hell it would make life easier for open source developers than for the big commercial houses.
  • Ralph | Thu, 29 Sep 2011 23:00:20 UTC

    "You can't trust code that you did not totally create yourself."
    There are many, many layers to that onion - where do you stop? Sure, there's your own code. But it leverages 3rd party libraries, templates, and the underlying OS/kernel. And your code is compiled by an application written by others too. So you roll your own compiler, OS, libraries, etc. But then the microprocessor is also running code after a fashion - do you design and implement your own microprocessor too?
    The fact is, we have to trust shared resources; no one can do all that on their own. And even if we could, I don't believe 'perfect' software is achievable. If software can be used, it can also be exploited. The key is never to trust too much to software, no matter what precautions have been taken. We can't build walls that will keep everyone out, but we can limit the damage when someone gets in.
  • B.Williams | Thu, 29 Sep 2011 22:49:00 UTC

    First Tort reform.
  • Kurt Guntheroth | Thu, 29 Sep 2011 20:44:24 UTC

    There are many problems with the proposed liability scheme. Among them are the difficulty in assuring software is completely bug-free, the problem that software must run on an almost infinitely variable zoo of computer systems quite different from the systems on which they were built and tested, and the problem that software vendors have little control over whether their software is used in high-consequence situations.
    For shrink-wrapped software, liability should be limited to repair or replacement of the software, in view of the user's role in liability for using such software in high-consequence situations. In particular, general purpose software delivered at no charge should not incur civil liability (though criminal liability should still be possible). 
    For non-shrink-wrap software, the vendor and purchaser would negotiate liability on a relatively equal footing, but the minimal liability above should not be possible to contract away. That is, any piece of software should be fit for use in low-risk situations.
    When software is embedded into hardware, the vendor should have the same liability for the software components of the system as they do for the mechanical components.
    Perhaps a standard-of-care defense should be permitted against software malpractice, wherein if a vendor showed that they used properly trained developers, approved development methods, and a specified level of testing, liability would also be limited to repair/replacement. Such standards are generally promulgated by practitioner organizations; the American Medical Association and the American Institute of Certified Public Accountants are examples. ACM or IEEE would be obvious candidates to promulgate standards of care for software development in the United States.
    The threat of liability if vendors do not adhere to a standard of care in development would probably be enough to revolutionize software development. This threat would bring on secondary litigation, like shareholder suits for breach of duty if development practices were sloppy. 
  • Poul-Henning Kamp | Tue, 27 Sep 2011 13:37:29 UTC

    Hi Kjeld.  Your argument is totally bogus:  FOSS software comes with source code, so I can actually look at it and decide if it is competently written, and I have a license to modify it as I wish.  We have neither of those options for commercial software, and that is why computer security is close to non-existent today. 
  • Kjeld Flarup | Sun, 25 Sep 2011 23:07:57 UTC

    Hmm, who would you sue for a security hole in your "free" downloaded open source programs? You paid nothing!
    What you are in reality saying is that companies should not use open source in the systems they sell, because they themselves will be liable but cannot sue the supplier of the source code.
    So, have you decided to stop working with open source?
  • Ian Eiloart | Sun, 25 Sep 2011 12:19:22 UTC

    So, adding a trivial piece of removable code and releasing the source, would allow you to avoid liability? OK, I see how releasing the source helps, but the ability to remove code needs more consideration.
    A better option would be to allow the vendor to avoid liability only for the removable bits of code (whether removed or not). So, core elements of the software should still carry liability. And, you might have to demonstrate that the software has some practical application in the absence of the removable parts. You don't want an application that simply reserves some memory, and then has a removable component which does everything else.
  • P. J. McDermott | Sat, 24 Sep 2011 21:37:16 UTC

    "The word disabling is chosen very carefully. This clause grants no permission to change or modify how the program works, only to disable the parts of it that the licensee does not want."
    If the licensee finds a security hole in a piece of proprietary software covered by Clause 1, his/her only recourse is to not use the software (or at least perhaps a specific feature of it, if the software as a whole can function without the feature). Would it not be better to allow the licensee to fix the problem, either by his/her own work or by hiring someone for their work?
    Otherwise, I like the explanation and I agree with this proposal. I've argued recently that developers of proprietary software should be held liable for their errors since the user has no other recourse, while freely-licensed software has no need for warranties and liabilities since users have the freedom to audit the software and fix any errors therein.
  • Poul-Henning Kamp | Fri, 23 Sep 2011 22:50:36 UTC

    Yes, I can imagine car manufacturers being sued when they put too weak locks on their cars, as far as I know it has even happened a number of times in history already.
    I would personally rate any operating system that automatically starts to execute code from a removable data medium on insertion as defective, and I think you will have a hard time convincing any security professional otherwise.
    Nonetheless, software houses have shown remarkable reluctance to remove this security hole, for reasons that can best be summed up as "We care about our bottom line, not about your security".
    When car manufacturers held similar views, a number of product-liability lawsuits changed their minds.
    We need a similar crowbar for software.
  • Pete | Fri, 23 Sep 2011 16:14:33 UTC

    Can you imagine if we sued a car manufacturer every time someone jimmied a lock? Or glass manufacturers when someone breaks a window? You are completely ignoring the intelligent adversary, which makes all the difference. There are many, many flaws in today's real-world environment that do not fall under product liability law because they require exploitation by others.
    I really don't understand the example. Who is supposed to be liable to whom in this scenario? Usually, when discussing software liability folks talk about vulnerabilities, not malware.
  • K. Geiger | Tue, 13 Sep 2011 05:45:41 UTC

    This topic appears to be covered under the GPL (Gnu Pundit License) 3.0. Relevant sections:
    15. Disclaimer of Warranty.
    16. Limitation of Liability.