Standards for Software Liability: Jim Dempsey, Lawfare, UC Berkeley Law (lawfaremedia.org)
28 points by mustache_kimono on Jan 24, 2024 | 61 comments


Basically the argument, as I see it, is that the feds are demanding a liability regime, since software producers shouldn't be able to disclaim all liability for the software they sell, and the courts are better situated than regulatory agencies to decide liability questions.

However, we don't have the time to let a software liability common law regime develop over decades, given the threat presented now. Therefore we should adopt the products liability, design defect analysis, which he explains as:

     In fact, it turns on two reasonableness inquiries: Was there a “reasonable” alternative design that the manufacturer could have used to avoid the vulnerability, and, without that alternative, was the actual design used “unreasonably dangerous”?
Imagine asking the question -- are there reasonable alternative designs for OpenSSL? For C and C++? For Linux? Re: OpenSSL and C and C++ memory vulnerabilities, I'd say maybe, see: Rust and rustls! Re: Linux, I would have to say no. Is there an alternative which doesn't have the same issues? Not really.

The author's summary liability regime, in light of the NIST guidance, would be as follows:

    1. A rules-based approach would define a floor—the minimum legal standard of care for software—focused on specific product features or behaviors to be included or avoided.
    2. However, a list of known coding weaknesses cannot suffice alone. Software is so complex and dynamic that a liability regime also needs to cover design flaws that are not so easily boiled down. For these, I propose a standard based on the defects analysis common to products liability law.
    3. But this liability should not be unlimited or unpredictable. As the Biden administration’s National Cybersecurity Strategy recognizes, developers deserve a safe harbor that shields them from liability for hard-to-detect flaws above the floor. For that, I would turn to a set of robust coding practices.


> Re: ...C and C++ memory vulnerabilities, I'd say maybe, see: Rust

In general, yes.

However, I hate Rust; my software wouldn't exist if I had to write it in Rust. [1]

So I hope there would be a way for me to justify using C, along with a plan to minimize memory vulnerabilities.

In my case, my plan is to fuzz, use every one of the fuzz paths found as a test case, and ensure all of those test cases come out clean in sanitizers and Valgrind.
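
To sketch what that looks like in practice, here is a minimal libFuzzer-style harness (the function name and build line are illustrative, not from my actual project); every input the fuzzer saves to the corpus later gets replayed as a fixed regression test:

    /* Minimal libFuzzer-style harness sketch; my_parse() is illustrative only.
     * Build:  clang -g -fsanitize=fuzzer,address,undefined harness.c lib.c
     * Fuzz:   ./a.out corpus/      (interesting inputs get saved into corpus/)
     * Replay: ./a.out corpus/*     (runs each saved input once, no fuzzing)
     * A build without ASan can replay the same corpus under Valgrind, since
     * ASan and Valgrind don't mix. */
    #include <stddef.h>
    #include <stdint.h>

    int my_parse(const uint8_t *buf, size_t len);  /* function under test */

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        my_parse(data, size);  /* must not crash, leak, or trip a sanitizer */
        return 0;
    }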

Oh, and I'll also do error injection, like SQLite [2], to exercise even more paths, and those paths will also have to come out clean.
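
Roughly, the error injection looks like this (all names illustrative, modeled loosely on SQLite's OOM testing rather than copied from it): wrap allocation so the Nth call fails, then re-run the operation at every N until a run completes without hitting an injected failure.

    /* Allocation-fault injection sketch; every name here is illustrative. */
    #include <stdlib.h>

    static long fail_countdown = -1;  /* -1 means never inject a failure */

    void inject_fail_after(long n) { fail_countdown = n; }

    void *inj_malloc(size_t size) {
        if (fail_countdown == 0) return NULL;  /* injected failure */
        if (fail_countdown > 0) fail_countdown--;
        return malloc(size);
    }

    /* Driver idea: for n = 0, 1, 2, ... call inject_fail_after(n), run the
     * operation under test, and require a clean failure (no crash, no leak,
     * no sanitizer report); stop once a run finishes without the injection
     * firing. */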

[1]: https://gavinhoward.com/2023/02/why-i-use-c-when-i-believe-i...

[2]: https://www.sqlite.org/testing.html#anomaly_testing


At the moment there's an effort from the Office of the National Cyber Director (ONCD), CISA, NSF, DARPA, and OMB to encourage a move to memory safe languages, including but not limited to Rust (see [1]).

This included an RFI for which there were many responses, including a number of major tech companies (Microsoft, Google, others). The general consensus supports a move to memory safe languages, BUT many of the respondents also caution about the need to prioritize and carefully manage such a transition, and the need to continue to support and assure code written in memory unsafe languages using other assurance techniques like fuzzing, integration testing, unit testing, static code analysis, and more.

Hopefully that helps explain where things stand at the moment regarding at least US movement on this topic, and also to assuage concerns about a unilateral mandate that things get rewritten in Rust (or another language).

[1]: https://www.whitehouse.gov/oncd/briefing-room/2023/08/10/fac...


I know.

I also know that despite some urging caution, there are those who would speed ahead with RIIR.


> So I hope there would be a way for me to justify using C, along with a plan to minimize memory vulnerabilities.

If the desire is -- you would like to create new software in C because you like C, I think that presents some really difficult questions in this framework, simply because a reasonable alternative exists.

First, I think you have to ask: What kind of programs are you writing? Are they FOSS? Do you sell them? Are they likely to cause "damages"?

If what you are doing is writing software for the joy of writing software, I'm not sure you have much to worry about. If you're writing a bank transaction server, exposed to untrusted inputs, in C and you're selling this software, you might have something to worry about, because there exists a reasonable alternative in Rust.


They are FOSS, but they are for a business, so yes, I will be affected.

Rust is unfortunately not a reasonable alternative in some cases.

As an example, my software needs just a C compiler to build itself. You can build my software completely with:

    tcc -run bootstrap/boostrap.c tcc
This is because my software includes a build system meant to bootstrap other software. This use case is a good reason to avoid Rust.

Another reason Rust may not be reasonable is the pervasiveness of async. Personally (not in general), async does not work for me.

Rust is not the last word in programming languages. I am working on an alternative that will not only not have async, but will have more static analysis because destructors will be guaranteed to run.

If you had read my post, you would know that I am planning to rewrite my stuff in that language as soon as possible.

If Rust becomes required, I would be forced to stop work on that language, and progress would slow down.

There are valid reasons to use C, even for software that handles untrusted inputs.


> Rust is unfortunately not a reasonable alternative in some cases.

Sure, and tort law is a flexible framework, so you will be allowed to make that argument. The only problem is if your software blows up in a way Rust was meant to prevent.


If every bug that Rust could have prevented is enough to allow people to sue me to oblivion, then SQLite and Curl will also have to be pulled from the market.

I am trying to hit the same level of robustness as those, and if their level is still not acceptable, our industry will implode.

I get that it's not okay to have 70% of vulnerabilities be memory safety bugs, but C projects shouldn't have to be perfect on that front. Just close enough. Maybe 10% or less.


> If every bug that Rust could have prevented is enough to allow people to sue me to oblivion, then SQLite and Curl will also have to be pulled from the market.

If your software blows up, in a way Rust would have prevented, and caused damages, isn't it kind of fair for someone to say "Hey maybe you shouldn't be using C"?

My point was simply -- if your software is not in a place where damages occur, fine, who cares, write it in C. But if it is, and Rust exists, and could work just as well in that environment? And if you must still write in C, that's still fine, until you start selling software and causing your customers damages.

> But C projects shouldn't have to be perfect on that front. Just close enough. Maybe 10% or less.

Why?


> If your software blows up, in a way Rust would have prevented, and caused damages, isn't it kind of fair for someone to say "Hey maybe you shouldn't be using C"?

No, that would not automatically be fair. It depends on context.

> But if it is, and Rust exists, and could work just as well in that environment?

But that's what I am saying: Rust won't work as well in that environment.

My stretch goal is to have my build system replace the awful custom Makefile build for FreeBSD. In that context, Rust doesn't exist, even if they add it to the base system because something would have to build Rust and LLVM.

Same reason my bc is in C: it is used to build the Linux kernel.

Anyway, who defines if Rust works "just as well" in an environment? It all depends on context. You can't say that any Tier 1 and Tier 2 platform must use Rust for reasons such as the one I laid out above.

And what if my language succeeds and has more bug smashing power than Rust? Should the directive immediately be, "If any Rust project causes damages that Yao could have prevented, that project owner is liable"?

Of course not.

> > But C projects shouldn't have to be perfect on that front. Just close enough. Maybe 10% or less.

> Why?

Call it a hunch, but roughly, I think that a C project that reduces its memory bugs to that much would have fewer vulnerabilities than an equivalent Rust project, just because it would smash a lot of non-memory bugs along the way.

That is the criterion that should be used: how many bugs does it have per KLoC, number of statements, cyclomatic complexity, whatever.

Otherwise, you're just effectively mandating Rust anyway, and Rust projects could get away with murder.

But set an objective criterion like that, and SQLite and Curl would survive, I would make my project hit it, and poorly done Rust projects would have to clean up their act.

IOW, I do agree that Rust should be the default, but we shouldn't choose a metric that is clearly biased in its favor.


> But that's what I am saying: Rust won't work as well in that environment.

Well then the problem would be different?! If instead the problem is as I described it, you have an issue!

> Anyway, who defines if Rust works "just as well" in an environment?

Dude, you're super angry about Rust and you are refusing to address how any of this might work in a products liability context.

If you want to say software development is so inherently creative, and subjective, it can never have standards, standards which the rest of the world relies upon to build cars and toys and industrial boilers, then good luck with that opinion.

> Call it a hunch, but roughly, I think that a C project that reduces its memory bugs to that much would have fewer vulnerabilities than an equivalent Rust project, just because it would smash a lot of non-memory bugs along the way.

Good luck making this argument!

> IOW, I do agree that Rust should be the default, but we shouldn't choose a metric that is clearly biased in its favor.

Unfortunately for you, products liability is actually about that metric: safety. If you want to forgo better practices, languages, etc., that's fine. Because products liability law could never simply mandate Rust. You must only be reasonable in your decision-making process.


> Well then the problem would be different?! If instead the problem is as I described it, you have an issue!

Maybe. Others will have different situations, so no, even if the problem is as you described it, you may have left out some variable that precludes Rust.

> you are refusing to address how any of this might work in a products liability context.

Yes, I did. I said that we need some objective measure, and I gave a few examples. Anything that doesn't measure up gets hit with liability.

> If you want to say software development is so inherently creative, and subjective, it can never have standards, standards which the rest of the world relies upon to build cars and toys and industrial boilers, then good luck with that opinion.

I never said we could not have standards. That is the opposite of what I said.

I said that we should have a fair objective measure as a standard, not "no memory bugs" or "always write in Rust on Tier 1 and 2 platforms."

> Good luck making this argument!

It will actually be easier with time as more Rust projects appear.

But as a taste, I find it illuminating that, despite calls to rewrite SQLite and Curl in Rust, no one making those calls has actually sat down to do it.

And I do have some personal data in that regard. [1]

> Unfortunately for you, products liability is actually about that metric: safety. If you want to forgo better practices, languages, etc., that's fine. Because products liability law could never simply mandate Rust. You must only be reasonable in your decision-making process.

Yes, I agree.

But if we choose the wrong standard, then products liability law will effectively mandate Rust, and that would not be good.

And "safety" is not just about memory safety; it's about that and everything else. Rust is not a valid excuse to forget everything else.

[1]: https://gavinhoward.com/2023/12/am-i-a-good-c-programmer/


You could hope for hardware support for security using a CHERI ISA [1]. You could also try proving your programs correct using something like Frama-C, or do it by hand like Dijkstra did.

[1]: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
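
For a flavor of what a Frama-C proof looks like, here is a tiny ACSL-annotated function (purely illustrative) that the WP plugin can check with "frama-c -wp max.c":

    /* Illustrative only: the contract and loop annotations live in ACSL comments. */
    #include <stddef.h>

    /*@ requires n > 0 && \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
    */
    int max_of(const int *a, size_t n) {
        int best = a[0];
        size_t i;
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> best >= a[k];
            loop assigns i, best;
            loop variant n - i;
        */
        for (i = 1; i < n; i++)
            if (a[i] > best)
                best = a[i];
        return best;
    }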


I would, but my programs use structured concurrency; I don't know how well Frama-C would help.

If someone paid me to, I'd do it in a heartbeat.

For CHERI, I want my software to be safe on unsafe platforms.

I am building a memory-safe language that I will rewrite my software in.


Broken record. C's memory issues can be fixed or mitigated. The reason they haven't been is WG21 (the C++ standards committee) and WG14 (the C standards committee).


Yeah, it bothers me. That is why I am working on my own language.


I think this reading is _mostly_ right. To make it a little clearer:

1. A rules-based approach to define a minimum standard, likely based around existing descriptions of software weaknesses where the rules basically say "you aren't allowed to have this type of software weakness" (the doc specifically gives path traversal as an example). HOWEVER, liability would only be established if a weakness is actually exploited, to help avoid the problem of people proactively scanning for weaknesses and then troll-suing as a way to make money.

2. A process-based safe harbor to avoid liability for exploitation of weaknesses NOT on the rules-based "no no list." Basically, if you can show you followed the safe harbor process requirements, you get the benefit of the safe harbor. This is to protect and encourage organizations to go above and beyond the floor established by the rules-based standard.


> HOWEVER, liability would only be established if a weakness is actually exploited, to help avoid the problem of people proactively scanning for weaknesses and then troll-suing as a way to make money.

Agreed. Torts require damages.


Yeah, the doc acknowledges this concept is "drawing on tort law principles." In this case the author, Jim Dempsey, is proposing a rules-based framework for establishing a floor regardless of what underlying system is used (warranty, torts, etc.). So he's not _inherently_ assuming a tort law framing, and is instead making a recommendation for how it should operate regardless of whether the resulting system is based around tort law.


> So he's not _inherently_ assuming a tort law framing, and is instead making a recommendation for how it should operate regardless of whether the resulting system is based around tort law.

Not to get too in the weeds, but breaches of implied warranties were once/began as torts.

But, yes, I guess I was a little glib. I should have said -- it would be sensible to require damages for these types of actions.

Although I'm not sure I agree with the author's formulation:

> HOWEVER, liability would only be established if a weakness is actually exploited, to help avoid the problem of people proactively scanning for weaknesses and then troll-suing as a way to make money.

I think we should encourage customers to look for vulnerabilities. Damages analyses can take into account depreciation, reasonableness of damages/troll suing, etc. If you have 100s of devices you must replace because the vendor won't patch, and your patch request is reasonable (high consequence vulnerability) within 5 years of purchase, I think damages accounting for depreciation make sense, without proof of exploitation.

EDIT: I implied the 2nd quoted portion was the parent commenter's own notion, when it was actually a summary of the author's position.


To be clear, I wasn’t staking anything as my position. I was summarizing the position of the author.


It seems like an odd framing to me. If you have a vulnerable device, you’ll need to replace it, whether or not it is actively being exploited. So, you’ve been damaged by the device producer’s negligence.


Imagine: The vulnerable device is on your internal network, and the likelihood of damages is very small.

I suppose you can construct a regime in which the feds come in and fine you simply for having an insecure device, but a lighter touch approach is to wait for damages, which, in this scenario, may never come.


I think I might not have been totally clear. If you’ve been sold a vulnerable device, you are the victim. You have to replace the device. The damage is to you, the one who did the damage is the company that sold you a defective device.

They should pay to replace the functionality, and some punitive fine, as a start. Then we can get into whatever damage their negligently designed device did (did it participate in a DDOS? Did it attack other devices on your network?)


> If you’ve been sold a vulnerable device, you are the victim.

Ahh.

Simply having to replace a defective device would be "damages". The only Q would be: Was it reasonable to replace given the threat?


I’ve edited it, hopefully for clarity :)


You seem to have fortunately survived orphaned software. I've seen scientific software running on a 15 year old OS. The lab equipment functioned so we took out the Ethernet port to prevent viruses.


This reminds me of cars—the first time I got a recall notification, I nearly panicked. Oh man, I’m going to have to bring it in and get a new one or something? What a pain!

Nope, the recall usually just means you bring it to a mechanic and they get a replacement part that does basically the same thing, but hopefully correctly this time.

Fixes like yours… hey, if it works it works, right? But the device is not exactly what was sold; if it had an Ethernet port and presumably some corresponding network functionality, there’s some value they ought to owe you for. Maybe the fix could be to ship out a locked-down Raspberry Pi to sit between the device and the rest of the network, haha.


Orphaned as in the vendor is out of business. Like there's no mechanic, no parts, no nothing. Only the last version of the software and the working scientific equipment.


The elephant in the room:

If software becomes a product, does that mean it's no longer speech? How do you have liability for something that is copyrighted? What about the First Amendment in the US?


IANAL. So this isn't a leading question; I really don't know.

What's the standard for liability for something like a manual on how to repair your own car brakes, where if you mess it up you might get killed? How about medical advice online, where if you follow it, it might kill you? How about a manual on how to prepare fugu? How about if I give what I think is good psychological advice, and they commit suicide?

If I write any of the above, and it's wrong, and someone dies because it was wrong, how liable am I under current law?

How about if I write something that someone takes as investment advice, and they follow it and lose a lot of money? How about if I write something that looks like legal advice, and they follow it and end up in jail?

What is the actual current state of the law on these things in the US?


IANAL ... but I like reading the law, and I like that the law is protectionist in spite of the founding fathers' wishes (a digression).

https://en.wikipedia.org/wiki/United_States_free_speech_exce...

Intent matters in the case of the First Amendment. Or you limit how you can license software (that's tort), but someone will argue that you're limiting what I can do with my speech at that point in time...

This change looks like the "nuclear option" for something that is fairly common in society.


My prediction is non-commercial software becomes anonymous, and commercial software gets sold from some other low-tax, professional-dense location like Dubai or Singapore.

If you look at 3D-printed gun plans and code, 1A has arguably helped with keeping it on the net, but the authors are mostly under pseudonyms.


There is a mature body of regulatory law for these sorts of things for licensed professionals. That probably does more harm than good, though.


Right, but what if I'm not a professional? What if I'm just an amateur posting on the web? What if I'm not even an amateur, just a rando shooting my mouth off?


This would be a great win for the lawyers.

The better alternative would be to completely rework the software development discipline to be as rigorous as civil engineering from the top down, requiring proper licensing and security clearances for accessing U.S. customer data.


Software is tricky in this regard because of the dependency mess. If you throw your “little garden path” program up on GitHub and somebody decides to use it to build their “highway,” it really ought to be on them, not you.

Edit: I suspect any reasonable law would account for that, and these are the folks who do the Lawfare podcast, right? It is pretty good. I haven’t finished the paper yet but I’d be surprised if they missed this.


Cool thing to note: this argument was made a lot in responses to the Office of the National Cyber Director's recent Request for Input on improving open source software security.

That said, there's a state of tension on this topic right now. The European Union's draft Cyber Resilience Act has included language across multiple versions that would in at least some cases assign liability to producers of open source software. They've tried to modify the language to exclude non-commercial open source projects, but there's been a lot of wrangling over the exact definition of "commercial."


> but there's been a lot of wrangling over the exact definition of "commercial."

That's disturbing since the definition should be trivial!

If I publish a library (whether open source or not, doesn't really matter) and charge you money to use it in your product, that's commercial. I'm happy to take the liability (and will of course charge you enough money to make it worth it to me).

If I throw out some code on github but you're not paying me, that's obviously not commercial.


Honestly, there should be tension and arguing there. For example, Google should (IMO) very clearly be treated as selling products in the cases of Android and Chrome despite the fact that these technically are related to open source projects.

I’m quite glad they aren’t ignoring the complexity.


I agree that there are details to sincerely work out here. I know in earlier drafts the wording of the "commercial" definition was such that it could be read to classify cases like a solo OSS developer who makes $10/month from GitHub Sponsors as "commercial" (and thus assign liability to them). This received substantial pushback from organizations like (as I recall) the Eclipse Foundation and the Electronic Frontier Foundation. I'm not sure where the draft language stands now.


That seems like a really hard edge case. I wonder how the liability works out for, say, a carpenter who makes home-made chairs if one breaks and hurts somebody.


> If you throw your “little garden path” program up on GitHub and somebody decides to use it to build their “highway,” it really ought to be on them, not you.

It used to be this way (at least in the companies I was involved in back then). In the 90s there was plenty of open source, but it was generally a huge no-no to use it in commercial software (even when the license was permissive).

Incorporating an open source library (just one!) into our product in the 90s meant months and months of meeting with company lawyers to get that approved. And it meant we (the team developing the product) were 100% on the hook for all fixes.

While I'll admit it was a pain, in hindsight there was a lot of benefit from that approach. It discouraged importing random libraries unless there was a lot of value in them, so we had to be selective. It made it crystal clear that open source is not free, there's a lot of cost and liability.

While it's convenient today to import a library that brings in 700 other libraries, none of them vetted, any one of which might be full of malware, maybe that's actually not so smart.

I increasingly feel we need to go back to explicitly admitting that we (the company) are 100% responsible and liable for the code we ship. If we import it from open source, no matter, we're still on the hook.


> these are the folks who do the Lawfare podcast, right?

Yep, and they had a podcast episode with the author of this paper: https://www.lawfaremedia.org/article/the-lawfare-podcast-jim...


It's not as different as you might think. Civil/structural engineers are dependent on lots of upstream inputs. Or consider planes, where a lot of people here (correctly) think the ultimate manufacturer is mostly liable. Everyone down the chain needs to do their own vetting/testing.


Or you end up with a situation such as US housing, where everything is regulated and liability-insured out the ying-yang, to the point that I built a house worth a few hundred k for like 40 grand by building it myself and taking all the liability (because I'm not going to sue myself).


> This would be a great win for the lawyers.

From my POV, it's better than pretending certain segments of the economy are special and can completely avoid all liability.

People really hate the idea of slip and fall cases until their grandmother slips and falls.

> The better alternative would be to completely rework the software development discipline to be as rigorous as civil engineering from the top down, requiring proper licensing and security clearances for accessing U.S. customer data.

Yeah, maybe. It's kind of incredible this kind of seat of the pants engineering has lasted as long as it has.


The problem with liability here is that software as a discipline is so subjective that anything can be argued by any expert just out of a boot camp.

Which is why it is a lawyer's dream unless it is completely reworked.


> The problem with liability here is that software as a discipline is so subjective that anything can be argued by any expert just out of a boot camp.

I'm really not sure this is the case. I'm sure there are plenty of people who would have argued medical practice is more of an art than a science, before the introduction of strong med mal regimes. Everyone tends to think their field is special!


Again, I don’t think it’s special. I think it lacks objectivity. I wish the field had more objectivity.

If we go down this path, then we need proper licensing, certifications, and changes that are as well tested as the FDA process. In other words, releasing software becomes as rigorous as releasing a new drug.


Yeah I think we agree.


I can't think of a better way to destroy FOSS, and to drive non-free software completely off-shore.

P.S. where should the liability fall for:

This site can’t provide a secure connection www.lawfaremedia.org sent an invalid response.


> I can't think of a better way to destroy FOSS, and to drive non-free software completely off-shore.

But products liability is for products? If you aren't selling anything how would you be liable? If you and I have no business relationship, what is the basis of liability?

I don't see a liability regime touching internal use, personal use, or non-commercial (read: FOSS) development. However, when you decide to sell a service or a good, that's when liability attaches.

> This site can’t provide a secure connection www.lawfaremedia.org sent an invalid response.

Works for me?


You can be liable for harms caused by things you give away; you can even be liable for harm caused by things stolen from you (i.e. cars).


> You can be liable for harms caused by things you give away; you can even be liable for harm caused by things stolen from you (i.e. cars).

Not in traditional tort law. A criminal stealing your car is an intervening event. You didn't cause the harm.

But for the sake of argument let's say you find a precedent. Please explain how that liability would work in the context of products liability.

First ask yourself the threshold question: What is a product? A product is not your FOSS on GitHub.


This law firm made a handy chart for determining owner liability for harms caused by a stolen vehicle: https://www.mwl-law.com/wp-content/uploads/2018/02/OWNER-LIA...

>"Please explain how that liability would work in the context of products liability.

First ask yourself the threshold question: What is a product? A product is not your FOSS on GitHub. "

I think that a court could easily determine a FOSS contributor was liable if they knew of a way in which the use of their software could cause harm to a user or 'victim' of the software. I am not sure the word 'product' does any work here.


Again, you're way, way outside the context of products liability, but from your cite...

> The majority common law rule among the 50 states is that the owner of a stolen vehicle will not be held liable for damages when the vehicle is stolen and then involved in an accident that causes injury or property damage.

This is a list of states which have varied from the majority common law rule!

If a state creates a statute which says you can be fined for selling ice cream on Sundays, guess what? You can be fined for selling ice cream on Sundays. If your fear is that the feds, like these states, may do something crazy when writing the statute, that may be a legit fear. But that doesn't have anything to do with traditional common law tort principles, or with the content of the article.

> I think that a court could easily determine a FOSS contributor was liable if they knew of a way in which the use of their software could cause harm to a user or 'victim' of the software.

You think? I suppose you can think anything you wish. But you're going to need to show your work. Like -- here are the elements of a product liability tort. Here is how the facts apply to those elements. Conclusion: liability could result.

> I am not sure the word 'product' does any work here.

It does all the work I need it to do? Your core problem is a "product" must be sold. Your/our FOSS is not sold unless you/we sell it. If some ridiculous goofball takes my FOSS and puts it in a product, and there is a problem, that's his problem because he made it a product and sold it, not mine.


Say a company sells SaaS and relies on open source nginx, which is found to contain such a vulnerability?


> Say a company sells SaaS and relies on open source nginx, which is found to contain such a vulnerability?

Then that SaaS company would be liable (unless they purchased nginx)? If you take someone's FOSS and create a product with it, then of course your customers will look to you when that product has a defect.


If one releases a hobbyist blueprint to put an electric motor on a bicycle, and then a megacorp uses that (without any due care) to release a mass-produced e-bike marketed worldwide which ends up catching fire or otherwise producing injuries due to a design flaw:

- Why should one be held liable or have a legal obligation to the customers of the megacorp?

- Why would one even be afraid of being sued by the customers of the megacorp? (other than if we lived in an otherwise draconian society)


He wants a standard of care, and I have one. [1]

He wants a liability regime, and I have one. [2]

Yes, we need to start accepting liability for our software, and as he says, it's better to get one now before one is thrust upon us.

Bonus points if it funds FOSS.

[1]: https://gavinhoward.com/2022/10/we-must-professionalize-prog...

[2]: https://gavinhoward.com/2023/11/how-to-fund-foss-save-it-fro...



