Hacker News
Air Canada chatbot promised a discount. Now the airline has to pay it (washingtonpost.com)
34 points by nocommandline on Feb 19, 2024 | 21 comments


This is correct. For any law to function, responsibility needs to propagate through the use of any tool. If a company is the legal entity responsible for the decision to deploy a chatbot as a support service, it must be responsible for what that chatbot says. This responsibility should also flow through corporations to the people who had the power to decide how the corporation operates, but I'll take it as a small blessing that we're seeing an unwillingness to set a precedent that allows further indirection to evaporate responsibility.


This sounds similar to the guy who xeeted about fooling a car dealership chatbot into selling him a car for $1. Different jurisdiction, though.

https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots...


I wonder if intent matters. I think it's pretty obvious from the $1 car case that the guy was intentionally trying to break the bot.

In this case, it's much more plausible that it was a genuine misunderstanding.

I'm obviously not a lawyer, so I have no idea if that matters. But going by my gut feeling, I agree with the outcome of both cases.


> I wonder if intent matters. I think it's pretty obvious from the $1 car case that the guy was intentionally trying to break the bot.

IANAL either, but I thought the rule was that obvious errors are not usually upheld in these scenarios. (Where "obvious" is something like "a reasonable person would think…".) So a $1 car would probably be an obvious error, unless there was some reason for you to think the car was worth nothing. (A $1 car where you purposefully manipulated the bot into making that offer… well.)

But here, there seems to be nothing obviously wrong with the chatbot's offer. Bereavement flights/discounts are a thing, and permitting someone to file the necessary paperwork up to 90 days after the fact also sounds reasonable; since deaths can be sudden and unexpected, that would be a kind thing to offer.

The court's ruling seems sound here. "Our chatbot is a separate legal entity", OTOH…


>I wonder if intent matters.

I doubt that the dealership would feel their intent mattered if they overcharged the same guy and he balked. Isn't that the whole goal in this stupid way of buying cars that we have to deal with?


… But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fees if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,”….

How exactly do you go about making a chatbot a legal entity?


Maybe the airline accidentally hired lawyers who have a conscience and wanted to help regular people, so they intentionally used a stupid argument?


Yeah, but wouldn't it be cool if this ruling had set a precedent for AI personhood? That's something you'd read in a humorously written sci-fi story.


Oops. Accidental personhood.

But only airplane chatbots.


Imagine if you will a nightmare world ruled only by airline chatbots...


This gets close to the law of principal and agent. Are "intelligent agents" agents in the legal sense? That is, is the principal responsible for their actions? That's usually the case for employees, unless the employee clearly acted outside the scope of their employment. AI systems operated on behalf of a business should be held to the same standard.

There's an economic theory of accounting for mistakes of agents.[1] There's a cost of mistakes, and a cost of decreasing the error rate. So it's something that can be priced into the cost of running the business.

[1] https://www.pthistle.faculty.unlv.edu/WorkingPapers/pamistak...
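
To make that concrete, here's a minimal sketch of the tradeoff (the cost model and every number below are illustrative assumptions, not taken from the linked paper): mistakes get cheaper to tolerate as QA spending rises, so the business just picks the error rate that minimizes total expected cost.

    # Illustrative only: hypothetical cost model and numbers, not from [1].
    def total_cost(error_rate, cost_per_mistake=650.0, interactions=100_000,
                   qa_cost_at_one_percent=50_000.0):
        """Expected mistake payouts plus the QA spend needed to hit this rate.

        Assumes (hypothetically) that QA spend scales like 1/error_rate,
        anchored so that running at a 1% error rate costs $50k of QA.
        """
        mistake_cost = error_rate * interactions * cost_per_mistake
        qa_cost = qa_cost_at_one_percent * (0.01 / error_rate)
        return mistake_cost + qa_cost

    # Scan candidate error rates and run at the cheapest one.
    rates = [r / 100_000 for r in range(1, 1001)]  # 0.001% .. 1%
    best = min(rates, key=total_cost)
    print(f"cheapest error rate: {best:.4%}, total cost: ${total_cost(best):,.0f}")

Under those made-up numbers the minimum lands around a 0.28% error rate: the occasional $650 payout is cheaper than driving errors toward zero, which is exactly the sense in which mistakes get priced into running the business.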


Moreover, how does one hold a chatbot responsible?


* Send it to a piece of RAM which then gets paged to disk

* Make it give up 90% RAM privileges for 2 weeks

* Threaten it with removal of last month's worth of training data


* Use another chatbot to generate heaps of text about how the initial chatbot is bad and feed that corpus to the "offending" chatbot


I was thinking about this. Surely it must be made to pay back the money it lost.

One option would be to always have it include advertisements for competitors, or for porn or dating sites, in each conversation or with each message. Seems reasonable enough to implement and deliver.

Or always have it deliver some message critical of Air Canada...

There are lots of creative ways it could communicate with other customers, thus paying for the wrongs committed...



And submitted a number of other times here and there and everywhere

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Companies will probably stop using these kinds of chatbots if people keep exploiting them. Not that anyone should do that. But I'm just saying that's probably what they would do.



A human would not fall for that. Chatbots can be tricked and exploited.


Humans will absolutely fall for that.



