This is correct. For any law to function, responsibility needs to propagate through the use of any tool. If a company is the legal entity responsible for the decision to deploy a chatbot as a support service, it must be responsible for what that chatbot says. This responsibility should also flow through corporations to the people who had the power to make decisions about how the corporation operates, but I'll take it as a small blessing that we're seeing an unwillingness to set a precedent that further allows indirection to evaporate responsibility.
> I wonder if intent matters. I think it's pretty obvious from the $1 car case that the guy was intentionally trying to break the bot.
IANAL either, but I thought it was that obvious errors are not usually upheld in these scenarios. (Where "obvious" is something like "a reasonable person would think…".) So a $1 car would probably be an obvious error, unless there was some reason for you to think the car was worth nothing. (A $1 car where you purposefully manipulated the bot into making that offer… well.)
But here, there seems to be nothing obviously wrong with the chatbot's offer. Bereavement flights/discounts are a thing, and permitting someone to file the necessary paperwork up to 90 days after the fact also sounds reasonable: deaths can be sudden and unexpected, so that would be a kind thing to offer.
The court's ruling seems sound here. "Our chatbot is a separate legal entity", OTOH…
I doubt that the dealership would feel their intent mattered if they overcharged the same guy and he balked. Isn't that the whole goal in this stupid way of buying cars that we have to deal with?
> … But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fees if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions” …
How exactly do you go about making a chatbot a legal entity?
Yeah, but would it not be cool if this ruling had set a precedent for AI personhood? That's something you would read in a humorously written sci-fi story.
This gets close to the law of principal and agent. Are "intelligent agents" agents in the legal sense? That is, is the principal responsible for their actions?
That's usually the case for employees, unless the employee clearly acted outside the scope of their employment. AI systems operated on behalf of a business should be held to the same standard.
There's an economic theory of accounting for mistakes of agents.[1] There's a cost of mistakes, and a cost of decreasing the error rate. So it's something that can be priced into the cost of running the business.
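That trade-off can be sketched numerically. A minimal illustration, where the cost model and every number are invented for the sake of the example (the mitigation cost is assumed to be inversely proportional to the error rate, which is just one plausible shape):

```python
# Hypothetical illustration of pricing agent mistakes into the business:
# total cost = expected cost of errors + cost of driving the error rate down.
# All parameters below are made up for illustration.

def total_cost(error_rate, interactions=100_000, cost_per_error=500.0,
               base_mitigation_cost=1_000.0):
    """Expected error cost plus mitigation cost at a given error rate.

    Assumption: halving the error rate doubles the mitigation spend
    (mitigation cost modeled as base_mitigation_cost / error_rate).
    """
    expected_error_cost = interactions * error_rate * cost_per_error
    mitigation_cost = base_mitigation_cost / error_rate
    return expected_error_cost + mitigation_cost

# Sweep candidate error rates from 0.01% to 1% and pick the cheapest.
candidates = [r / 10_000 for r in range(1, 101)]
best = min(candidates, key=total_cost)
print(f"cheapest error rate: {best:.4%}, total cost: {total_cost(best):,.0f}")
```

The point of the exercise: the cost-minimizing error rate is not zero, so a rational operator deliberately tolerates some rate of chatbot mistakes and budgets for paying them out, exactly as the theory suggests.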
I was thinking about this. Surely it must be made to pay back the money lost.
One option would be to always have it include advertisements for competitors, or for example for porn or dating sites, in each discussion or with each message. Seems reasonable enough to implement and deliver.
Always deliver some message critical of Air Canada...
There are lots of creative ways it could communicate with other customers, thus paying for the wrongs committed...
Companies will probably stop using these kinds of chatbots if people keep exploiting them. Not that anyone should do that; I'm just saying that's probably what they would do.