When Your Chatbot Becomes Your Lawyer: How Air Canada Lost a Case It Should Never Have Fought
A Funeral and a Wrong Answer
November 2022. Jake Moffatt's grandmother dies. He needs to get from Vancouver to Toronto for the funeral.
He goes to Air Canada's website. Opens the chatbot. Asks about bereavement fares.
The chatbot tells him he can book now and apply for the bereavement discount retroactively, within 90 days of the date the ticket was issued. He books at full price, attends the funeral, and applies within 90 days, exactly as instructed.
Air Canada denies the refund. The airline's official policy requires bereavement requests to be made before travel; the chatbot was wrong. The airline cited its actual policy and offered no resolution.
This is the moment when the case stopped being about a customer complaint and started being about who owns AI judgment.
The Argument That Lost the Case
Air Canada's legal strategy was simple and fatal: blame the chatbot.
The airline argued the chatbot was a "separate legal entity" responsible for its own actions. They told the tribunal that Moffatt should have cross-verified chatbot information against other website sections. The customer, they claimed, bore the responsibility for validating machine-generated advice.
This argument is what turned the case into a framework illustration. The company deployed AI judgment, then disclaimed accountability for that judgment. You cannot build a system that answers policy questions, claim it represents your company, and then, when it fails, argue that it is not your company's responsibility.
Air Canada did not appeal. The precedent stood.
What the Tribunal Actually Decided
Case: Moffatt v. Air Canada, 2024 BCCRT 149. Tribunal Member Christopher C. Rivers made five foundational findings.
First: airlines owe a duty of care to chatbot users. Second: the airline failed the negligence standard by not ensuring the chatbot was accurate. Third: inaccurate information from the airline's system constitutes negligent misrepresentation. Fourth: companies are responsible for all information on their websites, whether static or chatbot-generated. Fifth: it is unreasonable to expect customers to cross-verify chatbot advice against official policies.
The airline "did not take reasonable care to ensure its chatbot was accurate."
British Columbia Civil Resolution Tribunal, Member Christopher C. Rivers
This was not a judgment about chatbots. It was a judgment about responsibility. When you deploy an AI system on your website answering policy questions, you are answerable for what it says. Deferring to "the AI made the mistake" is not a legal shield. It is abdication.
The Gates Air Canada Never Built
The Liability Gate asks one question before deployment: "If our AI gets this wrong, who owns the failure?" Air Canada never asked it.
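To make the question concrete, here is a minimal sketch of a Liability Gate as a pre-deployment check. Everything in it is hypothetical (the manifest structure, the failure_owner field, the liability_gate function); the point is only the shape: no named owner, no launch.

```python
# Hypothetical sketch of a Liability Gate as a pre-deployment check.
# The manifest structure and all field names are illustrative assumptions.

CHATBOT_DEPLOYMENT = {
    "name": "support-chatbot",
    "scope": ["policy-questions"],  # the bot will answer policy questions
    "failure_owner": None,          # who owns it when the bot is wrong?
}

def liability_gate(manifest: dict) -> None:
    """Block deployment until a named person or team owns the failures."""
    if not manifest.get("failure_owner"):
        raise RuntimeError(
            f"Liability Gate: {manifest['name']} has no failure owner. "
            "If the AI gets this wrong, nobody owns it. Deployment blocked.")

try:
    liability_gate(CHATBOT_DEPLOYMENT)
except RuntimeError as err:
    print(err)  # the gate fails closed: no owner, no launch
```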
They deployed a chatbot with policy authority but no policy verification. The visible layer—policy lookup—is theoretically automatable. But it has a prerequisite: accuracy. The chatbot was trained on incomplete or outdated data. No human verification loop existed.
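What a verification loop could look like is sketched below. This is a deliberately crude illustration, not Air Canada's system: the POLICIES store, the BotAnswer type, the exact-match check, and the fallback message are all assumptions.

```python
# Hypothetical sketch: a verification gate between a chatbot's drafted
# answer and the customer. All names (POLICIES, BotAnswer,
# verify_policy_answer) are illustrative assumptions.

from dataclasses import dataclass

# Canonical source of truth: the policies the company actually honours.
POLICIES = {
    "bereavement": ("Bereavement fares must be requested before travel. "
                    "Retroactive applications are not accepted."),
}

@dataclass
class BotAnswer:
    topic: str  # which policy the answer claims to describe
    text: str   # the generated response

def verify_policy_answer(answer: BotAnswer) -> str:
    """Show the answer only if it matches the canonical policy; otherwise
    fail closed and hand off to a human."""
    canonical = POLICIES.get(answer.topic)
    if canonical is None or answer.text != canonical:
        return ("I can't confirm that policy detail. "
                "Connecting you with an agent who can.")
    return answer.text

# A drafted answer that contradicts the real policy is caught at the gate.
draft = BotAnswer(topic="bereavement",
                  text="Book now and apply for the discount within 90 days.")
print(verify_policy_answer(draft))
```

Exact string matching is far too blunt for production; a real system would check generated claims against retrieved policy text. The shape is what matters: an unverified policy claim escalates to a human instead of shipping to the customer as official.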
The Escalation Gate was also absent. When the chatbot provided policy information, there was no pathway for the customer to verify with a human before acting on it. The customer had to take the machine at its word.
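Under the same caveats (the queue, the provisional status, and every name here are hypothetical), a minimal sketch of such a pathway:

```python
# Hypothetical sketch of an Escalation Gate: policy answers are delivered
# as provisional, with a built-in route to human confirmation before the
# customer acts on them. The queue and field names are assumptions.

import queue

REVIEW_QUEUE = queue.Queue()  # feeds human agents for confirmation

def deliver_policy_answer(customer_id: str, topic: str, text: str) -> dict:
    """Never present a generated policy answer as final; queue it for a
    human and tell the customer how to reach one."""
    REVIEW_QUEUE.put({"customer": customer_id, "topic": topic, "answer": text})
    return {
        "answer": text,
        "status": "provisional",
        "note": ("An agent will confirm this before you need to act on it. "
                 "Reply AGENT to speak with a person now."),
    }

reply = deliver_policy_answer("cust-42", "bereavement",
                              "Book now and apply within 90 days.")
print(reply["status"])  # 'provisional': the customer is never asked to
                        # take the machine at its word
```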
The Framework Failure
Why C$812 Is Terrifying
The award itself is trivial. The precedent is not.
This ruling established: companies are liable for AI chatbot information. Period. Customers are not required to cross-verify AI output against official sources. The "separate legal entity" defense for AI is dead on arrival.
Every company running a customer-facing chatbot now operates under this standard. If your AI gives wrong information and a customer acts on it, you own the outcome.
Air Canada could absorb C$812. But the precedent now exists, and the next company may face a ruling it cannot absorb, because the liability framework is set. The next case will not be about a funeral discount. It will be about a much larger failure. And the tribunal will cite Moffatt v. Air Canada.
The lesson is not about chatbots. It is about deploying AI judgment without asking who is accountable when that judgment fails. Air Canada deployed judgment. Then it argued it was not responsible for that judgment. The tribunal rejected this argument completely.
The Liability Gate is not a feature. It is a requirement. And now it is the law.
Moffatt v. Air Canada: Companies Remain Liable for Information Provided by AI Chatbot · American Bar Association
Moffatt v. Air Canada: Misrepresentation by AI Chatbot · McCarthy Tétrault
Air Canada Chatbot Lawsuit: $812 Court Case Sets Precedent · CBC News
Air Canada Chatbot Denies Discount, Triggers Legal Battle · CBS News
Air Canada Found Liable for Negligent Misrepresentation by Chatbot · Foster & Company
Explore the Framework
The Judgment Architecture is a model for understanding where AI succeeds, where it fails, and where it must never go. Learn how to apply it to your organization.