The Liability Gate

When Your Chatbot Becomes Your Lawyer: How Air Canada Lost a Case It Should Never Have Fought

A case study in the Judgment Architecture framework

01. The Bereavement

A Funeral and a Wrong Answer

November 2022. Jake Moffatt's grandmother dies. He needs to get from Vancouver to Toronto for the funeral.

He goes to Air Canada's website. Opens the chatbot. Asks about bereavement fares.

The chatbot tells him he can book now and apply for the bereavement discount within 90 days after the flight. He books at full price, attends the funeral, applies within 90 days exactly as instructed.

Air Canada denies the refund. The airline's official policy requires bereavement applications before travel; the chatbot was wrong. Air Canada stands on the written policy and offers no resolution.

This is the moment when the case stopped being about a customer complaint and started being about who owns AI judgment.

02. The Defense

The Argument That Lost the Case

Air Canada's legal strategy was simple and fatal: blame the chatbot.

The airline argued the chatbot was a "separate legal entity" responsible for its own actions. They told the tribunal that Moffatt should have cross-verified chatbot information against other website sections. The customer, they claimed, bore the responsibility for validating machine-generated advice.

This was the moment the case became a framework illustration. The company deployed AI judgment, then disclaimed accountability for that judgment. You cannot build a system that answers policy questions, claim it represents your company, and then argue when it fails that it is not your company's responsibility.

Award issued: C$812.02
Appeals filed: 0

Air Canada did not appeal. The precedent stood.

03. The Ruling

What the Tribunal Actually Decided

Case: Moffatt v. Air Canada, 2024 BCCRT 149. Tribunal Member Christopher C. Rivers issued five foundational rulings.

1. Airlines owe a duty of care to users of their chatbots.
2. Air Canada failed the negligence standard by not ensuring the chatbot was accurate.
3. Inaccurate information from the airline's own system constitutes negligent misrepresentation.
4. Companies are responsible for all information on their websites, whether static or chatbot-generated.
5. It is unreasonable to expect customers to cross-verify chatbot advice against official policies.

The airline "did not take reasonable care to ensure its chatbot was accurate."

British Columbia Civil Resolution Tribunal, Member Christopher C. Rivers

This was not a judgment about chatbots. It was a judgment about responsibility. When you deploy an AI system on your website answering policy questions, you are answerable for what it says. Deferring to "the AI made the mistake" is not a legal shield. It is abdication.

04. Framework Diagnosis

The Gates Air Canada Never Built

The Liability Gate asks one question before deployment: "If our AI gets this wrong, who owns the failure?" Air Canada never asked it.

They deployed a chatbot with policy authority but no policy verification. The visible layer—policy lookup—is theoretically automatable. But it has a prerequisite: accuracy. The chatbot was trained on incomplete or outdated data. No human verification loop existed.

The Escalation Gate was also absent. When the chatbot provided policy information, there was no pathway for the customer to verify with a human before acting on it. The customer had to take the machine at its word.
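
The two missing gates can be sketched as a simple response policy: the chatbot returns only answers it can verify against an authoritative record, and everything it cannot verify escalates to a human. This is an illustrative sketch, not Air Canada's system; the policy table, function names, and fields are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical source of truth, maintained by the policy team.
OFFICIAL_POLICIES = {
    "bereavement_fare": (
        "Bereavement fare requests must be submitted before travel. "
        "Refunds are not available after the flight."
    ),
}

@dataclass
class Answer:
    text: str
    verified: bool
    escalated: bool = False

def answer_policy_question(topic: str, model_answer: str) -> Answer:
    """Gate a model-generated answer behind the official policy record.

    If the topic has no verifiable policy record, escalate to a human
    instead of letting the model improvise.
    """
    official = OFFICIAL_POLICIES.get(topic)
    if official is None:
        # Escalation Gate: no source of truth, so a human must answer.
        return Answer(text="Let me connect you with an agent.",
                      verified=False, escalated=True)
    # Liability Gate: return only the verbatim official policy, so the
    # company answers with text it already stands behind.
    return Answer(text=official, verified=True)
```

In this sketch the model's own wording is never shown to the customer for policy topics; the company either speaks in its documented voice or hands off to a human, which is exactly the ownership question the Liability Gate forces before deployment.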

The Framework Failure

Visible Layer: Policy lookup. Theoretically automatable, but the accuracy prerequisite failed; the chatbot was wrong.
Contextual Layer: Understanding bereavement urgency and customer vulnerability. Entirely eliminated; efficiency replaced judgment.
Invisible Layer: Brand trust and the customer relationship during the most vulnerable moment. Destroyed; a C$812 refund cost Air Canada far more.
The Liability Gate: Never asked. "If the AI is wrong, do we own it?" The tribunal answered yes.
The Escalation Gate: No pathway existed. The customer had to trust the machine; there was no human option.

05. The Precedent

Why C$812 Is Terrifying

The award itself is trivial. The precedent is not.

This ruling established: companies are liable for AI chatbot information. Period. Customers are not required to cross-verify AI output against official sources. The "separate legal entity" defense for AI is dead on arrival.

Every company running a customer-facing chatbot now operates under this standard. If your AI gives wrong information and a customer acts on it, you own the outcome.

Air Canada could absorb C$812. But the precedent now exists, and the next company cannot count on absorbing its own ruling, because the liability framework is already set. The next case will not be about a funeral discount; it will be about a much larger failure, and the tribunal will cite Moffatt v. Air Canada.

The lesson is not about chatbots. It is about deploying AI judgment without asking who is accountable when that judgment fails. Air Canada deployed judgment. Then it argued it was not responsible for that judgment. The tribunal rejected this argument completely.

The Liability Gate is not a feature. It is a requirement. And now it is the law.

Explore the Framework

The Judgment Architecture is a model for understanding where AI succeeds, where it fails, and where it must never go. Learn how to apply it to your organization.