AI Exhaust: The Emerging Evidence Trail In Canadian Litigation
- Janet Momoh

- Feb 28
- 4 min read
Updated: Mar 5

The Canadian legal landscape is moving past the novelty, risks, and concerns of “AI hallucinations.” The next challenge for the legal profession, especially litigators, is far more complex: AI Exhaust. In the context of legal practice, “exhaust” refers to the digital trail left behind by generative tools: prompts, chat logs, retrieval traces, and metadata. As courts move from curiosity to scrutiny, this data trail is becoming a critical factor in assessing a lawyer’s diligence and professional candour.
A Shift from Competence to Candour
Courts are no longer treating AI errors as simple technical glitches; they are viewing them through the lens of honesty and professional duty. In the recent case of Hussein v Canada, 2025, the Federal Court characterized the undisclosed use of an AI-generated fake case not as a mistake but as an attempt to mislead the Court, resulting in an award of adverse costs.
The message from the bench is clear: AI cannot replace independent human verification. If AI contributed to citations or legal propositions, counsel must verify. These steps preserve the integrity of the profession and align with core duties owed to the administration of justice.
Recent decisions across Canada reinforce this shift:
Federal Court (Hussein v Canada, 2025): Undisclosed use of AI-generated non-existent cases treated as misleading the court, adverse costs awarded.
Alberta (Reddy v Saroya, 2025): The Court of Appeal emphasized the Tri-Court Notice: human verification is mandatory.
Ontario (Ko v Li, 2025): The court raised the possibility of contempt, underscoring that lawyers must read the cases they cite.
British Columbia (Zhang v Chen, 2024): Counsel ordered to pay personal costs for citing fabricated AI authorities.
Why "AI Exhaust" Matters
While current case law focuses on the errors themselves, the next wave of litigation will focus on the process. In future disputes over professional conduct or negligence, a lawyer’s prompt history may itself become probative evidence, capable of proving diligence or rebutting allegations of bad faith.
AI exhaust is the digital footprint of a lawyer’s interaction with AI tools. It can show:
how a quotation or summary was generated;
whether a case was actually validated;
what sources were consulted; and
how a legal argument evolved.
In other words, it exposes counsel’s process. However, this creates high-stakes risks and a strategic tension. While logs can prove diligence, they also risk exposing: (i) litigation strategy (how you framed your prompts); (ii) solicitor-client privilege (sensitive facts fed into the model); and (iii) work product (the iterative “thinking” behind a legal theory).
Retaining exhaust can help prove diligence and protect against negligence or misconduct claims. Conversely, retaining excessive exhaust risks exposing privileged strategy, client instructions, and internal reasoning.

Regulatory Landscape: Alberta’s Three Pillars
Alberta remains one of the most explicit jurisdictions regarding AI-assisted practice. To mitigate risks, counsel must navigate three key requirements:
Mandatory Verification: The Tri-Court Notice mandates meaningful human oversight for every filing. AI‑generated citations must be checked against authoritative sources, aligned with the guidance set out in the Law Society of Alberta's Generative AI Playbook.
Privacy Compliance: Lawyers should ensure AI vendors contractually prohibit training on client data; provide clear retention and deletion controls and audit rights; and maintain data residency and sub-processor arrangements that comply with Alberta’s Personal Information Protection Act (PIPA) and, where it applies, the Personal Information Protection and Electronic Documents Act (PIPEDA).
Disclosure: Alberta courts do not require lawyers to disclose that AI was used in preparing materials, nor do they require disclosure of the method of verification; instead, they focus on ensuring human oversight to prevent errors and fabricated authorities. This contrasts with Manitoba and Yukon, where practice directions require parties to disclose that AI was used; Yukon goes further, requiring an explanation of the purpose for which it was used.
Practical Protocol
To manage "exhaust" without compromising privilege, lawyers should adopt a disciplined, defensible workflow:
Verify: Cross-reference every AI output with primary sources (CanLII, Westlaw, LexisNexis). Never cite a summary without reading the full text of the judgment.
Document: Maintain a "Verification Sheet" for the file. This is a one-page summary of the steps taken to validate the research, serving as a shield against negligence claims. Avoid routine retention of full prompts and chat histories unless necessary for the record. Implement timely legal holds to suspend deletion when litigation is reasonably anticipated.
Filter: Retain the authoritative PDFs used for verification, but consider implementing a policy to periodically segregate privileged AI artifacts; restrict access on a need-to-know basis.
Train and Audit: Use enterprise-grade tools (not public-facing "free" versions) that offer clear data retention limits and opt-outs for model training. Train all contributors (including contractors) on the verification protocol and confidentiality expectations. Periodically audit a sample of filings for adherence to the protocol, and update checklists as court guidance evolves.
Be prepared to attest, not to disclose raw logs. If challenged, be ready to produce the authoritative sources relied upon and, if needed, a sworn affidavit of the verification steps taken.
Closing Thoughts
Generative AI is a powerful tool transforming legal practice, but the courts remain focused on a timeless expectation: counsel must stand behind their filings. In the AI era, that means being prepared to demonstrate through authoritative sources and, where appropriate, sworn attestations that every assertion has undergone meaningful human verification.
A lean, protected verification trail is a strong defence against the professional risks of AI, provided preservation obligations are honoured once litigation is reasonably anticipated.
AI exhaust is not merely a byproduct; in the right circumstances, it can constitute evidence. Managing it deliberately by balancing diligence, privilege, privacy, and proportionality will become an increasingly important professional skill in Canadian litigation.
Disclaimer: This post is for informational purposes only and does not constitute legal advice. It reflects Canadian law with an Alberta emphasis as of February 28, 2026, and may not reflect subsequent changes; verify all cited authorities and court notices against official sources before relying on them.