AI in Family Law: A Cautionary Tale from Ko v. Li, 2025 ONSC 2766

The recent decision in Ko v. Li, 2025 ONSC 2766, delivered by Justice Myers on May 6, 2025, serves as a stark reminder for Ontario family law practitioners of the ethical and professional risks associated with the use of artificial intelligence (AI) in legal practice. This case, involving a complex estates and family law dispute, not only addressed substantive issues like setting aside a divorce order but also spotlighted the dangers of relying on unverified AI-generated legal documents. Below, we explore the key takeaways for family law lawyers, emphasizing the need for diligence, competence, and human oversight when integrating AI tools into practice.

Case Overview

In Ko v. Li, the applicant, Hanna Ko, sought to invalidate a 2020 divorce order, pursue equalization and support claims, and remove an estate trustee following the death of Xiang Guo Li. The respondents included the deceased’s children (estate trustees) and another claimant spouse, Mingjie Cheng. The Court set aside the divorce order due to fraud and duress, ordered disclosure from the estate trustees, and consolidated related estates applications (paras. 63, 82, 86). However, the decision’s most striking feature was the Court’s response to the applicant’s factum, which contained suspected AI-generated “hallucinations” (paras. 2–28).

The AI Issue: Hallucinated Citations

Counsel for the applicant submitted a factum citing cases in support of setting aside the divorce order and removing an estate trustee. The citations, however, were problematic. One hyperlink led to an unrelated commercial real estate case; another led to a page that returned a 404 error, and the cited case could not be located (para. 6). Yet another case was presented by counsel as supporting trustee removal when it had in fact dismissed such an application (para. 11). A fourth citation linked to an unrelated wrongful dismissal case.

When questioned, counsel could not provide copies of the cases or confirm their accuracy, admitting uncertainty about whether AI had been used (para. 8). Justice Myers suspected the factum had been generated by AI, such as ChatGPT, which is known to produce fabricated citations or “hallucinations” (para. 14). The Court ordered counsel to attend a further hearing, together with her own lawyer, to show cause why she should not be cited for contempt, citing potential breaches of duty relating to obstruction of, or interference with, the due administration of justice (paras. 29–31).

Legal and Ethical Implications

The Court delineated several key duties for family law lawyers, with a focus on the responsible use of AI (paras. 15–22):

  • Lawyers must accurately represent the law to the Court (para. 16);
  • Lawyers must not fabricate case precedents or miscite cases for propositions that they do not support (para. 17);
  • Lawyers must competently utilize technology, conduct legal research, and prepare court documents (para. 18);
  • Lawyers are responsible for supervising staff and reviewing materials prepared under their signature (para. 19);
  • Lawyers must ensure human review of materials generated by non-human technologies, such as AI (para. 20);
  • Lawyers must read cases before submitting them as precedential authorities, and must not submit authorities that do not exist or that stand for the opposite of the lawyer’s submission (para. 21); and
  • Lawyers have a fundamental duty not to mislead the court (para. 22).

Citing Zhang v. Chen, 2024 BCSC 285, the Court emphasized that fake citations are tantamount to false statements and can lead to miscarriages of justice (para. 23). The contempt proceeding underscores the severity of these breaches, referencing R. v. Cohn, 1984 CanLII 43 (ON CA), which defines contempt as acts that interfere with or obstruct the due administration of justice (para. 29).

Lessons for Family Law Practitioners

Ko v. Li offers critical guidance for Ontario family law lawyers using AI tools like ChatGPT or other generative AI platforms:

  • Verify All Citations: Always cross-check case citations against trusted legal databases (e.g., CanLII, Westlaw, Quicklaw) to confirm their existence and relevance; a minimal verification sketch follows this list. In Ko v. Li, the failure to verify led to citations that were either non-existent or contrary to the submitted propositions (paras. 5–13).
  • Conduct Human Review: AI-generated drafts must be meticulously reviewed by counsel. Accordingly, implement a robust review process to catch errors or fabrications.
  • Supervise Staff and Technology: Lawyers must supervise clerks, paralegals, or AI tools used in document preparation. In Ko v. Li, counsel’s uncertainty about AI use suggested inadequate oversight (para. 8).
  • Understand AI Limitations: Currently, generative AI can produce plausible but inaccurate legal citations. Educate yourself on AI’s potential for “hallucinations” and prioritize primary sources over AI outputs. Stay informed about AI’s capabilities and risks through continuing professional development.
  • Prepare for Judicial Scrutiny: Courts are increasingly vigilant about AI-related errors. Ko v. Li aligns with cases like Benjamin v. Costco Wholesale Corp., 2025 US Dist. LEXIS 78895, which addressed similar issues in the American context (para. 28). Be prepared to substantiate all submissions in court.
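
For lawyers comfortable with a little scripting, citation checks can even be partially automated. The sketch below is a minimal illustration only: it assumes access to CanLII’s REST API (which requires requesting an API key), and the endpoint shape, database ID, and case ID format are assumptions to be checked against CanLII’s current documentation, not a tested integration.

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder credential; CanLII issues API keys on request.
CANLII_API_KEY = "YOUR_API_KEY"

def citation_exists(database_id: str, case_id: str) -> bool:
    """Return True if CanLII's case-browse endpoint recognizes the case.

    The URL pattern below is an assumption based on CanLII's published API
    documentation; verify it against the current docs before relying on it.
    """
    url = f"https://api.canlii.org/v1/caseBrowse/en/{database_id}/{case_id}/"
    response = requests.get(url, params={"api_key": CANLII_API_KEY}, timeout=10)
    return response.status_code == 200  # a 404 suggests the case is not in the database

# Hypothetical usage, with Ko v. Li's neutral citation rendered as a case ID:
if not citation_exists("onsc", "2025onsc2766"):
    print("Not found on CanLII - pull and read the case manually before filing.")
```

Even a successful lookup only confirms that a case exists; as para. 21 makes clear, counsel must still read it to confirm it stands for the proposition cited.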

Practical Steps for Safe AI Integration

To harness AI’s benefits (e.g., efficiency in drafting or research) while mitigating risks, consider the following:

  • Use AI as a Starting Point: Treat AI outputs as drafts, not final products. For example, use AI to generate initial factum outlines, then verify all legal references manually.
  • Document AI Use: Maintain records of the AI tools used and the review process applied, to ensure transparency if questioned by the Court (a simple record-keeping sketch follows this list).
  • Train Staff: Ensure all team members understand AI’s limitations and the need for human verification.
  • Leverage Trusted Tools: Use AI tools designed for legal research (e.g., those integrated with verified databases) rather than general-purpose platforms.
  • Stay Updated: Monitor Law Society of Ontario guidelines and emerging case law on AI use.
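
No court or Law Society form prescribes what such a record must look like, so the sketch below is one hypothetical approach; the field names are assumptions about what a reviewing court might reasonably want to see.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIUseRecord:
    """One entry in a file-level log of generative AI use (illustrative only)."""
    matter: str               # file or court file number
    tool: str                 # e.g., a general-purpose chatbot or a legal AI product
    purpose: str              # what the tool was asked to produce
    verified_by: str          # the lawyer responsible for checking the output
    verification_steps: str   # how every citation and proposition was confirmed
    date_of_use: str

record = AIUseRecord(
    matter="FC-2025-0001",    # hypothetical file number
    tool="Generative AI drafting assistant",
    purpose="First draft of a factum outline",
    verified_by="A. Lawyer",
    verification_steps="Every citation pulled and read in full on CanLII/Westlaw",
    date_of_use=date.today().isoformat(),
)

# Append the entry to a JSON-lines log kept with the file.
with open("ai_use_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```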

Understanding AI “Hallucinations”: A Technical Primer

To better protect themselves and their clients from AI-generated errors like those in Ko v. Li, family lawyers should understand why these systems sometimes produce “hallucinations” or fabricated information:

What Are Large Language Models?

Large Language Models (LLMs) like those powering ChatGPT, Claude, and similar tools are trained on vast datasets of text from the internet and other sources. They work by predicting what text should come next in a sequence, based on patterns learned during training.

Unlike traditional legal databases, LLMs do not “know” or “retrieve” facts. Instead, they generate responses based on statistical patterns in their training data. This distinction is crucial for legal professionals to understand.
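
The difference between retrieval and generation can be made concrete with a toy contrast. Nothing in the sketch below is a real legal database or a real language model; it only illustrates why a database fails loudly while a generative model fails fluently.

```python
import random

# A retrieval system: a lookup table that either has the answer or does not.
LEGAL_DATABASE = {"Ko v. Li": "2025 ONSC 2766"}

def database_lookup(case_name: str) -> str | None:
    # A database returns the stored citation or nothing at all.
    return LEGAL_DATABASE.get(case_name)

def toy_generator(case_name: str) -> str:
    # A generative model completes the *pattern* of a citation, whether or
    # not the case exists; the output reads fluently either way.
    return f"{case_name}, {random.randint(2015, 2025)} ONSC {random.randint(1, 9999)}"

print(database_lookup("Smith v. Jones"))  # None: an honest failure
print(toy_generator("Smith v. Jones"))    # e.g. "Smith v. Jones, 2021 ONSC 4821": fluent fabrication
```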

Why Hallucinations Occur

AI hallucinations occur for several technical reasons:

  1. Pattern Completion: When an LLM encounters a request for a legal citation, it attempts to complete a pattern (e.g., “Smith v. Jones, 2023 ONSC 1234”) without accessing a verification database.
  2. Training Data Limitations: If a case wasn’t in the AI’s training data (which typically has a cutoff date), the AI may generate a plausible-sounding but fictional citation.
  3. Context Window Constraints: LLMs have limits to how much information they can consider at once, potentially causing them to lose track of facts within a complex legal analysis.
  4. Confidence Despite Uncertainty: Most concerningly, LLMs express the same level of confidence whether generating factual or fabricated information, making hallucinations difficult to detect without verification (the sketch after this list shows why a surface-level format check cannot catch this).

Even the most advanced AI tools available to legal professionals today suffer from these limitations. Understanding these technical constraints reinforces why human verification remains essential, particularly for citations and legal authorities.

Conclusion

Ko v. Li is a wake-up call for Ontario family law practitioners. While AI can enhance efficiency, its unchecked use risks professional misconduct, contempt proceedings, and harm to clients. By prioritizing verification, human oversight, and competence, lawyers can integrate AI responsibly while upholding their duties to the Court, clients, and the administration of justice. As technology evolves, staying vigilant and informed will be critical to maintaining the integrity of family law practice.

Let’s continue to elevate the practice of family law in Ontario!

Connect with us on LinkedIn.

Cheryl Goldhart is a Mediator and Arbitrator who can make a difference in resolving your family disputes.

  • Four Decades of Specialized Family Law Practice: Cheryl brings a wealth of experience spanning nearly 40 years dedicated exclusively to family law.
  • Master’s Degree in Counselling: Her Master’s Degree in Counselling informs her uniquely empathetic approach to each case.
  • Certified Family Law Specialist: The Law Society of Ontario has certified Cheryl as a Family Law Specialist, recognizing her expertise in the area.
  • Accreditation as a Mediator by the OAFM: Cheryl’s expertise is reflected in her accreditation from the Ontario Association for Family Mediation.
  • Designated ADR Professional by Ontario’s ADR Institute: As a highly respected arbitrator, Cheryl’s designation reflects her recognized expertise in family law arbitration.
  • Recipient of Numerous Awards and Honors: Among Cheryl’s many awards, honours and accolades is the prestigious Award for Excellence in Family Law from the Ontario Bar Association.


Disclaimer: The information provided in this blog post is intended for general informational purposes only and should not be considered legal advice. Consult a qualified family law lawyer for advice regarding your specific situation. Goldhart Mediation & Arbitration is not responsible for any actions taken based on the information presented in this blog.
