Guidance From Canadian Law Societies on AI in Legal Practice

I. Introduction

The AI Revolution in Legal Practice

The legal profession stands at a technological inflection point. Since ChatGPT’s dramatic public debut in late 2022, generative artificial intelligence has rapidly shifted from a futuristic concept to an everyday tool transforming how legal services are delivered. Words like ‘seismic’ and ‘revolutionary’ have become commonplace descriptors for AI’s potential impact on legal practice, as noted in the Law Society of Alberta’s Generative AI Playbook.

Regulatory Response Across Canada

This technological transformation has prompted a wave of regulatory response across Canada. As of early 2025, five provincial law societies have issued guidance on AI use: Alberta led early efforts, followed by Manitoba, Saskatchewan, British Columbia, and most recently Ontario, whose Futures Committee released its anticipated white paper in April 2024. This growing body of guidance reflects the profession’s recognition that AI adoption is not merely permissible but increasingly essential, while acknowledging the significant ethical challenges these tools present.

This discussion is intended only as a high-level consideration of certain aspects of the guidance documents published by the Canadian law societies referred to above. It does not purport to be exhaustive, nor does it establish new professional standards. Readers should consult the full text of each law society statement—and any later updates—to ensure they are acting on the most current and complete information applicable to their jurisdiction and practice area.

Portions of this discussion were prepared with the assistance of generative artificial-intelligence tools and have been reviewed and verified by the author.

Balancing Innovation and Professional Values

The regulatory frameworks emerging across Canada’s law societies share common themes while reflecting unique jurisdictional approaches. The Law Society of Ontario’s guidance, for instance, arrives with practical companion documents including a quick-start checklist, best practice tips, and a professional obligations summary – pragmatic tools designed to bridge the gap between abstract ethical principles and daily implementation.

These regulatory efforts reflect a delicate balancing act. On one hand, Canada’s law societies explicitly acknowledge lawyers’ duty of technological competence, with Alberta’s AI Playbook citing the Code of Conduct requirement that lawyers “develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice.” On the other hand, the guidance emphasizes that competent practice requires understanding risks as well as benefits associated with technology.

The regulatory approaches across Canadian jurisdictions are neither unduly restrictive nor naively permissive. Instead, they establish frameworks that encourage innovation while prioritizing core professional values – frameworks that will inevitably evolve as AI capabilities advance and legal practice continues to transform.

For today’s practitioners, these guidance documents serve as essential roadmaps for navigating an increasingly AI-augmented profession. They provide both permission to embrace these powerful tools and guardrails to ensure that their implementation upholds the fundamental ethical obligations that define the legal profession.

II. Understanding AI Technologies in Legal Context

Key Terminology and Definitions

The proliferation of AI terminology has created considerable confusion within the profession. Law Society guidance documents have responded by establishing clear definitions to ensure licensees operate with a common understanding of these technologies.

Artificial Intelligence represents the broadest category, encompassing, in the words of Alberta’s Generative AI Playbook, “any machine-based system that can make predictions, recommendations or decisions influencing real or virtual environments for a given set of human-defined objectives.” As the Law Society of Alberta notes, lawyers have actually been using AI for decades through familiar tools like spam filters, spell-checkers, and search algorithms in electronic research databases.

Large Language Models (LLMs) represent a specific subset of AI focused on language processing and text generation. These sophisticated programs are trained on vast datasets of books, articles, and internet content to understand language patterns and semantics. They generate responses based on statistical probability—predicting which word combinations are most likely to follow a given prompt. This statistical foundation is crucial for understanding their limitations.
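To make the statistical idea concrete, the toy sketch below builds a word-frequency table from a tiny corpus and “predicts” the next word by picking the most common successor. It is a deliberately minimal illustration of next-word prediction, not a depiction of how production LLMs (which use neural networks trained on vast datasets) actually work; the corpus and function names are invented for this example.

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in a tiny corpus,
# then "predict" by choosing the statistically most likely successor.
corpus = ("the court held that the contract was void "
          "because the contract was unsigned").split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word`, or a marker if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))       # -> "contract" (follows "the" twice, "court" once)
print(predict_next("contract"))  # -> "was"
```

Real models do the same thing at vastly greater scale and sophistication, which is why their outputs are fluent and probable-sounding rather than guaranteed to be true.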

Generative AI, as defined by the Law Society of Ontario, creates “new content (text, code or other media, such as music, art or photos) using generative models.” Unlike earlier AI forms that merely reproduced existing material, generative AI synthesizes original-seeming content from informational prompts. This creative capability powers its most transformative applications in legal practice, from drafting to research.

Consumer vs. Enterprise AI Solutions

It is equally important for practitioners to distinguish between specific products and their creators. Tools like ChatGPT, Claude, Grok, and Google’s Gemini are consumer-facing products from OpenAI, Anthropic, xAI, and Google, respectively. This landscape is rapidly evolving, with new offerings emerging constantly, including legal profession-specific models from companies like LawDroid, Harvey.AI, and established legal research providers like LexisNexis.

There is a crucial distinction between public consumer tools and private or enterprise solutions. Public tools like the free version of ChatGPT were not built with confidentiality in mind and may use inputs to further train their systems. Enterprise solutions, in contrast, often provide stronger privacy protections and sometimes allow organizations to train models on their own internal documents. Several larger law firms have already developed in-house AI tools specifically for legal research by their lawyers and staff.

Understanding these distinctions enables practitioners to match appropriate technologies to specific tasks. Consumer-grade tools might be suitable for general research or drafting non-confidential materials, while sensitive client matters require enterprise-grade solutions with appropriate safeguards. Technology selection should be informed by a clear understanding of each tool’s capabilities, limitations, and terms of use.

AI as Co-Pilot, Not Autopilot

Beyond technology definitions, law societies have identified specific legal applications where AI can enhance practice. These include document generation, legal research, contract analysis, client relationship management, and administrative efficiencies. In each domain, guidance emphasizes that AI should complement rather than replace professional judgment—serving as a powerful assistant rather than an autonomous advisor.

As Thompson Rivers University professors Jon Festinger, KC, and Robert Diab note in the Law Society of BC’s LawCast podcast, AI should be seen as a “co-pilot” rather than an “autopilot” in legal practice. Rather than viewing AI as a replacement for lawyers, the better framing is to understand it as a tool or assistant that works alongside human legal professionals. Both academics emphasized that in areas requiring professional judgment, human oversight remains essential.

III. Core Professional Obligations When Using AI

Competence: Understanding AI’s Capabilities and Limitations

The integration of AI into legal practice does not alter the fundamental professional obligations that bind all licensees, but it does create new contexts in which these obligations must be fulfilled. Law society guidance documents have identified five core professional duties that intersect with AI use.

The duty of competence stands as the foundation for responsible AI adoption. Section 3.1 of Ontario’s Rules of Professional Conduct and similar provisions across other provinces establish that lawyers must provide competent legal services. In the AI context, this means practitioners must understand the capabilities and limitations of any technology they employ. As the LSO’s Professional Obligations guide reflects, this understanding extends beyond basic operation to encompass awareness of key risks such as AI hallucinations—instances where systems fabricate information when lacking sufficient data. This was an issue of concern in the Ontario case of Ko v. Li, 2025 ONSC 2766 – further details regarding the AI highlights of the case can be found in my article here. Practitioners must implement verification processes to ensure generated content meets professional standards, particularly for jurisdiction-specific matters requiring specialized knowledge.

The Law Society of Manitoba specifically warns that lawyers “must apply their independent and trained judgment when acting for clients. Professional judgment cannot be delegated to generative AI and remains your responsibility at all times.” This sentiment is reflected in other law societies’ guidance and emphasizes that technology should enhance, not replace, professional judgment.

Confidentiality: Protecting Client Information

Equally fundamental is the duty of confidentiality. Section 3.3 of Ontario’s Rules of Professional Conduct mandates “strict confidence” for all client information, a standard echoed across provincial codes of conduct. The Ontario White Paper highlights the Samsung incident, where engineers inadvertently exposed proprietary code by pasting it into ChatGPT, as a cautionary tale of how easily confidentiality can be compromised through AI tools. Law societies accordingly advise practitioners to exercise extreme caution with client information, to anonymize inputs where possible, and to understand how data provided to AI systems may be retained and used.

The BC Law Society guidance recommends that “if redacting the data is not possible, then you could explore whether client consent to use the tool with such information is viable. Any consent obtained from the client must be fully informed and voluntary consent after disclosure in writing or orally with a written record of the communication.” This highlights the seriousness with which confidentiality risks should be treated.

Supervision and Billing Considerations

Supervision and delegation responsibilities extend to AI implementation. Just as lawyers remain responsible for work delegated to human assistants, they maintain accountability for AI-generated content. The LSO’s Professional Obligations guidance explicitly draws this parallel, noting that “using generative AI tools is akin to receiving assistance from a non-licensee employee.” Practitioners must provide clear guidelines to staff regarding appropriate AI use, implement verification processes, and identify tasks that require human judgment rather than algorithmic assistance.

Billing practices present novel ethical considerations in the AI era. The law societies emphasize that charges must remain fair, reasonable, and transparent regardless of the technology employed. The LSO’s Professional Obligations guidance specifically addresses whether AI costs can be passed to clients as disbursements, requiring that such charges be fair, disclosed in a timely fashion, and billed at actual rather than estimated cost. Saskatchewan’s guidance indicates that lawyers may enter into alternative fee arrangements that account for AI-enhanced efficiency while maintaining fairness to clients.

Court Disclosure Requirements

Finally, practitioners face evolving obligations regarding court disclosure. The Federal Court of Canada now requires litigants to disclose in writing if they used AI to create or generate content in court filings, with this disclosure appearing in the first paragraph of such documents. While most provincial courts have not yet issued similar directives, the Law Society of Manitoba notes that The Court of King’s Bench Practice Direction Re: Use Of Artificial Intelligence In Court Submissions requires that when artificial intelligence has been used in the preparation of materials filed with the court, the “materials must indicate how artificial intelligence was used.” The law societies advise practitioners to stay informed about emerging requirements. More fundamentally, the obligation not to mislead courts or tribunals requires thorough verification of all AI-generated legal references, as demonstrated by the Colorado case where a lawyer was suspended after submitting AI-fabricated case citations without verification. A similar sentiment was echoed in the previously referenced Ontario case of Ko v. Li.

Across all these obligations, there is the consistent principle that technology may change the tools of practice, but not the core ethical responsibilities that define the profession. AI should enhance rather than diminish a practitioner’s ability to fulfill these fundamental duties.

IV. Key Risks Identified by Law Societies

Confidentiality and Security Vulnerabilities

In their guidance documents, Canadian law societies have clearly articulated several significant risks associated with generative AI use in legal practice. These risks require careful management to maintain professional standards and protect client interests.

Confidentiality and security concerns are at the forefront of identified risks. As the Law Society of Alberta’s AI Playbook emphatically states, “The risk of inadvertent disclosure of confidential client or proprietary information cannot be overstated.” Public generative AI platforms were not designed with legal confidentiality standards in mind. Information provided in prompts may be retained by the AI provider, used to further train its systems, and potentially exposed to third parties. The previously noted Samsung example, cited in the LSO White Paper, illustrates how quickly proprietary information can be compromised: once the engineers uploaded source code to ChatGPT, there was no way to retrieve or delete the compromised data. This risk extends to all client information, including documents uploaded for refinement or analysis.

The Problem of “Hallucinations”

AI hallucinations and unreliable research present perhaps the most insidious risk to legal accuracy. Generative AI’s tendency to fabricate information is not a flaw but a feature of its design. As Alberta’s guidance explains, these tools “are not tied to a foundation of truth or reality and are designed to provide creative responses to queries.” This can result in fabricated case names, citations, legal principles, or factual assertions that appear authoritative but are entirely fictional. This creates significant risk for practitioners relying on AI for legal research without rigorous verification.

In the Law Society of BC’s podcast, Professor Jon Festinger vividly captures this limitation by describing AI as “an irresponsible 14-year-old that you’re asking questions to and will sometimes tell you what it thinks you want to hear and sometimes will just make stuff up and sometimes will run away from you and sometimes will run towards you.” This analogy powerfully illustrates why human verification remains essential.

The “Black Box” Problem and Bias Risks

The “black box” problem compounds these challenges by making it difficult to assess how AI generates its outputs. The Law Society of Alberta’s AI Playbook notes that “because Gen AI operates as a black box, it is difficult to assess the validity of the inputs it relies on or trace how the system produces its outputs.” This opacity creates accountability challenges and makes identifying potential errors more difficult. Without understanding the reasoning process, practitioners may struggle to evaluate the reliability of AI-generated content. That said, some AI providers are developing features that let users see, to some extent, the model’s “reasoning” as it arrives at a particular response to a query.

Bias in AI outputs represents another significant concern. Since AI models are trained on internet data, they may perpetuate existing societal biases. Alberta’s AI Playbook cites a Bloomberg study finding that AI image generators produced images of high-paying jobs dominated by lighter-skinned subjects, while darker-skinned subjects appeared more frequently for lower-paying occupations. Similar gender biases were identified. These biases can undermine fair representation and potentially violate human rights legislation if uncritically incorporated into legal work.

Copyright and Knowledge Limitations

Copyright infringement risks arise from how generative AI tools are trained. As the Alberta AI Playbook explains, “While it may seem like Gen AI tools create new material from independent thought processes, that is not how they function.” These systems are trained on internet-scraped data that may include copyright-protected materials. When AI generates outputs resembling these protected works, practitioners using this content may inadvertently infringe copyrights. The guidance notes ongoing legal uncertainty about who owns AI-generated content and advises caution.

Knowledge limitations create reliability issues for time-sensitive matters. Generally, AI models have specific knowledge cutoff dates. For example, as of this writing, ChatGPT’s models have been updated to include information through June 2024. Regardless of these improvements, AI systems generally have cutoff dates beyond which they lack awareness of recent legal developments, potentially leading to outdated or inaccurate advice on current laws, regulations, and precedents. To mitigate this limitation, many AI models offer varying abilities to access the Internet, which users can employ to retrieve more current information.

Beyond these technical challenges, there are also risks to the attorney-client relationship. The LSO cautions in its White Paper that if generative AI is used to interact directly with clients (such as through chatbots), it may inadvertently provide unauthorized legal advice or create misunderstandings. Accordingly, certain core aspects of the client relationship cannot be delegated to technology and require direct licensee involvement and professional judgment.

The identification of these risks is not intended to discourage AI adoption but rather to ensure its responsible implementation. As the Law Society of Ontario notes in its Quick-Start Checklist, generative AI offers “immense opportunities” for legal practitioners, but realizing these benefits requires a clear-eyed understanding of the accompanying challenges.

V. Recommended Risk Management Approaches

Human Verification: The Cornerstone of Responsible AI Use

Canadian law societies have moved beyond merely identifying AI risks to providing concrete management strategies that enable practitioners to harness these tools responsibly. Their recommendations offer a pragmatic framework for mitigating key concerns while leveraging AI’s benefits.

Human verification processes stand as the cornerstone of responsible AI use. The guidance provided by the law societies emphasizes that AI-generated content must undergo rigorous human review before it is relied upon or delivered to clients. As the LSO’s Quick-Start Checklist advises, practitioners should “integrate a system or process of human verification to review AI-generated results and ensure their accuracy and reliability.” This verification should not be merely cursory; it should include independent research to confirm the validity of any legal citations, principles, or factual assertions. Practitioners should not rely on AI to judge its own accuracy, and should identify domains where human judgment is essential and exclude these from AI delegation entirely.
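As one way to operationalize such verification, the hedged sketch below extracts neutral-citation-style references (e.g., “2025 ONSC 2766”) from an AI-generated draft so that each can be independently checked against CanLII or another recognized source before filing. The regular expression and helper name are illustrative assumptions, not a pattern prescribed by any law society, and real citation formats are far more varied.

```python
import re

# Neutral citations in Canada follow a "YEAR COURT NUMBER" pattern, e.g. 2025 ONSC 2766.
# This single pattern is an illustrative assumption; real citation formats vary widely
# and a production tool would need far broader coverage (e.g., SCR, provincial reporters).
NEUTRAL_CITATION = re.compile(r"\b(?:19|20)\d{2}\s+[A-Z]{2,7}\s+\d{1,6}\b")

def citations_to_verify(ai_draft: str) -> list[str]:
    """Extract citation-like strings from an AI draft for independent human checking."""
    return [m.group(0) for m in NEUTRAL_CITATION.finditer(ai_draft)]

draft = ("In Ko v. Li, 2025 ONSC 2766, the court addressed fabricated authorities; "
         "compare Cass v. 1410088 Ontario Inc., 2019 ONSC 6959.")
for cite in citations_to_verify(draft):
    print(f"VERIFY against CanLII or another recognized source: {cite}")
```

A tool like this only surfaces candidates for review; the actual verification remains a human task.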

Data Safeguards and Documentation

Data safeguards and anonymization techniques provide critical protection for client confidentiality. Alberta’s AI Playbook advises: “Never include confidential or potentially identifying information in prompts” and “Use only non-identifiable information in prompts.” The LSO Quick-Start Checklist suggests establishing “additional protocols to protect confidential client information from inadvertent disclosure”. When working with sensitive client matters, practitioners should redact or anonymize all identifying details and consider whether the use of public AI tools is appropriate at all.
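A minimal sketch of such an anonymization step, run before any text is sent to a public AI tool, might look like the following. The client names, placeholder tokens, and dictionary-based approach are purely illustrative assumptions; genuine anonymization must handle far more (addresses, file numbers, dates, indirect identifiers).

```python
# A minimal redaction pass run before any text goes to a public AI tool.
# The names and placeholder tokens below are hypothetical; real anonymization
# must also catch addresses, file numbers, dates of birth, and indirect
# identifiers - and for truly sensitive matters, no public tool may be suitable.
REDACTIONS = {
    "Jane Example": "[CLIENT A]",
    "Acme Holdings Ltd.": "[OPPOSING PARTY]",
}

def redact(text: str) -> str:
    """Replace each known identifying string with a neutral placeholder."""
    for identifying, placeholder in REDACTIONS.items():
        text = text.replace(identifying, placeholder)
    return text

prompt = "Summarize the support issues raised by Jane Example against Acme Holdings Ltd."
print(redact(prompt))
# -> Summarize the support issues raised by [CLIENT A] against [OPPOSING PARTY].
```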

Audit trails and documentation create accountability and demonstrate due diligence. The LSO Quick-Start Checklist recommends that practitioners “establish a systematic process for recording all prompts and inputs you or your employees provide to the AI tool.” This documentation serves multiple purposes: it enables quality control, creates evidence of proper AI use in case of disputes, and helps identify patterns of effective or problematic interactions with AI systems. Maintaining records of verification steps demonstrates the practitioner’s commitment to accuracy and can be invaluable if AI-generated content is later questioned.
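For instance, a simple prompt-logging wrapper along the following lines could implement such an audit trail. The file name, field names, and JSON-lines format are assumptions made for illustration, not any law society’s specification.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_prompt_log.jsonl"  # hypothetical file name, one JSON record per line

def log_interaction(user: str, tool: str, prompt: str, output: str,
                    verified_by: str | None = None) -> None:
    """Append one AI interaction to the audit file, noting who (if anyone) verified it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "verified_by": verified_by,  # stays None until human review is complete
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("jsmith", "ChatGPT", "Draft a retainer letter outline",
                "1. Scope of engagement...", verified_by="jsmith")
```

Recording the verifier alongside each interaction ties the audit trail directly to the human-review obligation discussed above.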

Client Communication and Staff Training

Client communication protocols govern when and how to inform clients about AI use. While not mandating universal disclosure, the LSO White Paper identifies factors to consider when deciding whether to communicate AI use to clients. These include whether the use will be disclosed publicly (such as in court filings), whether clients would reasonably expect the material to be prepared by a human practitioner, whether client information will be input into AI systems, and whether AI use creates reputational or other risks for the client. Practitioners should develop consistent approaches to client communication that respect transparency while avoiding unnecessarily technical explanations.

Staff training and policy development ensure consistent, organization-wide AI governance. The LSO “8 Best Practice Tips” specifically recommends that firms with employees consider developing firm policies about the appropriate use of AI systems. These policies should establish clear boundaries regarding permissible AI use cases, provide guidance on prompt creation, specify verification requirements, and address confidentiality concerns. Ontario’s Quick-Start Checklist suggests offering “continuous training to AI users to ensure they utilize the tool in a manner consistent with your legal and professional obligations” and gathering regular feedback on tool performance and improvement opportunities.

Due Diligence and Transparent Billing

Vendor due diligence helps practitioners select appropriate tools and understand associated risks. As noted in the LSO Quick-Start Checklist, before using AI practitioners should perform due diligence and assess an AI vendor’s “experience, reputation, reliability, financial stability, and compliance with legal standards including data security and privacy laws”. Review of the terms of service is also particularly important, as these agreements govern data handling practices and may contain provisions conflicting with professional obligations. For example, some public AI tools explicitly state they are not intended for professional advice, potentially creating tension with legal practice use.

Billing transparency ensures clients understand how AI affects service costs. The LSO Quick-Start Checklist advises practitioners to “decide whether to pass on charges related to AI usage to clients” and if so, ensure the fees charged are “fair, reasonable, and promptly disclosed.” Regardless of the billing approach chosen, practitioners should clearly document and explain AI-related charges to avoid client confusion or disputes.

By implementing these risk management strategies, practitioners can create a framework that allows them to leverage AI’s capabilities while maintaining professional standards. As the LSO Quick-Start Checklist notes, these approaches enable practitioners to “effectively manage potential risks, protect [their] clients’ interests, and ensure the responsible integration of generative AI” in legal practice.

VI. Practical Implementation Guidelines

Effective Prompt Engineering

Moving beyond theoretical risk management, Canadian law societies offer practical guidance for the day-to-day implementation of AI in legal practice. These recommendations provide a roadmap for practitioners seeking to operationalize AI tools while maintaining professional standards.

Effective prompt engineering emerges as a critical skill for maximizing AI utility. The LSO’s “8 Best Practice Tips” specifically highlights that “crafting successful prompts can significantly impact the quality and relevance of AI-generated responses.” The guidance recommends practitioners consider using frameworks like the CLEAR approach (Concise, Logical, Explicit, Adaptive, and Reflective) to structure more effective instructions; from a legal perspective, a useful variation is Context, Legal task/objective, Explicit output, Audience, and Refine. Practitioners should learn to write prompts that clearly specify the desired format, tone, and structure of responses, include relevant context and constraints, and provide explicit instructions regarding legal standards or jurisdictional considerations. Well-crafted prompts substantially improve output quality and reduce hallucination risks.
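To illustrate, the sketch below turns the legal-flavoured CLEAR variation (Context, Legal task/objective, Explicit output, Audience, Refine) into a reusable prompt template; the field names and sample values are illustrative assumptions only.

```python
# A reusable template built around the legal-flavoured CLEAR variation noted above
# (Context, Legal task/objective, Explicit output, Audience, Refine).
# Field names and sample values are illustrative assumptions only.
CLEAR_TEMPLATE = """\
Context: {context}
Legal task: {task}
Explicit output: {output_format}
Audience: {audience}
Refine: {constraints}"""

prompt = CLEAR_TEMPLATE.format(
    context="Ontario family law matter; parties are negotiating a separation agreement.",
    task="Outline the main spousal support considerations in neutral terms.",
    output_format="Three short paragraphs in plain language; no case citations.",
    audience="A client with no legal background.",
    constraints="Flag anything requiring lawyer review; do not give definitive advice.",
)
print(prompt)
```

Keeping the structure in a template rather than retyping it each time also makes prompts easier to log, review, and refine over time.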

Terms of Service and Appropriate Use Cases

Terms of service review must precede any AI implementation, particularly because it is essential to understand how vendors will use information provided to their systems. Alberta’s AI Playbook specifically notes that OpenAI’s Usage Policies prohibit “engaging in the unauthorized practice of law or offering tailored legal advice without a qualified person reviewing the information.” Similar restrictions exist in other platforms’ terms. Practitioners must understand these limitations and ensure their use cases comply with vendor requirements while meeting professional obligations. This review should extend to data retention policies, as many public AI tools retain user inputs for training purposes.

Appropriate use case selection helps practitioners match AI capabilities to suitable tasks. There are a variety of areas where AI can enhance practice, including document generation (agendas, memos, contracts), summarizing legal documents, brainstorming for trial preparation, and administrative tasks like client intake. As a starting point, it may be advisable to begin with lower-risk applications before progressing to more sensitive contexts. Practitioners should carefully consider whether particular matters contain highly confidential information better excluded from AI tools, or whether certain complex legal questions require traditional research methods rather than AI assistance.

Understanding Limitations and Phased Implementation

Understanding AI limitations is essential for responsible implementation. AI systems generally have specific knowledge cutoff dates, limited understanding of regional or specialized legal concepts, and varying capabilities across different tasks. The LSO Quick-Start Checklist recommends practitioners “conduct thorough research and experiment with the AI tool to gain a comprehensive understanding of its capabilities and limitations.” This experimentation should include testing how the system handles prompts related to the practitioner’s specific practice area to identify strengths and weaknesses.

The Law Society of BC’s podcast with law professors Jon Festinger and Robert Diab offers particularly valuable insights on how to test AI tool limitations. Professor Diab describes testing a legal-focused AI tool with specific legal questions, finding it gave excellent answers to some queries while providing only “so-so” responses to others. He emphasizes that “your effectiveness with these tools is really going to depend on how well you already have internalized the area of law you’re working with,” suggesting that AI may be most helpful to those who already have substantial knowledge in a field rather than those seeking to compensate for knowledge gaps.

Implementation phasing allows for controlled adoption. Rather than immediate organization-wide deployment, consider a measured approach: start with low-risk, non-client-facing applications, then gradually expand to more complex use cases as experience and confidence grow. This phased approach enables practitioners to develop verification protocols, identify potential issues, and refine AI integration before applying these tools to sensitive client matters.

Staff Education and Quality Control

Staff education ensures consistent, responsible AI use across organizations. The LSO’s Professional Obligations guide recommends “providing relevant training to employees on the use of any generative AI technology, including its limitations, potential biases, and ethical pitfalls.” This education should cover not only technical operation but also ethical considerations, verification requirements, the importance of maintaining human judgment in client service, and the need to protect confidentiality in all AI interactions.

Regular review processes maintain quality control. The LSO Professional Obligations guide suggests “regularly reviewing AI-generated content” and implementing processes to “verify accuracy and compliance with firm policies and professional obligations.” This ongoing monitoring helps identify emerging issues, refine AI use protocols, and ensure consistent application of professional standards. Reviews should examine both the AI outputs themselves and the verification processes to ensure continued efficacy.

Cybersecurity integration recognizes that AI tools create new potential vulnerabilities. The LSO Quick-Start Checklist advises practitioners to “determine what security measures the vendor has in place to protect the AI tool from unauthorized access” and establish appropriate safeguards for sensitive information. This may include access controls limiting which staff can use AI tools for particular purposes, encryption for sensitive communications, and security measures protecting the integrity of AI-human workflows.

Staying current on AI developments ensures continuing compliance. Law societies acknowledge the rapidly evolving nature of AI technology and regulatory responses. The LSO’s “8 Best Practice Tips” emphasizes that “AI is not the future. It is here and now. It is also evolving rapidly.” The guidance recommends practitioners join online communities, follow technology experts, attend conferences, or subscribe to newsletters to maintain awareness of emerging capabilities, limitations, and best practices.

These practical implementation guidelines provide a framework for translating abstract principles into concrete workflows. By following this structured approach, practitioners can integrate AI tools in ways that enhance their practice while safeguarding professional standards and client interests.

VII. Emerging Standards and Court Directives

Federal Court Requirements

The regulatory landscape surrounding AI in legal practice continues to evolve, with courts and legislatures increasingly responding to the technology’s growing presence. Law societies are advising practitioners to remain vigilant about these developments, which may significantly impact how they incorporate AI into their work.

The Federal Court of Canada has established the most explicit requirements regarding AI use in legal proceedings. On December 20, 2023, the Court issued two significant documents: “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence” and “Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings.” As the LSO’s Professional Obligations guide details, the Federal Court “requires litigants to inform the court and other parties if they have used AI to create or generate new content in preparing a document filed with the court.” This disclosure must appear “in writing in the first paragraph of each such document submitted.” The Federal Court also urges caution when submitting documents that contain legal references or analytics that were generated by AI, emphasizing the importance of using only “well-recognized and reliable sources.”

Provincial Court Developments

Provincial courts have been slower to issue formal directives, but this is changing. The Alberta Court of King’s Bench and Court of Appeal issued notices in 2023 regarding AI use in court proceedings, though these are less prescriptive than the Federal Court requirements. Manitoba’s guidance specifically references the “Court of King’s Bench Practice Direction Re: Use Of Artificial Intelligence In Court Submissions”, which requires that “when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.” It is anticipated that provincial courts across Canada will develop their own AI policies as the technology becomes more prevalent in litigation, creating a patchwork of requirements practitioners must navigate.

Emerging Case Law on AI Use

Emerging case law is beginning to shape judicial attitudes toward AI use. The LSO’s Professional Obligations guide references several notable decisions, including Floryan v. Luke et al. (2023 ONSC 5108), Cass v. 1410088 Ontario Inc. (2019 ONSC 6959), and Drummond v. The Cadillac Fairview Corp. Ltd. (2018 ONSC 5350). Though these cases predate widespread AI adoption, they take judicial notice of the use of AI. Alberta’s AI Playbook cites the cautionary tale of a Colorado lawyer (People v. Zachariah C. Crabill, 23PDJ067) who was suspended for a year and a day for submitting AI-generated legal citations without verification and then falsely attributing the errors to a legal intern. These cases signal judicial skepticism toward unverified AI-generated content. In Ontario, Ko v. Li, 2025 ONSC 2766 illustrates the professional obligations engaged by the use of AI-generated content and the potential consequences of failing to verify AI responses, which can include citations that are either non-existent or contrary to the propositions for which they are submitted.

AI limitations were also noted in the Law Society of BC’s podcast, where Professor Jon Festinger mentions an Air Canada case in which “Air Canada’s chatbot gave some advice, the chatbot being AI, to an Air Canada customer about an Air Canada policy and the chatbot was flat out wrong on the policy.” When the customer pursued compensation based on the chatbot’s information, the British Columbia Civil Resolution Tribunal ruled that Air Canada could not disavow the mistake of its own chatbot. This establishes an important principle: organizations remain accountable for AI-generated information provided to clients or customers.

Legislation and International Influences

Proposed legislation may significantly impact AI use in legal contexts. Law societies specifically highlight Canada’s proposed Bill C-27 – Artificial Intelligence and Data Act (AIDA) as legislation practitioners should monitor. The bill died on the Order Paper when Parliament was prorogued in early 2025; although it did not pass under the prior Trudeau government, it may be revived by the Carney government in the same or similar form. As Alberta’s AI Playbook notes, AIDA “would set the foundation for the responsible design, development and deployment of AI systems that impact Canadians” and “establish national requirements for the design, development, use, and provision of AI systems.” This proposed legislation, if eventually passed, could create new compliance obligations for practitioners using AI tools, particularly regarding transparency, explainability, and bias mitigation.

International developments are influencing Canadian approaches. While not explicitly referenced in all guidance documents, law societies acknowledge that AI regulation is a global concern. The European Union’s AI Act, the most comprehensive AI regulatory framework to date, may influence Canadian standards through its classification of AI systems by risk level and imposition of corresponding obligations. Practitioners serving international clients should be particularly attentive to these cross-border requirements.

Evolving Professional Standards

Professional standards are gradually emerging through law society guidance. The LSO White Paper acknowledges that generative AI is a rapidly evolving area and invites feedback from licensees on the paper and their experiences with the technology. This collaborative approach suggests that standards will evolve through dialogue between regulators and practitioners rather than through rigid pronouncements.

Disclosure norms are developing even where not formally required. Alberta’s AI Playbook recommends practitioners “disclose the use of Gen AI any time it is relied upon” as a best practice for addressing copyright concerns. This suggestion goes beyond current court requirements and indicates a trend toward greater transparency about AI use, even in contexts where disclosure is not mandatory. It is anticipated that clients and courts will increasingly expect to be informed when AI has played a significant role in document preparation or legal analysis.

Varying approaches across jurisdictions create complexity for multi-province practitioners. While several Canadian law societies have reached consensus on certain fundamental principles, their specific guidance differs in emphasis and detail. Practitioners operating across provincial boundaries need to reconcile these varying standards, generally adhering to the most stringent requirements applicable to their practice. The emergence of national standards, whether through proposed legislation such as AIDA or through coordination among law societies, would simplify compliance for practitioners working across multiple jurisdictions.

As this landscape continues to evolve, law societies emphasize the importance of staying informed. The LSO’s Quick-Start Checklist specifically advises practitioners to “keep up with the latest developments in AI to ensure compliance with evolving legal regulations, ethical responsibilities, guidelines, and standards.” This ongoing vigilance is essential for maintaining compliance in a rapidly changing regulatory environment.

VIII. Conclusion and Future Outlook

A Balanced Approach to AI Adoption

The integration of generative AI into legal practice represents both a significant opportunity and a complex challenge for the profession. As this analysis of Canadian law society guidance documents has demonstrated, regulatory bodies are taking a balanced approach: neither resisting technological change nor abandoning core professional values in its pursuit.

The current guidance from law societies establishes a foundation for responsible AI adoption. By emphasizing enduring professional obligations—competence, confidentiality, supervision, billing transparency, and candor toward courts/tribunals—regulators have created a framework that can adapt to evolving technology while preserving the essential character of legal practice. This approach acknowledges that while tools may change, the fundamental ethical responsibilities of legal professionals remain constant.

At the same time, law societies have provided increasingly specific and practical guidance for addressing novel challenges presented by generative AI. From data security protocols to verification processes, from prompt engineering techniques to client communication strategies, these resources offer concrete pathways for practitioners to implement AI responsibly. The Law Society of Ontario’s comprehensive suite of implementation tools exemplifies this practical approach, providing licensees with checklists, best practices, and clear explanations of professional obligations in the AI context.

Future Trends in Legal AI Regulation

Looking forward, several trends are likely to shape the continued evolution of AI in Canadian legal practice:

First, we can expect increased regulatory specificity as technology capabilities and applications mature. The current guidance documents represent initial frameworks that will inevitably be refined through experience, feedback, and emerging challenges. Law societies have explicitly acknowledged this iterative process, with the LSO inviting feedback from licensees on the White Paper.

Second, court engagement with AI-generated content will continue to grow more sophisticated. The Federal Court’s disclosure requirements likely foreshadow similar approaches from provincial courts. As judges encounter more AI-generated submissions, we can expect more nuanced jurisprudence addressing acceptable AI use in advocacy, with potential sanctions for insufficient verification or disclosure. Practice directions and rules will likely become more detailed regarding AI use in litigation.

Third, specialized AI tools tailored to Canadian legal practice will proliferate. Current guidance frequently references general-purpose AI platforms like ChatGPT, but as the Alberta AI Playbook notes, companies like “LawDroid, Rally, Harvey.AI and LexisNexis are also building legal profession-specific models.” These specialized tools may offer enhanced accuracy for Canadian legal research, improved confidentiality protections, and better integration with practice management systems. Law societies will need to assess whether these specialized tools warrant different regulatory approaches than general-purpose AI.

Fourth, client expectations regarding AI disclosure and use will continue to evolve. While current guidance leaves considerable discretion regarding client communication about AI use, market forces may drive greater transparency. As clients become more sophisticated about AI capabilities and limitations, practitioners may find that proactive disclosure of AI use becomes a competitive advantage rather than merely a regulatory consideration.

Fifth, the role of competence obligations will expand as AI becomes more integrated into practice. The baseline expectation that practitioners understand “technology relevant to the nature and area of the lawyer’s practice”, as per Alberta’s AI Playbook, will increasingly encompass AI literacy. This may eventually influence continuing professional development requirements, with law societies potentially mandating technology-focused education similar to ethics requirements.

The Human Element Remains Essential

Throughout these developments, the principle of “responsible innovation” will remain central. This recognizes that technological advancement need not come at the expense of professional values—indeed, when implemented thoughtfully, AI can enhance practitioners’ ability to fulfill their core obligations to clients and the administration of justice.

The guidance documents examined here represent not an endpoint but the beginning of an ongoing conversation about technology’s role in legal practice. By establishing clear principles while remaining adaptable to technological change, Canadian law societies have positioned the profession to harness AI’s benefits while preserving the human judgment, ethical commitment, and professional responsibility that define the practice of law.

As professors Festinger and Diab emphasized in the BC Law Society podcast, AI should be viewed not as a replacement for lawyers but as a tool that may become integral to practice. Professor Diab noted, “I think it’s just something that we, you know we have to grapple with and to try to be on top of as soon as we can. I think that generative AI marks a clear break in the development of you could say the digital revolution. I mean I think this is really on the scale of something like the advent of the browser or the web, you know it is a, it is a change of that magnitude you know.” While AI may radically transform how legal work is done, it cannot replace the fundamentally human aspects of legal judgment and client service.

Disclosure: Artificial intelligence was used to generate content in this document.

