My speaking notes for the IAPP 2026 panel on “AI and Privacy Regulation: Global Trends, Developments, and Impacts on Innovation”
By Eloïse Gratton, Partner, National Co-Chair, Privacy and Data Management, Osler, Hoskin & Harcourt
I recently had the pleasure of participating in a panel at the IAPP Canadian Privacy Symposium 2026 alongside David Fraser (McInnes Cooper) and Brian Kennedy (Senior Privacy Counsel, OpenAI). Our discussion explored the rapidly evolving intersection of AI and privacy regulation, including how enforcement trends, legal uncertainties, and practical realities are shaping the landscape for organizations operating in Canada and beyond. Below is a summary of my speaking notes for this panel.
1. Privacy Regulators as De Facto AI Regulators
- In practice, AI-related matters almost always land first on the desks of privacy lawyers. From there, colleagues in commercial, competition, IP, and other practice areas are brought in, but privacy lawyers typically remain the central coordinators. A similar dynamic is playing out at the regulatory level. In the absence of comprehensive, AI-specific legislation in Canada or a dedicated AI regulator, privacy commissioners are stepping in to fill a governance gap.
- Since AI is fundamentally a data-driven technology, privacy regulators have a natural and legitimate foothold. However, this “default regulator” role creates some tension: Privacy laws are designed to be technology-neutral and were not built with AI (especially generative AI) in mind. Regulators are assessing AI practices through frameworks originally intended to protect personal information, not to govern complex AI systems.
- Even recently modernized privacy regimes are showing strain. In Quebec, for example, the updates introduced by Law 25 were significant, but gaps remain when you try to apply them to AI. There is also a risk of well-intentioned regulatory overreach.
- Privacy regulators are increasingly issuing guidance on AI (such as the December 2023 principles on responsible and trustworthy generative AI). While framed as non-binding, these documents suggest that many of the expectations may, in practice, be necessary to comply with existing privacy laws, blurring the line between guidance and de facto regulation.
2. How Regulators Are Interpreting Key Privacy Concepts in the AI Context
- Core concepts like “collection,” “use,” “consent,” “purpose limitation,” and the scope of the “publicly available information” exception are all being stress-tested in the context of AI. One of the central questions across global investigations is whether consent is required to train AI models on publicly accessible data: A strict and conservative interpretation may say yes, but there are credible counterarguments, and obtaining express consent at scale is often impracticable.
- There is a long-standing recognition that individuals have diminished expectations of privacy in content they make publicly accessible. In Canada, courts, regulators, and arbitral bodies have been fairly consistent on this point. Canadian courts have also recognized that individuals can reasonably expect publicly available information to be indexed, aggregated, and made discoverable, traditionally by search engines, and now, arguably, by AI systems.
- There is also an important nuance around how the data is used: In many AI contexts, personal information may be filtered, de-identified, or transformed shortly after collection. It is not being used to make decisions about, or target, specific individuals; instead, it is used to train models and understand language patterns.
- Where data is processed with strong technical safeguards and is not used in a way that relates back to identifiable individuals, the privacy impact is materially different and arguably reduced. Rather than a complete shift, what we are seeing is pressure on existing concepts, forcing regulators to interpret them in ways that may feel stricter in some cases, but that are still grounded in established legal principles.
3. International Convergence and Divergence
- In Europe, the picture is more nuanced than it may appear: While the European Data Protection Board (EDPB) and national authorities have taken AI risks seriously, there are signs of pragmatism. Some high-profile investigations into AI training have been narrowed, transferred to lead supervisory authorities under the GDPR’s one-stop-shop mechanism, or remain ongoing without definitive conclusions. In Germany, regional authorities like the Baden-Württemberg Data Protection Authority have indicated that individuals’ reasonable expectations around publicly available data may extend to certain AI training uses, depending on context and safeguards. In France, the CNIL has taken a relatively pragmatic stance, acknowledging that information made publicly accessible is not inherently private and that individuals may reasonably expect some degree of reuse by third parties, though not without limits.
- Outside Europe: In South Korea, the Personal Information Protection Commission (PIPC) has issued guidance recognizing that the use of publicly available data for AI development can be permissible, provided that appropriate safeguards, such as minimization, de-identification, and transparency, are in place.
- The key lesson from abroad is the value of a layered safeguards approach: accepting that some level of data collection, particularly from publicly accessible sources, may be necessary for AI development, while imposing robust technical and organizational protections across the lifecycle.
- Convergence may not come from identical legal interpretations, but from a shared emphasis on risk mitigation and accountability. The question for Canada is whether it will articulate that balance more explicitly, or continue to rely on case-by-case interpretation.
4. Legal Uncertainty and the Risk of Chilling Innovation
- Non-binding guidance and fact-specific enforcement outcomes are increasingly operating as a de facto rulebook. Illustrative examples:
- The Quebec CAI’s Val-des-Cerfs decision treated “de-identified” training data as still being personal information (because it was not irreversibly anonymized) and treated AI-generated indicators as a “new” collection of personal information, triggering necessity and notice requirements.
- The Clearview AI line of matters amplifies this dynamic: a small number of high-profile, highly fact-specific AI files can become the “precedent” that other companies feel they must operationalize, even where their data sources, purposes, technical architectures, and risk profiles are very different.
- This creates uncertainty and can drive over-compliance that chills investment and slows responsible AI adoption. There is a real risk that heightened enforcement pushes developers to exclude Canadian sources or Canadian-user content from training to reduce regulatory exposure, resulting in materially worse products for Canadians: weaker performance on Canadian context, institutions, and bilingual reality. This is in tension with the Government of Canada’s stated direction on AI, which has consistently emphasized AI’s role in improving service delivery and enabling advances across sectors such as health care, agriculture, and scientific discovery.
- The more sustainable path is proportionality: privacy rights are fundamental, but they are not absolute. A risk-based approach that credits robust safeguards can protect individuals while still leaving room for responsible AI development in Canada.
5. Data Minimization and Rethinking Traditional Privacy Concepts
- In Canada, “data minimization” is best understood as a necessity and proportionality test tied to a defined purpose, not a requirement to use the absolute least privacy-invasive dataset in all cases. For LLM training, model capability and safety often depend on the breadth and representativeness of language data, which creates tension with the “collect only what is necessary” standard.
- In practice, minimization usually becomes a set of design choices and governance controls (two of these are sketched in code at the end of this section):
- Dataset scoping that excludes sources predictably high in sensitive personal information or re-identification risk.
- Pre-processing to suppress or mask direct identifiers before training.
- Deduplication and sampling strategies to avoid repeated ingestion of the same personal information.
- Post-training controls that reduce the likelihood the model will output personal information (e.g., prompt filtering and refusal behaviors).
- The traditional privacy concepts of “collection,” “use,” and “consent” are being stress-tested, but they have not collapsed. Rather, we likely need clearer application of the existing concepts and an acknowledgment that not every “collection” or “use” in an AI pipeline has the same privacy impact.
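To make the minimization controls above concrete, here is a minimal, illustrative sketch of two of them: masking direct identifiers before training and deduplicating records. This is an assumption-laden sketch rather than any particular vendor’s pipeline; the regex patterns and function names are placeholders, and production systems typically rely on far more robust PII detection (for example, trained NER classifiers).

```python
import hashlib
import re

# Simplified patterns for direct identifiers (illustrative only; real
# pipelines use more robust detection, e.g., NER-based PII classifiers).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def mask_direct_identifiers(text: str) -> str:
    """Suppress direct identifiers before text enters the training set."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def deduplicate(records):
    """Skip exact duplicates so the same personal information is not
    ingested repeatedly."""
    seen = set()
    for record in records:
        digest = hashlib.sha256(record.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield record

# Usage: cleaned = [mask_direct_identifiers(r) for r in deduplicate(raw_docs)]
```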
6. Transparency and Accountability in Practice
- “Meaningful transparency” does not mean providing a full technical teardown of a model. It means being open about information practices in a way that lets people understand what is happening, what the system can and cannot do, and what choices and rights they have. Effective transparency is usually “layered”:
- Plain-language privacy notices and just-in-time disclosures at the point of collection.
- Deeper detail in help-center materials, FAQs, or product documentation for users who want more.
- For AI systems specifically, persistent user-facing information about limitations: for example, clear interface language that outputs may be incorrect, that the tool is not a substitute for professional advice, and that users should avoid entering sensitive personal information if the product is not designed for it.
- Privacy impact assessments (PIAs) remain useful, but on their own they are rarely sufficient for complex AI systems: PIAs should go beyond describing the dataset and the legal purpose. They need to test the full lifecycle, including what types of personal information are in the training data, what minimization and de-identification steps are applied, how re-identification risks are managed, and how individuals can exercise their rights. The most effective approach is layered governance rather than a one-time document, and organizations should treat the PIA as a living artifact supported by continuous monitoring and periodic re-assessment.
- Accountability should be differentiated between foundation model developers and downstream deployers: Foundation model developers control upstream design choices that shape systemic risk: what data is ingested, what safeguards are built into training and evaluation, and whether the model is tested for leakage. Downstream deployers are accountable for what they do with the model in a particular context: end-user relationships, specific purposes, sector-appropriate guardrails, and operational controls. This aligns with the accountability concept under Canadian privacy law: each organization is responsible for personal information “under its control.” Responsibility and accountability should track real decision-making authority across the AI supply chain.
7. Privacy by Design for Generative AI
For generative AI, “privacy by design” is an end-to-end engineering and governance discipline. It means building privacy protections into every stage of the lifecycle (a simplified post-training check is sketched after this list):
- Data intake/collection: Constraining what is ingested by limiting sources, excluding predictably high-risk categories, honoring opt-outs, and filtering sensitive information before it enters the training pipeline.
- Pre-training: Reducing identifiability through tokenization, deduplication, and masking or removing direct identifiers.
- Post-training: Testing and shaping model behavior, evaluating for memorization and leakage, tuning refusals for prompts that seek private information, and designing products so interactions are not unnecessarily linked back to identifiable accounts.
- Deployment: Clear usage policies, in-product warnings about limitations, controls to support user choices (including training opt-outs), and accessible privacy request pathways.
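As an illustration of the post-training stage, the sketch below probes a model for memorization: it prompts the model with prefixes drawn from training documents and flags outputs that reproduce long verbatim spans of the original continuation. The `generate` callable is a hypothetical wrapper around the model under test, and the window and prefix lengths are arbitrary assumptions; real evaluations run variants of this idea at much larger scale.

```python
def contains_verbatim_span(output: str, continuation: str, window: int = 50) -> bool:
    """Return True if any window-character span of the training-document
    continuation appears verbatim in the model output."""
    if not continuation:
        return False
    if len(continuation) < window:
        return continuation in output
    return any(
        continuation[i:i + window] in output
        for i in range(len(continuation) - window + 1)
    )

def memorization_probe(training_texts, generate, prefix_len: int = 100, window: int = 50):
    """Prompt the model with training-document prefixes and flag cases where
    the output regurgitates the original continuation."""
    flagged = []
    for text in training_texts:
        prefix, continuation = text[:prefix_len], text[prefix_len:]
        if contains_verbatim_span(generate(prefix), continuation, window):
            flagged.append(prefix)
    return flagged
```

Flagged examples would then feed back upstream, for instance into stronger deduplication or filtering of the offending sources.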
8. Reconciling Global AI Product Design with Local Privacy Expectations
- AI products are built and updated globally, but privacy expectations and regulator tolerance for risk are set locally. The goal is to avoid “one-country-at-a-time” design while still meeting each jurisdiction’s requirements.
- A workable approach is to separate what can be standardized globally from what must be localized (see the configuration sketch at the end of this section): Global controls should address baseline privacy and safety risks: minimization and filtering at intake, strong security and access governance, testing for leakage, and refusal behaviors for prompts seeking private or sensitive information. Localization should cover jurisdiction-specific notices and user-facing language (including French-language and Quebec-specific requirements, for example), user rights workflows, and any hard constraints on collection or secondary use.
- For cross-border AI arrangements, contracts should include clear compliance representations, role allocation, and enforceable technical and organizational measures that match the actual processing chain.
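One way to operationalize this global/local split is a layered policy configuration: global controls apply everywhere, and jurisdiction-specific requirements are overlaid on top. The sketch below is purely hypothetical; the keys, jurisdiction codes, and values are illustrative placeholders, not a statement of any law’s actual requirements.

```python
# Global baseline: privacy and safety controls applied in every market.
GLOBAL_BASELINE = {
    "mask_direct_identifiers": True,
    "leakage_testing": True,
    "refuse_private_info_prompts": True,
}

# Jurisdiction-specific overlays (illustrative codes and keys only).
LOCAL_OVERRIDES = {
    "CA-QC": {"notice_languages": ["fr", "en"], "training_opt_out": True},
    "EU": {"notice_languages": ["local"], "training_opt_out": True},
}

def effective_policy(jurisdiction: str) -> dict:
    """Start from the global baseline, then layer local requirements on top."""
    policy = dict(GLOBAL_BASELINE)
    policy.update(LOCAL_OVERRIDES.get(jurisdiction, {}))
    return policy
```

A sensible design choice in such a scheme is to let local overrides only tighten the global baseline, never loosen it, so that hard local constraints (for example, a prohibition on a given secondary use) cannot be overridden by the global default.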
9. Looking Ahead: Collaborative Governance or Adversarial Enforcement?
- Right now, both dynamics are at play, and which path dominates will depend on choices made by regulators and industry alike. On the collaborative side, there is more process and dialogue in major files: lengthy engagement through submissions, opportunities to respond to preliminary views, and some acknowledgment of mitigating measures implemented during investigations.
- An innovation-friendly regulatory ecosystem in Canada would:
- Keep interpretations anchored in the statutory text while applying “flexibility, common sense, and pragmatism.”
- Credit robust technical safeguards as meeting legal obligations.
- Be explicit about how privacy rights are reconciled with other public interests, including Charter values, access to information, and economic competitiveness.
- AI literacy initiatives, regulator and industry technical dialogue, and clearer prospective guidance (rather than standards emerging only through retrospective enforcement) are essential complements.
10. AI and the Legal Profession
- AI is already impacting research, drafting, due diligence, discovery, and knowledge management. The best use cases are not about replacing judgment but about reducing friction: shifting time from mechanical tasks to analysis, strategy, and client counseling. However, the ethical and quality-control implications are non-negotiable: Generative systems can be confidently wrong, can omit key qualifiers, and can hallucinate citations or facts. Verification is a core workflow requirement.
- “AI competence” is quickly becoming part of baseline professional competence: knowing what the tool is good at, where it fails, and how to supervise it. Privacy and confidentiality present a second major concern. Using AI in practice requires diligence on data handling: whether prompts or uploads are retained, who can access them, where processing occurs, and whether inputs are used for model improvement.
- The practical takeaway: AI can assist the work, but it cannot be the “author” in any professional sense. Supervision, documentation of review steps, and clear client communication about how tools are used will be increasingly necessary and expected.
This post is based on a panel discussion at the IAPP Canadian Privacy Symposium 2026, “AI and Privacy Regulation: Global Trends, Developments, Impacts on Innovation,” featuring David Fraser (McInnes Cooper), Eloïse Gratton (Osler, Hoskin & Harcourt), and Brian Kennedy (OpenAI). The views expressed are only those of the author.