AI Governance: Canada at a Crossroads
EU leads with oversight while the US withdraws—Canada must choose a path
Key Takeaways
1. Global Divergence: Europe is assuming the role of global rule-maker on AI, while the U.S. retreats from regulation altogether, widening the transatlantic gap.
2. Canada at a Standstill: Once seen as a regulatory trailblazer, Canada’s AI bill (AIDA) has stalled under political and industry pressure.
3. Federal Policy Hits Roadblocks: With federal legislation stuck, Canada’s AI framework is emerging piecemeal through provincial regulations, industry bodies, and case law.
The Global AI Governance Gap
Rapid advancements in AI technologies are outpacing the laws meant to govern them, amplifying risks like bias, misinformation, and deepfakes.
According to cybersecurity firm SlashNext, AI-driven phishing attacks have surged 1,265% since late 2022, with credential phishing up 967%. The message is clear: AI-enabled fraud is escalating, and regulation is no longer optional—it’s a necessity.
However, regulatory responses remain fragmented. The EU, China, the U.S., and Canada are each taking divergent paths—some fast and firm, others hesitant, and some with no regulation at all. Establishing global guardrails before harm outpaces response is the real challenge.
Canada – AIDA Stalls Amid Uncertainty (From Leader to Laggard)
Why the delay?
Once seen as a pioneer, Canada’s proposed Artificial Intelligence and Data Act (AIDA) has lost momentum. Political resistance, Big Tech lobbying, and the bill’s own structural flaws have delayed its progress and weakened its impact. Canada was one of the first signatories to the global AI treaty; now it risks falling behind in a race it helped start.
At its core, AIDA takes a risk-based approach, targeting “high-impact” AI systems, meaning those that affect human rights, health, or psychological well-being.
AIDA was designed to achieve two goals:
- Empower Canadian businesses to meet international standards and compete globally
- Protect citizens from harmful AI systems, especially those developed in weakly regulated jurisdictions
As drafted, it aimed to align with both EU and U.S. frameworks, enhancing interoperability. But with the U.S. retreating from regulation and Big Tech lobbying against stricter oversight, Canada is caught in the middle, unsure which direction its regulation should take.
Key Concerns:
Scope is too broad: AIDA is part of the sprawling Bill C-27, which tries to cover too much; alongside AIDA, the bill also includes changes to consumer privacy protection and the creation of a new personal information and data protection tribunal.
Ambiguity: AIDA is undermined by vague definitions, a weak compliance framework, and no clear penalties. Its central concept—“high-impact” AI—is undefined. The Act outlines broad categories but fails to specify thresholds or criteria, leaving critical questions unanswered:
- Who decides? It’s unclear whether impact is determined by companies, regulators, or third parties.
- How to mitigate? The Act offers no guidance on what risk mitigation looks like.
- How to monitor? It calls for ongoing monitoring but provides no clarity on standards, frequency, or accountability.
In contrast, the EU AI Act provides well-structured risk tiers, making compliance more predictable and enforceable. AIDA’s vagueness, by comparison, creates uncertainty and compliance risk.
These structural flaws have triggered both political resistance and industry backlash.
Political Resistance: AIDA has faced significant opposition in Parliament, with critics arguing that the bill is vague, overreaching, and poorly drafted. Detractors claim the government is rushing complex legislation without clearly defined terms or sufficient safeguards. Some have labeled the bill “draconian,” accusing the government of attempting to censor the internet and infringe on Canadians’ privacy rights. Concerns have also been raised about the broad and ambiguous definition of “high-impact” AI, which could lead to regulatory overreach and unintended consequences for businesses and innovation.
Big Tech Lobbying: Industry players such as Amazon have aggressively pushed for less stringent oversight, weakening the bill’s initial ambition. They argue that AIDA’s approach to classification is too broad—focusing on industry sectors rather than the actual use or risk of the AI application.
While AIDA has not been officially shelved, its future remains uncertain. Until the government revisits its structure, clarifies key terms, and narrows its scope, the path forward for AI governance in Canada is unclear.
As a stopgap, Canada has introduced the Artificial Intelligence and Data Act: Interim Guidance on Responsible AI, which offers a voluntary, non-binding framework of best practices. However, it lacks the authority to mandate compliance or drive meaningful change.
The EU AI Act: Global Gold Standard
The EU AI Act remains the world’s most comprehensive AI legislation, establishing a structured, risk-based framework with robust enforcement mechanisms and penalties. It requires organizations to conduct risk assessments, ensure transparency, and implement safeguards before deploying AI systems. Partially in force already, the Act will be fully applicable by 2026. It introduces a tiered system that classifies AI systems by risk level: minimal, limited, high, and unacceptable.
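To make the tiered model concrete, here is a minimal sketch, in Python, of how an organization might triage a system before working out which obligations apply. The four tiers come from the Act itself, but the example use cases, the mapping table, and the triage function are illustrative assumptions, not the Act’s actual classification rules.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring of citizens)
    HIGH = "high"                  # permitted, but with conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency duties (e.g., disclose that content is AI-generated)
    MINIMAL = "minimal"            # no obligations beyond existing law

# Illustrative mapping only: the Act classifies systems by their concrete use
# context and its annexes, not by a simple label lookup like this.
EXAMPLE_USES = {
    "social_scoring_of_citizens": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def triage(intended_use: str) -> RiskTier:
    """First-pass triage of a system by intended use (hypothetical mapping)."""
    return EXAMPLE_USES.get(intended_use, RiskTier.MINIMAL)

for use in EXAMPLE_USES:
    print(f"{use}: {triage(use).value}")
```

In practice, a system’s tier depends on its concrete context of use, which is why the Act’s annexes, rather than any simple lookup like this, drive the classification and the compliance work that follows.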

The EU AI Act requires designated third-party bodies—known as Notified Bodies—to assess compliance for high-risk AI systems. These independent auditors evaluate whether developers and deployers adhere to the regulations, particularly around transparency, fairness, and data governance.
Despite the Act’s thoroughness, critics argue it could stifle innovation, particularly for smaller startups unable to meet its complex requirements. Some have even suggested that the Act’s heavy regulations might disadvantage European companies when competing against less-regulated counterparts, such as U.S. Big Tech firms.
Making Rules Work: EU Adds Tools to Ease the Strain
Recognizing this risk, the EU is adjusting its course. In April 2025, the European Commission introduced the AI Continent Action Plan, a strategic effort to strengthen Europe’s AI ecosystem without diluting regulatory safeguards.
The Action Plan includes three key pillars:
AI Factories: The EU is funding a network of AI “factories”—centers of excellence offering startups and SMEs access to computing power, open datasets, testing environments, and AI development support. These facilities aim to level the playing field for smaller companies that lack Big Tech’s scale.
Infrastructure and Compute Access: To ensure Europe remains competitive, the EU is channeling public and private capital into high-performance computing (HPC) clusters and shared cloud infrastructure, providing critical resources for advanced AI model training and foundational development.
AI Service Desk: A centralized hub will guide startups and SMEs through the regulatory process, offering hands-on support with risk classification, documentation, and compliance assessments. This reduces the compliance burden on smaller companies and helps them navigate the complexities of the AI Act.
In short, while the EU remains firm on the need for guardrails, it’s combining rigor with support to ensure that compliance is achievable without stifling growth.
U.S. AI Policy: Chasing Growth at All Costs
The U.S. has made a decisive pivot on AI policy. In January 2025, President Trump revoked Executive Order 14110—Biden’s landmark framework for AI safety, transparency, and national security. That order had required companies to submit safety test results for high-risk models and tasked federal agencies with setting standards to address cyber threats, biosecurity, and civil rights risks.
The new approach trades guardrails for speed. The Trump administration is prioritizing rapid deployment, appointing Chief AI Officers across federal agencies, and pushing public sector innovation.
U.S. AI policy is increasingly focused on geopolitical maneuvering rather than AI safety—tightening export controls, restricting chip access to China, and monitoring international AI partnerships. Rather than joining global efforts to build common guardrails, the U.S. is doubling down on technological dominance—even if it means sidelining governance.
Canada’s AI Future: Stuck in the Middle
Canada’s AI regulatory path remains uncertain, with AIDA stalled by political gridlock, industry pressure, and structural flaws.
As the EU advances strict AI rules and the U.S. leans toward deregulation, Canada faces a strategic choice: align with Europe’s rights-based, risk-driven model—or stay in step with the U.S., its largest trading partner and single biggest investor.
Big Tech’s influence looms large. U.S. firms have lobbied hard for lighter oversight—and with 58% of Canadian venture capital in 2024 coming from the U.S., economic dependence complicates the regulatory calculus.
However, in the absence of federal regulation, Canada has other mechanisms in place to guide responsible AI.
Treasury Board Policy Instruments
These apply to federal departments and agencies using automated decision systems.
Directive on Automated Decision-Making (ADM Directive)
Requires institutions to assess and mitigate risks before deploying AI systems. Emphasizes transparency, human oversight, and documentation throughout the system’s lifecycle.
Algorithmic Impact Assessment (AIA)
A mandatory risk assessment tool that evaluates systems based on factors such as fairness, transparency, and data sensitivity. Higher-risk systems trigger stricter obligations.
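As an illustration of how a scoring-based assessment of this kind can work, the sketch below maps yes/no answers to an impact level. The question names, weights, and thresholds are invented for this example; the official AIA uses a detailed questionnaire and scoring rules published by the Treasury Board of Canada Secretariat.

```python
# Minimal sketch of a scoring-based impact assessment. All questions, weights,
# and thresholds are hypothetical; they stand in for the official AIA
# questionnaire rather than reproduce it.

ILLUSTRATIVE_QUESTIONS = {
    "affects_legal_rights_or_benefits": 4,
    "uses_sensitive_personal_data": 3,
    "fully_automated_no_human_review": 3,
    "decision_hard_to_reverse": 2,
    "model_not_explainable": 2,
}

def impact_level(answers: dict) -> str:
    """Map yes/no answers to an impact level using illustrative thresholds."""
    score = sum(w for q, w in ILLUSTRATIVE_QUESTIONS.items() if answers.get(q))
    if score >= 10:
        return "Level IV (very high impact)"
    if score >= 6:
        return "Level III (high impact)"
    if score >= 3:
        return "Level II (moderate impact)"
    return "Level I (little to no impact)"

example = {
    "affects_legal_rights_or_benefits": True,
    "uses_sensitive_personal_data": True,
    "fully_automated_no_human_review": False,
    "decision_hard_to_reverse": True,
    "model_not_explainable": False,
}
print(impact_level(example))  # Level III under these illustrative weights
```

The underlying logic is the same in the real tool: higher scores translate into stricter obligations, such as peer review and human involvement in decisions.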
Regional Laws (Ontario and Québec)
Provincial governments are establishing AI frameworks tailored to their jurisdictions.
Ontario – Bill 194 (Enhancing Digital Security and Trust Act, 2024)
Enacted in 2024, this law applies to public sector bodies, including children’s aid societies and school boards. It requires disclosure of AI use, risk management, and accountability frameworks. The Information and Privacy Commissioner provides oversight.
Québec – Law 25
Applies to public and private organizations. Requires individuals to be informed of automated decision-making, with access to explanations and human intervention. Includes enforcement provisions and financial penalties.
Sector-Specific Regulatory Bodies in Canada
Regulatory bodies are advancing oversight of AI use in their sectors.
Law Societies (Ontario, Alberta, B.C.)
Provincial law societies issue guidance on responsible generative AI use in legal practice. Violations may result in fines or license suspension.
Office of the Superintendent of Financial Institutions (OSFI)
Draft Guideline E-23 addresses AI model risk in financial institutions, focusing on decision-making areas like loan approvals and fraud detection.
AI Poses Novel and Complex Risks That Existing Legal Systems Can’t Fully Handle
Although there are currently no AI-specific global laws, three forces are shaping how AI is governed today: legacy laws, voluntary standards, and emerging case law. Together, they offer some guardrails—but they fall short of fully addressing the scale and complexity of the new risks AI introduces.
Legacy laws are being stretched to cover AI—with real consequences. In 2019, Goldman Sachs and Apple came under fire when their credit card algorithm reportedly offered women significantly lower credit limits than men with similar financial profiles. The incident triggered an investigation under the Equal Credit Opportunity Act (ECOA), sending a clear signal: anti-discrimination and consumer protection laws still apply, even in AI contexts.
Voluntary standards have stepped in to fill some of the regulatory vacuum. Organizations like ISO and IEEE are developing frameworks—such as ISO/IEC JTC 1/SC 42 and Ethically Aligned Design—that promote transparency, accountability, and alignment with human values. These standards offer companies a blueprint for responsible AI, particularly in jurisdictions where legislation is still catching up.
Case law is also starting to shape the legal landscape. In The New York Times v. OpenAI, the newspaper alleges that OpenAI used its copyrighted articles without permission to train generative AI models like ChatGPT. The case raises complex questions about fair use, intellectual property rights, and the boundaries of data scraping—issues that courts are only beginning to grapple with. Though unresolved, it’s a landmark case that could set key precedents around consent, data ownership, and accountability in the AI era.
Still, each of these governance tools—laws, standards, and case law—has critical limitations when it comes to AI.
Consider deepfakes—AI-generated audio, video, or images that can impersonate real individuals. Existing laws on privacy, defamation, and intellectual property may occasionally apply, but they weren’t designed for threats this complex or scalable. Enforcement is often slow and inconsistent. Legal ambiguity clouds everything from determining malicious intent to assigning liability—should the creator, platform, or model developer be held responsible?
Even the EU AI Act, often seen as the gold standard of AI regulation, illustrates the challenge. Deepfakes aren’t classified as “unacceptable” or even “high-risk.” Instead, they are subject to minimal transparency requirements—like disclosing that content is AI-generated. But such disclosures can be overlooked, ignored, or manipulated, offering little real protection in practice.
And that’s before the next wave hits. AI systems are becoming more autonomous and agentic—able to collaborate, communicate, and make decisions independently. Yet today’s laws are built around single systems and human control. They don’t account for distributed AI, networks of autonomous agents, or shared responsibility across platforms. That gap is growing—and fast.
Conclusion
Taken together, existing laws, voluntary standards, and emerging case law constitute an initial response to a fast-moving technological frontier. Yet in the absence of a cohesive, forward-looking global regulatory framework, these efforts remain fragmented and insufficient to match the scale, speed, and complexity of AI. The power of AI lies in its capacity to transform economies, influence behavior, and drive decisions across critical domains, from healthcare and education to national security and finance. Its potential for public good is extraordinary. But so too are the risks: bias, misuse, concentration of power, and large-scale unintended consequences. These challenges demand more than piecemeal oversight. To unlock AI’s benefits safely and meaningfully, we need robust, coordinated global guardrails.
