The European Union’s AI Act has set a new precedent in the regulation of artificial intelligence, establishing the world’s first comprehensive legal framework for AI systems. Now California, a global tech hub, is poised to join the EU in shaping the future of AI governance with legislation of its own.
The EU AI Act: A Comprehensive Risk-Based Approach
The EU AI Act, most of whose provisions take effect by 2026, categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal risk. This framework ensures that the strictest obligations fall on the AI systems that pose the greatest risks, such as those used in healthcare, law enforcement, and critical infrastructure. High-risk AI systems are required to undergo rigorous testing, continuous risk management, and compliance with strict EU standards.
Generative AI models such as ChatGPT, which fall under the Act’s general-purpose AI category, are also regulated. While not deemed high-risk by default, these models must meet specific requirements, including transparency about the data used to train them, compliance with EU copyright law, and cybersecurity protocols.
California’s AI Bill: Pioneering AI Safety in the U.S.
California’s proposed AI bill, which is awaiting Governor Gavin Newsom’s signature, aims to establish critical safety protocols for large-scale AI systems, particularly those costing more than $100 million to train. Although no current AI models meet this threshold, the bill anticipates future advances in AI capability. Companies would be required to test their models thoroughly and publicly disclose their safety measures to prevent potential misuse, such as the disruption of critical infrastructure or the creation of hazardous materials.
The bill has been described as taking a "light touch" approach, aiming to protect innovation while implementing essential safeguards against AI-related risks. This contrasts with the EU’s more stringent framework, which encompasses a broader range of AI applications and imposes more extensive compliance requirements.
EU vs. California: A Comparison of Approaches
Both the EU and California are striving to balance innovation with safety in their AI regulations, but their methods differ significantly. The EU AI Act is comprehensive, covering a wide array of AI applications with a global reach: it applies to any company deploying AI within the EU, regardless of where that company is based, and includes strict prohibitions on certain types of AI, such as systems used for social scoring or cognitive behavioral manipulation.
California’s bill, by contrast, targets only the largest AI systems and is currently limited to the state’s jurisdiction. Its forward-looking focus on systems that could pose significant risks in the future, however, reflects California’s status as a leader in tech innovation. Despite these differences, both frameworks emphasize transparency, safety, and the need to mitigate the risks posed by rapidly advancing AI technologies.
Global Implications: Shaping the Future of AI Governance
The introduction of California’s AI bill signals a potential shift toward broader AI regulation in the United States, potentially paving the way for federal legislation. When combined with the EU AI Act, these regulations are likely to set the standard for AI governance worldwide, influencing how other regions approach the complex challenges of AI development and deployment.
For businesses operating globally, these developments underscore the importance of staying ahead of regulatory changes. Companies will need to conduct thorough audits of their AI systems, update internal policies, and ensure compliance with varying regulations across different jurisdictions. The evolving landscape of AI regulation presents both challenges and opportunities, particularly for those in the data privacy and cybersecurity fields, where demand for expertise is expected to grow significantly.
Preparing for the Future
As AI regulation continues to evolve, companies must prepare proactively: auditing their AI tools in detail, updating compliance strategies, and ensuring that their systems meet both current and anticipated regulatory requirements in each region where they operate.
For professionals in data privacy and cybersecurity, this period will present new opportunities, and we are already seeing the first signs of it. Growing demand for expertise in navigating these new regulations is likely to open up significant opportunities for career diversification, making this a key area for professional development.
To learn more about the impact of these regulations on your business or career, download our global hiring guide or reach out directly to Tom Woods, Head of Data Privacy Search, or Phil Redhead, Head of In-House Legal Search.