Generative artificial intelligence holds enormous potential to create positive human impact, but that potential is far from guaranteed. The benefits depend heavily on how effectively these tools are used—and on whether the approach to their deployment is AI-centric or genuinely human-centric.
Since ChatGPT first captured global attention in November 2022, regulatory bodies worldwide have been scrambling to address the rise of AI. Now, two years later, we’re beginning to see some common threads in AI regulation. This article examines those trends and explores the question: What does effective AI governance look like, and how well are regulatory bodies meeting that standard?
Public & Private AI Use: Why Is Regulation Needed?
Regulating AI is a daunting task, to say the least. AI is reshaping a wide range of sectors, from manufacturing and logistics to agriculture and healthcare. Governments are also recognising its value, as AI's pattern-recognition and big-data capabilities can surface insights from large datasets, such as census records, that would take human analysts months to uncover.
In short, AI tools are becoming embedded in critical infrastructure globally. Government, law enforcement, logistics, commerce, and even households are all adopting AI in various forms. With this rapid adoption comes a growing demand for policies that address privacy, ethics, and safety in the AI space.
What Does AI Regulation Look Like Right Now?
AI governance is emerging in the form of fledgling policies around the world, though most countries are still catching up with the rapid advancements in generative AI. For instance, Australia has yet to introduce specific AI governance laws but is exploring how existing regulatory frameworks might apply to AI tools.
The United Kingdom and Switzerland have taken a sector-specific approach, focusing on industry-specific regulations rather than broad AI legislation. In contrast, the European Union is working towards comprehensive legislation with its AI Act, which is expected to become fully applicable by 2026.
What Needs to Happen Next?
EY has identified an important trend in the current landscape of AI regulation: most governments are adopting a risk-based approach. This involves evaluating the immediate risks of AI deployment and then crafting either broad or targeted policies to mitigate those risks. For a safer future, regulations need to be both clear and informed by subject matter expertise.
Consistency across jurisdictions would also be beneficial, especially as AI becomes more globally integrated. On top of governmental regulations, companies should be encouraged to develop their own AI policies that align with these regulatory standards and address the specific ethical and operational challenges AI presents.
In conclusion, it has taken nearly two years for AI regulation to gain traction, and there is still no unified global approach. This isn’t surprising, given that governments are still wrestling with how to regulate older technologies like the internet. There is much more ground to cover, and AI regulations will likely continue to evolve as AI becomes more embedded in critical sectors like logistics, agriculture, and healthcare.
That said, it’s promising to see AI policies emerging with relative speed. But it’s crucial that governments remain committed to open, ongoing conversations about this transformative technology and ensure that subject matter experts are involved in every step of the regulatory process.