How countries are regulating AI

In the aftermath of the AI boom, the focus now shifts to regulation in 2024. While the EU has reached a deal on a proposal to regulate AI, in the US, the Biden administration has issued an executive order. There is a hodgepodge of evolving rules and regulations that AI companies will have to deal with in the coming months and years.

Yahoo Finance legal reporter Alexis Keenan takes a deep dive into global AI regulation policies, comparing how different countries are setting up different rules for AI.


Editor's note: This article was written by Eyek Ntekim

Video Transcript

[MUSIC PLAYING]

JOSH LIPTON: After the boom comes the reality check. AI was the term of 2023, but the big question this year, how will regulation play its part? Most agree the market needs a referee, a moderator, someone to balance opportunities and risk in this vast unknown terrain. But who's going to lead the charge?

The EU has first-mover advantage, agreeing to the contours of a sweeping new law to regulate the space, but the US needs to catch up fast. The Biden administration issued its executive order, a pathway to a more expansive plan. How will these crucial guardrails evolve in the critical few months ahead?

RACHELLE AKUFFO: This is the regulatory landscape for AI-- the US has an executive order, the EU has actual legislation. So what are these and other nations doing to regulate risk? And what risks are driving that regulation? Yahoo Finance's Alexis Keenan has the details. Hey, Alexis, something I know you're all across the board on.

ALEXIS KEENAN: I'm working on it, Rachelle. So yeah, this question about how much regulation is needed in the US, that's still a matter of opinion for some people. Some say that existing laws are adequate to curb the risks.

But one risk, of course, became apparent yesterday after that deepfake impersonating President Biden told New Hampshire primary voters to hold off. So the deepfakes, these are one of the major risks that IP lawyers and tech lawyers talk about, also because of the speed with which they can be uploaded and spread online. Also, AI that impacts government infrastructure, that's another big area of concern for governments around the world.

But let's take a look at the emerging global landscape as it concerns regulation. The EU, of course, was the first G7 mover, but it was really Singapore that was first out with regulation. It was then that the EU came out with what is meant to be a comprehensive, sweeping AI law, sweeping like the antitrust cases I cover a lot, too. And what it does is classify AI into risk categories, where you have high-risk uses, like medical technology, for example, requiring approvals before they are ever taken to market.

There are also some AI uses though that are outright banned by this EU law. And by the way, this is just in draft form. It's agreed in principle, not fully approved.

Things that go into that outright ban category are those national infrastructure applications, as well as manipulative algorithms. So the first part of this EU act is expected to be adopted around springtime, and then the rest of the act hopefully 6 to 24 months out. A leaked final draft of that law also came down the pike yesterday.

As for the US and the UK, they're taking a much more hands-off, lighter-touch approach, with legal proposals rather than binding legislation on the table. The US so far is taking this kind of ad hoc, more market-driven approach. Not surprising there; so far the US has no comprehensive law, not even one drafted. There are also no state regulations on the table, but you do have those existing laws.

And the lawyers I talk with in this space name things like product liability law, which governs accidents and injuries. They also point to our existing copyright laws, which they say are already protecting creative works. And you also mentioned the Biden administration's executive order, which has some requirements around transparency for AI models, those large language models that we've been talking so much about, as well as national security, and it also requires some labeling of AI-generated content.

Now, the UK is carrying out some voluntary compliance tests. If they don't have success there, then they plan to regulate. Japan, for its part, is participating in this G7-wide effort to reach agreement among those nations on an AI framework that would then be more comprehensive across nations.

China, for its part, has a set of 50 rules. They too have no comprehensive legislation, no comprehensive regulation. These 50 rules are a bit more politically centered, focusing on things like news distribution, deepfakes, and chatbots, as well as data sets. So guys, that's kind of the landscape and where we stand right now, but there's so much more to come in 2024 and even 2025.
