Breaking Down AI Governance In The U.S. And EU, And What Comes Next

Ivana Bartoletti, Wipro’s global chief privacy and AI governance officer and an AI expert at the Council of Europe, talks about how laws from around the world fit together to regulate the technology.

As AI picks up speed around the world, people everywhere are saying it needs to be regulated. The EU passed the first law to do that, the EU AI Act, in March. In the U.S., the Biden Administration issued an executive order on AI last October. But what are these regulations actually doing on a global scale? How do they work together, what more is needed, and what about the rest of the world? I talked to Ivana Bartoletti, who is global chief privacy and AI governance officer at Wipro and an AI expert at the Council of Europe, about these issues.

This conversation has been edited for length, clarity and continuity. An excerpt was included in Thursday’s Forbes CIO newsletter.

How are the U.S. and EU approaches to AI regulation different? How are they the same?

Bartoletti: I think the concerns are largely the same. There is a general understanding in Europe, as well as in the U.K. and in the U.S., about the great potential that AI brings, obviously, but also about the risks that go with it. The risks are especially around security, around fairness and bias, and around democratic viability, particularly with the big issues of polarization and deepfakes. On this there is great alignment, and not just between the U.S. and Europe, but across the world.

The idea of the European Union is very much product-based. Not all AI products are the same, and we are wrapping controls around these systems based on the risks they pose. The risks are divided into two broad categories. One is risks to individuals, and that is rooted in the human rights and fundamental rights underpinning the EU in its Charter of Fundamental Rights. For example, AI systems that may have an impact on people’s freedoms and liberty, or that may lock people out of essential services (it could happen in the welfare state, but it could also happen with banking and the allocation of credit, with education and with work opportunities), fall into a category that has very big controls around it, and quite a lot of bureaucracy as well.

There is another category of AI systems that form part of something which is already high risk. I’m thinking about aviation, machinery, medical devices, all the things that already have controls wrapped around them because of the risk they may pose to people. In that case, if an AI system is part of one of those products, it is subject to the same protections.

The U.S. approach is slightly different. The European AI Act is horizontal: regardless of where AI is deployed, systems are regulated according to their risk level. In the U.S. the process is more sectoral; they’re looking at AI in healthcare, AI in education, AI in the workplace, AI in financial services. There is also a very strong idea of governance around public agencies and government. The recent requirements that Vice President Kamala Harris put forward are very much aimed at government agencies, but the impact is far greater than that, because they also cover the whole supply chain that is at any point involved with a government agency. In reality, this offers a framework that goes beyond the agencies themselves, and one that can be adopted by the private sector as well.

In addition to that, what you have in the U.S. is a proliferation of laws at the local level. Think about what is happening in New York with AI used in an employment context, where an AI system needs to undergo an audit to ensure that there is no bias before deployment.

It’ll be interesting to see how things evolve moving forward. Clearly, the European Union approach is based on the idea that there are things that are not allowed, that are prohibited. There is a total ban. That is because the European Union approach really takes into consideration the values that underpin the European Union, and also the European Charter of Human Rights. For example, AI that manipulates people. I’m thinking about toys meant to push emotions onto children or other people. It’s called affective computing. It’s something that was considered science fiction until a few years ago, and it underpins a lot of personalization services and our social media systems. Affective computing, when it can manipulate people, becomes not just high risk; at some level it can also be banned.

The U.S. doesn’t have a law that is quite as formal as the one that the EU passed regarding AI regulation, but there’s certainly a lot of talk about it. From where you sit, what do you think should be in a U.S. law to augment what exists and work with the EU laws in place already?

First of all, to an extent we can say AI is already regulated. It’s important to bear that in mind, in the sense that AI does not exist in isolation. Artificial intelligence already exists within liability legislation, copyright law and privacy legislation. It was one of your FTC commissioners who said that AI is not an excuse to breach the legislation that is already in place.

With this in mind, I think what is happening at the moment in the U.S. with the privacy and data protection legislation that has been presented is very important. Federal legislation would certainly help organizations comply with privacy without having to navigate a mosaic of legislation like the one [Europe] has at a country level. A lot of the harms around artificial intelligence are privacy harms. They do relate to personal data. Even something like fairness and bias, fairness in the processing of data and fairness in the output, is also a privacy matter.

In addition to that, there are things like algorithmic transparency: the fact that individuals can be notified when they’re dealing with a machine and not with a human being, and have a right to challenge the output of an algorithm. That is something that, forget the European AI Act, you already have in Europe in the General Data Protection Regulation.

The proposed privacy legislation being discussed in the U.S. at the moment is rather complex, and in some parts I find it even more difficult to comply with than GDPR in Europe, which is already quite complex.

In Europe, a lot of the data protection regulators and data protection authorities have been the ones upholding the rights of individuals with respect to AI. In fact, it’s probable that a lot of data protection authorities will become AI regulators. This will happen in many countries. In Germany, for example, the federal German data protection authorities have come forward and said they want to be the AI regulators. That makes sense, because if you think about it, a lot of privacy harms are AI harms, and vice versa. We are impacted by artificial intelligence on a very individual basis.

If you think about what is happening with the New York Times versus OpenAI, that case could change the way that AI is going to be developed in the future. Copyright legislation certainly does apply to AI. Non-discrimination legislation already applies to AI. The FTC has been quite firm in saying we are going to monitor how companies are going to be fair in the way that they use artificial intelligence. We’re not just going to allow companies to do what they want.

In reality, that is quite a pragmatic way of doing this, even compared to the European [Union]. It’s basically saying: OK, these are the requirements, this is what needs to be done, let’s come up with a standard. It will apply to government agencies, but in reality it will have a far greater impact and offers a framework for the private sector as well. And then there will be legislation governing the use of AI in specific sectors. The FTC has done its job. I don’t think companies can wait for AI legislation to ensure that AI is fair, secure, robust and safe.

You mentioned GDPR, which has had a spillover effect outside of the EU and has prompted companies everywhere to change the way they do business online for all users. Do you see the EU AI Act having the same kind of impact on tech companies?

Certainly the European AI Act, being the first piece of legislation around AI, will have an impact, in the sense that companies that market AI products in Europe will have to comply. At a time when everybody is discussing AI laws, the fact that the European Union has actually done it is important. However, I do think that countries will act on this in different ways, and I think that is actually a good thing, because we don’t yet know what the best way to regulate artificial intelligence is.

I have been in favor of the European AI Act. There are things I like and things I like less, but I am also very much in favor of pragmatic approaches, and we will have to see what happens with the European AI Act, because a lot of it is still to be worked out. There is a European AI Office being set up. Countries will have their own agencies. There will be a lot of templates that will have to be discussed. We don’t know exactly what a conformity assessment looks like. And we’re still waiting for the standards, which should come in the next couple of years: companies that produce high-risk AI but comply with those standards will not have to do a conformity assessment.

I think the spillover effect will be that companies planning to market AI in Europe will have to comply, and that other countries will also enact the right protections. But I don’t think the European AI Act is the universal law of AI. There’s a lot of other law that applies to artificial intelligence and is equally significant, including privacy legislation. In this sense, I feel that having a federal bill would be really, really important for the U.S.

We’ve been talking about policies in the EU and U.S., which are two huge players in AI, but certainly not the only ones. In terms of coming up with regulations and making policies that impact AI globally, where do other nations and international organizations fit in?

One player is the U.K. The newspapers report that the U.K. is thinking about legislation. At the beginning, they were saying they were not that keen, because they didn’t know what they were going to regulate, but now there seems to have been a shift in thinking. The U.K. hosted the Bletchley Park [AI Safety Summit], attended by a lot of countries, including China, which is quite significant. And the second meeting is going to happen soon in South Korea, so that group of countries is aligned on robust, safe artificial intelligence.

You have China, a massive player. China has actually been a front-runner when it comes to regulation of AI, especially around algorithmic decision-making. Of course, a lot of it applies to the private sector and not the public sector. But China has been involved in regulation from quite an early stage.

A very interesting approach is coming from India, which is a huge global player for many different reasons. There is a code of conduct for companies using AI produced by Nasscom, the [nongovernmental] body that brings together the IT services industry companies. It’s an interesting, very pragmatic approach, based on robustness and safety, and on not wanting to rush into legislation.

What we’re seeing is different attempts coming from different countries, different ways of dealing with it. Of course, AI is very much about the economy. What we have seen over recent years is every single country enacting AI strategies to grow with AI. We’re seeing massive growth in investment strategies at a country level. But alongside that, we’ve also seen a lot of the risks coming up, such as the risks around deepfakes, which are a big problem, especially this year, when more than 60 countries are holding elections.

Then there’s this big debate around global governance, where you have different visions. You have the idea of something similar to the [International] Atomic Energy Agency: a body under the UN, or something similar, acting as a sort of center for AI, maybe doing assessments of other countries’ regulations. We don’t know. So that’s one idea. There is also the suggestion of a bigger role played by the OECD. Some people are talking about a licensing model on a global level. There are all sorts of things coming out. Do we need a new body? I personally think that we have a lot of bodies, and it would be good to leverage what we have.

I think there are a lot of similarities across countries. We all know the debate. We’re dealing with the same things: robustness, safety, data controls and human oversight, fairness, bias. But we’re also dealing with something else, which I think brings together a lot of countries, and that is market competition. I do not think we can talk about AI policy and AI governance without talking about how AI is actually exacerbating the market concentration of power. And this is something that the U.S., in an antitrust way, and the EU, with the Digital Markets Act, but also other countries, are trying to grapple with.

Is AI going to solidify and exacerbate existing concentrations of power in the digital and technological sphere, or is it actually opening things up to new players? In fact, you see that the big players producing LLMs are the other big tech companies, because these models require a huge amount of data and a huge amount of computation. I think what we’re seeing that brings the U.S. and the EU together is the understanding that there has to be some control over the market. The U.S. has been doing this in a sort of antitrust way, which is different from the EU, which takes a more top-down approach with things like the Digital Markets Act, which looks at bringing transparency and openness from the big tech players. I think these two issues are very much related.
