
The Global AI Dilemma: How It Should Be Regulated

Plus: The Biden Administration’s Steps To Cement U.S. Dominance In AI Technology, Red Hat Makes AI More Accessible, Apple’s Powerful New AI Chips, TikTok Sues The Federal Government

This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders, delivered every Thursday.


While much of the news about President Joe Biden’s foreign policy this week is centered on weapons aid for Israel in its ongoing war with Hamas, there have been some pretty huge policy developments on the tech front. Through these moves, Biden is striving to cement the U.S. position as world leader in AI technology.

Yesterday, Biden traveled to Racine, Wisconsin, to announce Microsoft’s planned $3.3 billion AI data center there. The new center—on the site of the planned Foxconn manufacturing campus, which the Taiwanese company pulled out of—will create 2,000 permanent jobs, according to the White House. It will be the nation’s first manufacturing-focused AI co-innovation lab, Microsoft said. The company pledged to train more than 100,000 people across Wisconsin in generative AI by 2030, and announced partnerships with nearby technical schools to create AI educational programs.

And while Biden is also campaigning for re-election (and Wisconsin is a swing state), the conditions that make this new data center possible flow out of tech policies—including infrastructure investments and an executive order governing AI—that have been made throughout Biden’s time in office.

“When you connect the dots between the infrastructure investment, the chips investment, the climate technology investment, the work to set new AI safety standards and cybersecurity protection—you put those things together? That actually helps substantially in enabling the whole tech sector to invest and grow and create new jobs in the United States,” Microsoft President Brad Smith told the Washington Post.

Administration officials have also reportedly revoked export licenses that allowed Intel and Qualcomm to export semiconductors to Chinese tech manufacturing giant Huawei. Last month, Huawei released its first AI-powered laptop, built on an Intel processor. Intel and Qualcomm did not immediately respond to Forbes’ requests for comment, but a more thorough crackdown on sharing AI technology may be on the way. Reuters reported this morning that the Biden Administration plans to put guardrails on U.S.-developed AI tech, like ChatGPT (currently not available in China), in order to protect national interests.

China has its own tech and AI industries, which employed about 7.25 million people in the country in 2022, according to Statista, so the absence of U.S. products and investments won’t necessarily handicap its technological possibilities. However, as geopolitical tensions between the U.S. and China increase, the restrictions create more separation—and less opportunity for China to use U.S. innovation against the U.S.

As every country in the world is grappling with AI, there are many ideas on how to regulate it. The EU passed the first official law in March, but it’s an issue in front of every other world and technology leader. I talked to Ivana Bartoletti, chief privacy and AI governance officer at Wipro and an AI expert for the Council of Europe, about the differences between EU and current U.S. laws governing AI, as well as the direction that regulation may go next. An excerpt from our conversation is later in this newsletter.

NOTABLE NEWS

Red Hat is bringing AI everywhere. The IBM-owned open source enterprise technology and consulting company announced this week it was expanding its Red Hat Lightspeed technology, which turns written prompts into code snippets, to bring generative AI to its Linux and hybrid cloud application platforms, Forbes senior contributor Adrian Bridgwater writes. To make this all possible, and to help customers power AI workloads across the hybrid cloud, Red Hat is collaborating with chip maker AMD, whose GPUs will facilitate the system.

These new developments will help make AI technology more accessible and easier to use, potentially reducing the complexity of enterprise IT. In his remarks at the Red Hat Summit, CEO Matt Hicks said these developments sit at the intersection of open source and AI.

“As the open source element expands in the AI universe, it will be a force multiplier. Why?” he asked. “Well, I first fell in love with technology when I was first exposed to Linux and learned that I could change and contribute to the products being created—so my passion for technology is now reinvigorated by seeing what AI can do now. I feel fortunate to be experiencing the convergence of AI and open source.”

ARTIFICIAL INTELLIGENCE

Apple is planting itself firmly in the AI hardware race, announcing earlier this week that its new line of iPad Pros will include a processor it calls an “outrageously powerful chip for AI.” This new M4 chip is an upgrade from previous iPad generations’ M2 chips, and from the M3 chips in current MacBook laptops. At a launch event on Tuesday, Apple Vice President of Platform Architecture Tim Millet said the neural engine powering these AI capabilities has been in Apple chips for years, but this one is 60 times more powerful than the company’s earliest models. He added that this new chip is more powerful than any neural processing unit currently in any PC.

So there’s a ton of AI processing power in the newest iPads, which can be preordered now and hit stores next week. But what can a user do with that? Millet gave one example: In Apple’s Final Cut Pro app, users will be able to isolate the subject of a 4K video and remove the background with a single tap. A good function for creators, but what else? Apple may be saving its talk about practical applications for its upcoming Worldwide Developers Conference on June 10.

The announcement brought a tiny bump to Apple’s stock price, which surged following its earnings report last week. Reports of weakening demand, especially in China, had lowered many investors’ near-term outlooks for the company, but sales—in total and in the China region—exceeded expectations.

LEGAL ISSUES

This week, TikTok made good on its vow to challenge the new law forcing its ban or sale to a non-Chinese entity within nine months. The short video social media platform, owned by Chinese company ByteDance, sued the U.S. government in federal court Tuesday, assailing the new law as an overreach that not only violates the First Amendment rights of the company—and of the social platform’s 170 million U.S. users—but also imposes a new set of restrictions on freedom of speech.

Forbes’ Alexandra S. Levine breaks down the arguments TikTok makes in its lawsuit. One of the big ones: American social media companies already pose the same risk. Misuse of user data and taking advantage of a social platform’s ability to influence people can be done by any application, regardless of where the parent company is located. Also, TikTok argues that ByteDance cannot sell just that one social app. Its algorithm—part of what makes it so valuable—is proprietary, and TikTok’s operations are deeply entwined with the rest of ByteDance, which owns a large suite of social apps.

BITS + BYTES

AI Policy Expert Ivana Bartoletti Breaks Down AI Governance In The U.S. And EU, And What Might Come Next

As AI picks up speed around the world, people everywhere are saying it needs to be regulated. The EU passed the first law to do that, the EU AI Act, in March. In the U.S., the Biden Administration issued an executive order on AI last October. But what are these regulations actually doing on a global scale? How do they work together, what more is needed, and what about the rest of the world? I talked to Ivana Bartoletti, who is global chief privacy and AI governance officer at Wipro and an AI expert at the Council of Europe, about these issues.

This conversation has been edited for length, clarity and continuity. A longer version is available here.

The U.S. doesn’t have a law that is quite as formal as the one that the EU passed regarding AI regulation, but there’s certainly a lot of talk about it. From where you sit, what do you think should be in a U.S. law to augment what exists and work with the EU laws in place already?

Bartoletti: First of all, to an extent we can say AI is already regulated. It’s important to bear that in mind, in the sense that AI does not exist in isolation. Already, artificial intelligence exists within liability legislation, copyright law, privacy legislation. It was one of the FTC’s commissioners who said AI is not an excuse to breach the legislation that is already in place.

With this in mind, I think it’s very important what is happening at the moment in the U.S. with the privacy data protection legislation that has been presented. Federal legislation would certainly help organizations comply with privacy, without having to comply with the mosaic of privacy legislation that [Europe] has at a country level. A lot of the harms around artificial intelligence, they are privacy harms. They do relate to personal data. Even something like fairness and bias—fairness in the processing of data and fairness in the output—is also a privacy matter.

If you think about what is happening with the New York Times versus OpenAI, that case could change the way that AI is going to be developed in the future. Copyright legislation certainly does apply to AI. Non-discrimination legislation already applies to AI. The FTC has been quite firm in saying we are going to monitor how companies are going to be fair in the way that they use artificial intelligence. We’re not just going to allow companies to do what they want.

In reality, that is quite a pragmatic way of doing this, even compared to the European [Union]. It’s basically saying: OK, these are the requirements. This is what needs to be done. Let’s come up with a standard. It will apply to government agencies, but in reality it will have a far greater impact, offering a framework for [the] private sector as well. And then there will be legislation governing the use of AI in specific sectors. The FTC’s done its job. I don’t think companies can wait for AI legislation to ensure that AI is fair, secure, robust and safe.

We’ve been talking about policies in the EU and U.S., which are two huge players in AI, but certainly not the only ones. In terms of coming up with regulations and making policies that impact AI globally, where do other nations and international organizations fit in?

What we’re seeing is different attempts coming from different countries. Different ways of dealing with it. Of course, AI is very much about the economy. What we have seen over recent years is every single country enacting AI strategies to grow with AI. We’re seeing massive growth around investment strategies on a country level. But alongside that, we’ve also seen a lot of the risks coming up. The risks around deepfakes, which are a big problem, especially this year, when more than 60 countries are holding elections.

Then there’s this big debate around global governance, where you have different visions. You have the idea of having something similar to the [International] Atomic Energy Agency, where you have a body under the UN, or something similar, acting as a sort of center for AI. Maybe doing assessments of other countries’ regulations. We don’t know. So that’s one idea. There is also the suggestion of a bigger role for the OECD. Some people are talking about a licensing model on a global level. There’s all sorts of things coming out. Do we need a new body? I personally think that we have a lot of bodies, and it would be good to leverage what we have.

I think there are a lot of similarities across countries. We all know that debate. We’re dealing with the same things: robustness, safety, data controls and human oversight, fairness, bias. But we’re also dealing with something else, which I think brings together a lot of countries, and that is market competition. I do not think that we can talk AI policy and AI governance without talking about how AI is actually exacerbating the market concentration of power. And this is something that the U.S., in an antitrust way, the EU, with the Digital Markets Act, and other countries as well are trying to grapple with. Is AI going to solidify and exacerbate the existing concentration of power in the digital and technological sphere, or is [it] actually opening up to new things [and players]? As it stands, you see the big players producing LLMs, producing the other big tech, because they require a huge amount of data and a huge amount of computation.

I think what we’re seeing that brings similarities between the U.S. and the EU is the understanding that there has to be some control over the market. The U.S. has been doing this in a sort-of antitrust way, which is different from the EU, which does it in a sort-of top-down approach with things like the Digital Markets Act, which looks at bringing in transparency and openness from the big tech players. I think these two issues are very much related.

FACTS + COMMENTS

A recent study from Microsoft showed employees across industries like to use AI, but they are afraid to let their bosses know.

75%: Portion of full-time office workers who say they are using AI at work—though more than three-quarters are using their own tools rather than company-provided ones

52%: Portion of employees using AI at work who are reluctant to divulge it, with many fearing it may make them look replaceable

‘Everyone is trying to not show that they’ve automated their work’: University of Pennsylvania Wharton School professor Ethan Mollick told Forbes

STRATEGIES + ADVICE

When you’re an executive, it seems you’re always busy, and that can put a strain on your mental health and well-being. Here are some tips to reduce the hustle and be happier at work.

Want to be a better leader? These five books can teach you skills including how to build community at work, balance the demands of your job with the rest of your life, and have difficult conversations.

QUIZ

Elon Musk’s xAI is about to close a mammoth funding round, Bloomberg reported. What did sources say the company’s valuation would be after it’s done?

A. $752 billion

B. $1 trillion

C. $18 billion

D. $10.8 billion

See if you got the answer right here.
