
Trustworthy AI: String Of AI Fails Show Self-Regulation Doesn’t Work

If 2023 was the year of AI hype, 2024 is the year that trust in AI becomes mission-critical.

A string of AI fails has shown that a self-regulatory, pro-innovation approach to AI can leave us less safe, without actually making AI better. Without effective guardrails, we may be speeding towards another AI winter.

The EU AI Act and Biden’s AI Executive Order are raising the bar with comprehensive rules. Organizations that cut corners in the race to be first-to-market may find themselves locked out of lucrative government contracts and R&D opportunities.

AI isn’t new. Neither is AI risk. AI experts have long warned of the real-world harms poorly designed AI tools can cause. The AI Incident Database chronicles incidents dating back to 1983, when a human operator averted nuclear disaster by identifying a computer-generated false alarm. The database now contains 629 incidents.

Best sellers like Weapons of Math Destruction and documentaries like Coded Bias exposed the public to a range of AI failures, making AI harms a mainstream topic well before ChatGPT entered the hype cycle. Throughout this time, concrete measures to mitigate AI risk were proposed. The EU’s GDPR introduced new rules in 2016 to address AI bias, biometrics, profiling and algorithmic decision-making. The ACM Conference on Fairness, Accountability, and Transparency has been running since 2018. Scores of AI frameworks and standards have been published.

Yet too many organizations continue to move fast and break things. They develop and release AI products without assessing or mitigating risks, or even confirming their products work as advertised. When they rely on ‘black box’ models, they’re often hard pressed to explain how their AI works, or whether it works at all. Without a clear legal requirement that is likely to be enforced, they can reap profits with an MVP and deal with the fallout later, fixing problems only after someone has been harmed and only if regulators, public pressure or a genuine desire to do better compels them. This lawful-but-awful approach turns us into human guinea pigs. And as stories of people harmed by AI continue to make headlines, it erodes trust in AI and fuels AI fears.

Regulation aside, companies now face the dual challenge of assuaging consumer concerns about both AI and privacy. A recent Pew Research study found that 70% of those surveyed have little to no trust in companies to make responsible decisions about how they use AI in their products. The Conversation reports that Gen Z is ditching “smart” consumer goods for “dumb” ones, in part due to privacy concerns.


2023: The Year “AI Safety” Became Urgent

In 2023, “AI Safety” suddenly became an urgent policy matter. Global leaders convened at Bletchley Park for the first-ever UK AI Safety Summit. They unanimously adopted the Bletchley Declaration, pledging to focus on identifying AI Safety risks and building risk-based domestic policies. These policies include transparency and accountability mechanisms for developers; evaluation metrics; testing and evaluation tools; and support for public sector capability and scientific research.

So what changed?

“What changed is that generative AI services became incredibly popular, across the world, almost overnight,” Dr. Gabriela Zanfir-Fortuna, the Future of Privacy Forum’s Vice President for Global Privacy told me. “This brought AI to the forefront of public attention.” The risks themselves have not dramatically changed, she said. The privacy community has been grappling with these issues for decades. And Data Protection Authorities have been de facto regulating AI. “We are in fact dealing with old problems that have a new face, and, granted, some complications.”

Global Data Privacy Expert Debbie “the Data Diva” Reynolds fears businesses that haven’t yet mastered privacy and cybersecurity will forge ahead with risky AI projects. New tools make it easy to launch AI projects “on the fly,” and AI FOMO is a powerful driver. If they do, they will exponentially increase risks to their businesses and to society.

Both Current and Future AI Harms Are AI Safety Issues

At the Summit, both VP Kamala Harris and European Commission VP for Values and Transparency Věra Jourová rejected the “false debate” that pits known AI risks against the future existential risks of “frontier AI.” For victims, current AI harms can feel “existential,” and they deserve serious regulatory attention. Examples include:

  • A senior cut off from healthcare coverage due to a faulty algorithm;
  • A father wrongfully imprisoned due to biased facial recognition technology;
  • A woman threatened by an abusive partner with explicit deepfake photos.

The EU and the White House affirmed they would regulate both existing and future AI risks through comprehensive rules and a supportive ecosystem designed to harness innovation responsibly, promote healthy competition, and protect privacy, equality and fundamental rights.

In contrast, UK PM Rishi Sunak remained steadfast in the UK’s light-touch, “pro-innovation” approach to AI regulation, arguing that it would be premature to regulate before really understanding the risks of frontier AI.


The ‘Regulation Stifles Innovation’ Argument Assumes Unregulated AI Will Lead To Shared Prosperity. It Does The Opposite.

We are all crash test dummies now...

Proponents of self-regulation argue that rigid rules stifle innovation. This, in turn, hurts humanity by withholding the promises of a bright AI future. While some unfortunate incidents may occur, they argue, this is an inevitable but necessary consequence of advancing AI for the benefit of humanity.

This framing rests on the flawed assumption that innovation will ultimately lead to shared prosperity for all, according to Professor Renée Sieber and Ph.D. candidate Ana Brandusescu, who specialize in public governance of AI. “Shared prosperity is an economic concept where the benefits of innovation are distributed equitably among all segments of society, rather than disproportionately favoring specific groups, in this case, the AI industry. Rules and regulations also are needed to ensure that all benefit and share prosperity from AI. Otherwise, wealth is concentrated in the hands of industry, which stifles progress and widespread benefits.” Self-regulation didn’t work with the internet, with social media, with the food industry, or in any other impactful area.

Indeed, many of these AI harms fall disproportionately on people who are already marginalized, underserved and over-surveilled. So this cavalier, experimental approach amplifies inequality while devaluing the people it harms. It reduces them to crash test dummies for AI pet projects. And it places the additional burden of addressing algorithmic injustice on the impacted communities, who have no choice but to step into the regulatory void.

To illustrate, social justice organizer, poet and author Tawana Petty led a tireless and successful campaign against the expansion of a Detroit surveillance program called Project Greenlight. That is time and energy that could have been spent elsewhere had effective guardrails been in place.

Of course, Sieber and Brandusescu note that even with regulation, law enforcement and national security carve-outs that are “incompatible with civil rights” often apply. This is despite the fact that national security has a poor record of protecting the poor, the marginalized, and the refugee. Rather than trying to fix certain AI technologies, Sieber and Brandusescu argue that moratoria or outright bans on some of them, like facial recognition technology, would be the most responsible approach.

“They were feeling like they were being watched, but they were not being seen.” ~Tawana Petty, in an interview with Khari Johnson at RightsCon 2022.

The OpenAI And LAION 5B Debacles Show Self-Regulation Doesn’t Work

OpenAI was created to advance general purpose AI in a way that “benefits humanity,” prioritizing purpose over profit. Yet it could not resist the AI arms race. Its non-profit board was split between accelerationists and decelerationists. When competition from rival Anthropic heated up, the accelerationist side won. OpenAI released ChatGPT to the public earlier than planned. It suffered a data breach, produced ‘hallucinations’ and toxic content, and was quickly used to supercharge scams. Various regulators began investigating OpenAI. People complained that their personal or proprietary data had been used without their permission to train ChatGPT. Lawsuits piled up. And pressure mounted.

These were all foreseeable risks. AI is not a magic wand that makes all regulations disappear. Copyright, competition, consumer protection and privacy laws still apply. Meanwhile, LLM-specific risks had already been highlighted in papers such as Timnit Gebru et al.’s “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

The wisdom of allowing a company like OpenAI to self-regulate was thrown into the spotlight when the board fired CEO Sam Altman, then re-hired him, and was itself almost entirely ousted. The exact reasons are still unclear (OpenAI hasn’t been very ... open about it; investigations by UK and EU regulators may soon shed some light). This got many asking: if they can’t govern themselves in the best interests of the company, how can we trust them to govern potentially revolutionary AI in the interests of humanity?

For Pivot podcast co-host Scott Galloway, the answer is “We can’t.” OpenAI had become a de facto for-profit, and for-profits “should be trusted to do nothing else” than earn money. “Shareholder value smothered the concern for humanity in its sleep.” Galloway and co-host Kara Swisher agreed that government must regulate AI now to avoid a repeat of the government failures we saw with the internet.

The LAION 5B exposé shows that even without a profit motive, unbridled techno-optimism coupled with immature privacy and risk management practices can facilitate abuse and perpetuate harm. In its drive to “democratize” machine learning, LAION’s indiscriminate web scraping introduced Child Sexual Abuse Material (“CSAM”) into the data pipeline, re-victimizing CSAM victims while making it easier to target new ones.

LAION is a not-for-profit that aims “to make large-scale machine learning models, datasets and related code available to the general public.” The LAION 5B dataset contains over 5 billion images, scraped from the open web and accessible via links. The sheer scale of it makes it virtually impossible to effectively filter out all harmful imagery. Yet they proceeded, for the benefit of humanity.

In December 2023, Stanford published a disturbing exposé of the LAION 5B image dataset, showing that it contained CSAM images. This was a foreseeable and likely risk, not least for the reasons mentioned above. In 2021, AI researcher and Time100/AI lister Abeba Birhane and colleagues sounded the alarm regarding a smaller LAION dataset in their paper, “Multimodal datasets: misogyny, pornography, and malignant stereotypes.” And six months before the Stanford exposé made news, they again sounded the alarm in, “On Hate Scaling Laws For Data-Swamps.”

Yet it was only after the Stanford exposé that LAION announced it was suspending access to the dataset to conduct a safety review. The organization added that, after speaking with a German data protection authority, it would be required to delete all the CSAM images under the GDPR’s Right to Be Forgotten provision (Article 17).

CSAM images weren’t the only sensitive images included in the LAION 5B dataset without people’s knowledge or consent. Ars Technica reported that an artist found her private medical images in the dataset.

Had the LAION team conducted a Data Protection Impact Assessment, as required under the GDPR and privacy legislation in many jurisdictions, before compiling a dataset too large to manage effectively, they would have determined that the risks could not be sufficiently mitigated. They would then have been required to consult with their data protection authority. If they still couldn’t reduce the residual risk to an acceptable level, they would have had to either terminate the project or narrow its scope. Applying other privacy principles, like data minimization, they would have had to skip the indiscriminate crawl-and-dump approach in favour of more thoughtful data curation. This would have spared them the problems they now face, and it aligns with the recommendations Abeba Birhane et al. made in the papers that preceded the Stanford exposé.

Once again the question arises: When entities like LAION say they’re working to “benefit humanity,” who does that include?


As VP Harris noted at the AI Safety Summit, we cannot rely on all actors to voluntarily adhere to trustworthy AI principles. Sieber and Brandusescu emphasize that real accountability requires hard law backed by actual legislative and judicial power that can punish organizations for non-compliance. True, some may take voluntary action when confronted with evidence of AI fails or harms, as IBM did according to Dr. Buolamwini’s memoir Unmasking AI. But there are far too many examples like the ones above that suggest they will be a tiny minority.




2024: The Year Trust in AI Becomes Mission-Critical

The EU and the White House are broadly aligned on their regulatory objectives, according to Yonah Welker, a Board Evaluator for the European Commission focused on social AI, robotics, learning and accessibility. The EU would regulate with hard law - the EU AI Act - while the White House would use a sweeping Executive Order that builds on its Blueprint for an AI Bill of Rights.

The Executive Order binds government agencies, but it will have cascading effects for industry. Together, the two regimes introduce stringent transparency and accountability obligations, including risk assessments, conformity assessments, and audit requirements for impactful AI uses. AI providers will be expected to prove their products work as advertised - for everyone, not just select demographics - and to commit to continuous improvement. Most notably, the Executive Order raises the bar in procurement and R&D. Providers that fail to meet the high minimum standards in the Office of Management and Budget’s draft AI policy could be frozen out of lucrative government contracts and R&D opportunities as of August 2024, and eventually out of the EU market if they fail their conformity assessments.

Developing trustworthy AI takes time. It takes strong checks and balances. Effective guardrails. And a genuine desire to benefit humanity. All of humanity.

It’s a marathon, not a sprint. But if we want to avoid another AI winter, it’s mission-critical.
