
The Prompt: The Mystery Of GPT2

Plus a move toward hyperrealism and Alaska Airlines’ new AI tool.


The Prompt is a weekly rundown of AI’s buzziest startups, biggest breakthroughs, and business deals. To get it in your inbox, subscribe here.

Hello and welcome back to The Prompt.

On Monday, a mysterious chatbot called ‘gpt2-chatbot’ surfaced on a popular LLM benchmarking platform, sparking wild speculation over its origin and possible links to OpenAI’s next major update. When Sam Altman tweeted, “I do have a soft spot for gpt2,” he threw the rumor mill into overdrive. As some AI researchers noted, the chatbot’s capabilities match, and in some cases even exceed, those of GPT-4 — OpenAI’s latest multimodal large language model.

Meanwhile, OpenAI has found itself the target of a privacy complaint filed by a Vienna-based nonprofit called Noyb, which claims ChatGPT provided incorrect information about individuals including an unnamed public figure, in violation of the EU’s GDPR privacy laws. The complaint alleges that ChatGPT repeatedly produced inaccurate information about the person’s date of birth and OpenAI refused requests to correct or remove it and declined to disclose how the information was processed. OpenAI did not respond to a request for comment.

Now, let’s get into the headlines.

MARKET MOVES

In a quarterly earnings call last Thursday, Mark Zuckerberg told investors that it would be years before Meta’s generative AI tools start making money. Meta, which recently integrated its ChatGPT competitor Meta AI across its flagship apps, plans to spend billions more on AI infrastructure and hopes the effort will eventually help improve the quality of ads, where most of its revenue comes from. Since the earnings report, Meta’s stock has dropped 12% as of Monday’s market close.

Microsoft also reported earnings Thursday. It brought in $12.5 billion in operating income from its cloud computing division, which many companies use to train AI. The company has deftly positioned itself to take advantage of the current AI frenzy by backing key players like OpenAI and Mistral and launching its own AI models. After a brief post-earnings pop, its stock price has largely stayed flat, closing at $402.25 on Monday.

ETHICS + LAW

Facebook and Instagram are teeming with thousands of sexually explicit ads for “AI girlfriend” apps, a WIRED investigation found. The ads, which feature AI-generated pornographic images of women and NSFW chats, are clear violations of Meta’s ad policies prohibiting adult content. Amid the mainstreaming of artificial intelligence, AI-generated pornographic content has flooded the internet, spreading to widely used platforms like YouTube, Reddit, Twitter, Etsy and eBay.

REGULATION + POLICY

The Department of Homeland Security has handpicked some of the most prominent names in AI and tech to serve on its new AI safety board. OpenAI’s CEO Sam Altman, Anthropic’s Dario Amodei, Microsoft’s Satya Nadella, Google’s Sundar Pichai, Nvidia’s Jensen Huang and AMD’s Lisa Su are among the more than 20 tech leaders and academics serving on the Artificial Intelligence Safety and Security Board. They will advise the agency on the safe and responsible development and use of AI within U.S. critical infrastructure and across sectors like transportation, defense and energy.


AI DEAL OF THE WEEK

Cognition Labs, the startup behind an AI coding assistant called Devin, raised $175 million in a funding round led by Founders Fund at a $2 billion valuation, according to The Information. The heady valuation comes after an early demo of Devin was criticized for not being the capable “AI software engineer” that its founders had claimed it to be. A viral video created by longtime software engineer Carl Brown showed how the AI had failed to complete technical tasks like running commands.

In an interview, Brown told me it’s sometimes difficult to train fully functional and efficient AI coding assistants because there isn't as much good data available. The internet, he claimed, hosts much less code than English text and the codebases that do exist are often incomplete.

Also notable: Coding automation startup Augment Inc. raised $227 million in a Series B round from Index Ventures, Sutter Hill Ventures and Eric Schmidt’s Innovation Ventures at a $977 million valuation.


DEEP DIVE

On Thursday, video avatar startup Synthesia launched an AI model trained on data collected from almost 1,000 professional actors that it claims can create “lifelike digital personas.” The Express-1 model, which can replicate minute facial expressions and tone, helps Synthesia’s avatars understand context so they can respond to different situations appropriately.

With its avatars used by over 55,000 companies including Amazon and Microsoft, the London-based unicorn is part of a wave of startups trying to make synthetic content appear more natural and expressive. Among them is Hume, which has developed voice-based AI models that it claims can detect human emotions.

While companies are using such tools to generate training videos for their staff or explainer videos for customers, others are experimenting with personal use. Billionaire venture capitalist Reid Hoffman recently made his own deepfake double, trained on 20 years of his video content. (You can watch him interview his AI here.) “I think these avatars, when built thoughtfully, have the potential to act as mirrors to ourselves—ones that reflect back our own ideas and personality to examine and consider,” Hoffman wrote in a LinkedIn post.

I caught up with Synthesia CEO Victor Riparbelli about the role these hyperrealistic avatars could play in the future. (This interview has been edited for brevity and clarity.)

What motivated you to create AI video avatars and what can they do?

When we launched the product in 2020, it was pretty clear that the avatars were great for internal low stakes use cases [like training videos] where the alternative was to read a long PDF document. As we make them better and better, people are leaning into using them externally as well. Now these avatars actually understand what they’re saying. So, if you're doing content in healthcare, for example, you want the avatar to feel empathetic or speak a little bit slower. If it's an avatar selling used cars, you’d want it to be more upbeat and excited.

How do you see these avatars being used in the future?

By the end of the year, I hope we have avatars that can not just talk to the camera, but that can sit in a chair, maybe have conversations. Ultimately, what we want to be able to create is videos that look and feel 100% lifelike, and to get the avatars to do exactly what you want them to do. If you want one to walk on stage and do an Apple-style keynote, it should be able to do that.

What measures are in place to reduce the misuse of this technology?

We take a pretty strong stance on what you can and cannot create. There are a lot of restricted categories. News is one of them. In order to do content moderation, we use a combination of machine learning as well as an in-house content moderation team. If someone goes against our rules, they'll get an email telling them, ‘Hey, this video has not been generated because you've broken these policies.’ And because we do content moderation at the point of creation, that content never gets generated.


YOUR WEEKLY DEMO

In its early days, ChatGPT became popular for creating travel itineraries, prompting airlines like Air Canada and Qatar Airways to roll out their own AI chatbots that can plan travel and help customers book flights. Last week, Alaska Airlines launched an AI tool that searches available flights and serves up options that cater to a traveler’s desired budget, location or travel time frame.

But when I tried out what the company called its “groundbreaking” generative AI tool, it struggled with a simple request to “take me to a place that’s good for a skiing adventure.” “Oops! Our AI surfboard caught an unexpected wave. We’re paddling back to you with care!” it responded.

After I paraphrased and tried again, it suggested flights to Boston (not the first skiing spot that comes to mind), Bozeman and Jackson Hole. But even then, it spouted nearly identical reasons for why I should travel to each of these three destinations — not very helpful. In another instance, it returned three flights to and from Jackson Hole. (ChatGPT suggested going to the Swiss Alps, citing an iconic view of The Matterhorn.) Alaska Airlines said the tool is currently being piloted and is available to a subset of its users. “Part of the testing process is identifying and addressing these kinds of bugs,” spokesperson Emily Reno said in an email.

AI chatbots may not always work in favor of the airlines that launch them. In one case, an Air Canada bot told a passenger who’d recently lost a loved one that they could receive a partial refund under the airline’s bereavement policy. But when the airline refused to pay, arguing that the chatbot was a separate legal entity responsible for its own actions, the passenger took the airline to Canada’s online version of small claims court and won.


QUIZ

Tesla’s humanoid robot, which CEO Elon Musk said could one day make more revenue than Tesla’s cars once it launches at the end of next year, is called:

  1. Atlas
  2. Optimus
  3. Phoenix
  4. MenteeBot

Check if you got it right here.


MODEL BEHAVIOR

Scammers used AI-generated voice messages to sell fake rights to a BBC TV host’s likeness to an ad agency, which then used her images for insect repellent ads, according to a report from The Guardian. But Liz Bonnin, the host of the show Our Changing Planet, had not given anyone permission to use her face. The scammers made off with £20,000 (about $25,000) after selling Bonnin’s images to the ad agency.

Manipulated media is increasingly being used for impersonation. Earlier this week, Baltimore police officials said that an audio clip of a local high school headmaster making racist comments was actually an AI-generated voice clone made by the school’s gym teacher.

Sign up here to get The Prompt weekly.
