
Making The Big Decisions: ChatGPT And The Future


After opening with a survey of his journey through the tech world, Sam Altman offered some insights into what it means to be responsible for a technology with as much raw power (social, political, and otherwise) as AI.

When asked about the most difficult decisions in crafting systems like ChatGPT, Altman said that choosing research directions is both the most difficult and the most important.

There is also, he added, the balance of trying to limit the potential for harm. Directing the behavior of the AI engine and setting limits on its use are, in some ways, the hardest part of the process, he suggested. As an example, he asked rhetorically: “Should ChatGPT give legal advice?”

“It seems very reasonable to say that it shouldn’t do it,” Altman said, citing hallucinations and other potential problems. “On the other hand, there are a lot of people in the world who can’t afford legal advice … (for whom) maybe it’s better than nothing.”

To an extent, he argued, users are smart and can learn to use technology responsibly – but they might need some help.

“What we’d like to see is a world where we don’t do things that have a high potential for misuse,” he said.

But creating that world is easier said than done. How do you make sure that the system has the right guardrails – and does that kind of security even really exist?

Altman presented an interesting idea that blends technological boundaries with personal responsibility, suggesting a kind of guide or handbook that users would have to consult to learn the right and wrong ways to use something like ChatGPT, always, of course, with the caveat: ride at your own risk.

“It’s not going to be like terms of service,” he said, acknowledging how accustomed most people are to simply ignoring the long text documents attached to platforms. The guide, as he presented it, would be something much more integral to user onboarding, however that gets enforced.

Altman gave another example: people can, and sometimes do, incite violence. Should ChatGPT be able to incite violence? No. But should ChatGPT be able to take a message that a human wrote inciting violence and translate it?

There we start to see the gray areas and problematic nature of boundaries that may not be so clear for everyone. As Altman said, you can drill down into “one category” for a long time, trying to fine-tune safety standards. But, like the human world, the AI world will have a tendency to get messy.

Another suggestion Altman offered was for systems that are highly customizable but still have defined boundaries. He pointed out that in such a system, most users won’t be comfortable with what some other users do with the tech, but the overall boundaries will be clear, and the defaults will skew toward a higher standard of use. That makes a lot of sense when you consider, for example, how we use cars or fast food.
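To make the pattern concrete, here is a minimal sketch of what “customizable within fixed boundaries, with conservative defaults” might look like in code. Everything here is hypothetical (the names `SafetySettings`, `MAX_ALLOWED_RISK`, and the numeric values are illustrative assumptions, not OpenAI’s actual API or policy mechanism); it simply shows user preferences being clamped inside hard, provider-set limits.

```python
from dataclasses import dataclass

# Hypothetical sketch, not OpenAI's implementation: users may adjust a
# setting, but it is always clamped inside hard platform boundaries.

MAX_ALLOWED_RISK = 0.8   # absolute ceiling set by the provider (illustrative value)
DEFAULT_RISK = 0.2       # conservative default: the "higher standard of use"

@dataclass
class SafetySettings:
    # Users may loosen this preference, but never past the hard ceiling.
    risk_tolerance: float = DEFAULT_RISK

    def effective_risk(self) -> float:
        # Clamp the user's preference into the allowed range [0, ceiling].
        return max(0.0, min(self.risk_tolerance, MAX_ALLOWED_RISK))

# Usage: even a very permissive user stays inside the platform's boundary.
settings = SafetySettings(risk_tolerance=0.95)
print(settings.effective_risk())  # 0.8, clamped to the hard ceiling
```

The design choice mirrors Altman’s point: the defaults are strict, customization is real, but the outer boundary is non-negotiable.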

But he made another point, too.

“Ultimately, OpenAI should not be making these determinations,” he said, implying that regulation from outside is key. Many of us have this notion, but it’s the logistics that get difficult. Who will police AI?

“The goal of the tool,” Altman said, “is to serve its user.” That sort of sums up the calculus around how we will likely use AI in the future: it’s a powerful force, but it works with humans, and for humans, too.
