After starting with a survey of his journey through the tech world, Sam Altman gave us some insights on what it means to be responsible for a technology that has as much raw power (social power, political power, etc.) as AI.
When asked what the most difficult decisions have been in crafting systems like ChatGPT, Altman said the research directions are the most difficult, and the most important.
Also, he added, there’s the balance of trying to limit the potential for harm. Directing the behavior of the AI engine, and creating limits for use, are, in some ways, he suggested, the hardest part of the process. As an example, he asked rhetorically: “should ChatGPT give legal advice?”
“It seems very reasonable to say that it shouldn’t do it,” Altman said, citing hallucinations and other potential problems. “On the other hand, there are a lot of people in the world who can’t afford legal advice … (for whom) maybe it’s better than nothing.”
To an extent, he argued, users are smart and can learn to use technology responsibly – but they might need some help.
“What we’d like to see is a world where we don’t do things that have a high potential for misuse,” he said.
But creating that world is easier said than done. How do you make sure that the system has the right guardrails – and does that kind of security even really exist?
Altman presented an interesting idea that blends technological boundaries with personal responsibility: a kind of guide or handbook that users would have to consult to learn the right and wrong ways to use something like ChatGPT, always, of course, with a caveat: ride at your own risk.
“It’s not going to be like terms of service,” he said, acknowledging how accustomed most people are to ignoring the long text docs attached to platforms. The guide, as he presented it, would be something much more integral to user onboarding, however that gets enforced.
Altman gave us another example, as well: people can and sometimes do incite violence. Should ChatGPT be able to incite violence? No. However, he asked, should ChatGPT be able to take a message that a human wrote, inciting violence, and translate it?
There we start to see the gray areas and problematic nature of boundaries that may not be so clear for everyone. As Altman said, you can drill down into “one category” for a long time, trying to fine-tune safety standards. But, like the human world, the AI world will have a tendency to get messy.
Another suggestion Altman offered was for systems that are highly customizable but still have defined boundaries. He pointed out that in such a system, most users won’t be comfortable with what some other users are going to do with the tech, but the overall boundaries will be clear, and the defaults will be set toward a higher standard of use. That makes a lot of sense when you consider, for example, how we use cars or fast food.
But he made another point, too.
“Ultimately, OpenAI should not be making these determinations,” he said, implying that regulation from outside is key. Many of us have this notion, but it’s the logistics that get difficult. Who will police AI?
“The goal of the tool,” Altman said, “is to serve its user.” That sort of sums up the calculus around how we will likely use AI in the future: it’s a powerful force, but it works with humans, and for humans, too.