What’s The Big Difference? ChatGPT Compares Sam Altman’s Talks At Harvard And MIT

As Sam Altman noted in his recent talks at MIT and Harvard, this is a uniquely interesting time for entrepreneurs and others drawn to the power of AI.

That made me think – why not use AI to analyze Altman's remarks, and present the findings?

So I put excerpts from both of his interviews into his own tool, ChatGPT, to see what came out.

First, I asked ChatGPT to analyze the transcripts of both remarks, and report on various aspects of what he said at the two schools.
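(If you would rather run this kind of comparison programmatically than paste transcripts into the chat window, a rough sketch using OpenAI’s Python client might look like the snippet below. The file names, model choice, and prompt wording are placeholders of my own, not the exact prompt I used in ChatGPT.)

  # A sketch of the comparison via OpenAI's API rather than the ChatGPT web interface.
  # File names, model, and prompt wording are placeholders, not what was actually used.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  harvard_excerpt = open("altman_harvard_excerpt.txt").read()  # placeholder file
  mit_excerpt = open("altman_mit_excerpt.txt").read()          # placeholder file

  prompt = (
      "Compare these two talks by Sam Altman. Summarize the content of each, "
      "then describe the overlap, the delivery, and the contrast between them.\n\n"
      f"Harvard talk excerpt:\n{harvard_excerpt}\n\n"
      f"MIT talk excerpt:\n{mit_excerpt}"
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder; any capable chat model would do
      messages=[{"role": "user", "content": prompt}],
  )

  print(response.choices[0].message.content)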

Here’s part of what came back (ChatGPT also produced longer sets of points and key themes from each talk):

Content:

  • At Harvard, Altman emphasizes the unpredictable nature of AI progress and the need for safe and aligned research. He highlights OpenAI's focus on developing AI that is "pretty smart" without constant human oversight and its mission to transform education.
  • At MIT, Altman focuses on AI's integration into society, comparing it to past technological revolutions. He stresses the importance of startups building enduring businesses and discusses OpenAI's efforts in developing more capable AI systems while prioritizing safety. He also highlights the potential of AI to accelerate scientific discovery.

Overlap:

  • Both talks underscore the importance of AI's transformative potential and the need for safety in its development.
  • Altman encourages young entrepreneurs and researchers to pursue their goals with conviction and to take risks in the rapidly evolving AI landscape.

Now, in “Content,” we see that ChatGPT puts more emphasis on Altman’s cautionary remarks in the Harvard talk. In the MIT summary, that point is relegated to the second-to-last sentence, with its brief mention of “safety.”

In “Overlap,” the AI notes that both talks included thoughts on “the need for safety in (AI) development.”

Keep in mind, I heard the full audio of both talks. And I agree with ChatGPT that Altman said a bit more on AI safety at Harvard. But in boiling things down to the level ChatGPT delivered, we might be missing some of the nuances of how Altman addressed the need for caution, deliberation, regulation, etc.

In fact, if I have one overall criticism of the ChatGPT approach, it’s oversimplification. That, and its tone and style of writing.

In other words, in terms of content, there was so much more to what Altman said! There were a lot of quotes, brilliant insights, and other material that deserves to see the light of day (check out my write-ups on each segment of both talks).

Also, with all due respect to ChatGPT, this section seemed sort of useless:

Delivery:

  • At both events, Altman delivers his insights with a measured and deliberate speaking style, emphasizing key points with pauses.
  • He engages with the audience, acknowledging their aspirations and concerns.

Yes, presumably, he did those things. And yes, from what I heard, that’s a fair characterization. I don’t know what ChatGPT would have expected: hectoring the audience? Rapid-fire monologues?

It’s almost a little humorous.

The final contrast was perhaps more salient:

Contrast:

  • The Harvard talk delves more into the specifics of OpenAI's research approach, emphasizing language models and robotics, while the MIT talk focuses on AI's broader societal implications and its impact on startups and scientific discovery.
  • At Harvard, Altman discusses the future of education and the importance of AI proficiency for students, whereas at MIT, he talks more about the potential job impacts of AI and the need for balanced regulations.

Here, ChatGPT gets high marks for accuracy. All of that is mostly what I heard, too. For example, see our piece on the ChatGPT Origin Story. Where I would give ChatGPT less of a gold star is, again, in tone. Its delivery is kind of bloodless (which makes sense for an AI with no actual blood, or brain), and it leans on the word “delve” a lot. Also, some of the report is pretty nebulous. A focus on “AI’s broader societal implications and its impact on startups and scientific discovery” cries out for explication: what, exactly, did he say about that?

So, if you want the bird’s-eye view, balanced with precision and brought to you in perfect “King’s English,” this article was for you. If you want a human response – just look through the blog feed.
