Reflections on AI

Lex Fridman interview with Sam Altman

Mar 29, 2023

Fridman and Altman (CEO of OpenAI) spoke a great deal about AI in general, ChatGPT, GPT-4, and the overall approach OpenAI is taking to AI safety.

Broadly, Altman is very optimistic about the potential for AI to transform the world in positive ways, e.g. healthcare and education. He seemed most excited about the potential for AI to advance scientific research and help humanity develop a deeper understanding of the universe.

Altman does appreciate the risks. He no longer thinks OpenAI should release its source code publicly. He does believe they should continue rolling out capabilities (hence ChatGPT) so the world has time to adjust and acclimate to what is possible with AI, where the dangers are, what needs to be regulated, and so on.

Altman does not believe GPT-4 is AGI (artificial general intelligence). He is skeptical that the current architecture of systems like GPT-4 can lead to AGI, though he wouldn’t rule it out. He said something like: “If an oracle told me that GPT-7 will end up being AGI, I would think: ‘Oh, interesting, okay.’” His expectation is that fundamentally new ideas will need to come along to get us the rest of the way there.

One more interesting point came up, although I don’t recall the conversation being mostly about automation or economic impact: Altman pointed out that computers became better than humans at chess years ago, and yet we don’t watch computers play chess. In fact, watching humans play chess is more popular than it has ever been (his claim; I haven’t fact-checked it). This suggests that just because AI can perform a task better than humans, it doesn’t always follow that we will collectively replace ourselves with AI for that task.