Speaking on Joe Rogan’s podcast, OpenAI CEO Sam Altman offered a tantalizing view of the future where neural devices bridge the gap between thought and communication.
Appearing on the Joe Rogan Experience, Altman detailed his vision for a future in which neural devices, paired with advanced AI such as GPT-5 or GPT-6, could potentially visualize and display a person’s inner monologue.
He noted, “I think we can do [things] like reading your thoughts with an external device at some point, [like] reading your internal monologue.”
Altman said these devices could display a “soup” of words right in a user’s field of vision.
“In your field of vision, the words in response were being displayed, that would be the pong. That’s still soup, [but] that’s a very valuable tool to have,” he commented, emphasizing that such a development is “inevitable.”
These technologies are reminiscent of Elon Musk’s Neuralink, which has met numerous stumbling blocks, including being rejected by the US Food and Drug Administration (FDA) in late 2022.
These devices, which fall under the umbrella of brain-machine interfaces (BMIs), have advanced in leaps and bounds this year, with AI models helping researchers ‘read thoughts’ non-invasively.
Just this week, researchers used AI to decode speech from non-invasive brain recordings. AI-integrated BMI technology shows immense promise in restoring speech and movement to people with brain injuries or neurodegenerative diseases such as ALS.
Regarding our growing symbiosis with technology, Altman remarked, “We’re already a little down that path. Like if you take away someone’s phone and they have to go function in the world today, they’re at a disadvantage relative to everybody else.”
He speculated that in the near future, virtual reality might become so immersive that some people would seldom want to disconnect, likening its pull to the smartphone’s presence in our lives today.