Meta hasn’t pivoted from the metaverse to AI, says Mark Zuckerberg. He believes that AR glasses will become an important hardware platform for AI assistants.
Mark Zuckerberg said yesterday in a new interview with The Verge that his company is aiming to develop artificial general intelligence.
At the same time, he disputed that Meta has pivoted from the metaverse to AI. “I don’t know how to more unequivocally state that we’re continuing to focus on Reality Labs and the metaverse,” Zuckerberg said, noting that Meta still invests more than $15 billion a year in the metaverse.
In a video, Zuckerberg explains how AI and the metaverse could one day converge in the not-too-distant future, making the heavy investments in the metaverse worthwhile:
"People are also going to need new devices for AI and this brings together AI and the metaverse. Because over time, I think a lot of us are going to talk to AIs frequently throughout the day. And I think a lot of us are going to do that using glasses because glasses are the ideal form factor for letting an AI see what you see and hear what you hear. So it's always available to help out."
Zuckerberg reiterates what CTO Andrew Bosworth said in his 2023 end-of-year review: that glasses are the ideal form factor for AI assistants because they show the world from a human perspective, are socially acceptable, can be worn all day, and allow wearers to focus on the real world rather than the hardware.
The basic idea here is that glasses are a better hardware platform for AI assistants than current experiments like the Humane Ai Pin or Rabbit R1. With the Ray-Ban Meta smart glasses, the company has already launched glasses with preliminary AI functions, with more to come.
According to Bosworth, the unexpected emergence of large language models could change Meta's AR roadmap. The company's smart glasses could become far more useful than originally anticipated, while Meta's first true AR glasses could put a much earlier and stronger focus on contextual AI than planned.