Meta is set to revolutionize the user experience of its Ray-Ban smart glasses with the introduction of advanced multimodal AI features, offering a glimpse into the future of wearable interaction. The features are entering an early access phase, and they showcase the glasses’ ability to comprehend and respond to users’ surroundings through the built-in camera and microphones.
In a recent Instagram reel, Meta’s CEO, Mark Zuckerberg, provided a firsthand look at the capabilities of multimodal AI. Holding up a shirt, Zuckerberg engaged with the glasses’ AI assistant, asking for suggestions on matching pants. The AI responded by describing the shirt and proposing suitable pant options, highlighting the glasses’ potential in aiding fashion choices.
The AI assistant’s prowess was further demonstrated as it accurately described a lit-up California-shaped wall sculpture in a video shared by Meta’s CTO, Andrew Bosworth. Bosworth elaborated on additional features, including the ability to ask the assistant for help captioning photos or to provide translation and summarization, capabilities commonly found in AI-powered products from tech giants like Microsoft and Google.
Zuckerberg had previously hinted at the glasses’ conversational capabilities in an interview, suggesting that users will be able to talk to the Meta AI assistant throughout the day and ask questions about their surroundings.
The early access testing phase will be limited to a select group of users in the United States who choose to opt in. Bosworth shared instructions for opting in, allowing Meta to introduce the multimodal AI features gradually and gather user feedback.