
Meta accelerates voice-driven AI push


Mark Zuckerberg is enhancing the voice capabilities of Meta AI this year as the social media giant pushes ahead with plans to generate revenue from the rapidly developing technology.

Meta plans to introduce improved voice features in its latest open-source large language model, Llama 4, which is expected in the coming weeks.

One person said the company is particularly focused on making exchanges between users and the voice model feel like natural two-way conversations, allowing users to interrupt the model rather than follow a rigid turn-taking format.

The voice push comes as chief executive Zuckerberg outlines a bold plan to make the $1.7tn Silicon Valley company an "AI leader", saying 2025 will be a defining year for many of its AI products as the group competes with rivals such as OpenAI, Microsoft and Google to commercialise the technology.

Two people familiar with the matter said this has led the company to explore monetising its AI assistant, Meta AI, through agentic tasks such as bookings and video creation. It is also considering introducing paid ads or sponsored posts into the assistant's search results, one of the people said.

Zuckerberg has revealed plans to build an AI engineering agent this year with coding and problem-solving abilities comparable to those of a mid-level engineer, which he said could be a "very big market".

Meta declined to comment.

Chris Cox, the company's chief product officer, highlighted some of its plans for Llama 4 on Wednesday, saying it would be an "omni model" in which speech is handled "natively . . . instead of converting speech to text, sending the text to the LLM, getting text back and converting it to speech."

"I believe this is a huge deal for product interfaces — you can talk to the internet and ask it any question you can think of. I think we're still wrapping our heads around how powerful that is," he said at the Morgan Stanley Technology, Media & Telecom conference.

Two people familiar with the matter said Meta has also been debating what safeguards the latest Llama models should have and whether to loosen them.

The discussions have taken place amid a series of launches from competitors and warnings from the newly appointed "AI tsar" David Sacks, who has said he wants to ensure American AI models are free of political bias and not "woke".

OpenAI released its voice mode last year, focusing on giving it a distinct personality, while Grok 3, built by Elon Musk's xAI and available on the X platform, rolled out voice features to select users late last month.

The company said the Grok model was deliberately designed with fewer guardrails, including an "unhinged mode" that intentionally responds in an "objectionable, inappropriate and offensive" way.

Meta launched a less sanctimonious version of its AI model with Llama 3, the third iteration, after Llama 2 was criticised last year for refusing to answer innocuous questions.

Allowing users to interact with an AI assistant through voice commands is a major feature of Meta's Ray-Ban smart glasses, which have lately been a hit with consumers. The group has accelerated its plans to build lightweight headsets that could usurp smartphones as consumers' primary computing device.

Additional reporting by Melissa Heikkilä in London
