Highlights:

  • The first improvement enables Meta AI to tailor prompt responses based on the information users provide during WhatsApp and Messenger chats.
  • The second feature enables Meta AI to consider the user’s personal information when generating responses.

Meta Platforms Inc. is enhancing its AI chatbot by introducing a new version that tailors its responses based on user-provided information.

The company shared the update in a blog post released this morning.

Meta AI is an AI assistant introduced for Facebook, Messenger, and Instagram in 2023. Like OpenAI’s ChatGPT, Meta AI can browse the web for information, translate text, generate images, and assist users with programming tasks.

The latest update introduces two new personalization features to the service.

The first improvement enables Meta AI to tailor prompt responses based on the information users provide during WhatsApp and Messenger chats. For instance, if a user asks for a list of data visualization programs compatible with Windows, the chatbot can infer that the user is on a Windows device. In future software-related queries, Meta AI will avoid recommending programs that are exclusive to macOS.
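
A minimal sketch of how this kind of context carryover could work in principle; the fact-extraction and filtering logic below is an illustrative assumption, not Meta’s implementation:

```python
# Illustrative sketch: remembering a fact inferred from chat and using it
# to filter later recommendations. Not Meta's actual implementation.

# Facts inferred from earlier messages in the conversation
inferred_context = {}

def observe_message(message: str) -> None:
    """Record simple platform hints mentioned by the user."""
    lowered = message.lower()
    if "windows" in lowered:
        inferred_context["os"] = "windows"
    elif "macos" in lowered or "mac os" in lowered:
        inferred_context["os"] = "macos"

def recommend_programs(candidates: list[dict]) -> list[str]:
    """Drop candidates that don't run on the user's inferred OS."""
    os_hint = inferred_context.get("os")
    return [
        c["name"] for c in candidates
        if os_hint is None or os_hint in c["platforms"]
    ]

observe_message("List data visualization programs compatible with Windows")
programs = [
    {"name": "Tableau", "platforms": {"windows", "macos"}},
    {"name": "OmniGraffle", "platforms": {"macos"}},  # macOS-only, filtered out
]
print(recommend_programs(programs))  # ['Tableau']
```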

The enhanced Meta AI chatbot is initially available in the U.S. and Canada on the mobile versions of Facebook, Messenger, and WhatsApp.

The second new feature enables Meta AI to consider the user’s personal information when generating responses. For example, the chatbot could personalize its reply to the prompt “find upcoming concerts nearby” based on the user’s location. Additionally, Meta AI can factor in other data, such as the genres of music videos the user has watched in the past week.
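
A hypothetical sketch of this kind of profile-conditioned prompting; the profile fields and template below are assumptions for illustration, not Meta’s actual data schema:

```python
# Hypothetical sketch: prepending profile signals to a prompt so the model
# can personalize its answer. Field names are illustrative, not Meta's schema.

user_profile = {
    "location": "Austin, TX",
    "recent_music_genres": ["indie rock", "electronic"],
}

def build_personalized_prompt(user_prompt: str, profile: dict) -> str:
    """Fold known profile signals into the prompt as context."""
    context_lines = [
        f"User location: {profile['location']}",
        f"Recently watched music genres: {', '.join(profile['recent_music_genres'])}",
    ]
    return "\n".join(context_lines) + "\n\nRequest: " + user_prompt

print(build_personalized_prompt("find upcoming concerts nearby", user_profile))
```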

This second feature is accessible on Facebook, Messenger, and Instagram.

Meta AI is driven by the Llama 3.2 series of large language models, which the company open-sourced in September. This LLM series includes two multimodal models with 11 billion and 90 billion parameters. These models are capable of processing text as well as analyzing images uploaded by the user.
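
Because the weights are openly available, the models can also be run outside Meta’s apps. A sketch using Hugging Face’s transformers library, assuming a recent version with Llama 3.2 support and approved access to the gated weights:

```python
# Sketch: running the open 11-billion-parameter multimodal Llama 3.2 model.
# Assumes transformers >= 4.45 and approved access to the gated weights.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder path; any local image works
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False,
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```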

The two models are built on an enhanced version of Meta’s previous Llama 3.1 LLM series, which was limited to processing text. To develop the multimodal Llama 3.2, the company modified the original text-only design by incorporating an “adapter” module that enables image processing. This module consists of several interconnected layers of artificial neurons.
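
Conceptually, the simplest version of such an adapter is a small stack of neural-network layers that projects the vision encoder’s output into the dimensionality the language model expects. A minimal PyTorch sketch with illustrative dimensions (Llama 3.2’s actual adapter is considerably larger and also wires cross-attention layers into the language model):

```python
# Minimal PyTorch sketch of an adapter that maps image features into the
# language model's embedding space. Dimensions are illustrative.
import torch
from torch import nn

class VisionAdapter(nn.Module):
    def __init__(self, vision_dim: int = 1280, text_dim: int = 4096):
        super().__init__()
        # Several interconnected layers of artificial neurons (an MLP)
        self.layers = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # (batch, image_patches, vision_dim) -> (batch, image_patches, text_dim)
        return self.layers(image_features)

adapter = VisionAdapter()
patches = torch.randn(1, 576, 1280)  # fake vision-encoder output
print(adapter(patches).shape)        # torch.Size([1, 576, 4096])
```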

LLMs convert the data they process into mathematical structures known as embeddings. Some embeddings are designed to store text, while others are optimized for images. The adapter module in Llama 3.2 bridges the technical differences between these two types of embeddings, enabling the model series to process images despite being based on the text-only Llama 3.1 series.
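
Once projected, image embeddings and text token embeddings share the same dimensionality and can be handled as one sequence, which is what lets a text-trained model attend to visual content. A simplified, standalone illustration:

```python
# Simplified illustration: projected image embeddings and text token
# embeddings share one dimensionality, so they can form a single sequence.
import torch

text_dim = 4096                                   # illustrative hidden size
text_embeddings = torch.randn(1, 32, text_dim)    # 32 text tokens
image_embeddings = torch.randn(1, 576, text_dim)  # 576 projected image patches

combined = torch.cat([image_embeddings, text_embeddings], dim=1)
print(combined.shape)  # torch.Size([1, 608, 4096])
```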

Last week, Meta announced plans to launch a new version of its LLM series, called Llama 4, later this year. Meta AI will likely be upgraded to the new model in a future update. Additionally, the company is reportedly planning to integrate an internally developed search engine into the chatbot for web browsing.