Highlights:
- Mistral, founded by AI professionals formerly at Google LLC and Meta, stands out among the startups building open-source models for public use.
- It is widely anticipated that Mixtral 8x22B will surpass the performance of Mistral AI’s earlier Mixtral 8x7B model.
Paris-based open-source generative AI startup Mistral AI has launched Mixtral 8x22B, a new large language model, as part of its strategy to remain competitive with the major players in the industry.
This recently introduced model is expected to surpass the performance of the company’s earlier Mixtral 8x7B. Many specialists regard it as a formidable rival to offerings from more prominent players, including OpenAI’s GPT-3.5 and Meta Platforms Inc.’s Llama 2.
The startup, which secured USD 415 million in funding this past December and is now valued at over USD 2 billion, describes its latest model as its most powerful to date. It features a 65,000-token context window, the amount of text the model can process and reference at once. The Mixtral 8x22B model also carries up to 176 billion parameters, the internal variables the model uses to make decisions and predictions.
Mistral, established by AI experts formerly with Google LLC and Meta, is among a group of AI startups dedicated to developing open-source models accessible to all. In a move that diverged from the norm, the company initially released the new model through a torrent link shared on the social media platform X. Mistral subsequently made Mixtral 8x22B available on the Hugging Face and Together AI platforms, allowing users to further fine-tune and adapt it for more specific applications.
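For readers who want to try the open weights, the following is a minimal sketch of loading the model with the Hugging Face transformers library. The repository id mistralai/Mixtral-8x22B-v0.1 is the one Mistral published on Hugging Face; everything else here (prompt, generation settings) is illustrative, and running the full model in practice requires multiple high-memory GPUs.

```python
# A minimal sketch: loading the open Mixtral 8x22B weights via Hugging Face
# transformers. Assumes the published repo id "mistralai/Mixtral-8x22B-v0.1"
# and enough GPU memory; device_map="auto" (which requires the accelerate
# package) shards the weights across whatever devices are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Mixtral 8x22B is"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are released openly, the same checkpoint can also serve as the starting point for the fine-tuning and adaptation mentioned above.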
The startup unveiled Mixtral 8x22B just days after its competitors released new models of their own. On Tuesday, OpenAI introduced GPT-4 Turbo with Vision, the latest addition to its GPT-4 Turbo lineup, which can process photos, drawings, and other user-uploaded images. Later the same day, Google made its Gemini Pro 1.5 LLM publicly available, offering developers a free tier limited to 50 requests per day.
In a bid to keep pace with its rivals, Meta also announced plans to unveil Llama 3 later this month.
Mixtral 8x22B is widely anticipated to surpass Mistral AI’s earlier Mixtral 8x7B model, which matched or outperformed GPT-3.5 and Llama 2 on several key benchmarks.
The model uses a sparse “mixture-of-experts” (MoE) architecture designed for efficient computation and strong performance across a broad range of tasks. The sparse MoE approach combines several specialized sub-networks, each tuned to excel at different kinds of tasks, to raise performance while keeping compute costs in check.
“At every layer, for every token, a router network chooses two of these groups (the ‘experts’) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token,” Mistral AI states on its website.
Thanks to this architecture, Mixtral 8x22B, despite its overall size, activates only about 44 billion parameters on each forward pass, making it faster and cheaper to run than models of similar scale.
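To make the quoted description concrete, here is a minimal, illustrative sketch of top-2 expert routing in PyTorch. The layer width, expert count, and expert MLP shape are assumptions chosen for illustration, not Mixtral’s actual implementation; the point is the mechanism itself: a router scores the experts for each token, only the top two run, and their outputs are combined additively.

```python
# Illustrative sketch of a sparse top-2 mixture-of-experts layer.
# Dimensions and expert internals are assumed, not Mixtral's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim: int = 512, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router (gating network): scores every expert for each token.
        self.router = nn.Linear(dim, num_experts)
        # Each "expert" is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Pick the top-k experts per token by router score.
        scores = self.router(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # top-2 experts per token
        weights = F.softmax(weights, dim=-1)             # normalize the two gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out  # outputs of the chosen experts, combined additively

tokens = torch.randn(16, 512)
print(SparseMoELayer()(tokens).shape)  # torch.Size([16, 512])
```

Because only two of the eight expert MLPs execute per token, adding experts grows the total parameter count without a proportional rise in per-token compute, which is exactly the cost-and-latency trade-off Mistral describes.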
The introduction of Mixtral 8x22B thus marks a significant milestone for open-source generative AI, giving researchers, developers, and enthusiasts the chance to experiment with some of the most advanced models without obstacles such as restricted access and substantial costs. The model is available under the permissive Apache 2.0 license.
Feedback from the AI community on social media has been largely favorable, with enthusiasts expressing optimism that it will provide substantial benefits for tasks including customer service, drug discovery, and climate modeling.
While Mistral AI has drawn considerable praise for its commitment to open-source principles, it has not been without detractors. The company’s models are classified as “frontier models,” meaning they are capable enough to carry a risk of misuse. And because its AI models are freely downloadable and can be built upon by anyone, the startup cannot control or prevent the use of its technology for nefarious purposes.