Highlights:
- Microsoft researchers modeled Pi-3 Mini on Llama 2, the popular LLM series created by Meta Platforms Inc.
- Microsoft claims that Pi-3 Mini outperformed a 70-billion-parameter version of Llama 2 on MMLU, an evaluation benchmark of roughly 16,000 questions spanning 57 subjects.
Microsoft open-sources Pi-3 Mini, a small language model with 3.8 billion parameters that can outperform neural networks more than ten times its size.
According to the company, Pi-3 Mini is small enough to run on a 2022 iPhone. By contrast, the largest language models available are often too big to fit on even the most advanced data center graphics card.
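A rough back-of-envelope calculation shows why parameter count matters so much here. The 4-bit and 16-bit precision figures below are illustrative assumptions, not numbers from Microsoft, but they convey the scale of the difference:

```python
def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights, in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

# A 3.8-billion-parameter model quantized to 4 bits per weight:
print(f"{weight_memory_gb(3.8e9, 4):.1f} GB")    # ~1.9 GB, within a modern phone's RAM

# A 70-billion-parameter model at 16-bit precision:
print(f"{weight_memory_gb(70e9, 16):.0f} GB")    # ~140 GB, beyond any single GPU's memory
```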
Pi-3 Mini is built on the decoder-only Transformer architecture, a popular language model design. A Transformer is a type of neural network that interprets the meaning of a word by examining its context; standard Transformer models do so by looking at the text both before and after the target word.
The decoder-only Transformer is a variant of the architecture that works with less contextual information: it considers only the text that comes before a word, not the text that comes after it. Compared with standard Transformer models, decoder-only models are often better at text generation tasks and require less hardware to run.
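That "only look backwards" behavior is implemented with a causal mask in the attention layers. The following NumPy sketch is a minimal illustration of the masking pattern, not Microsoft's implementation:

```python
import numpy as np

def causal_attention(q, k, v):
    """Scaled dot-product attention with a causal mask: each position can
    attend only to itself and to earlier positions, never to later ones."""
    seq_len, dim = q.shape
    scores = q @ k.T / np.sqrt(dim)                      # pairwise similarity scores
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[future] = -np.inf                             # hide tokens that come later
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over visible positions
    return weights @ v

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = causal_attention(x, x, x)   # row i depends only on tokens 0..i
```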
Microsoft researchers modeled Pi-3 Mini on Llama 2, the popular LLM series created by Meta Platforms Inc. The researchers reused Llama 2's tokenizer, the component that converts text into a format a language model can process more readily. Because the two designs are so similar, open-source tools built for Llama 2 can also be used with Pi-3 Mini.
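As a sketch of what a tokenizer does, the snippet below loads Llama 2's tokenizer through the Hugging Face transformers library (the Llama 2 repository is gated, so this assumes you have been granted access):

```python
from transformers import AutoTokenizer

# Load Llama 2's tokenizer (gated repository; access must be granted by Meta).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

text = "Small language models can run on a phone."
token_ids = tokenizer.encode(text)                     # text -> integer token IDs
tokens = tokenizer.convert_ids_to_tokens(token_ids)    # IDs -> human-readable pieces
print(tokens)                                          # e.g. ['<s>', '▁Small', '▁language', ...]
print(tokenizer.decode(token_ids))                     # IDs -> text round trip
```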
The underlying architecture is not what allows Pi-3 Mini to outperform much larger LLMs. Instead, “the innovation lies entirely in our dataset for training,” said the Microsoft researchers who developed the model.
The dataset is a scaled-up version of the one the company used to create Pi-2, its previous-generation small language model. The dataset used for Pi-3 Mini contains 3.3 trillion tokens of data; a token is a unit of text typically a few letters or digits long.
Pi-3 Mini was trained on heavily filtered web data. Microsoft says its researchers kept only data that could improve the model's reasoning abilities and removed everything else, including web pages that contained some useful information but not enough to meaningfully aid AI learning.
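Microsoft has not published the exact filtering pipeline, but conceptually it amounts to scoring web pages for reasoning or educational value and discarding the rest. The sketch below is purely hypothetical; the scorer and threshold are placeholders:

```python
from typing import Callable, Iterable

def filter_for_reasoning(pages: Iterable[str],
                         quality_scorer: Callable[[str], float],
                         threshold: float = 0.8) -> list[str]:
    """Keep only pages whose estimated reasoning/educational value clears a threshold."""
    return [page for page in pages if quality_scorer(page) >= threshold]

# Placeholder scorer: a real pipeline would use a trained quality classifier.
def toy_scorer(page: str) -> float:
    return 1.0 if "theorem" in page or "because" in page else 0.1

kept = filter_for_reasoning(
    ["The score was 3-2.", "The proof works because each step preserves the invariant."],
    toy_scorer,
)
print(kept)   # only the second page survives the filter
```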
Microsoft trained Pi-3 Mini in two phases. In the first, the model was fed the filtered web dataset its researchers had assembled. In the second, it received synthetic data (that is, training data generated by an AI) along with a more heavily filtered subset of the dataset from the first phase.
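The outline below sketches that two-phase schedule. The function names, data mixtures, and stub training loop are placeholders for illustration, not Microsoft's published recipe:

```python
from typing import Sequence

def train(model: dict, dataset: Sequence[str]) -> dict:
    """Stand-in for a full training run over one data mixture."""
    model.setdefault("seen_examples", []).extend(dataset)
    return model

def train_in_two_phases(model: dict,
                        filtered_web: Sequence[str],
                        heavily_filtered_web: Sequence[str],
                        synthetic: Sequence[str]) -> dict:
    # Phase 1: broad language and world knowledge from filtered public web text.
    model = train(model, filtered_web)
    # Phase 2: a more selective web subset blended with AI-generated examples.
    model = train(model, list(heavily_filtered_web) + list(synthetic))
    return model

model = train_in_two_phases({}, ["web page A", "web page B"], ["web page A"], ["synthetic Q&A pair"])
```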
Microsoft compared Pi-3 Mini's performance against two larger open-source language models. One of them was a version of Meta's Llama 2 with 70 billion parameters. Microsoft claims that Pi-3 Mini outperformed Llama 2 on MMLU, an evaluation benchmark of roughly 16,000 questions spanning 57 subjects.
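MMLU is a multiple-choice benchmark, so scoring reduces to asking the model to pick one of four options per question and measuring accuracy. The sketch below shows that scoring loop with made-up questions and a stubbed-out model call:

```python
def pick_answer(question: str, options: list[str]) -> int:
    """Stand-in for querying a language model; returns the index of its chosen option."""
    return 0

questions = [
    {"q": "Which planet is closest to the sun?",
     "options": ["Mercury", "Venus", "Earth", "Mars"], "answer": 0},
    {"q": "What is 7 * 8?",
     "options": ["54", "56", "64", "48"], "answer": 1},
]

correct = sum(pick_answer(item["q"], item["options"]) == item["answer"] for item in questions)
print(f"accuracy: {correct / len(questions):.0%}")   # the benchmark score is this fraction
```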
Pi-3 Mini achieved that result while requiring far less hardware than Meta's model: Microsoft researchers were able to run it on an iPhone 14 during testing.
In the paper describing Pi-3 Mini, the researchers also previewed two larger, as-yet-unreleased variants of the model, with 7 billion and 14 billion parameters. Both outperformed Pi-3 Mini on the MMLU test, scoring roughly six and nine percentage points higher, respectively.