Highlights:
- Along with Aya, Cohere is launching the most extensive multilingual instruction dataset to date, comprising 513 million data points across 114 diverse languages.
- The Aya model grew out of the Aya Project, a large-scale initiative launched in January 2023 with the participation of over 3,000 academics from 119 countries.
Cohere for AI, the nonprofit research lab run by artificial intelligence startup Cohere Inc., has unveiled Aya, an open-source, “massively multilingual” large language model capable of operating in 101 different languages.
The company claims that, at more than 100 languages, Aya more than doubles the language coverage of existing open-source models.
The AI team announced, “Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by most advanced models on the market today.”
Alongside Aya, the company is releasing the most extensive multilingual instruction dataset to date, comprising 513 million data points across 114 languages, which researchers can integrate into their own models. To help AI technology serve wider audiences, the dataset includes rare human annotations from native speakers of underserved languages around the world.
The Aya model grew out of the Aya Project, a large-scale initiative launched in January 2023 with the participation of over 3,000 academics from 119 different countries. The project’s goal is a multilingual generative AI model built from the contributions of individuals worldwide. Although many models concentrate on English, only about 5% of people speak it at home, which means the AI field undervalues a great many other languages.
“As LLMs, and AI generally, have changed the global technological landscape, many communities across the world have been left unsupported due to the language limitations of existing models. This gap hinders the applicability and usefulness of generative AI for a global audience, and it has the potential to further widen existing disparities that already exist from previous waves of technological development,” said the Cohere for AI team.
To that end, the publicly released dataset includes 204,000 rare, human-curated annotations in 67 languages, spanning a wide range of linguistic applications. AI models use annotations to provide context for language-processing data, enabling more accurate categorization and comprehension. Researchers will therefore have access to a very high-quality dataset they can use to build reliable AI language models, including for linguistic analysis and language preservation.
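In practice, an instruction annotation of this kind is a human-written prompt paired with a human-curated completion, tagged with its language. A minimal Python sketch of such a record (the field names and sample records here are illustrative assumptions, not Aya’s actual schema):

```python
# Hypothetical sketch of an instruction-style annotation record, as used in
# multilingual instruction-tuning datasets. Field names are illustrative,
# not the actual Aya dataset schema.
from dataclasses import dataclass


@dataclass
class InstructionAnnotation:
    language: str  # language code of the text, e.g. "so" for Somali
    inputs: str    # the prompt written by a human annotator
    targets: str   # the human-curated completion

def filter_by_language(records, lang_code):
    """Select annotations in one language, e.g. to audit per-language coverage."""
    return [r for r in records if r.language == lang_code]

# Toy records (invented examples for illustration only)
records = [
    InstructionAnnotation("so", "Sheeg caasimadda Soomaaliya.", "Muqdisho."),
    InstructionAnnotation("uz", "O'zbekiston poytaxti qayerda?", "Toshkent."),
    InstructionAnnotation("so", "Ku qor salaan af-Soomaali.", "Salaan."),
]
print(len(filter_by_language(records, "so")))  # → 2
```

Curating records like these per language is what lets researchers measure and improve coverage for underserved languages.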
According to the language research organization Ethnologue, more than 7,000 languages are spoken worldwide today. About 40% of them are endangered, many with fewer than 1,000 speakers, while just 23 languages, English among them, account for more than half of the world’s population.
Research and development benefit from initiatives like Aya that add new languages to a massively multilingual dataset, making AI technology more inclusive and accessible to more communities and giving academics richer material to work with.
The dataset also extends coverage to more than 50 previously underrepresented languages, such as Somali and Uzbek, that proprietary models rarely include. While prominent languages like English, French, and Russian are well covered by commercial and open-source models alike, Aya’s developers made a concerted effort to include a large number of underrepresented languages in their dataset.
According to the researchers, Aya outperforms existing open-source models such as mT0 and BigScience’s Bloomz on benchmarks and compares favorably with other massively multilingual models. They report that Aya was consistently preferred over other “leading open-source models,” winning 75–90% of human evaluations and 80–90% of simulated win-rate comparisons.