Aya Expanse 8B and 32B open-weights release delivers state-of-the-art multilingual performance
AI Impact Summary
Cohere For AI introduces the Aya Expanse models (8B and 32B), designed to close multilingual performance gaps through data arbitrage, multilingual preference training, safety tuning, and model merging. In benchmarks across 23 languages, Aya Expanse 32B outperforms Gemma 2 9B and Llama 3.1 70B, while Aya Expanse 8B beats Gemma 2 9B and Ministral 8B, demonstrating strong cross-language transfer at smaller scales. The open-weights release, together with internal components such as Arbiter (a reward model) and Reward-Based Routing, offers a concrete blueprint for building multilingual models, though production teams must still plan for safety, governance, and integration with existing pipelines. This capability upgrade enables faster experimentation with, and potential deployment of, multilingual assistants and content localization, giving teams a clear path to stronger multilingual features.
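The summary names Reward-Based Routing among the released components without describing its mechanics. The sketch below illustrates only the general best-of-n idea behind reward-based selection; the reward heuristic, function names, and candidate strings are all hypothetical, not Cohere's implementation.

```python
def score(candidate: str) -> float:
    """Stand-in reward function: a toy heuristic favoring longer,
    punctuation-terminated answers. A real system would call a learned
    reward model (the release mentions Arbiter for this role)."""
    reward = float(len(candidate.split()))
    if candidate.endswith((".", "!", "?")):
        reward += 5.0
    return reward

def route_best(candidates: list[str]) -> str:
    """Return the candidate with the highest reward score (best-of-n)."""
    return max(candidates, key=score)

# Hypothetical candidate completions from different models/samples.
candidates = [
    "Bonjour",
    "Bonjour, comment puis-je vous aider aujourd'hui ?",
    "Salut tout le monde",
]
print(route_best(candidates))
```

Under this toy heuristic, the second candidate wins because it is the longest and ends with terminal punctuation.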
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info