No Language Left Behind: Scaling Human-Centered Machine Translation

NLLB Team: Marta R. Costa-jussà*, James Cross*, Onur Çelebi*, Maha Elbayad*, Kenneth Heafield*, Kevin Heffernan*, Elahe Kalbassi*, Janice Lam*, Daniel Licht*, Jean Maillard*, Anna Sun*, Skyler Wang*,§, Guillaume Wenzek*, Al Youngblood*

Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran

Pierre Andrews†, Necip Fazil Ayan†, Shruti Bhosale†, Sergey Edunov†, Angela Fan†,‡, Cynthia Gao†, Vedanuj Goswami†, Francisco Guzmán†, Philipp Koehn†,¶, Alexandre Mourachko†, Christophe Ropers†, Safiyyah Saleem†, Holger Schwenk†, Jeff Wang†

Meta AI, § UC Berkeley, ¶ Johns Hopkins University

Abstract

Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.

∗. Equal contribution, alphabetical order
†. Research and engineering leadership, equal contribution, alphabetical order
‡. Corresponding Author. Email: angelafan@fb.com.
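To make the "conditional compute" idea mentioned in the abstract concrete: a Sparsely Gated Mixture-of-Experts layer replaces a Transformer's single feed-forward block with several expert feed-forward networks, and a learned router sends each token to only a few of them, so capacity grows without a proportional increase in per-token compute. The sketch below is a minimal, illustrative top-2 MoE layer in PyTorch; the class name and sizes are hypothetical and this is not the NLLB implementation, which is released in the fairseq repository linked above.

```python
# Minimal sketch of a top-2 Sparsely Gated Mixture-of-Experts layer.
# Hypothetical, illustrative code; not the NLLB/fairseq implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # Each expert is an independent position-wise feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); flatten batch/sequence dims before calling.
        gate_logits = self.router(x)                    # (tokens, experts)
        weights, indices = gate_logits.topk(2, dim=-1)  # each token picks 2 experts
        weights = F.softmax(weights, dim=-1)            # normalize the two gate values
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e            # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route 10 token vectors through 4 experts; only 2 run per token.
layer = Top2MoELayer(d_model=16, d_ff=64, num_experts=4)
y = layer(torch.randn(10, 16))
print(y.shape)  # torch.Size([10, 16])
```

Each token here incurs the cost of two experts regardless of how many experts exist, which is the property that lets MoE models scale parameter count across thousands of translation tasks while holding per-token compute roughly fixed.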