Eagle 7B : Soaring past Transformers with 1 Trillion Tokens Across 100+ Languages (RWKV-v5)

Key Points:

  • Release of the RWKV-v5 Eagle 7B language model, with significant multi-lingual performance improvements and a commitment to making AI accessible in a wide range of languages
  • The project’s plan to expand its multi-lingual dataset to cover a substantial portion of the world’s population, along with a roadmap for continued development and innovation
  • Impressive benchmark results and architectural scalability that signal a significant advancement in language model technology


The RWKV Project recently announced the release of RWKV-v5 Eagle 7B, a 7.52B-parameter language model released under the Apache 2.0 license. The model shows significant multi-lingual performance improvements across 23 languages, with particularly notable gains on common sense reasoning and world knowledge benchmarks. It also represents a substantial performance jump over the previous version (RWKV-v4) and is positioned to narrow the gap with the top models on the market.


RWKV supports a diverse range of languages, covering approximately 4 billion people, or 50% of the world’s population. The team plans to expand the multi-lingual dataset further, with the eventual goal of covering 100% of the world’s languages so that no language is left behind. Through this inclusive approach, the project aims to provide strong, affordable language-specific models that run on accessible hardware, in line with its goal of making AI support everyone.


The release of the RWKV-v5 Eagle 7B model marks a significant milestone, not only for its benchmark results but also for showing that the architecture scales to competitive performance at lower inference cost. The project also outlines plans for continued development: a detailed paper on the RWKV-v5 architecture changes, further token training, and new models built on the v5 Eagle base, underscoring its commitment to ongoing advancement and innovation in the field of language models.
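The lower-inference-cost claim comes from RWKV’s RNN-style design: rather than keeping a key/value cache that grows with sequence length, each token updates a fixed-size state. The sketch below is an illustrative toy in NumPy, not the actual RWKV-v5 code; the function names and the simple per-channel decay are assumptions, but it shows the general linear-recurrence idea behind constant-memory generation:

```python
import numpy as np

def linear_attention_step(state, k, v, decay):
    """Update a fixed-size outer-product state with one token's key/value.

    state: (d, d) running summary of all past tokens
    k, v:  (d,) key and value vectors for the current token
    decay: (d,) per-channel forgetting factor in (0, 1)
    """
    # Decay the old state, then fold in the new key/value outer product.
    return decay[:, None] * state + np.outer(k, v)

def readout(state, q):
    """Read the current token's output from the state with a query vector."""
    return q @ state  # shape (d,)

rng = np.random.default_rng(0)
d = 8
state = np.zeros((d, d))
decay = np.full(d, 0.9)

# Process a sequence token by token: memory stays O(d*d) regardless of
# sequence length, which is where the lower inference cost comes from.
for _ in range(100):
    k, v, q = rng.standard_normal((3, d))
    state = linear_attention_step(state, k, v, decay)
    out = readout(state, q)

print(state.shape, out.shape)  # the state never grows with sequence length
```

A transformer processing the same 100 tokens would hold 100 cached key/value pairs per layer; here the per-layer memory is a single d×d matrix no matter how long the sequence runs.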





©2024 The Horizon