Beijing, China – September 8, 2024

In a significant development in the field of artificial intelligence and programming, Zero-One万物 (01.AI) has officially open-sourced its latest Yi-Coder series of models. This move marks another important milestone in Zero-One万物's commitment to advancing the state of AI-assisted programming tools.

The Rise of Yi-Coder

Yi-Coder is the latest member of the Yi series of models, which has been making waves in the AI domain. Built for strong code generation, Yi-Coder is poised to become a game-changer for developers worldwide.

A Brief Introduction to Yi-Coder

The Yi-Coder series is designed specifically for coding tasks and is offered in two variants, with 1.5B and 9B parameters. The Yi-Coder-9B model has been shown to outperform other models under 10B parameters, such as CodeQwen1.5 7B and CodeGeeX4 9B, and its performance is even comparable to the much larger DeepSeek-Coder 33B model.

Key Features of Yi-Coder

  1. Small Parameters, Strong Performance: Despite its relatively small parameter size, Yi-Coder excels in various tasks, including code generation, code understanding, debugging, and code completion. Its compact size makes it easy to use and deploy on the edge.

  2. 128K Long Sequence Modeling: Yi-Coder can handle context lengths of up to 128K tokens, effectively capturing long-term dependencies and making it suitable for understanding and generating complex, project-level code.

  3. Robust Code Generation: Supporting 52 major programming languages, Yi-Coder performs exceptionally well in code generation and cross-file code completion (see the usage sketch after this list).
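
To make these capabilities concrete, here is a minimal sketch of prompting the chat variant for code generation with the Hugging Face transformers library. The repository id 01-ai/Yi-Coder-9B-Chat, the chat-template call, and the generation settings are assumptions for illustration; the official README remains the authoritative reference.

```python
# Minimal sketch, assuming the chat model is hosted on the Hugging Face Hub
# under "01-ai/Yi-Coder-9B-Chat" (an assumption; see the official README).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-Coder-9B-Chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask the chat model for a small piece of code.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running the 9B variant in bfloat16 typically needs a GPU with roughly 20 GB of memory; the 1.5B variant is the lighter option for the edge-style deployments mentioned above.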

Model Achievements

Yi-Coder has achieved remarkable results on code generation benchmarks. On LiveCodeBench, Yi-Coder-9B-Chat reached a pass rate of 23.4%, making it the only model under 10B parameters to exceed a 20% pass rate.

Code Editing and Completion Capabilities

On CodeEditorBench, Yi-Coder-9B-Chat achieved outstanding results, consistently outperforming other models on both the Primary and Plus subsets. In code completion, Yi-Coder likewise delivered impressive results, surpassing similarly sized models both with and without retrieved information.
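
The article does not describe the exact prompt format used for completion with retrieved information, but the general pattern can be sketched as follows: snippets retrieved from other files in the repository are prepended as context ahead of the unfinished code. The helper below is purely hypothetical and is not Yi-Coder's or the benchmark's official format.

```python
# Hypothetical sketch of building a cross-file completion prompt: retrieved
# snippets from other files are added as commented context before the code to
# complete. This is an illustration, not an official prompt format.

def build_cross_file_prompt(retrieved_snippets, current_file_prefix):
    """Concatenate retrieved cross-file snippets (as comments) ahead of the
    unfinished code from the current file."""
    context_lines = []
    for path, snippet in retrieved_snippets:
        context_lines.append(f"# --- retrieved from {path} ---")
        context_lines.extend(f"# {line}" for line in snippet.splitlines())
    return "\n".join(context_lines) + "\n\n" + current_file_prefix


retrieved = [
    ("utils/geometry.py", "def area(w, h):\n    return w * h"),
]
prefix = "from utils.geometry import area\n\ndef total_area(rects):\n    "
prompt = build_cross_file_prompt(retrieved, prefix)
# `prompt` would then be fed to a base Yi-Coder model for completion.
print(prompt)
```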

Long Sequence Modeling Performance

By adapting the long-sequence evaluation methods popular for natural-language models, Yi-Coder successfully completed a 128K-token long-sequence evaluation task, doubling the 64K length used in the CodeQwen1.5 long-sequence evaluation.
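
As a rough illustration of what such a long-sequence test involves, the sketch below buries a single distinctive "needle" function inside a long synthetic source file and forms a prompt asking the model to recall it. The filler content, prompt wording, and scale are assumptions; the actual evaluation protocol is not detailed in this article.

```python
# Simplified illustration of a long-context retrieval test: hide one "needle"
# function among thousands of filler functions and ask the model about it.
# All details here are assumptions for illustration only.
import random

def make_haystack(num_filler_funcs=4000, seed=0):
    """Build a long synthetic Python file with one needle function hidden
    at a random position among boilerplate filler functions."""
    random.seed(seed)
    needle = "def secret_answer():\n    return 20240904\n"
    fillers = [
        f"def filler_{i}(x):\n    return x + {i}\n" for i in range(num_filler_funcs)
    ]
    fillers.insert(random.randrange(len(fillers)), needle)
    return "\n".join(fillers)

haystack = make_haystack()
question = "What value does secret_answer() return? Answer with the number only."
prompt = haystack + "\n\n# " + question
# `prompt` (potentially 100K+ tokens, depending on num_filler_funcs) would be
# sent to Yi-Coder; the check is whether the reply contains 20240904.
print(len(haystack), "characters of context")
```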

Mathematical Reasoning Capabilities

Yi-Coder has also demonstrated impressive mathematical reasoning capabilities. Across seven math benchmark datasets, Yi-Coder-9B achieved an average accuracy of 70.3%, surpassing DeepSeek-Coder 33B's 65.8%.

Getting Started with Yi-Coder

If you’re eager to try Yi-Coder, simply click the Read More link to access the Yi-Coder README, which includes specific download and usage instructions.
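
For a quick local try-out before consulting the README, a minimal sketch along the following lines should work, assuming the weights are published on the Hugging Face Hub under the 01-ai organization; the model name below is an assumption, so defer to the README for the exact identifiers.

```python
# Quick-start sketch using the smaller 1.5B base model for local testing.
# The repo id "01-ai/Yi-Coder-1.5B" is assumed; see the README for specifics.
from transformers import pipeline

generator = pipeline("text-generation", model="01-ai/Yi-Coder-1.5B", device_map="auto")

# Give the base model a code prefix and let it complete the function body.
completion = generator(
    "def fibonacci(n):\n    ",
    max_new_tokens=64,
    do_sample=False,
)[0]["generated_text"]
print(completion)
```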

Conclusion

The release of the Yi-Coder series by Zero-One万物 represents a significant step forward in AI-assisted programming. With its impressive performance, Yi-Coder is poised to become a valuable tool for developers worldwide. As AI continues to evolve, tools like Yi-Coder are likely to play a crucial role in shaping the future of programming.


>>> Read more <<<
