News Report

**Alibaba's Qwen Team Releases Qwen1.5-MoE-A2.7B, an Innovative MoE Model That Is Both Powerful and Efficient**

**Keywords:** Qwen1.5-MoE-A2.7B, Performance Boost, Cost Reduction

Recently, the Qwen team at Alibaba's DAMO Academy made a major advance in artificial intelligence, releasing Qwen1.5-MoE-A2.7B, the first Mixture-of-Experts (MoE) model in the Qwen series. The model has drawn wide attention across the industry for its strong performance and efficient use of parameters.
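
The efficiency of an MoE model comes from sparse activation: a router sends each token to only a few of many expert networks, so only a fraction of the total weights take part in any single forward pass. The following is a generic top-k routing sketch in NumPy to illustrate the idea; the expert count, dimensions, and routing details are illustrative and are not Qwen's actual implementation.

```python
import numpy as np

def moe_layer(x, router_w, experts_w, k=2):
    """Generic top-k Mixture-of-Experts layer (illustration only, not Qwen's code).

    x         : (d,) hidden state for one token
    router_w  : (num_experts, d) routing weights
    experts_w : list of (d, d) expert weight matrices
    k         : experts activated per token (k << num_experts)
    """
    scores = router_w @ x                      # one routing score per expert
    top_k = np.argsort(scores)[-k:]            # indices of the k best-scoring experts
    gates = np.exp(scores[top_k] - scores[top_k].max())
    gates /= gates.sum()                       # softmax over the selected experts only
    # Only the k selected experts run; every other expert's weights stay idle,
    # which is why "activated parameters" can be far below total parameters.
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top_k))

rng = np.random.default_rng(0)
d, num_experts = 64, 8
router = rng.standard_normal((num_experts, d))
experts = [rng.standard_normal((d, d)) for _ in range(num_experts)]
print(moe_layer(rng.standard_normal(d), router, experts, k=2).shape)  # (64,)
```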

With only 2.7 billion activated parameters, Qwen1.5-MoE-A2.7B matches the performance of today's leading 7-billion-parameter models such as Mistral 7B and Qwen1.5-7B. Notably, where Qwen1.5-7B has 6.5 billion non-embedding parameters, Qwen1.5-MoE-A2.7B has only 2 billion, a reduction of roughly two-thirds and a substantial compression of model size.
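
As a quick sanity check on the figures quoted above (only the headline totals come from the announcement; everything else here is plain arithmetic):

```python
# Headline figures from the announcement (non-embedding parameters).
qwen15_7b_non_emb = 6.5e9    # Qwen1.5-7B
qwen15_moe_non_emb = 2.0e9   # Qwen1.5-MoE-A2.7B

reduction = 1 - qwen15_moe_non_emb / qwen15_7b_non_emb
print(f"Non-embedding parameter reduction: {reduction:.1%}")  # 69.2%, i.e. roughly two-thirds

# Activated parameters per token versus a dense 7B model that uses all of its
# weights on every token.
activated, dense = 2.7e9, 7.0e9
print(f"Activated fraction relative to a dense 7B model: {activated / dense:.0%}")  # 39%
```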

The model also cuts training cost sharply, by 75% compared with Qwen1.5-7B, so resource consumption drops substantially while high performance is maintained. More impressive still, Qwen1.5-MoE-A2.7B runs inference 1.74 times faster, which translates directly into better response times and user experience in real applications.
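
To make those two ratios concrete, the arithmetic below applies them to made-up baseline numbers; only the 75% and 1.74x figures come from the announcement, the baselines are placeholders.

```python
COST_REDUCTION = 0.75   # training cost reported as 75% lower than Qwen1.5-7B
SPEEDUP = 1.74          # inference reported as 1.74x faster

baseline_training_cost = 100_000   # hypothetical cost units for Qwen1.5-7B
baseline_throughput = 1_000        # hypothetical tokens/second for Qwen1.5-7B

print("MoE training cost :", baseline_training_cost * (1 - COST_REDUCTION))  # 25000.0 units
print("MoE throughput    :", baseline_throughput * SPEEDUP, "tokens/s")      # 1740.0
print("Time per token    :", 1 / (baseline_throughput * SPEEDUP), "s")       # ~0.00057
```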

The release marks important progress by the Qwen team in balancing efficiency and capability in large models, and it offers a new reference point for model optimization in the AI field. The successful development of Qwen1.5-MoE-A2.7B not only showcases China's leading position in AI technology but also lays the groundwork for more efficient and more intelligent model designs in the future. The announcement was made officially on the ModelScope community and has prompted lively discussion and anticipation among industry professionals.
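
For readers who want to try the model, the snippet below is a minimal loading sketch using the Hugging Face `transformers` API. The repository id `Qwen/Qwen1.5-MoE-A2.7B` and the requirement for a recent `transformers` release with Qwen MoE support are assumptions based on how other Qwen1.5 checkpoints are published; check the official model card on ModelScope or Hugging Face before relying on it.

```python
# Minimal sketch, assuming the checkpoint is published as "Qwen/Qwen1.5-MoE-A2.7B"
# and that the installed transformers version supports the Qwen MoE architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"  # assumed repository id; verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Mixture-of-Experts models are efficient because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```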


Source: https://mp.weixin.qq.com/s/6jd0t9zH-OGHE9N7sut1rg
