NEWS

Title: Domestic Open-source Model DeepSeek MoE Impresses with Performance
Keywords: Open-source Model, Performance, Cost-saving Computation

News content:

Recently, DeepSeek (深度求索), a well-known Chinese AI team, released the country's first domestic open-source mixture-of-experts (MoE) model, DeepSeek MoE. With 16 billion parameters, the model delivers performance comparable to the well-known Llama 2-7B model while requiring only about 40% of its computation, significantly reducing computational cost.
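As background on why a 16-billion-parameter MoE model can run at a fraction of a dense model's cost: in a mixture-of-experts layer, a router sends each token to only a few of the layer's expert sub-networks, so most parameters stay idle for any given token. The sketch below is a minimal, self-contained illustration with made-up sizes (8 experts, top-2 routing, hidden size 16); it is not DeepSeek MoE's actual architecture or code.

```python
# Minimal sketch of mixture-of-experts (MoE) routing. Conceptual illustration only;
# expert counts, top-k, and layer sizes are made up and do not reflect DeepSeek MoE.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts in the layer (illustrative)
TOP_K = 2       # experts actually run per token (illustrative)
D_MODEL = 16    # hidden size (illustrative)

# Each expert is a small feed-forward weight matrix; the router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_layer(x):
    """Route one token vector x to its top-k experts and mix their outputs."""
    scores = x @ router_w                                     # (N_EXPERTS,) router logits
    top = np.argsort(scores)[-TOP_K:]                         # indices of the k best experts
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()    # softmax over selected experts
    # Only TOP_K of N_EXPERTS weight matrices are touched, so per-token compute
    # scales with TOP_K / N_EXPERTS of the layer's total parameters.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)                    # (16,)
print(f"active experts per token: {TOP_K}/{N_EXPERTS}")
```

Because only the selected experts run for each token, total parameter count (capacity) and per-token compute are decoupled, which is the mechanism behind the "40% of the computation" figure.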

DeepSeek MoE has been hailed as an all-round "19-sided warrior", excelling in particular at mathematics and coding, where it clearly outperforms the Llama model. Its main selling point is reduced computation, offering a more efficient and economical option for China's technology sector.

Newly open-sourced by the DeepSeek team, the model performs on par with the dense Llama 2-7B model. This breakthrough undoubtedly injects new vitality into China's artificial intelligence field, and experts say the launch of DeepSeek MoE marks important progress for the country's AI technology.
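Because the model is released as open source, it can in principle be loaded with the standard Hugging Face transformers workflow. The snippet below is a hedged sketch: the repository id deepseek-ai/deepseek-moe-16b-base is an assumption based on DeepSeek's public naming and should be checked against the official release, and running a 16B-parameter model requires suitably large hardware.

```python
# Hedged sketch: loading an open-source causal LM with Hugging Face transformers.
# The repository id is an assumption; consult the official announcement for the
# exact model name and license terms before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/deepseek-moe-16b-base"  # assumed id, verify against the release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # half precision to fit the 16B weights in less memory
    device_map="auto",            # let accelerate place layers on available devices
    trust_remote_code=True,       # MoE releases often ship custom modeling code
)

inputs = tokenizer("Write a quicksort function in Python:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```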

【Source】https://www.qbitai.com/2024/01/113381.html
