Latest News

News Title: Domestic open-source MoE large model matches international models in performance
Keywords: domestic open source, MoE large model, strong performance, compute savings

News Content:
Recently, the DeepSeek team launched DeepSeek MoE, the first domestic open-source MoE (Mixture of Experts) large model, with performance rivaling the internationally known Llama 2-7B. This 16-billion-parameter expert model holds its own against the dense Llama 2-7B while using only about 40% of the computation, and it clearly outperforms Llama 2 in mathematics and coding.

DeepSeek MoE has been dubbed a "19-sided warrior": its strength lies in delivering this level of performance while saving compute. The release marks an important breakthrough in large-model research in China's AI field. It not only raises China's international standing in open-source large models but also provides strong support for domestic AI research and applications.
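To make the compute-savings claim concrete: in a Mixture-of-Experts layer, a router activates only a few experts per token, so most of the model's parameters sit idle on any given forward pass. That is why a 16B-parameter MoE can cost less per token than a dense 7B model. The snippet below is a minimal, generic sketch of top-k expert routing in PyTorch; it is not DeepSeek MoE's actual architecture or code, and the layer sizes, expert count, and top-k value are arbitrary placeholders.

```python
# Minimal sketch (not DeepSeek's code) of top-k expert routing in an MoE layer:
# a router scores experts per token and only the top_k experts actually run.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Each token passes through only top_k of n_experts feed-forward blocks (here 2 of 8),
# so per-token expert FLOPs are roughly top_k / n_experts of running every expert.
tokens = torch.randn(5, 64)
print(TinyMoELayer()(tokens).shape)  # torch.Size([5, 64])
```

Here 2 of 8 experts run per token, so the expert compute per token is roughly a quarter of running all experts; the actual savings for DeepSeek MoE depend on its own expert configuration, which this sketch does not attempt to reproduce.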

Source: https://www.qbitai.com/2024/01/113381.html
