NEWS

News Title: Domestic open-source MoE large model debuts with stunning performance comparable to Llama 2-7B
Keywords: DeepSeek, domestic open source, MoE large model, comparable performance, compute savings

News Content:

China's well-known tech company DeepSeek (深度求索) recently released the first domestic open-source MoE (Mixture of Experts) large model, and its performance rivals Llama 2-7B. Named DeepSeek MoE, the model has 16 billion parameters and matches Llama 2-7B while using roughly 60% less computation.
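For readers unfamiliar with the term, a Mixture-of-Experts model routes each token to only a few of its expert sub-networks, so only a fraction of the 16 billion parameters is active per token; that sparsity is how MoE models cut computation. Below is a minimal, generic sketch of top-k expert routing. The layer sizes, expert count, and top-k choice are illustrative assumptions, not DeepSeek MoE's actual configuration, which the article does not specify.

```python
# Minimal sketch of a Mixture-of-Experts feed-forward layer with top-k routing.
# All sizes and the top_k value are illustrative, not DeepSeek MoE's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores each token against every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                               # x: (batch, seq, d_model)
        scores = self.router(x)                         # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize the selected scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)               # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 512)
print(MoELayer()(x).shape)  # torch.Size([2, 16, 512]); only 2 of 8 experts run per token
```

Because only the selected experts run for each token, per-token compute stays close to that of a much smaller dense model even though total parameter count is large, which is the trade-off the 60% figure refers to.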

DeepSeek MoE has been dubbed a "19-sided warrior," an all-round performer that is especially dominant over Llama in math and coding ability. This breakthrough marks important progress for China in the field of large AI models.

Beyond its strong performance, the newly released DeepSeek MoE is built around reducing computational cost. With AI models growing ever larger and demand for computing resources rising, this focus is especially valuable: it can help China's AI research and applications reach a wider range of scenarios and promote the adoption and development of AI technology.
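Since the weights are released as open source, they can presumably be loaded with standard Hugging Face tooling. The sketch below assumes the checkpoint identifier deepseek-ai/deepseek-moe-16b-base and that the release requires trust_remote_code for its custom model code; both are assumptions that should be checked against the official repository, and the article itself gives no usage instructions.

```python
# Hedged sketch of loading the open-source weights with Hugging Face transformers.
# The checkpoint id "deepseek-ai/deepseek-moe-16b-base" is assumed, not taken from
# the article, and a ~16B-parameter model needs tens of GB of GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-moe-16b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit the 16B parameters
    device_map="auto",            # spread layers across available devices
    trust_remote_code=True,       # assumed: the release ships custom MoE model code
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```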

[Source] https://www.qbitai.com/2024/01/113381.html
