
News Report

BEIJING – ByteDance's Doubao LLM team recently announced the open-sourcing of COMET, a communication-optimization system developed in-house to tackle the communication bottleneck that Mixture-of-Experts (MoE) models face in distributed training. Using a novel fine-grained computation-communication overlapping technique, COMET achieves up to a 1.96x speedup for a single MoE layer and an average end-to-end efficiency gain of 1.71x. COMET has already been deployed on production clusters with tens of thousands of GPUs, providing strong support for efficient MoE training and saving millions of GPU hours in aggregate.

As a key direction for scaling model size, the MoE architecture incurs enormous communication overhead in distributed training. Under the Megatron-LM framework, communication can account for up to 40% of total step time for the Mixtral-8x7B model, severely constraining training efficiency and cost. COMET aims to resolve this bottleneck, improving MoE training efficiency and reducing compute consumption.
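To see why a 40% communication share matters, a simple Amdahl-style bound is helpful. The arithmetic below is illustrative and not from the article: it computes the ideal speedup if a fraction of the communication time can be hidden behind computation.

```python
# Illustrative arithmetic (not from the article): if communication takes
# a fraction c of total step time and a fraction f of that communication
# can be hidden behind computation, the ideal speedup is 1 / (1 - c * f).
def overlap_speedup(comm_fraction, hidden_fraction):
    return 1.0 / (1.0 - comm_fraction * hidden_fraction)

# With communication at 40% of step time (the Mixtral-8x7B figure above)
# and perfect overlap, the upper bound from hiding communication alone
# is about 1.67x.
print(round(overlap_speedup(0.40, 1.0), 2))  # → 1.67
```

This shows why overlapping communication with computation, rather than merely speeding up the network, is the high-leverage optimization for such workloads.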

COMET's Core Advantage: Fine-Grained Computation-Communication Overlapping

Existing MoE systems typically rely on coarse-grained computation-communication pipelines, which make it difficult to use compute resources efficiently; under dynamic routing and on heterogeneous hardware, the performance loss is significant. COMET achieves more precise, fine-grained computation-communication overlapping through two key mechanisms:

  1. Shared-tensor dependency resolution: COMET partitions the shared tensors passed between MoE layers along the token dimension or the hidden dimension, aligning the smallest units of communication and computation. It also dynamically reorders the computation of data chunks, processing local chunks first while asynchronously fetching remote tokens, thereby eliminating wait latency.

  2. Adaptive workload allocation: COMET dynamically allocates GPU thread-block resources to precisely balance communication and computation loads and eliminate pipeline bubbles. By encapsulating communication and computation tasks in separate thread blocks, COMET manages resources at the operator level and adjusts thread-block allocation in real time according to input size and parallelism strategy.
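The first mechanism above can be sketched at a high level. The following is a minimal illustration, not the COMET implementation: tokens are split into chunks, local chunks are computed first (they need no communication), and each remote chunk is fetched asynchronously while the previous chunk is being processed. The function names (`fetch_remote`, `expert_compute`) are hypothetical stand-ins for the all-to-all transfer and the expert FFN.

```python
# Minimal sketch (not the COMET implementation) of fine-grained
# compute-communication overlap via chunking and asynchronous prefetch.
from concurrent.futures import ThreadPoolExecutor

def fetch_remote(chunk):
    # Stand-in for pulling a remote token chunk (e.g., an all-to-all recv).
    return [float(x) for x in chunk]

def expert_compute(chunk):
    # Stand-in for the expert FFN applied to one token chunk.
    return [x * 2 for x in chunk]

def overlapped_moe_layer(local_chunks, remote_chunks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as comm:
        # Kick off the first remote fetch before any compute begins.
        pending = comm.submit(fetch_remote, remote_chunks[0]) if remote_chunks else None
        # Local chunks are computed first: no communication needed,
        # and the fetch above runs concurrently in the background.
        for chunk in local_chunks:
            results.append(expert_compute(chunk))
        # For remote chunks, prefetch chunk i+1 while computing chunk i,
        # so communication latency is hidden behind computation.
        for i in range(len(remote_chunks)):
            ready = pending.result()
            pending = (comm.submit(fetch_remote, remote_chunks[i + 1])
                       if i + 1 < len(remote_chunks) else None)
            results.append(expert_compute(ready))
    return results

print(overlapped_moe_layer([[1, 2]], [[3], [4]]))  # → [[2, 4], [6.0], [8.0]]
```

In the real system this scheduling happens at the GPU thread-block level rather than with host threads, but the dependency structure, computing what is already local while remote data is in flight, is the same idea.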

Validated on Ten-Thousand-GPU Clusters, with Significant Training-Efficiency Gains

The Doubao LLM team evaluated COMET's end-to-end performance on several large-scale MoE models. On an experimental cluster of 8 H800 GPUs, COMET significantly reduced the forward latency of MoE models such as Mixtral-8x7B and Qwen2-MoE, with reductions of 31.8%-44.4%, and remained stable across different parallelism strategies, input sizes, and hardware environments.

Open-Sourced to Advance Large-Model Development

COMET's core code has been open-sourced, with the aim of sharing the technical results with the community and jointly advancing large-model technology. COMET can also be combined with UltraMem, the new sparse model architecture the Doubao LLM team released earlier, for joint optimization.

High Review Scores at MLSys 2025

COMET's strong performance and novel design have been recognized by the academic community. The accompanying paper, "Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts," received review scores of 5/5/5/4 at MLSys 2025.

About the Doubao LLM Team:

The Doubao LLM team is ByteDance's team dedicated to large-model research and development, committed to building efficient, easy-to-use large-model solutions and advancing the application of AI across domains.

(End)
