On January 22, Zero-One Everything (零一万物) announced a new addition to its Yi model family: Yi Vision Language (Yi-VL), a multimodal large language model now open-sourced to developers and researchers worldwide. Built on the Yi language model, Yi-VL delivers strong performance in image-text understanding and dialogue generation, and is expected to further advance AI technology.
Yi-VL ships in two sizes, Yi-VL-34B and Yi-VL-6B, to suit different application scenarios. On the English benchmark MMMU and the Chinese benchmark CMMMU, the models took a clear lead with top evaluation scores, demonstrating strong capability on complex, interdisciplinary tasks and broad application potential.
By open-sourcing Yi-VL, Zero-One Everything both underscores its technical standing in AI and gives researchers and developers worldwide a valuable resource for research in multimodal learning and natural language processing, driving innovation in these fields. The release should accelerate the convergence of AI and multimodal information processing, opening new possibilities for future intelligent applications.
Source: 机器之心 (Machine Heart)
English version:
**News Title:** “Zero-One Everything’s Yi-VL Multimodal Large Model Goes Open Source: Sets New Records on the MMMU and CMMMU Leaderboards”
**Keywords:** Yi-VL Open Source, Multimodal Leadership, Interdisciplinary Strength
**News Content:**
**Title:** Zero-One Everything’s Multimodal Large Model Yi-VL Open Sourced Globally, Topping the MMMU and CMMMU Leaderboards
On January 22nd, a major announcement shook the tech sector: Zero-One Everything revealed that its Yi series of models welcomed a new addition, the Yi Vision Language (Yi-VL) multimodal language model, which is now open source for global developers and researchers. This innovative model, derived from the Yi language model, demonstrates exceptional performance in image-text understanding and dialogue generation, potentially propelling the advancement of AI technology.
Yi-VL offers two versions, Yi-VL-34B and Yi-VL-6B, tailored to diverse application requirements. On the English benchmark MMMU and the Chinese benchmark CMMMU, the model took a clear lead with top evaluation scores, signifying robust capability and wide-ranging potential in tackling complex, interdisciplinary tasks.
By making Yi-VL open source, Zero-One Everything underscores its technological leadership in AI and offers a valuable resource to researchers and developers worldwide. They can now leverage the Yi-VL model for multimodal learning and natural language processing research, fostering innovation in these fields. Undoubtedly, this open-source event will hasten the integration of AI and multimodal information processing technologies, unlocking new possibilities for future intelligent applications.
**Source:** 机器之心 (Machine Heart)
[Source] https://www.jiqizhixin.com/articles/2024-01-22-10