Today, XVERSE officially released XVERSE-V, a multimodal large model that leads mainstream industry benchmarks. Most notably, the model supports image input at any aspect ratio, giving users far more flexibility in image processing.

XVERSE-V reportedly outperforms other open-source models on several authoritative multimodal benchmarks, including 01.AI's Yi-VL-34B, ModelBest's OmniLMM-12B, and DeepSeek's DeepSeek-VL-7B. On the comprehensive benchmark MMBench, XVERSE-V also surpasses a number of well-known closed-source models, such as Google's Gemini Pro Vision, Alibaba's Qwen-VL-Plus, and Anthropic's Claude 3 Sonnet.

Notably, XVERSE has fully open-sourced XVERSE-V and permits unconditional free commercial use, which should greatly encourage developers and researchers in related fields to apply and explore the model in depth. The move is expected to advance multimodal artificial intelligence and inject new vitality into the image-processing and AI fields.

XVERSE has already published the model's details and code, along with a usage guide, so developers can quickly understand and use the model. Going forward, the company says it will continue to work deeply in multimodal AI, bringing users more innovative products and services.

The English version follows:

News Title: "XVERSE Releases Leading Multimodal Large Model XVERSE-V: Supports Arbitrary Aspect Ratio Image Input and Outperforms Competitors"

Keywords: XVERSE large model release

News Content:

XVERSE has officially released XVERSE-V, a leading multimodal large model that stands out in industry evaluations. Its most noteworthy feature is support for image input at arbitrary aspect ratios, offering users a more flexible approach to image processing.

XVERSE-V reportedly delivers superior performance across multiple authoritative multimodal benchmarks, surpassing other open-source models such as 01.AI's Yi-VL-34B, ModelBest's OmniLMM-12B, and DeepSeek's DeepSeek-VL-7B. Additionally, on the comprehensive MMBench benchmark, XVERSE-V outperforms many well-known closed-source models, including Google's Gemini Pro Vision, Alibaba's Qwen-VL-Plus, and Anthropic's Claude 3 Sonnet.

It is worth mentioning that XVERSE has made XVERSE-V fully open source and allows unconditional free commercial use. This will undoubtedly encourage developers and researchers in related fields to apply and explore the model in depth. The move is expected to advance multimodal artificial intelligence and inject new vitality into the image-processing and AI fields.

Currently, XVERSE has published the model's details and code, along with a usage guide, making it easy for developers to quickly understand and use the model. In the future, the company will continue to focus on multimodal AI, bringing users more innovative products and services.

【来源】https://mp.weixin.qq.com/s/AsNytkHXikWUXc6HLNY0vA
