In the field of artificial intelligence, progress on multimodal large language models (MLLMs) is drawing wide attention. As large language models (LLMs) continue to advance, MLLMs, with their strong ability to process both image and text information, show great potential in areas such as autonomous driving and medical assistance. However, when understanding and describing image content, these models often make errors or hallucinate, producing responses that do not match the input image, for example mentioning objects that are not present or misidentifying attributes.
Research has found that this phenomenon stems largely from an imbalance in data volume and training time across the different training stages of multimodal large models. The language module is typically pre-trained on massive amounts of text, whereas the modality-alignment stage relies on far smaller datasets and much shorter training, and this imbalance biases the model when it processes image information.
To address this challenge, Renjie Pi, a PhD student at the Hong Kong University of Science and Technology, together with his advisors Professor Tong Zhang and Professor Xiaofang Zhou, proposed an innovative solution: Bootstrapped Preference Optimization (BPO). BPO aims to mitigate hallucination in multimodal large models while improving their visual understanding. The method uses two strategies to automatically construct negative samples for preference learning, exposing the model's over-reliance on pre-trained language knowledge, and takes the original data annotations as positive samples for preference fine-tuning.
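To make the idea concrete, here is a minimal sketch of how such bootstrapped preference pairs could be built and optimized, assuming a DPO-style objective and a LLaVA-like model exposing a `generate` method; the helper names, the image-noise corruption, and the hyperparameters are illustrative assumptions, not the authors' actual implementation, which relies on two dedicated negative-construction strategies.

```python
import torch
import torch.nn.functional as F

def build_preference_pair(model, image, prompt, gold_answer, noise_std=0.5):
    """Bootstrap one (chosen, rejected) pair for preference learning.

    chosen:   the original ground-truth annotation (positive sample)
    rejected: the model's own answer to a corrupted image, which tends to
              fall back on the language prior and thus surfaces hallucinations
    """
    noisy_image = image + noise_std * torch.randn_like(image)  # weaken the visual evidence
    rejected = model.generate(images=noisy_image, prompt=prompt)  # hypothetical MLLM API
    return {"prompt": prompt, "image": image,
            "chosen": gold_answer, "rejected": rejected}

def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style objective applied to the bootstrapped pairs.

    Inputs are summed token log-probabilities of the chosen/rejected responses
    under the policy (pi) and a frozen reference model (ref).
    """
    margin = (pi_logp_chosen - ref_logp_chosen) - (pi_logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()
```

The key design choice this sketch illustrates is that the rejected response is produced by the model itself under weakened visual evidence, so the preference loss directly penalizes answers driven by the language prior rather than by the image.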
Experimental results show that multimodal models fine-tuned with BPO achieve significant gains on multiple benchmarks, performing particularly well on MM-Vet, LLaVA-Wild, and Object HalBench. This work not only offers a new perspective on the hallucination problem of multimodal large models, but also lays the groundwork for building more reliable and accurate multimodal AI systems.
The study will be published at the 2024 European Conference on Computer Vision (ECCV) and is covered by 机器之心's AIxiv column. The AIxiv column is dedicated to publishing academic and technical content; over the past several years it has covered more than 2,000 articles from leading laboratories at universities and companies around the world, effectively promoting academic exchange and dissemination. If you have outstanding work to share, you are welcome to submit it or contact the column for coverage.
Renjie Pi's findings not only offer new ideas for optimizing and applying multimodal models, but also provide theoretical support and practical guidance for addressing hallucination in real-world applications. The proposed solution marks an important step for multimodal large models in both research and practice, and is expected to help drive the further development and adoption of AI technology.
News Title: “Addressing Bias in Multimodal Large Models: New Approaches to Enhance Image Understanding”
Keywords: Multimodal large models, Text pre-training, Over-reliance solutions
Source: https://www.jiqizhixin.com/articles/2024-07-27