Title: Fudan University Unveils “MouSi” to Help Visually Impaired “Hear the World”
Keywords: Fudan University, MouSi Model, Visual Impairment Assistance
News content:
Researchers from Fudan University’s Natural Language Processing Lab (FudanNLP) have developed a multimodal large model named “MouSi.” This model is specifically designed to assist visually impaired individuals in perceiving their surroundings more effectively. The innovative technology is available through the “Hear the World” app, which uses a simple camera and a pair of headphones to convert visual information into verbal descriptions, allowing the visually impaired to “hear” the world around them.
One of the key features of the “MouSi” model is its ability to depict scenes, whether static objects or dynamic events, accurately converting them into language descriptions. Additionally, the system includes a risk alert function that promptly warns visually impaired individuals of potential hazards, such as obstacles or steps, thereby enhancing their safety and independence.
This breakthrough not only showcases Fudan University’s research prowess in artificial intelligence but also brings good news to the visually impaired community. As the technology continues to advance and improve, the “MouSi” model is expected to help even more visually impaired individuals in the future, enabling them to explore and experience the world more freely.
Source: https://www.ithome.com/0/753/295.htm