
Title: Fudan University Team Launches “MouSi” AI Model to Enhance Visually Impaired People’s Access to the World
Keywords: Artificial Intelligence, Visual Enhancement, Accessibility Technology

Researchers at Fudan University's Natural Language Processing Laboratory recently announced a multimodal large model named “Fudan MouSi” (复旦・眸思) and, built on it, an app called “Hearing the World” (听见世界), designed to give visually impaired people more convenient assistance in daily life. By combining advanced computer vision and natural language processing, the app converts the scenes captured by the camera into spoken descriptions, helping visually impaired users better perceive their surroundings.

Beyond describing scenes, “Hearing the World” also identifies and flags potential hazards such as obstacles and changing road conditions, greatly improving the safety and independence of visually impaired users when they travel. The app's launch marks another important breakthrough for artificial intelligence in accessibility and adds fresh momentum to building a more inclusive society.
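The core pipeline described above is conceptually simple: a camera frame goes into a vision-language model, and the resulting text description is read aloud. The sketch below illustrates that general idea only; it is not the MouSi model or the “Hearing the World” app, and the off-the-shelf components used here (OpenCV, a publicly available BLIP captioning model from Hugging Face, and the pyttsx3 speech engine) are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch of a "describe what the camera sees" loop.
# NOT the Fudan MouSi model or the "Hearing the World" app; the captioning
# model and TTS engine below are stand-ins chosen for this example.
import cv2                      # camera capture
import pyttsx3                  # offline text-to-speech
from PIL import Image
from transformers import pipeline

# Generic image-captioning model (assumption: a BLIP base checkpoint).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)       # default camera
try:
    ok, frame = cap.read()      # grab one BGR frame
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        caption = captioner(Image.fromarray(rgb))[0]["generated_text"]  # image -> text
        tts.say(caption)        # text -> speech
        tts.runAndWait()
finally:
    cap.release()
```

A real assistive system would run continuously, cope with poor lighting, and prioritise hazard warnings over generic scene captions, which is where a dedicated multimodal model such as MouSi would differ from this toy loop.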

【Source】https://www.ithome.com/0/753/295.htm
