Black Myth: Wukong Claims Another "Victim": AI Search Engine Data Leak Sparks Controversy
Beijing, August 23, 2024 – The Chinese AAA game Black Myth: Wukong has taken the internet by storm, and its popularity has now produced another "victim" incident. After a game streamer made headlines for suffering motion sickness while playing, the AI feature of Microsoft's Bing search engine caused a blunder of its own: it displayed the personal mobile number of a Jifeng.com (机锋网) employee as the game's official customer service line, sparking a controversy over personal data exposure.
The incident occurred on August 21. When users searched "Black Myth Wukong customer service" on Bing, the results showed the employee's personal phone number rather than an official customer service line. Two other incorrect numbers were also labeled as customer service contacts, including the phone number and email address of the copyright department of Yicai (第一财经).
The victim said he received nearly 20 calls within 5 hours. Although the listing has since been removed and he has filed an appeal, the erroneous "Black Myth Wukong customer service" result appeared on Bing's first results page for a time. The Bing team has now corrected the error.
The episode exposes the shortcomings of AI search engines in how they retrieve and process information. Bing, the world's second-largest search engine, covers 36 countries and regions and serves more than 600 million users. In February 2023, Microsoft announced it was integrating ChatGPT technology into the new Bing, built on OpenAI's GPT-4 model. Yet AI models still make mistakes when retrieving and processing information, surfacing inaccurate results and, as in this case, even exposing personal data.
Notably, Microsoft is not the only company adding AI-generated results to search pages. Google and the Arc Search browser have launched similar features, and they too have produced inaccurate AI search results, sometimes giving outright wrong information and advice.
Experts warn that the "hallucination" problem of AI search engines cannot be fully solved. When models learn from massive datasets, they may absorb erroneous information and biases along with everything else, leading to incorrect outputs. Moreover, these models lack genuine understanding and judgment; unable to reason logically the way humans do, they remain prone to error.
The incident has also raised concerns about the security, reliability, and ethics of AI search engines. As the technology continues to evolve, ensuring that AI search engines are safe and reliable, and preventing AI-generated misinformation from harming users, will be critical areas of focus.
[Source] https://mp.weixin.qq.com/s/Iszki-MQKUZFXPMSPXaSUA