**News Title:** "Security Vulnerability Uncovered in OpenAI DALL-E 3: Employee Reveals It Can Generate Inappropriate Content, Is Silenced Afterward"
**Keywords:** OpenAI DALL-E, vulnerability exposed, inappropriate content
**News Content:**
Title: Microsoft Employee Exposes OpenAI DALL-E 3 Model Security Flaw, Faces ‘Gag Order’
Recently, Shane Jones, a manager in Microsoft's software engineering division, disclosed a significant security issue: while using OpenAI's latest image-generation model, DALL-E 3, he found a potential vulnerability that allows the model to generate NSFW (Not Safe For Work) images. The finding poses new challenges to ethical and safety standards in artificial intelligence.
According to IT Home, Shane Jones reported the issue to OpenAI right away, expecting a timely fix. To his surprise, however, he was subsequently given a 'gag order' instructing him not to discuss the matter publicly, which raised his concerns about transparency and ethical responsibility.
Undeterred by the pressure, Shane Jones chose to disclose the issue to the public to draw attention to the security risks associated with AI models. His courageous move has sparked widespread attention online and prompted a reevaluation of the incident by both OpenAI and Microsoft.
As leading companies in the AI sector, OpenAI and Microsoft have long faced scrutiny over the safety and ethics of their products. This incident not only exposes a technical flaw but also highlights the delicate balance between internal corporate communication on sensitive issues and the public's right to know. While OpenAI and Microsoft have yet to issue an official statement, the episode is likely to have a profound impact on future AI safety standards and corporate crisis-management strategies.
Source: https://www.ithome.com/0/748/569.htm