
**News Title:** “Security Vulnerability in OpenAI’s DALL-E 3 Exposed: Model Can Generate Inappropriate Content, Whistleblowing Employee Silenced”

**Keywords:** OpenAI DALL-E, Vulnerability Disclosed, Inappropriate Content

**News Content:**

Title: Microsoft Engineer Reveals OpenAI DALL-E 3 Model’s Security Flaw, Faces ‘Gag Order’ Before Going Public

Recently, Shane Jones, a manager in Microsoft’s software engineering division, exposed a significant security issue in OpenAI’s latest image-generation model, DALL-E 3. He discovered a potential vulnerability that allows the model to generate content that is not safe for work (NSFW), raising a new round of concerns about artificial intelligence (AI) safety and ethics.

According to ITHome, after identifying the flaw, Shane Jones followed standard procedure and reported it to OpenAI, hoping for a prompt fix. To his surprise, however, he then received a ‘gag order’ from his superiors barring him from discussing the matter publicly. The episode raised questions for him about transparency and ethical responsibility.

Despite the pressure, Shane Jones ultimately chose to break his silence and make the vulnerability public, in order to alert users and the industry to the potential risks of AI technology. He emphasized that developers and managers of such tools have a responsibility to ensure that AI systems are safe and compliant, so that they cannot be misused or abused.

OpenAI has yet to issue an official response to Shane Jones’s allegations, but the incident will undoubtedly prompt a deeper conversation within the industry about how AI models are regulated and audited. As rapidly advancing AI technology brings new conveniences, ensuring that it operates within ethical and legal boundaries has become an urgent challenge.

Source: https://www.ithome.com/0/748/569.htm

