Recently, Shane Jones, a manager in Microsoft's software engineering division, disclosed a major vulnerability in DALL-E 3, the latest image-generation model from the well-known AI research lab OpenAI. According to Jones, the model can, under certain conditions, generate inappropriate NSFW (Not Safe For Work) content, sparking renewed debate over AI ethics and safety.

After discovering the issue, Jones reported the vulnerability through normal channels, expecting OpenAI to fix it promptly and prevent potential misuse. To his surprise, however, he subsequently received a gag order forbidding him from discussing the matter publicly. Faced with this dilemma, Jones ultimately chose to defy the order and disclose the security issue publicly, so that the public and the industry would be fully aware of, and alert to, the risks of AI technology.

The incident not only highlights the potential risks of AI development but also exposes the conflicts between ethics and responsibility that companies may face when handling such issues. Neither OpenAI nor Microsoft has issued an official response, but the matter has already drawn wide attention in the tech and news media. Industry experts argue that for high-impact AI tools, transparency and accountability should be at the core of corporate conduct, to ensure the healthy development of the technology and preserve public trust. Source: IT之家 (IT Home).

The English version follows:

**News Title:** "OpenAI's DALL-E 3 Model Found to Have Security Flaw Enabling Inappropriate Content; Employee Alleges Gag Order"

**Keywords:** OpenAI DALL-E, Security Flaw, Inappropriate Content

**News Content:**

Title: Microsoft Engineer Discloses Security Vulnerability in OpenAI’s DALL-E 3 Model, Sparks Industry Concerns

Recently, Shane Jones, a Manager in Microsoft’s Software Engineering division, revealed a significant security flaw in OpenAI’s latest image-generation model, DALL-E 3. According to Jones, the model can produce inappropriate, NSFW (Not Safe For Work) content under certain circumstances, sparking a new debate about AI ethics and security.

Upon discovering the issue, Jones followed standard protocol by reporting the vulnerability to his superiors, hoping OpenAI would promptly address it to prevent potential misuse. However, to his surprise, he subsequently received a gag order, forbidding him from publicly discussing the matter. Faced with this dilemma, Jones ultimately decided to defy the directive and disclose the security concern to the public, ensuring awareness and vigilance around AI technology usage.

This incident not only underscores the potential risks inherent in AI development but also exposes the ethical and responsibility dilemmas companies might encounter when handling such issues. Neither OpenAI nor Microsoft has issued an official statement on the matter yet, but it has already drawn significant attention in the tech and journalism sectors. Industry experts are calling for transparency and accountability to be at the core of corporate conduct, especially with high-impact AI tools, to foster the healthy development of technology and maintain public trust. Source: IT Home.

[Source] https://www.ithome.com/0/748/569.htm
