Title: Microsoft Employee Exposes OpenAI DALL-E 3 Model Vulnerability to Generate Inappropriate Content
Keywords: inappropriate content, vulnerability, exposure
News content:
Shane Jones, a manager in Microsoft's software engineering department, recently discovered a vulnerability in OpenAI's DALL-E 3 model that allows it to generate NSFW content. After reporting the flaw, Jones was ordered to keep quiet; he ultimately chose to disclose it to the public anyway.

DALL-E 3 is an image-generation AI developed by OpenAI that produces images from users' text prompts. During testing, Jones found that certain prompts caused the model to generate content unsuitable for display in professional settings.
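To make the class of flaw concrete, here is a minimal, hypothetical sketch of a naive keyword-based prompt filter and how a paraphrased prompt slips past it. The term list, function name, and filtering approach are all illustrative assumptions for this example; nothing here reflects OpenAI's actual safety system, whose internals are not public.

```python
# Hypothetical sketch: a naive denylist prompt filter. The blocked terms
# and logic are invented for illustration only.
BLOCKED_TERMS = {"nude", "gore", "violence"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term as a word."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

# A direct request is caught...
print(is_prompt_allowed("a nude figure"))             # False
# ...but a paraphrase avoiding the exact keywords slips through --
# the general kind of gap prompt-based safeguards can leave open.
print(is_prompt_allowed("a figure wearing nothing"))  # True
```

Word-level matching like this is brittle by design, which is why production systems layer semantic classifiers and output-side checks on top of any keyword rules.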

The disclosure has sparked public concern about the regulation of AI-generated content. Some experts say such regulation needs to be strengthened to prevent the production and spread of similar material. Others argue that the technology itself is not at fault; the key lies in how it is used and regulated.

Source: https://www.ithome.com/0/748/569.htm
