人工智能助手 ChatGPT 近日再次被曝出漏洞。继此前出现的“奶奶漏洞”后,ChatGPT 又被发现存在一个更为严重的“重复漏洞”。据谷歌 DeepMind 研究人员透露,他们在研究 ChatGPT 时发现,只需在提示词中重复某个单词,ChatGPT 就有几率泄露一些用户的敏感信息。

这个漏洞的出现引发了外界对 ChatGPT 安全性的质疑。尽管 ChatGPT 是一款广泛应用于自然语言处理和生成领域的先进技术,但其安全性能仍需得到进一步完善。此次漏洞的曝光再次提醒了我们,技术的发展与安全性息息相关,尤其是在涉及用户隐私和敏感信息的领域。

针对这一问题,ChatGPT 的开发团队表示已注意到这一情况,并将尽快推出修复措施。此外,他们还呼吁用户在使用 ChatGPT 时,尽量避免输入包含敏感信息的词汇,以防止信息泄露。

英文翻译:
News title: ChatGPT leaks sensitive information due to repeating words
Keywords: ChatGPT, vulnerability, sensitive information leakage

News content:
ChatGPT, an artificial intelligence assistant, has recently been exposed to another vulnerability. After the previous “grandma vulnerability,” ChatGPT has been found to have a more serious “repetition vulnerability.” Google DeepMind researchers revealed that by repeating a word in the prompt, ChatGPT has a chance to leak some users’ sensitive information.

The appearance of this vulnerability raises questions about the security of ChatGPT. Although ChatGPT is an advanced technology widely used in natural language processing and generation, its security performance still needs to be further improved. The exposure of this vulnerability reminds us that the development of technology is closely related to security, especially in the field involving user privacy and sensitive information.

In response to this problem, the ChatGPT development team said they have noticed this situation and will soon release fixes. They also urge users to avoid inputting vocabulary containing sensitive information when using ChatGPT, preventing information leaks.

【来源】https://www.ithome.com/0/735/992.htm

Views: 1

发表回复

您的邮箱地址不会被公开。 必填项已用 * 标注