近日,谷歌 DeepMind 研究人员在研究 ChatGPT 时发现了一个严重的问题。他们发现,在提示词中只要其重复某个单词,ChatGPT 就有几率曝出一些用户的敏感信息。这一漏洞与之前的“奶奶漏洞”相比,更为严重。
据报道,这一漏洞是在 ChatGPT 的训练数据中发现的。在训练数据中,ChatGPT 受到了大量的敏感信息的影响,导致其出现重复漏洞。这一漏洞可能会对用户的隐私造成严重威胁,因此需要高度警惕。
值得注意的是,这一漏洞不仅存在于 ChatGPT 中,也可能存在于其他类似的AI系统中。因此,需要对类似的AI系统进行安全评估和漏洞检测,以确保其不会被用于非法活动。
英文标题:ChatGPT again exposed a security vulnerability: Repeated words can reveal sensitive information
关键词:ChatGPT, vulnerability, sensitive information
新闻内容:
Recently, Google DeepMind researchers found a serious problem while studying ChatGPT. They discovered that when a certain word is repeated in the prompt, ChatGPT has the chance of revealing some users’ sensitive information. This vulnerability is more severe than the “grandma’s mistake” issue.
According to reports, this vulnerability was found in ChatGPT’s training data. The training data for ChatGPT was influenced by a large amount of sensitive information, leading to the emergence of the repeated word vulnerability. This vulnerability may pose a serious threat to users’ privacy and requires heightened attention.
It is worth noting that this vulnerability not only exists in ChatGPT but may also exist in other AI systems similar to it. Therefore, it is necessary to conduct security evaluations and vulnerability detection on similar AI systems to ensure that they are not used for illegal activities.
【来源】https://www.ithome.com/0/735/992.htm
Views: 1