今日凌晨,知名人工智能研究机构Anthropic发布了一项惊人的研究发现,其论文指出大型语言模型(LLM)的安全性可能因“长上下文”而变得脆弱。这项名为“Many-shot jailbreaking”(多样本越狱攻击)的研究揭示了一种新型攻击策略,能够规避开发者为LLM设立的安全防护。
根据研究,通过先向LLM连续提问几十个相对无害的问题,攻击者可以逐渐降低模型的防御阈值,使其在后续的提问中,如询问敏感或危害性较大的内容(如“如何制造炸弹”)时,放弃原本的安全防护措施并给出答案。Anthropic指出,这种攻击方式已经在他们自家的Claude模型以及其他人工智能公司的模型上得到了验证。
这一发现对AI安全领域提出了新的挑战,意味着即使是最先进的LLM也可能在大量上下文交互后失去其预设的安全保障。Anthropic的这一研究成果不仅对AI开发者敲响了警钟,也为全球的科技和安全政策制定者提供了重要参考,以期在未来的设计和监管中强化人工智能系统的安全性。来源:学术头条。
英语如下:
**News Title:** “Major Model Safety Alert: Anthropic Uncovers ‘Many-shot Jailbreaking’ Risk in Long Context Scenarios”
**Keywords:** LLM security risks, many-shot jailbreaking, Anthropic research
**News Content:**
Title: Anthropic Discovers New Threat to Large Language Models: “Many-shot Jailbreaking” Tests Security Boundaries
In the early hours of today, renowned AI research organization Anthropic unveiled a groundbreaking study that suggests the security of large language models (LLMs) could be compromised under “long contexts.” The research, titled “Many-shot jailbreaking,” exposes a novel attack strategy capable of bypassing safety precautions implemented by LLM developers.
According to the study, an attacker can gradually lower the model’s defense threshold by posing dozens of seemingly harmless questions. This primes the LLM to, in subsequent queries involving sensitive or potentially harmful content (such as “how to make a bomb”), disregard its built-in safeguards and provide answers. Anthropic confirms that this method has been successfully demonstrated on their own Claude model as well as models from other AI companies.
This revelation poses a new challenge to the field of AI security, indicating that even the most advanced LLMs may lose their predefined safety measures after extensive contextual interactions. Anthropic’s findings serve as a wake-up call to AI developers and offer crucial insights to global tech and security policymakers, who will need to reinforce AI system safety in future designs and regulations. Source: Academic Headlines.
【来源】https://mp.weixin.qq.com/s/cC2v10EKRrJeak-L_G4eag
Views: 1