**News Title:** “Serious Safety Concerns with Meta’s Llama 2 Model: DeepKeep Report Highlights Severe Hallucination Issues”

**Keywords:** Meta Llama 2, safety tests, high hallucination rate

**News Content:**

Concerns have been raised over the safety of Meta’s Llama 2 large language model following an assessment report by the AI security company DeepKeep. The report uncovers significant weaknesses in the model: across 13 key risk assessment categories, the 7-billion-parameter Llama 2 7B passed only 4.

The report highlights that Llama 2 is particularly prone to hallucination, generating answers that contain false or misleading information. Its reported hallucination rate of 48% far exceeds acceptable safety thresholds and could mislead users who rely on the model for decision-making or information retrieval, posing a potential threat to individual and societal information security.
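The source does not describe how DeepKeep computes this figure. As a rough illustration only, a hallucination rate is typically the share of benchmark responses judged to contain unsupported claims. The minimal Python sketch below is hypothetical, including its toy keyword-matching check; real evaluations rely on human raters or a grading model rather than string matching.

```python
# Illustrative only: the source does not describe DeepKeep's methodology.
# This sketch shows the generic shape of a hallucination-rate metric --
# the fraction of model answers flagged as unsupported by reference facts.
# The example data and the is_hallucination heuristic are hypothetical.

def is_hallucination(answer: str, reference_facts: set[str]) -> bool:
    """Toy check: flag the answer if it reflects none of the known facts.

    Real evaluations use human raters or a grading model instead.
    """
    return not any(fact.lower() in answer.lower() for fact in reference_facts)

def hallucination_rate(responses: list[tuple[str, set[str]]]) -> float:
    """Fraction of (answer, reference_facts) pairs flagged as hallucinated."""
    flagged = sum(is_hallucination(ans, facts) for ans, facts in responses)
    return flagged / len(responses)

if __name__ == "__main__":
    # Hypothetical benchmark items: (model answer, facts it should reflect).
    items = [
        ("Llama 2 was released by Meta in 2023.", {"Meta", "2023"}),
        ("The moon is made of cheese.", {"rock", "regolith"}),
    ]
    print(f"hallucination rate: {hallucination_rate(items):.0%}")  # 50%
```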

DeepKeep’s findings pose a challenge for Meta, a tech giant with global influence: the company will need to thoroughly review and improve Llama 2 to ensure its AI services preserve user trust and data security. Meta has yet to issue an official response, but the AI community is already debating the report, urging the industry to advance innovation without compromising model reliability and safety.

As AI technology rapidly progresses, striking a balance between enhancing model performance and mitigating misleading outputs has emerged as a pressing issue. Moving forward, Meta and other AI developers are urged to prioritize safety and ethical considerations in model design and training, striving to build a more trustworthy AI landscape.

Source: https://www.ithome.com/0/762/593.htm
