
**Title:** “Safety Concerns Over Meta’s Llama 2 Model: DeepKeep Report Highlights Severe Hallucination Issues”

**Keywords:** Meta Llama 2, safety concerns, high hallucination rate

**News Content:**

Recently, a noteworthy assessment report by the AI security firm DeepKeep raised concerns about the safety of Meta's Llama 2 large language model. The report reveals that the model passed only four of the 13 risk assessment categories tested, prompting deep reflection across the industry on the safety of AI models.

The evaluation notes that Llama 2, a large language model with 7 billion parameters, suffers from severe "hallucination" issues: up to 48% of the content generated by the model may contain false or misleading information, a significant risk for users who rely on the accuracy and reliability of AI models. The finding serves as a wake-up call for the use of AI models, especially in an era where information authenticity and data security are increasingly crucial.
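In practice, a "hallucination rate" like the 48% figure is the share of sampled model responses flagged as containing false or unsupported content. DeepKeep has not published its exact methodology in this report, so the sketch below is purely illustrative: the `EvalItem`, `is_hallucination`, and `hallucination_rate` names are hypothetical, and a naive string check stands in for the human annotation or automated fact-checking a real evaluation would use.

```python
# Minimal sketch (hypothetical; not DeepKeep's actual methodology):
# estimating a hallucination rate by scoring model answers against
# known reference facts.
from dataclasses import dataclass

@dataclass
class EvalItem:
    prompt: str          # question posed to the model
    reference: str       # ground-truth fact used for checking
    model_answer: str    # what the model actually generated

def is_hallucination(item: EvalItem) -> bool:
    # Toy check: count the answer as hallucinated if it never mentions
    # the reference fact. Real evaluations rely on human annotators or
    # stronger automated fact verification.
    return item.reference.lower() not in item.model_answer.lower()

def hallucination_rate(items: list[EvalItem]) -> float:
    # Fraction of responses flagged as containing false/unsupported content.
    flagged = sum(is_hallucination(it) for it in items)
    return flagged / len(items)

if __name__ == "__main__":
    sample = [
        EvalItem("Capital of France?", "Paris",
                 "The capital of France is Paris."),
        EvalItem("Who wrote Hamlet?", "Shakespeare",
                 "Hamlet was written by Marlowe."),
    ]
    print(f"hallucination rate: {hallucination_rate(sample):.0%}")  # -> 50%
```

On a benchmark of factual questions, a result like 48% would mean nearly half of the sampled responses failed such a check.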

As a global tech giant, Meta has long drawn close scrutiny for its explorations and innovations in the field of AI. The Llama 2 test results, however, undoubtedly challenge the company's brand image and product credibility. Meta has yet to issue an official response to the report, but the issue has already attracted widespread attention from the global tech and media sectors. AI safety is not just about technological advancement; it also concerns users' information rights and trust. We will be watching closely to see how Meta addresses this challenge.

**Source:** https://www.ithome.com/0/762/593.htm
