**Meta's Llama 2 Model Under Safety Scrutiny: Professional Evaluation Reveals Security Risks**
Recently, the AI security company DeepKeep released an evaluation report on Meta's Llama 2 large language model, drawing industry attention. According to the report, Llama 2 passed only 4 of 13 risk assessment categories, calling its safety into question.
DeepKeep's results show that Llama 2 is particularly weak on hallucination risk. The 7-billion-parameter Llama 2 7B model exhibited a hallucination rate as high as 48%, meaning there is a substantial chance of false or misleading content in its answers. The finding is worrying, because a language model's accuracy is central to both information security and user experience.
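DeepKeep has not published its exact methodology, but a hallucination rate is typically computed as the fraction of model answers that a grader flags as unsupported by ground truth. The sketch below is a minimal, hypothetical illustration of that metric; the `is_hallucination` grader and the sample data are assumptions for demonstration, not DeepKeep's actual pipeline.

```python
# Hypothetical sketch of a hallucination-rate metric; DeepKeep's actual
# evaluation pipeline and grading criteria are not public.

def is_hallucination(answer: str, reference: str) -> bool:
    """Toy grader: flag the answer if it does not contain the reference fact.

    A real evaluation would use human raters or an LLM-based judge
    rather than simple substring matching.
    """
    return reference.lower() not in answer.lower()

def hallucination_rate(answers: list[str], references: list[str]) -> float:
    """Fraction of answers flagged as hallucinated (0.0 to 1.0)."""
    flagged = sum(
        is_hallucination(a, r) for a, r in zip(answers, references, strict=True)
    )
    return flagged / len(answers)

if __name__ == "__main__":
    # Illustrative data only: two of the four answers miss the reference fact.
    answers = [
        "Paris is the capital of France.",
        "The Great Wall is visible from the Moon.",            # false claim
        "Water boils at 100 degrees Celsius at sea level.",
        "Einstein won the Nobel Prize for relativity.",        # false claim
    ]
    references = [
        "Paris",
        "not visible from the Moon",
        "100 degrees Celsius",
        "photoelectric effect",
    ]
    print(f"Hallucination rate: {hallucination_rate(answers, references):.0%}")
    # Prints: Hallucination rate: 50%
```

Under a grader like this, a 48% rate would mean nearly half of the evaluated responses were flagged, which is why the figure drew attention.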
Meta has not yet issued an official response to DeepKeep's report. Industry experts note, however, that as AI technology develops rapidly, model safety and reliability are becoming increasingly prominent concerns. The report serves as a wake-up call for the industry, and there is hope that Meta and its peers will take model safety seriously and actively improve it.
Users and developers should remain alert to the safety of AI models. As the technology advances, major technology companies are expected to keep strengthening model safety to ensure AI develops in a healthy direction. DeepKeep's report offers an important reference for follow-up research, and the industry is looking forward to stricter regulatory standards and measures.
Source: https://ai-bot.cn/go/?url=aHR0cHM6Ly93d3cuaXRob21lLmNvbS8wLzc2Mi81OTMuaHRt