Title: AI Platform Hugging Face Hit by API Token Vulnerability; Hackers Could Gain Access to Microsoft, Google, and Other Model Repositories

According to a recent report from security firm Lasso Security, the artificial intelligence (AI) model platform Hugging Face contained a serious API token vulnerability. The flaw could allow hackers to obtain access tokens belonging to Microsoft, Google, Meta, and other companies, and through those tokens reach the companies' model repositories.

Hugging Face is a widely used AI platform that hosts a large number of pre-trained models for developers. The API token flaw therefore poses a serious security threat to the platform's users: once hackers obtain these tokens, they can easily read and modify data in the affected model repositories, or even steal the models outright.
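To make that risk concrete, the Python sketch below (using the official huggingface_hub client) shows roughly what someone holding a leaked token could do: identify the account it belongs to, enumerate the repositories it can reach, and download model files. The token value and repository names are placeholders, and what is actually possible depends on the token's scope; this is a minimal illustration of the attack surface, not a reproduction of Lasso Security's findings.

```python
# Minimal sketch of why a leaked Hugging Face token is dangerous.
# The token value and repository names are placeholders, not real findings.
from huggingface_hub import HfApi, hf_hub_download

leaked_token = "hf_xxxxxxxxxxxxxxxxxxxxxxxx"  # hypothetical stolen token

api = HfApi(token=leaked_token)

# 1. Identify the user or organization the token belongs to.
identity = api.whoami()
owner = identity.get("name")
print("Token belongs to:", owner)

# 2. Enumerate that owner's models visible to the token
#    (private repositories included, if the token's scope allows it).
for model in api.list_models(author=owner, limit=10):
    print("Accessible repo:", model.id)

# 3. Pull a file from one of those repositories, i.e. "steal" the model.
#    The repo_id and filename are illustrative placeholders.
local_path = hf_hub_download(
    repo_id="some-org/private-model",
    filename="config.json",
    token=leaked_token,
)
print("Downloaded to:", local_path)

# With a write-scoped token, HfApi.upload_file could likewise be used
# to push tampered files into the repository.
```

A write-scoped organization token is therefore the worst case: it would allow not only exfiltration of private models but also silent tampering with files that downstream users later download.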

Lasso Security notified Hugging Face immediately after discovering the vulnerability. Hugging Face has not yet commented publicly, but the company has long emphasized protecting user data and is expected to address the flaw promptly.

The incident is another reminder that, however broadly AI technology is applied, its security cannot be ignored. Any AI product needs rigorous security safeguards to prevent similar incidents, and users themselves should stay security-conscious when working with AI services, so that simple negligence does not turn into real losses.
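As one concrete form that this security awareness can take, the sketch below scans a project directory for strings that look like Hugging Face access tokens (which use the hf_ prefix) before the code is pushed or shared. The scanned directory and the token length pattern are assumptions for illustration; this is a rough pre-publish check, not a full secret scanner.

```python
# Rough pre-publish check for Hugging Face tokens accidentally left in a project.
# The scanned directory and the token length pattern are illustrative assumptions.
import re
import sys
from pathlib import Path

# Hugging Face user access tokens start with "hf_"; the tail length is a guess here.
TOKEN_PATTERN = re.compile(r"hf_[A-Za-z0-9]{20,}")

def scan_for_tokens(root: str) -> int:
    """Print every string that looks like a token under root; return the count."""
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_PATTERN.finditer(text):
            hits += 1
            print(f"Possible Hugging Face token in {path}: {match.group()[:10]}...")
    return hits

if __name__ == "__main__":
    found = scan_for_tokens(sys.argv[1] if len(sys.argv) > 1 else ".")
    sys.exit(1 if found else 0)
```

Any token that does leak should also be revoked and regenerated from the account's settings, rather than merely deleted from the offending file.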

Keywords: API vulnerability, hacker attack, AI model security

Source: https://www.ithome.com/0/737/128.htm
