
News Title: “Silicon Valley AI Scientists Fight California Bill: Innovation or Suppression?”

Keywords: Regulation, Innovation, AI Bill

News Content:
California is currently debating a bill known as SB-1047, which aims to establish clear safety standards for high-risk artificial intelligence models to prevent their misuse and avert catastrophic outcomes. However, the bill has drawn strong opposition from dozens of scientists, including Fei-Fei Li.

Fei-Fei Li wrote an op-ed on the Fortune website, arguing that SB-1047 could cause significant and unintended damage to the artificial intelligence ecosystem. She pointed out that if the bill passed, it would unnecessarily penalize developers, stifle the open-source community, and hinder academic AI research, all while failing to address the genuine problems it aims to solve.

Fei-Fei Li stressed that SB-1047 requires all models exceeding a specific threshold to include a "kill switch," a mechanism that can shut the program down at any time. This, she argued, would devastate the open-source community, the source of countless innovations. She also noted that the bill would tie the hands of open-source developers, forcing them to retreat and act defensively — precisely what we are trying to avoid.

Additionally, SB-1047 would hold model developers legally responsible for downstream uses or modifications of their models, which would further hinder the growth of the open-source community. A new "Frontier Model Division," tasked with overseeing enforcement of the law, would set safety standards, and misrepresenting a model's capabilities to that agency could expose developers to perjury charges and imprisonment.

The bill also includes whistleblower protection provisions, encouraging employees within AI development organizations to report corporate non-compliance without fear of retaliation.

If the bill passes the legislature, a single signature from Governor Gavin Newsom would make it California law. Anjney Midha, a general partner at a16z, stated that if the bill passes in California, it would set a precedent for other states and could trigger a domino effect both domestically and internationally.

On the morning of August 7, Pacific Daylight Time, a hearing on the bill will be held. With little time left to protest, Fei-Fei Li has personally written an article laying out the stakes, and other scientists are signing a joint letter to stop the bill from passing.

Source: https://www.jiqizhixin.com/articles/2024-08-07-4
