
## OpenAI Opposes California AI Safety Bill, Warning It Would Stifle Innovation and Sparking Industry Controversy

**IT Home, August 22** – OpenAI has publicly come out against a California bill that aims to regulate the development of AI technology and prevent it from causing "serious harm." The company argues that the bill would damage innovation in the AI industry, and that AI safety should be regulated by the federal government rather than by individual states.

The bill, known as SB 1047, was unanimously approved by the California State Senate in May. It requires AI companies to take measures to prevent their models from causing "serious harm," such as developing biological weapons capable of causing mass casualties or inflicting economic losses exceeding US$500 million. Under the bill, AI companies must ensure that their AI systems can be shut down, take "reasonable measures" to ensure their models do not cause catastrophes, and disclose compliance statements to the California Attorney General. Companies that fail to meet these requirements could be sued and face civil penalties.

In a letter to the office of California State Senator Scott Wiener, OpenAI Chief Strategy Officer Jason Kwon said the bill would threaten California's position as a global leader in AI, slow the pace of innovation, and drive the state's world-class engineers and entrepreneurs to leave in search of opportunities elsewhere. Citing people familiar with the matter, Bloomberg reported that OpenAI has paused plans to expand its San Francisco offices because of uncertainty about California's regulatory environment.

In response, Wiener called the argument that AI talent would leave the state "nonsensical," because the law would apply to any company doing business in California, regardless of where its offices are located.

The bill has drawn strong opposition from many major technology companies, startups, and venture capitalists, who argue that it is overly intrusive for a technology still in its infancy and could stifle technological innovation in the state. OpenAI has also said that if the bill passes, it could have a "broad and significant" impact on U.S. competitiveness in AI and on national security.

The debate over the AI safety bill reflects the challenges facing AI development today. On the one hand, AI technology is advancing at an astonishing pace and its applications keep expanding, yet it also brings potential risks and ethical concerns. On the other hand, how to balance the development of AI technology with safety regulation has become a shared challenge for governments and technology companies worldwide.

California's AI safety bill is one of many attempts around the world to regulate AI technology. As the technology continues to evolve, similar regulatory measures are likely to emerge in more countries and regions. Finding a regulatory model that promotes AI development while effectively controlling its risks will be a key issue for the field going forward.


Source: https://www.ithome.com/0/790/507.htm
