
Former OpenAI Employees Warn of Catastrophic Harm From Unregulated AI

Keywords: AI risks, calls for regulation, former employee warnings


IT Home, August 24 – OpenAI’s recent opposition to California’s AI safety bill (SB 1047) has sparked outrage among former researchers. Two former OpenAI researchers, William Saunders and Daniel Kokotajlo, have penned an open letter to California Governor Gavin Newsom and other lawmakers, warning that unregulated AI will cause catastrophic harm.

The bill aims to require AI companies to take measures to prevent their models from causing “serious harm,” such as developing biological weapons that could lead to mass casualties or causing economic losses exceeding $500 million. However, OpenAI’s opposition to the bill has disappointed former researchers.

In the letter, Saunders and Kokotajlo wrote: “We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company was developing. But we left because it lost our trust that it would develop AI systems safely, honestly, and responsibly.”

They pointed out that OpenAI CEO Sam Altman has repeatedly and publicly supported AI regulation, yet when actual regulatory measures were ready to be enacted, the company opposed them. This has led the former researchers to question OpenAI’s true intentions.

“Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public,” Saunders and Kokotajlo emphasized in the letter.

Their warning has once again raised public concern about AI safety. As AI technology develops rapidly, its potential risks are becoming increasingly apparent. Experts are calling on governments and businesses to work together to establish sound AI regulatory mechanisms that ensure the healthy development of AI technology and prevent its misuse.

This incident also shows that even leading AI companies like OpenAI have internal disputes over AI safety. It reminds us that we must remain highly vigilant as AI develops and take the necessary measures to mitigate potential risks.

Source: https://www.ithome.com/0/790/833.htm
