[OpenAI Disbands Superalignment AI Risk Team]
According to Wallstreetcn, the prominent artificial intelligence research organization OpenAI recently announced that its Superalignment AI risk team, led by former Chief Scientist Ilya Sutskever, has been disbanded. The team was tasked with studying the safety risks that superintelligent AI could pose and proposing corresponding countermeasures; its core members included Jan Leike, among others. OpenAI stated that the team's research will be integrated into other research groups to preserve its continuity and integrity.
Commenting on the move, OpenAI co-founder Elon Musk said the decision shows that safety is not OpenAI's top priority. After his departure, Jan Leike revealed that he had long-standing disagreements with OpenAI leadership, particularly over the company's core priorities. The team also faced significant difficulties in advancing its research projects and securing computing resources, which hurt both the pace and the quality of its work.
The decision has drawn broad attention across the industry to AI safety. Experts note that AI is advancing extremely quickly, and ensuring safety while sustaining technological progress is a critical challenge both now and in the future. The reorganization may signal a shift in OpenAI's approach to AI safety, with the company possibly placing greater emphasis on other areas of research and development going forward.
English version:
News Title: "OpenAI Disbands AI Safety Team; Musk Suggests Safety Is Not a Priority"
Keywords: Disbandment, AI Safety, Dissent
News Content:
[OpenAI Dissolves Superalignment AI Risk Team]
According to Wallstreetcn, the renowned artificial intelligence research institution OpenAI recently announced the dissolution of its Superalignment AI risk team, led by former Chief Scientist Ilya Sutskever. The team was tasked with studying the potential safety risks posed by superintelligent AI and proposing countermeasures; its core members included Jan Leike, among others. OpenAI stated that the team's research work would be integrated into other research groups to ensure its continuity and integrity.
In response, OpenAI co-founder Elon Musk commented that the decision indicated safety was not OpenAI's primary concern. After his departure, Jan Leike revealed longstanding disagreements with OpenAI leadership, particularly over the company's core priorities. The team had faced significant challenges in advancing its research projects and securing computational resources, which adversely affected both its progress and the quality of its work.
OpenAI’s decision has sparked widespread concern in the industry about the safety of artificial intelligence. Experts have pointed out that the development of artificial intelligence is moving at an extremely rapid pace, and it is crucial to ensure both technological progress and safety. OpenAI’s adjustment may indicate a shift in its strategy regarding artificial intelligence safety, suggesting that it may place greater emphasis on other areas of research and development in the future.
[Source] https://wallstreetcn.com/articles/3715193