Recently, an important conference on AI safety and ethics was held in Beijing, at which the “Beijing AI Safety International Consensus,” initiated by the Beijing Academy of Artificial Intelligence (BAAI), was jointly signed by dozens of Chinese and international experts, including Turing Award laureates Yoshua Bengio and Geoffrey Hinton and renowned computer scientist Andrew Chi-Chih Yao. The consensus aims to draw “risk red lines” for AI development and to provide guidance on technical directions.

The consensus explicitly sets out four “risk red lines”: prohibiting AI from replicating or improving itself, prohibiting AI from seeking power, prohibiting AI from assisting bad actors, and prohibiting AI from engaging in deception. These provisions are intended to ensure the healthy development of AI technology and to prevent its misuse or loss of control.

The signing of the “Beijing AI Safety International Consensus” not only reflects the international community’s shared concern about AI safety, but also provides important ethical and legal guidance for future AI research and development. As AI technology continues to advance, how to ensure that it serves human society safely and reliably has become a focal issue of global concern.

English title: AI Safety International Consensus Signed in Beijing: Banning AI Self-Replication and Prohibiting Use in Biological and Chemical Weapons

English keywords: AI safety, international consensus, ethics and law

English news content:
A significant conference on AI safety and ethics was recently held in Beijing, where the “Beijing AI Safety International Consensus,” initiated by the Beijing Academy of Artificial Intelligence (BAAI), was jointly signed by several dozen international experts, including Turing Award winners Yoshua Bengio and Geoffrey Hinton, and renowned computer scientist Andrew Chi-Chih Yao. The consensus aims to establish “risk red lines” and provide guidance on technical directions for AI development.

It outlines four “risk red lines”: the prohibition of AI self-replication and self-improvement, of AI seeking power, of AI assisting bad actors, and of AI deception. These provisions are intended to ensure the healthy development of AI technology and to prevent its misuse or loss of control.

The signing of the “Beijing AI Safety International Consensus” not only reflects the international community’s shared concern about AI safety but also provides important ethical and legal guidance for future AI research and development. As AI technology continues to advance, ensuring that it serves human society safely and reliably has become a focus of global attention.

Source: https://new.qq.com/rain/a/20240318A04IFG00
