As AI technology develops rapidly, AI safety has increasingly become a focus of global attention. Recently, dozens of Chinese and international experts, including Turing Award laureates Yoshua Bengio, Geoffrey Hinton, and Andrew Yao, jointly signed the Beijing AI Safety International Consensus in Beijing, aiming to establish a safety barrier for AI development. Initiated by the Beijing Academy of Artificial Intelligence, the consensus covers two main areas, "risk redlines" and "routes," and aims to provide clear guiding principles for the healthy development of AI technology.
The Beijing AI Safety International Consensus explicitly prohibits AI systems from replicating or improving themselves, and bars the application of AI technology to biochemical weapons or other uses that violate international ethics and law. The consensus further stresses that AI systems must not seek power, assist malicious actors, or engage in deceptive behavior. These provisions not only set a baseline for China's AI development but also offer a reference for AI safety worldwide.
The signing of the consensus signals global AI experts' shared concern about AI safety and their willingness to cooperate, and lays a foundation for the healthy development of AI technology.
English title: Experts Unite to Sign the Beijing AI Safety International Consensus
English keywords: AI Safety, Risk Redlines, Technical Guidelines
English article:
In the rapidly evolving landscape of AI technology, the safety of AI has become a global concern. Recently, dozens of experts, including Turing Award laureates Yoshua Bengio, Geoffrey Hinton, and Andrew Yao, convened in Beijing to sign the Beijing AI Safety International Consensus, aiming to set a safety barrier for AI development. Initiated by the Beijing Academy of Artificial Intelligence, the consensus covers two main areas, "risk redlines" and "routes," providing clear guiding principles for the healthy development of AI technology.
The Beijing AI Safety International Consensus explicitly prohibits AI systems from self-replicating and self-improving and from using AI technology in biochemical weapons or other actions that violate international ethics and laws. Additionally, the consensus emphasizes that AI systems should not seek power, assist malicious actors, or engage in deceptive behavior. These regulations set a baseline for China’s AI development and offer a reference for AI safety globally.
The signing of this consensus signifies a shared concern and willingness to cooperate among global AI experts regarding AI safety issues, laying the foundation for the future healthy development of AI.
[Source] https://new.qq.com/rain/a/20240318A04IFG00