On Monday, OpenAI published a set of guidelines called the "Preparedness Framework," intended to guide the company's AI safety work. The guidelines set out rules for releasing AI models: even if the CEO and other company leaders judge a model to be safe, the board of directors can still block its release.
According to the guidelines, even after company leadership approves an AI model, the board may decide not to release it if the model poses safety risks. In addition, even after a model has been released, OpenAI must continue to regularly update and maintain its safety and reliability.
OpenAI said the guidelines are still in a beta stage and encouraged further internal testing and evaluation to ensure the safety and reliability of its AI models.
News translation:
Title: OpenAI releases AI safety guidelines; leadership alone cannot decide whether an AI model is safe
Keywords: OpenAI, AI safety, leadership, AI models, release
News content:
OpenAI released a document titled "Preparedness Framework" on Monday, which aims to guide the company's AI safety efforts. The document sets out rules for releasing AI models: even if the CEO and other company leaders believe a model is safe, the board of directors can still decide not to release it. Moreover, even after an AI model has been released, OpenAI must regularly update and maintain its safety and reliability.
OpenAI said that its AI safety guidelines are still in the testing stage, and it encourages further testing and evaluation of AI models to ensure their safety and reliability.
Source: https://www.cls.cn/detail/1547583