Title: OpenAI’s New Team Solicits Public Opinion to Align AI with Human Values
Keywords: OpenAI, Collective Alignment, Public Opinion

BEIJING (Xinhua) – The artificial intelligence research organization OpenAI recently announced that it is forming a new team, “Collective Alignment,” to strengthen oversight of its AI models and ensure that its large models remain aligned with human values. The team, made up of researchers and engineers, will primarily be responsible for designing and implementing processes for collecting public input.

According to The Wall Street Journal, the initiative aims to address potential biases and other problems in AI models. The team will reportedly use a range of channels and methods to solicit opinions broadly across society, from academic experts and industry representatives to ordinary internet users, to ensure that model behavior meets societal expectations and values.

The move has also drawn wide attention in China. Many experts say the initiative reflects OpenAI’s active efforts in AI ethics and governance and sets an important example for the global AI industry.
Source: https://www.ithome.com/0/745/634.htm