Title: OpenAI’s New Team Ensures AI Models Align with Human Values
Keywords: OpenAI, Collective Alignment, Public Feedback

News content:
OpenAI recently announced the establishment of a new team called “Collective Alignment” to gather public input and ensure that its AI models align with human values. The newly formed team, consisting mainly of researchers and engineers, will focus on designing and implementing processes for collecting public feedback to help train and shape the behavior of OpenAI’s artificial intelligence models, addressing potential biases and other issues.

This move by OpenAI responds to societal concerns about the potential risks of artificial intelligence, particularly possible biases in AI models. By establishing the new team, OpenAI aims to hear from a broader audience and ensure that the development and training of its AI models are fairer, more transparent, and better aligned with human values.

According to IT Home, this initiative is part of OpenAI’s ongoing effort to ensure that artificial intelligence better serves the needs of human society and avoids potential negative consequences. OpenAI stated that it will collect public input through various channels, including online surveys, public forums, and expert consultations, to ensure broad and diverse participation.

Source: https://www.ithome.com/0/745/634.htm
