OpenAI recently announced that it will form a new team called "Collective Alignment," composed mainly of researchers and engineers. The team's mission is to design and implement processes for collecting public input to help train and shape the behavior of OpenAI's AI models, addressing potential biases and other issues.

The Collective Alignment team will gather public views and expectations about artificial intelligence through a variety of channels, such as online surveys, public discussions, and focus groups. This input will guide OpenAI's research and development work in artificial intelligence, helping ensure that its AI models stay aligned with human values.

OpenAI says this initiative is an important step toward building AI systems that are safe, reliable, and aligned with human values. Through this approach, the company hopes to better understand and meet the public's needs and expectations, and thereby promote the healthy development of AI technology.

English title: OpenAI Establishes New Team to Gather Public Opinion to Ensure AI Models Align with Human Values

English keywords: OpenAI, New Team, Public Opinion

English news content:
OpenAI has recently announced that they will establish a new team called “Collective Alignment” tasked with designing and implementing processes to collect public opinions. The team, consisting mainly of researchers and engineers, will focus on shaping the behavior of OpenAI’s AI models by addressing potential biases and other issues.

The “Collective Alignment” team will collect public views and expectations on artificial intelligence through various channels such as online surveys, public discussions, and focus groups. This information will guide OpenAI’s research and development efforts in the field of artificial intelligence, ensuring that their AI models align with human values.

OpenAI states that this initiative is an important step in building safe, reliable, and human-aligned AI systems. They hope to better understand and meet the needs and expectations of the public through this approach, thereby promoting the healthy development of AI technology.

[Source] https://www.ithome.com/0/745/634.htm
