
News Title: OpenAI Forms New Team to Ensure AI Compliance with Human Values

Keywords: OpenAI, Collective Alignment, Artificial Intelligence Models

News Content:

Recently, OpenAI announced the formation of a new team called “Collective Alignment,” which aims to gather public input to ensure that its large AI models align with human values. The team, made up mainly of researchers and engineers, will design and run processes for collecting public input to help train and shape the behavior of AI models, addressing potential bias and other issues.

This move reflects OpenAI’s emphasis on AI ethics and marks a new step in the company’s development in the field of artificial intelligence. By gathering public input, OpenAI hopes to build a fairer, more objective AI system aligned with human values, bringing greater positive impact to society.

Source: https://www.ithome.com/0/745/634.htm

