News Title: "COLM Conference Unveils: New Algorithm Boosts Efficiency of Large Model Text Evaluation"

Keywords: Large Models, Evaluation Algorithm, PairS

News Content: COLM (Conference on Language Modeling), the first top-tier conference dedicated to language modeling, has announced its acceptance results. Among the accepted papers, one high-scoring submission proposes PairS, a preference search algorithm that makes large language models (LLMs) markedly more efficient at text evaluation. The work builds on LLMs' strong instruction-following and task-generalization abilities, which stem from training on instruction-following data and from reinforcement learning from human feedback (RLHF).

The research team comes from the Language Technology Lab at the University of Cambridge and is led by third-year PhD student Yinhong Liu and second-year PhD student Han Zhou; their advisors are Professors Nigel Collier and Ehsan Shareghi, and Professors Anna Korhonen and Ivan Vulić, respectively. The team's research focuses on large language models, text evaluation, and data generation.

In the paper, the researchers analyze the scoring biases that LLMs exhibit as text evaluators, biases that are difficult to avoid or correct, and propose reformulating evaluation as a preference-ranking problem. The resulting PairS algorithm searches and ranks candidates from pairwise preferences, exploiting uncertainty together with a transitivity assumption about LLM preferences to achieve efficient and accurate preference ranking, as illustrated in the sketch below.
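To make the idea concrete, here is a minimal merge-sort-style sketch in Python of ranking from pairwise preferences under a transitivity assumption: with transitivity, n candidates can be ranked with O(n log n) pairwise comparisons rather than O(n²). The `llm_prefers` comparator is a hypothetical placeholder for an actual LLM judge call; this is not the paper's implementation.

```python
def llm_prefers(a: str, b: str) -> bool:
    """Hypothetical LLM judge: True if text `a` is preferred over `b`.
    In practice this would prompt an LLM with both texts and parse
    its stated pairwise preference."""
    return len(a) >= len(b)  # placeholder heuristic for this sketch


def merge(left: list[str], right: list[str]) -> list[str]:
    """Merge two already-ranked lists using pairwise preferences."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if llm_prefers(left[i], right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


def pairwise_rank(candidates: list[str]) -> list[str]:
    """Rank candidates from most to least preferred via merge sort."""
    if len(candidates) <= 1:
        return candidates
    mid = len(candidates) // 2
    return merge(pairwise_rank(candidates[:mid]),
                 pairwise_rank(candidates[mid:]))


if __name__ == "__main__":
    summaries = ["short", "a medium summary", "a much longer, detailed one"]
    print(pairwise_rank(summaries))
```

The merge-sort structure is what keeps the comparison budget low; the transitivity assumption is what justifies never comparing candidates that the sort order already separates.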

The work matters for reducing biased LLM predictions. Prior work has developed calibration techniques to mitigate bias in LLM predictions, yet even with supervised data, existing calibration methods still fail to align LLM evaluators well. The researchers therefore explore a new LLM evaluation paradigm that promotes better-aligned judgments.
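For readers unfamiliar with what "calibration" means here, below is a minimal, hypothetical sketch of one generic approach: affine rescaling of LLM scores against a small supervised set. This is not the paper's method (the paper instead replaces direct scoring with preference ranking); it serves only to make the contrast concrete.

```python
import numpy as np


def fit_affine_calibration(llm_scores, human_scores):
    """Fit a scale and offset minimizing squared error to human scores."""
    a, b = np.polyfit(llm_scores, human_scores, deg=1)
    return lambda s: a * np.asarray(s) + b


# Made-up numbers: LLM scores cluster near the top of the scale,
# a commonly reported bias of LLM evaluators.
llm_scores = np.array([4.8, 4.6, 4.9, 4.7])
human_scores = np.array([3.0, 2.0, 4.0, 2.5])

calibrate = fit_affine_calibration(llm_scores, human_scores)
print(calibrate(llm_scores))  # rescaled, but near-ties remain near-ties
```

Note the limitation the article alludes to: an affine map can shift and stretch the scores, but it cannot separate candidates the evaluator scored almost identically, which is one motivation for moving to pairwise preferences.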

In the RLHF training paradigm, a reward model is aligned with human preferences using ranking-comparison data, which strengthens the alignment of LLMs with human values and yields responses that better assist humans and adhere to those values. Through the PairS algorithm, LLM evaluators move closer to human evaluation standards, improving the consistency and accuracy of their assessments.
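As background on how ranking comparisons train a reward model, the standard recipe uses a Bradley-Terry-style pairwise loss that pushes the reward of the preferred response above that of the rejected one. The following minimal PyTorch sketch shows that loss on made-up reward scores; the tensors are illustrative placeholders, not data from the paper.

```python
import torch
import torch.nn.functional as F


def pairwise_ranking_loss(r_chosen: torch.Tensor,
                          r_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()


# Toy usage: scalar rewards for a batch of (chosen, rejected) pairs,
# as a reward model head would produce them.
r_chosen = torch.tensor([1.2, 0.7, 2.0])
r_rejected = torch.tensor([0.3, 0.9, 1.1])
print(pairwise_ranking_loss(r_chosen, r_rejected))
```

The same preference-over-pairs signal that trains RLHF reward models is what PairS taps into on the evaluation side, which is why pairwise judgments tend to align better with human standards than absolute scores.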

Beyond revealing the potential of LLMs for text evaluation, the study offers new ideas for optimizing and improving these models. As artificial intelligence technology continues to advance, LLM-based text evaluation is likely to see wider adoption and to play a significant role in future academic research and practical applications.

Source: https://www.jiqizhixin.com/articles/2024-08-04
