Title: UC Berkeley discovers GPT-4’s surprising defect: children learn from experience, while LLMs don’t
Keywords: UC Berkeley, GPT-4, children, experience, causality, LLMs
News content:
Recently, researchers at UC Berkeley made a surprising discovery: unlike humans, GPT-4 cannot learn causal relationships from experience. The finding has raised broad concerns about artificial intelligence, as it suggests the technology may run into many difficulties in future applications.
According to the researchers, GPT-4 fails to learn causal relationships from experience because it lacks the human capacity for executing plans. Specifically, GPT-4 can only carry out tasks according to existing instructions; it cannot formulate a plan around a goal the way humans do. This makes causal relationships difficult for it to handle, because it cannot incorporate the causal link between two events into its plans.
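To make the distinction concrete, below is a minimal toy sketch of what "learning a causal link from experience and folding it into a plan" means. It is not the researchers' actual experimental setup; the lever-and-light environment and every name in it are hypothetical, invented purely for illustration.

```python
import random

# Toy sketch: an agent observes trials, estimates whether pressing a lever
# causes a light to turn on, and then uses that learned link to plan toward
# a goal. This illustrates the capability the article says GPT-4 lacks;
# it is NOT the UC Berkeley experiment.

random.seed(0)

def run_trial(press: bool) -> bool:
    """Hidden ground truth: pressing the lever lights the lamp 90% of the
    time; otherwise it lights up only 10% of the time."""
    return random.random() < (0.9 if press else 0.1)

# 1. Gather experience: half the trials press the lever, half do not.
trials = [(press, run_trial(press)) for press in [True, False] * 50]

# 2. Learn from experience: estimate P(light | press) and P(light | no press).
def light_rate(pressed: bool) -> float:
    outcomes = [light for press, light in trials if press == pressed]
    return sum(outcomes) / len(outcomes)

effect = light_rate(True) - light_rate(False)
pressing_causes_light = effect > 0.5  # crude threshold for a causal link

# 3. Fold the learned causal link into a plan for a stated goal.
goal = "turn the light on"
plan = ["press the lever"] if pressing_causes_light else ["look for another switch"]

print(f"estimated effect of pressing: {effect:+.2f}")
print(f"plan to {goal}: {plan}")
```

The point of the sketch is the ordering: the causal estimate is derived from observed trials and only then drives the plan, rather than the plan being read off pre-existing instructions.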
This finding is significant for the development of artificial intelligence: it is a reminder that large language models such as GPT-4 must overcome many challenges before they can be widely applied. Beyond causal reasoning, large language models also need to improve in many other areas, such as multilingual processing, understanding emotion, and handling the complexity of human language.
Source: https://www.36kr.com/p/2566147589809801