Latest News

OpenAI, the leading force in large language model (LLM) development, is reportedly shifting its strategy, with its latest flagship model, Orion, failing to meet performance expectations amid growing concerns about data scarcity. The shift has sparked heated debate within the AI community: some researchers argue that AI development will continue unabated, while others see diminishing returns in performance improvements.

The controversy stems from a recent report by The Information, which claims that OpenAI’s next-generation flagship model will improve on its predecessor by less than the gap between the company’s previous two flagship models. The report further suggests that the industry is moving towards post-training model enhancements, a departure from the traditional focus on scaling up training data.

The report cites a study titled “Will we run out of data? Limits of LLM scaling based on human-generated data”, which predicts that existing data reserves will be fully utilized by 2028. This could potentially slow down or even halt the development of large models reliant on massive datasets.
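
For context, the scaling laws at the center of this debate are commonly stated in the Chinchilla form (Hoffmann et al., 2022), in which predicted loss falls as a power law in both model size and training-data volume. The formula below is not cited in the report itself, but it makes the data-scarcity argument concrete: if the token budget is capped by the stock of human-generated text, the data term stops shrinking, and further gains must come from elsewhere.

```latex
% Chinchilla-style scaling law: expected loss L as a function of
% parameter count N and number of training tokens D.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% E is the irreducible loss; A, B, \alpha, \beta are fitted constants.
% If D is bounded above by the available data stock, the term
% B / D^{\beta} has a fixed floor, capping gains from data scaling.
```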

OpenAI’s own researchers are divided on this issue. Noam Brown, a prominent research scientist at OpenAI, disagrees with the report, stating that AI development will not slow down in the short term. He echoes the sentiment expressed by OpenAI CEO Sam Altman, who believes that AGI development is progressing faster than anticipated.

However, other researchers, including Adam GPT, OpenAI product vice president Peter Welinder, and Gary Marcus, have expressed support for the report’s claims. They argue that while scaling laws may be reaching their limits, advances in inference-time reasoning optimization can still contribute significant improvements in model performance.
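
The report does not specify which inference-time techniques are meant, but a common illustration is best-of-N sampling: spending extra compute at answer time by drawing several candidate responses and keeping the one a verifier scores highest, with no change to the model’s weights. A minimal sketch, where generate() and score() are hypothetical stand-ins for a model call and a reward model:

```python
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for one sampled model completion."""
    return f"candidate answer ({random.random():.3f})"

def score(prompt: str, answer: str) -> float:
    """Hypothetical verifier/reward model; higher is better."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Inference-time scaling: draw n candidates and keep the best,
    trading extra compute at answer time for quality, with no
    change to the trained model's weights."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```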

The debate highlights the complex challenges facing the AI community as it grapples with the limits of existing data and explores alternative ways to enhance model capabilities. While the future of AI development remains uncertain, the shift in focus from scaling up data to post-training improvements suggests a new era of innovation in the field.

This article is based on information from multiple sources, including:

  • The Information: “As GPT Gains Slow, OpenAI Shifts Strategy”
  • Machine Intelligence: “OpenAI Changes Next-Generation Large Model Direction, Scaling Law Hits a Wall? AI Community Erupts”
  • Research paper: “Will we run out of data? Limits of LLM scaling based on human-generated data”

Further research is needed to fully understand the implications of this shift in strategy and its potential impact on the future of AI development.

