
According to Wallstreetcn, tech giant Meta is preparing to release two scaled-down versions of its Llama 3 large language model next week, as a preview of the fully upgraded Llama 3 due this summer. The news comes from a Meta employee who, speaking anonymously, disclosed the internal plan to the tech outlet The Information on Monday local time.

Reportedly, the two smaller Llama 3 versions will serve as a prelude, laying the groundwork for the official release. Llama 3 is being developed to rival OpenAI's GPT-4, which is known for its strong multimodal capabilities, handling longer text sequences and accepting image inputs.

However, according to the source, the two upcoming smaller Llama 3 models will not include multimodal processing, meaning they will focus on text and will not support image input or understanding. By contrast, the full release of Meta's Llama 3 is expected to break through this limitation, understanding and generating both text and images, further extending AI's reach in cross-media interaction.

The move reflects Meta's continued investment and innovation in artificial intelligence, and its intense competition with rivals such as OpenAI over advanced language models. As Llama 3 is gradually unveiled, the tech community is eager to see how Meta will challenge existing standards in multimodal processing.

English version:

**News Title:** “Meta Teases Next Week: Preview of Mini Llama 3 Models, Targeting GPT-4’s Multimodal Rivalry”

**Keywords:** Meta, Llama 3, Multimodal

**News Content:** According to Wallstreetcn, tech giant Meta is set to unveil two downsized versions of its large language model, Llama 3, next week as a preview of the full-scale upgrade due for release this summer. The information was divulged by an unnamed Meta employee to the tech media outlet The Information on Monday, local time.

It is understood that these compact Llama 3 editions will serve as a preamble, laying the groundwork for the grand launch of Llama 3. The development of Llama 3 is directly aimed at challenging OpenAI’s GPT-4, which is renowned for its robust multimodal processing capabilities, enabling it to handle longer text sequences and accept image inputs.

However, sources indicate that the upcoming smaller Llama 3 iterations will lack multimodal processing capabilities. This suggests they will focus on text handling, without supporting image input or comprehension. In contrast, the full-fledged Llama 3 is expected to break this barrier, enabling the simultaneous understanding and generation of text and images, thus expanding AI’s applications in cross-media interactions.

This move underscores Meta’s ongoing investment and innovation in the AI domain and the intense competition it is engaging in with rivals like OpenAI in advanced language models. As Llama 3 gradually unfolds, the tech community eagerly awaits Meta’s challenge to existing standards in multimodal processing.

[Source] https://wallstreetcn.com/articles/3712253
