
**News Title:** “OpenAI’s New Model Sora Sparks Debate: Rendering 1 Minute of Video Takes Over 1 Hour”

**Keywords:** OpenAI Sora, video generation, rendering time

**News Content:**

OpenAI’s latest text-to-video model, Sora, has recently drawn wide attention in the artificial intelligence field. The model generates lifelike video from user-provided prompts, but its efficiency has also become a point of debate: according to user feedback on Reddit, rendering a 1-minute video with Sora takes more than 1 hour, a significant obstacle to practical deployment.
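Taken at face value, the reported figures imply Sora runs at least 60× slower than real time. A quick back-of-the-envelope check, using only the numbers cited in the article (which are Reddit anecdotes, not official benchmarks):

```python
# Back-of-the-envelope: how far from real time is Sora's rendering,
# based solely on the figures reported in this article?
video_seconds = 60          # a 1-minute clip
render_seconds = 60 * 60    # "over 1 hour" of rendering (lower bound)

slowdown = render_seconds / video_seconds
print(f"Rendering is at least {slowdown:.0f}x slower than real time")

# At that rate, even the longest public demo (17 seconds) would take
# roughly 17 minutes or more to render.
demo_render_minutes = 17 * slowdown / 60
print(f"A 17-second clip: ~{demo_render_minutes:.0f}+ minutes to render")
```

This is only an illustration of the ratio implied by the article's figures; actual rendering time would depend on resolution, hardware, and serving infrastructure, none of which have been disclosed.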

Currently, OpenAI has only showcased pre-selected examples; the public cannot yet generate videos from custom prompts. The longest publicly released clip runs just 17 seconds, further fueling discussion about Sora's practical feasibility. Nevertheless, Sora demonstrates the immense potential of AI in video generation, and the pressing task for OpenAI and the wider industry is to improve efficiency and broaden the model's range of applications.

**Source:** IT Home

**Source URL:** https://www.ithome.com/0/751/364.htm
