OpenAI's Sora Takes Over an Hour to Render a One-Minute Video, Drawing User Skepticism
According to IT之家, OpenAI recently released Sora, a text-to-video model that generates realistic videos from user-supplied prompts. However, some users report that Sora takes more than an hour to generate a one-minute video, raising concerns about its rendering speed.
On Reddit, users have questioned Sora's generation efficiency. Reportedly, the researchers showed only pre-selected examples and did not allow the public to try custom prompts. Moreover, the longest demo video runs just 17 seconds, well short of the one-minute videos the model is claimed to produce.
Industry observers point out that generating realistic video demands substantial computational resources, so long render times are understandable. Still, a generation time of over an hour is clearly impractical for real-world applications.
OpenAI has not yet responded to the matter. Sora remains in an early stage of development, and its generation efficiency awaits further optimization.
Some users believe OpenAI has overstated Sora's capabilities, while others say they understand the difficulty of generating realistic video. Sora's practical value remains to be seen.
Notably, Sora is not the first text-to-video model. Meta previously released a similar model, Make-A-Video, which was likewise slow to generate. Text-to-video technology is still in its infancy, with significant room for improvement in both generation efficiency and quality.
[Source] https://www.ithome.com/0/751/364.htm