
Title: OpenAI Video Model Sora Draws Attention Due to Rendering Time
Keywords: OpenAI, Video Model, Rendering Time

News content:
OpenAI recently released Sora, a text-to-video model that generates realistic videos from user-provided prompts. However, according to feedback from Reddit community members, Sora reportedly takes over an hour to render a one-minute video, which has drawn widespread attention.

It is reported that researchers have mainly demonstrated pre-selected examples and have not allowed the public to submit custom prompts. In addition, the longest demonstration video runs only 17 seconds. This has raised doubts about Sora's practical usability.

So far, OpenAI has not responded directly to the issue. However, industry insiders suggest that the long rendering time may stem from the model's high computational complexity and limited hardware resources. They expect Sora's rendering time to improve as the technology matures and hardware capacity grows.

Source: https://www.ithome.com/0/751/364.htm
