OpenAI's recently released text-to-video model Sora has attracted widespread attention, but according to user feedback, generating a one-minute video with Sora requires more than an hour of rendering. Although the Sora model produces realistic videos from user-supplied prompts, researchers have so far shown only pre-selected examples, custom prompts are not open to the public, and the longest demo video runs just 17 seconds. This suggests that Sora's rendering efficiency still needs improvement, and the public may have to wait some time for its full release.

English Title: OpenAI's Sora Video Generation Model Faces Long Rendering Times, Public Access Restricted
English Keywords: AI Video Generation, OpenAI, Sora Model, Rendering Efficiency
English News Content:
OpenAI recently released its text-to-video model Sora, which has garnered significant attention. However, according to user feedback, Sora takes more than an hour to render a one-minute video. Although Sora creates realistic videos from user prompts, researchers have so far shown only pre-selected samples, custom prompts are not open to the public, and the longest demo video lasts only 17 seconds. This indicates that Sora still needs to improve its rendering efficiency, and the public may have to wait longer for its full release.

Source: https://www.ithome.com/0/751/364.htm
