[OpenAI's Sora Text-to-Video Model Draws Attention: 1 Minute of Video Takes Over 1 Hour to Render]
OpenAI, the well-known artificial intelligence research institute, recently unveiled Sora, an innovative text-to-video model that can create realistic dynamic videos from user-provided prompts. This technological advance opens up new possibilities for content creation, but it has also exposed some practical challenges.

According to user feedback on Reddit, Sora needs more than an hour of rendering to produce a 1-minute video, a time cost that could be an obstacle to real-time or efficient content production. For now, OpenAI has only shown pre-selected examples; the public cannot yet create freely with custom prompts, and the longest publicly demonstrated video is only 17 seconds.

Although Sora's technical potential is striking, its current efficiency problems have sparked discussion in the industry. If such technology is to be widely applied to fields such as news reporting, film production, or game development, rendering speed and the user experience will clearly need further improvement. OpenAI has not yet issued an official comment, but the progress of this technology will undoubtedly have a far-reaching impact on the artificial intelligence and media production industries.

Source: IT Home

The English version follows:

**News Title:** “OpenAI’s New Model Sora: Rendering 1 Minute of Video in Over 1 Hour, Drawing Attention to Efficiency”

**Keywords:** OpenAI Sora, video generation, rendering time

**News Content:**
**OpenAI’s Sora Text-to-Video Model Sparks Interest: 1 Minute of Video Takes Over 1 Hour to Render**
Renowned artificial intelligence research institute OpenAI recently unveiled Sora, an innovative text-to-video model that can create realistic dynamic videos based on user-provided prompts. This technological advancement opens new possibilities for content creation but also presents practical challenges.

According to user feedback on Reddit, Sora requires over an hour of rendering time for a 1-minute video, a time cost that could pose a barrier to real-time or efficient content production. Currently, OpenAI has only showcased pre-selected examples; the public cannot yet generate videos from custom prompts, and the longest publicly demonstrated clip is only 17 seconds.

While Sora’s technological potential is noteworthy, its current efficiency issues have sparked industry discussion. Before such technology can be widely adopted in news reporting, film production, or game development, rendering speed and the user experience will clearly need further improvement. OpenAI has yet to comment officially, but the progress of this technology will undoubtedly have a profound impact on the AI and media production industries.

**Source:** IT Home

[Source] https://www.ithome.com/0/751/364.htm
