News Title: “OpenAI’s New Model Sora: 1-Minute Video Rendering Takes Over 1 Hour, Sparking Debate on Efficiency”
Keywords: OpenAI Sora, video generation, extended rendering time
News Content:
OpenAI has recently unveiled Sora, an innovative text-to-video model that generates realistic videos from user-provided prompts. The technology could have a groundbreaking impact on the news and entertainment industries, but its efficiency has become a point of debate. According to user feedback on Reddit, the model reportedly takes over an hour to render a 1-minute video, which poses a challenge for practical applications.
While Sora’s technical demonstrations showcase its sophistication in video generation, researchers have so far only presented pre-selected examples and have not made custom prompt functionality available to the public. The longest publicly demonstrated video clocks in at only 17 seconds, raising questions about Sora’s capability to handle longer and more complex video tasks.
IT Home reports that while OpenAI's Sora model is a pioneering step in video generation, its current efficiency limits the prospects for widespread adoption. Going forward, OpenAI may need to focus on improving rendering speed and the overall user experience to meet demands for immediacy. This development is also a reminder to the industry that although AI holds tremendous potential for content creation, the technology still needs time to mature.
Source: https://www.ithome.com/0/751/364.htm