
**News Title:** “OpenAI’s New Model Sora: Rendering 1 Minute of Video Takes Over 1 Hour, Sparks Efficiency Concerns”

**Keywords:** OpenAI Sora, video generation, rendering time

**News Content:**

In recent days, the artificial intelligence (AI) domain has witnessed another breakthrough with OpenAI’s introduction of its latest text-to-video generation model, Sora. The model, capable of transforming user-provided prompts into realistic videos, has attracted significant attention. However, while Sora’s technological prowess is impressive, its efficiency has become a topic of discussion. According to user feedback on Reddit, the model requires over an hour to render a mere 1-minute video, posing a challenge for practical applications.
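
For a rough sense of scale, here is a minimal back-of-the-envelope sketch, assuming the Reddit-reported figure of at least one hour of rendering per one-minute clip is taken at face value (the numbers and variable names below are illustrative, not official OpenAI figures):

```python
# Back-of-the-envelope estimate of how far Sora's reported rendering speed is
# from real time. The one-hour figure is an assumption based on Reddit reports,
# not an official OpenAI number.
render_time_s = 60 * 60   # assumed rendering time: at least 1 hour, in seconds
clip_length_s = 60        # length of the generated clip, in seconds

slowdown = render_time_s / clip_length_s
print(f"Rendering runs roughly {slowdown:.0f}x slower than real time")
# Expected output: Rendering runs roughly 60x slower than real time
```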

Despite Sora’s demonstrations of advanced video creation, the current experience appears limited. Researchers have so far showcased only pre-selected examples, and the public cannot yet generate videos from custom prompts. Most notably, the longest publicly demonstrated clip runs just 17 seconds, which, set against the hour-plus reportedly needed to render a 1-minute video, raises further questions about the model’s efficiency in real-world use.

OpenAI’s Sora model undoubtedly showcases the potential of AI in multimedia generation. However, optimizing algorithms to enhance speed while maintaining video quality will be a key challenge for OpenAI to address in the future. This development serves as a reminder that while AI technology has made significant strides in innovation, there is still a long way to go before it seamlessly integrates into daily life. We will continue to monitor OpenAI’s improvements and optimizations, anticipating the emergence of more efficient and user-friendly AI video generation technology.

[Source] https://www.ithome.com/0/751/364.htm
