
Hong Kong/Beijing – The world of artificial intelligence continues to astound, with the latest breakthrough coming in the form of YuE (乐), an open-source music generation model developed collaboratively by the Hong Kong University of Science and Technology (HKUST) and the Multimodal Art Projection (M-A-P) research community. This innovative AI, which some commentators have hyperbolically described as "stealing a Grammy," is poised to reshape music creation, offering a powerful and accessible tool for artists and enthusiasts alike.

The model, detailed in the paper YuE: Scaling Open Foundation Models for Long-Form Music Generation, leverages the power of large language models (LLMs), effectively feeding LLaMA with musical data to create a system capable of generating diverse and compelling musical pieces. The project is available on GitHub at https://github.com/multimodal-art-projection/YuE.
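The core idea described above — treating music generation as next-token prediction with a LLaMA-style language model over discrete audio-codec tokens — can be illustrated with a toy sketch. Everything below is illustrative only: the vocabulary size, the conditioning format, the stopping rule, and the function names are assumptions for demonstration, not YuE's actual implementation (which is detailed in the paper and the GitHub repository).

```python
import random

# Conceptual sketch: an autoregressive LM predicts discrete audio-codec
# tokens conditioned on lyrics tokens. The constants and the toy "model"
# below are hypothetical stand-ins, not YuE's real architecture.

AUDIO_VOCAB_SIZE = 1024  # hypothetical codec codebook size
EOS_TOKEN = -1           # hypothetical end-of-sequence marker

def toy_next_token(lyrics_tokens, history, rng):
    """Stand-in for the LM's next-token distribution (uniform sampling here)."""
    if len(history) >= 8:  # toy stopping rule in place of a learned EOS
        return EOS_TOKEN
    return rng.randrange(AUDIO_VOCAB_SIZE)

def generate_audio_tokens(lyrics_tokens, seed=0):
    """Autoregressively sample audio tokens conditioned on lyrics tokens."""
    rng = random.Random(seed)
    audio = []
    while True:
        nxt = toy_next_token(lyrics_tokens, audio, rng)
        if nxt == EOS_TOKEN:
            break
        audio.append(nxt)  # each sampled token extends the context
    return audio

tokens = generate_audio_tokens(lyrics_tokens=[5, 9, 2])
print(len(tokens))  # → 8
```

In a real system, `toy_next_token` would be a transformer forward pass, and the generated token sequence would be decoded back into a waveform by a neural audio codec; the loop structure, however, is the same.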

The capabilities of YuE are truly remarkable. Demonstrations showcase the AI’s ability to produce a wide range of musical styles, from ethereal electronic vocals to gritty street rap. The system can even emulate the vocal styles of renowned artists such as Adele and Billie Eilish with uncanny accuracy. One particularly impressive example is an AI-generated rendition of The World Gave Me, originally performed by Faye Wong, capturing the singer’s signature ethereal and melancholic sound.

The open-source nature of YuE is a significant development. Unlike proprietary AI music generators, YuE provides a platform for collaborative development and experimentation, potentially democratizing music creation and fostering innovation within the field. The model’s versatility, demonstrated by its ability to seamlessly switch between Japanese, Korean, and English in a K-Pop style track, highlights its potential for cross-cultural musical exploration.

The emergence of YuE raises important questions about the future of music creation and the role of AI in the creative process. While some may view AI music generation with skepticism, the technology offers exciting possibilities for artists to explore new sounds, overcome creative blocks, and collaborate with AI in unprecedented ways.

The development of YuE marks a significant step forward in AI music generation, pushing the boundaries of what’s possible and opening up new avenues for musical expression. As the technology continues to evolve, it will be fascinating to see how artists and musicians embrace and integrate AI into their creative workflows.
