Viggle, a Canadian AI startup, has secured $19 million in Series A funding led by Andreessen Horowitz (a16z) and supported by Two Small Fish. According to TechCrunch, this investment will help Viggle expand its operations, accelerate product development, and grow its team. This round of funding highlights the growing interest in AI-driven video generation technologies.
Viggle has developed a 3D video foundation model called JST-1, which is capable of understanding physical laws. Unlike other AI video models, Viggle allows users to specify the actions they want characters to perform, ensuring that the generated movements are realistic and adhere to physical laws. Hang Chu, the CEO of Viggle, explains: "We are essentially building a new kind of graphics engine, but purely using neural networks. This model is fundamentally different from existing video generators, which are primarily pixel-based and do not truly understand physical structure and properties. Our model is designed to understand these aspects, which is why it excels in controllability and generation efficiency."
The company’s AI model can create 3D animated videos from scratch or from existing footage. For example, to create a video where a jester impersonates Lil Yachty, users can upload the original video along with images of the jester performing the desired action. Alternatively, users can provide an image of the character and text instructions, or use text prompts alone to create a new animated character. While meme videos account for only a small share of Viggle’s usage, the model has been widely adopted as a tool for creative visualization by filmmakers, animators, and video game designers.
Chu notes that the model can currently only create characters, but he hopes to expand to more complex video creation in the future. Viggle offers a free, limited version of its AI model on Discord and its web application, with a $9.99 subscription that increases capacity. The company also provides special access through a creator program. Viggle is in discussions with film and video game studios about licensing the technology, but Chu also sees independent animators and content creators adopting it.
The training data for Viggle’s AI model has been a point of interest. During a TechCrunch interview, Chu confirmed that the model is trained on various public resources, including YouTube videos. However, Neal Mohan, CEO of YouTube, had previously stated that using YouTube videos to train AI text-to-video generators would clearly violate the platform’s terms of service. This raised concerns about the legality and ethical implications of using YouTube content for training AI models.
A Viggle spokesperson later acknowledged that the company uses YouTube videos for training but emphasized that the training data is carefully curated and complies with all relevant terms of service. This claim sits uneasily alongside YouTube’s stated position, leaving open questions about the company’s practices. Many AI model developers, including OpenAI, Nvidia, Apple, and Anthropic, have been reported to use YouTube videos for training, a practice that has become a well-known, albeit somewhat secretive, aspect of the industry.
In conclusion, Viggle’s innovative approach to AI-driven video generation has attracted significant investment and attention. As the company continues to scale and refine its technology, it faces the challenge of balancing innovation with ethical considerations surrounding the use of training data.