Open-Source Video Generators SkyReels-V2, FramePack Extend Duration

Coin World · Wednesday, Apr 23, 2025, 7:08 pm ET
3 min read

Open-source video generators are advancing rapidly, challenging the dominance of closed-source alternatives. These models are more customizable and less restricted than their commercial counterparts, free to use, and now capable of producing high-quality video. Three of them, Wan, Mochi, and Hunyuan, have ranked among the top 10 AI video generators, showcasing the growing capabilities of open-source technology.

The latest breakthrough in this field involves extending video duration beyond the typical few seconds. Two new models, SkyReels-V2 and FramePack, have demonstrated the ability to generate content lasting minutes instead of seconds. SkyReels-V2, released this week, is claimed to generate scenes of potentially infinite duration while maintaining consistency throughout. FramePack, meanwhile, lets users with lower-end hardware create long videos without overloading their systems.

SkyReels-V2 represents a significant advance in video generation technology, tackling four critical challenges that have limited previous models. Its developers describe the system as an "Infinite-Length Film Generative Model" built on a "diffusion forcing framework" that allows video to be extended seamlessly, without an explicit length constraint. Rather than generating a clip in a single pass, the model conditions each new segment on the final frames of the segment it just produced, keeping transitions smooth and preventing quality from degrading over extended sequences.
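
To make that loop concrete, here is a minimal sketch of chunk-wise extension in the spirit of a diffusion-forcing setup, where each new chunk is denoised while conditioned on the trailing frames of the previous one. The chunk size, the overlap, and the denoise_chunk placeholder are illustrative assumptions, not SkyReels-V2's actual API.

```python
# Minimal sketch of chunk-wise autoregressive video extension: each new chunk
# is denoised while conditioned on the final frames of the chunk before it.
# denoise_chunk is a hypothetical stand-in for the real denoiser, not the
# SkyReels-V2 API; chunk and overlap sizes are likewise assumed.
import numpy as np

FRAMES_PER_CHUNK = 16  # frames produced per generation step (assumed)
OVERLAP = 4            # trailing frames reused as conditioning (assumed)

def denoise_chunk(noise, conditioning_frames):
    """Placeholder denoiser: pulls noise toward the conditioning frames' mean
    so the sketch runs end to end; a real model would run a diffusion sampler."""
    anchor = conditioning_frames.mean(axis=0, keepdims=True)
    return 0.9 * anchor + 0.1 * noise

def generate_video(num_chunks, height=64, width=64):
    # Seed the sequence with a first chunk drawn from pure noise.
    chunks = [np.random.rand(FRAMES_PER_CHUNK, height, width)]
    for _ in range(num_chunks - 1):
        # Condition only on the last few frames just produced, so the video
        # can keep growing with no explicit length limit.
        context = chunks[-1][-OVERLAP:]
        noise = np.random.rand(FRAMES_PER_CHUNK, height, width)
        chunks.append(denoise_chunk(noise, context))
    return np.concatenate(chunks, axis=0)

frames = generate_video(num_chunks=8)
print(frames.shape)  # (128, 64, 64): eight 16-frame chunks stitched together
```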

This innovation addresses the main reason video generators have stuck to clips of around 10 seconds: beyond that, generation tends to lose coherence. Videos uploaded to social media by developers and enthusiasts suggest the model holds up well. Subjects remain identifiable throughout long scenes, image quality does not visibly degrade, and backgrounds neither warp nor introduce artifacts that would break the scene.

SkyReels-V2 incorporates several innovative components, including a new captioner that combines knowledge from general-purpose language models with specialized "shot-expert" models to ensure precise alignment with cinematic terminology. This helps the system better understand and execute professional film techniques. The system uses a multi-stage training pipeline that progressively increases resolution from 256p to 720p, providing high-quality results while maintaining visual coherence. For motion quality—a persistent weakness in AI video generation—the team implemented reinforcement learning specifically designed to improve natural movement patterns.
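
As a rough illustration of what such a progressive-resolution pipeline might look like, the sketch below steps a model through training stages of increasing resolution. Only the 256p-to-720p range comes from the article; the stage heights, step counts, and train_stage helper are assumptions made for illustration.

```python
# Illustrative progressive-resolution training schedule (256p -> 720p) in the
# spirit of the multi-stage pipeline described above. The intermediate 540p
# stage and the step counts are assumptions, not SkyReels-V2's actual recipe.
STAGES = [
    {"height": 256, "steps": 100_000},  # learn coarse structure and motion cheaply
    {"height": 540, "steps": 50_000},   # hypothetical intermediate stage
    {"height": 720, "steps": 25_000},   # final stage at the target resolution
]

def train_stage(model_state, height, steps):
    # Placeholder: a real pipeline would resize training videos to `height`
    # pixels tall and run `steps` optimizer updates before the next stage.
    print(f"training {steps:,} steps at {height}p")
    return model_state

state = {}
for stage in STAGES:
    state = train_stage(state, stage["height"], stage["steps"])
```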

The model is available to try at Skyreels.AI. The free credits are enough to generate only one video; anything more requires a monthly subscription, starting at $8 per month. Those who would rather run it locally will need a high-performance PC. “Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM,” the team says on GitHub.
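
For anyone weighing a local install against those figures, here is a small helper that picks the largest SkyReels-V2 variant fitting a given VRAM budget. The peak numbers are the ones the team quotes; the function itself is just an illustrative convenience, not official tooling.

```python
# Peak VRAM for 540p generation, per the figures quoted from the team's GitHub.
PEAK_VRAM_GB = {"1.3B": 14.7, "14B": 51.2}

def pick_variant(available_vram_gb):
    """Return the largest model variant that fits the budget, or None."""
    fitting = [m for m, need in PEAK_VRAM_GB.items() if need <= available_vram_gb]
    return max(fitting, key=PEAK_VRAM_GB.get) if fitting else None

print(pick_variant(24.0))  # '1.3B': a 24 GB consumer card fits only the small model
print(pick_variant(80.0))  # '14B': roughly data-center-class memory is required
print(pick_variant(12.0))  # None: below even the 1.3B model's peak requirement
```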

FramePack takes a different approach from SkyReels-V2, focusing on efficiency rather than just length. When optimized, it can generate frames in as little as 1.5 seconds each while requiring only 6 GB of VRAM. “To generate 1-minute video (60 seconds) at 30fps (1800 frames) using 13B model, the minimal required GPU memory is 6GB. (Yes, 6 GB, not a typo. Laptop GPUs are okay),” the research team said in the project’s official GitHub repo. This low hardware requirement could democratize AI video, bringing advanced generation capabilities within reach of consumer-grade GPUs.
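
Those figures are easy to sanity-check; the arithmetic below reproduces the 1,800-frame count and shows what the optimized 1.5-seconds-per-frame rate implies for wall-clock time. This is plain arithmetic on the numbers quoted above, not a benchmark.

```python
# Sanity-checking the quoted figures: 60 s of video at 30 fps is 1800 frames,
# and at the optimized ~1.5 s/frame rate that is about 45 minutes of wall-clock
# generation time on suitable hardware.
duration_s = 60
fps = 30
seconds_per_frame = 1.5  # optimized speed cited in the article

total_frames = duration_s * fps
print(total_frames)                           # 1800
print(total_frames * seconds_per_frame / 60)  # 45.0 (minutes of generation)
```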

Because it lets even the quoted 13-billion-parameter model run within a consumer memory budget, FramePack could enable deployment on edge devices and wider adoption across industries. FramePack was developed by researchers at Stanford University. The team included Lvmin Zhang, better known in the generative AI community as illyasviel, the dev-influencer behind many open-source resources for AI artists, such as ControlNet and IC-Light, which revolutionized image generation during the SD1.5/SDXL era.

FramePack's key innovation is a clever memory compression system that prioritizes frames by importance. Rather than treating all previous frames equally, the system assigns more computational resources to recent frames while progressively compressing older ones. Run through FramePack nodes in ComfyUI (a popular interface for generating videos locally), it produces very good results, especially considering how little hardware it requires. Enthusiasts have generated 120 seconds of consistent video with minimal errors, beating state-of-the-art models that deliver great quality but degrade severely once users push past a few seconds.
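
One way to picture that prioritization is a geometrically decaying context budget: the newest frames keep full detail, and each older frame gets roughly half the tokens of the one after it, so total context barely grows with video length. The halving schedule and token counts below are assumptions chosen for illustration, not FramePack's exact layout.

```python
# Sketch of importance-based frame compression: recent frames keep a full token
# budget while older frames are squeezed harder, keeping total context nearly
# flat as the video grows. Budgets and the halving rule are illustrative only.
def context_budgets(num_past_frames, full_tokens=1536, min_tokens=2):
    """Tokens allotted to each past frame, newest first."""
    budgets, tokens = [], full_tokens
    for _ in range(num_past_frames):
        budgets.append(max(tokens, min_tokens))
        tokens //= 2  # halve the budget as frames age
    return budgets

for n in (4, 16, 120):
    total = sum(context_budgets(n))
    print(f"{n:>3} past frames -> {total} context tokens")
# 120 frames cost barely more context than 16, which is how long videos stay
# within a ~6 GB memory budget.
```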

FramePack is available for local installation via its official GitHub repository. The team emphasized that the project has no official website, and all other URLs using its name are scam sites not affiliated with the project. “Do not pay money or download files from any of those websites,” the researchers warned. The practical benefits of FramePack include the possibility of small-scale training, higher-quality outputs due to "less aggressive schedulers with less extreme flow shift timesteps," consistent visual quality maintained throughout long videos, and compatibility with existing video diffusion models like HunyuanVideo and Wan.
