Hunyuan Adds LoRA Support To Let You Create Your Own Custom AI Video Model
You can now train your own custom LoRA with the Hunyuan video model. Try text-to-video or video-to-video with your own custom model.
The silver screen is no longer the exclusive domain of flesh-and-blood actors, all thanks to recent advancements in AI. These days, it’s remarkably easy to generate videos in which actors do or say things they never actually did or said, or to take a clip from a film and replace the actor’s face with someone else’s.
Tencent’s open-source AI video generator, Hunyuan, has just integrated Low-Rank Adaptation (LoRA) support, which means you can now train custom styles, characters, and movements, making your AI videos truly unique and personalized.
Hunyuan was launched in December 2024 and quickly impressed the AI community with its 95.7% visual quality score, outperforming many of its competitors. Now with LoRA integration, it’s even more powerful. This free, open-source AI video generator packs the punch of pricey options like OpenAI’s Sora, which, by the way, can cost you as much as $200 per month.
How Do These AI Tools Work?
There are three ways to generate a deepfake video of an actor.
Text-to-video: Use a model fine-tuned on images of the actor. Simply describe the video you’d like to generate, and the AI will render a clip of that actor guided by the description (see the minimal sketch after this list).
Image-to-video: If you don’t have a model fine-tuned on image samples, you can start from a single image of the actor and animate it into a video. Popular platforms such as Kling AI, Runway, and Pika Labs offer this.
Video-to-video: You can also take an existing video clip and replace the actor’s face with another character’s. This is probably the most effective way to produce a deepfake of an actor in a movie clip.
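To make the text-to-video path concrete, here’s a minimal sketch using the Hugging Face diffusers implementation of HunyuanVideo. The model ID, resolution, frame count, and step count are illustrative defaults, not official recommendations; check the diffusers documentation for what your hardware can actually handle.

```python
# Minimal text-to-video sketch with HunyuanVideo via diffusers.
# Assumes a recent diffusers release with HunyuanVideo support and a
# CUDA GPU with plenty of VRAM; all settings here are illustrative.
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"

# Load the video transformer in bfloat16 to reduce memory use.
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()  # tile VAE decoding so it fits on consumer GPUs
pipe.to("cuda")

frames = pipe(
    prompt="A man in a dark suit walks through a dimly lit church.",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "output.mp4", fps=15)
```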
Let’s look at text-to-video in practice. Using a model that’s been trained on Keanu Reeves’s photos as John Wick, you could prompt:
Prompt: John Wick, a man with long hair and a beard, wearing a dark suit and tie in a church. He has a serious expression on his face and is holding a gun in his right hand. The scene is dimly lit, creating a tense atmosphere.
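If you have the diffusers setup above working, a character LoRA can in principle be loaded the way other diffusers LoRAs are and combined with that prompt. The file path below is a placeholder for wherever you saved the LoRA weights, not the actual CivitAI filename; this is a hedged sketch, not a verified recipe.

```python
# Sketch: the same HunyuanVideo pipeline, with a character LoRA loaded.
# The LoRA path is a placeholder for the file you downloaded yourself.
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/john_wick_lora.safetensors")  # placeholder path
pipe.vae.enable_tiling()
pipe.to("cuda")

prompt = (
    "John Wick, a man with long hair and a beard, wearing a dark suit and "
    "tie in a church. He has a serious expression on his face and is "
    "holding a gun in his right hand. The scene is dimly lit, creating a "
    "tense atmosphere."
)
frames = pipe(prompt=prompt, height=320, width=512, num_frames=61).frames[0]
export_to_video(frames, "john_wick.mp4", fps=15)
```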
This is super cool. It looks like a genuine outtake from the John Wick franchise. Even the people in the background look real.
Alternatively, with video-to-video, you could feed an existing movie clip into the same model and ask the AI to replace the lead actor’s face with John Wick’s.
Here’s the result:
Seeing this in action for the first time is shocking in the best possible way. The realism is close to mind-blowing, especially when the AI nails the subtle movements in an actor’s expression. Hair, lighting, shading—it's all getting better with every new version of these models.
I’ve seen a bunch of video-to-video models in recent months, but this is by far the best in terms of quality.
Downloading Models and Running Workflows
If you’re interested in experimenting with these methods, you can find the video model used in the John Wick demo on CivitAI. However, the workflow is not yet publicly available. You’ll need to piece it together on your own or wait for official documentation.
If you have a powerful Mac, you can attempt to run the workflow with this setup:
Pinokio
Hunyuan Video
CivitAI John Wick LoRA
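If you’d rather script the downloads than click through the sites, here’s a minimal sketch that pulls the base weights from the Hugging Face Hub and a LoRA file from CivitAI. The repository ID and CivitAI version ID are assumptions; copy the real download URL from the LoRA’s CivitAI page (an API token may be required for some files).

```python
# Sketch: fetch the HunyuanVideo base weights and a LoRA file locally.
# The CivitAI version ID below is a placeholder, not the real one.
import os
import requests
from huggingface_hub import snapshot_download

# Base model weights from the Hugging Face Hub (a very large download).
snapshot_download(
    repo_id="hunyuanvideo-community/HunyuanVideo",
    local_dir="models/hunyuan-video",
)

# LoRA file via CivitAI's download API.
LORA_URL = "https://civitai.com/api/download/models/<VERSION_ID>"  # placeholder
os.makedirs("models/loras", exist_ok=True)
resp = requests.get(LORA_URL, timeout=600)
resp.raise_for_status()
with open("models/loras/john_wick_lora.safetensors", "wb") as f:
    f.write(resp.content)
```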
In my experience, it’s not too difficult to get everything working if you’re comfortable with basic command-line operations, Docker, or local AI dev environments. Still, the learning curve can be a bit steep if you’re totally new to AI model deployment.
How to Train Your Own LoRA for the Hunyuan Video Model
Okay, now let me show you how you can train a LoRA for the Hunyuan video model with your own video input.
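Before the step-by-step walkthrough, it helps to see the core idea in code: freeze the base video transformer and train small low-rank adapter matrices on top of its attention projections. Below is a conceptual sketch using the peft library; the target module names, rank, and learning rate are assumptions, and a real run also needs a captioned video dataset plus a full diffusion training loop, which community trainers handle for you.

```python
# Conceptual sketch: attach trainable LoRA adapters to the HunyuanVideo
# transformer with peft. Module names and hyperparameters are assumed,
# not taken from any official training recipe.
import torch
from diffusers import HunyuanVideoTransformer3DModel
from peft import LoraConfig, get_peft_model

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
transformer.requires_grad_(False)  # freeze every base weight

lora_config = LoraConfig(
    r=16,             # adapter rank: higher rank = more capacity, bigger file
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed attention names
    init_lora_weights="gaussian",
)
model = get_peft_model(transformer, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices should be trainable

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# Training loop (outline): for each batch of VAE-encoded video latents,
# add noise, predict it with `model`, compute the MSE loss, and step
# `optimizer` -- only the adapter weights ever change.
```

The payoff of this setup is that the trained adapter is tiny compared to the base model, which is why character and style LoRAs are practical to share on sites like CivitAI.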