This project integrates LivePortrait (Efficient Portrait Animation with Stitching and Retargeting Control) into The Foundry's Nuke, enabling artists to easily create animated portraits through facial expression and motion transfer.
LivePortrait leverages a series of neural networks to extract motion from a reference video, then deform and blend it with a target image, producing highly realistic and expressive animations.
By integrating LivePortrait into Nuke, artists can enhance their workflows within a familiar environment, gaining additional control through Nuke's curve editor and custom knob creation.
This implementation provides a self-contained package as a series of Inference nodes. This allows for easy installation on any Nuke 14+ system, without requiring additional dependencies like ComfyUI or conda environments.
The current version supports video-to-image animation transfer. Future developments will expand this functionality to include video-to-video animation transfer, eyes and lips retargeting, an animal animation model, and support for additional face detection models.
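Because the animation controls are exposed as ordinary Nuke knobs, they can be keyframed from Python just as easily as from the curve editor. Below is a minimal sketch; the node name LivePortrait1 and the expression_scale knob are assumptions, so substitute the names your node actually exposes:

```python
# Minimal sketch: keyframe a LivePortrait control knob so it can be
# refined later in Nuke's curve editor. Node and knob names are assumed.
import nuke

lp = nuke.toNode("LivePortrait1")      # an existing LivePortrait node
knob = lp.knob("expression_scale")     # hypothetical animation-control knob
knob.setAnimated()                     # enable keyframing on the knob
knob.setValueAt(0.0, 1)                # frame 1: no influence
knob.setValueAt(1.0, 24)               # frame 24: full influence
```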
🔥 For more results, visit the project homepage 🔥
Nuke 14.0+, tested on Linux.
- Fast inference and animation transfer
- Flexible advanced options for animation control
- Seamless integration into Nuke's node graph and curve editor
- Separated network nodes for customization and workflow experimentation
- Easy installation using Nuke's Cattery system
Maximum resolution for image output is currently 256x256 pixels (upscaled to 512x512 pixels), due to the original model's limitations.
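Plates larger than this should be brought down to the model's working size before the node and scaled back up afterwards. A minimal sketch using stock Reformat nodes; the box sizes are assumptions about a typical setup:

```python
# Sketch: bracket LivePortrait with Reformat nodes so the input matches
# the model's 256x256 working resolution. Box sizes are assumptions.
import nuke

pre = nuke.nodes.Reformat(type="to box", box_fixed=True,
                          box_width=256, box_height=256)
# ... LivePortrait node is connected between pre and post ...
post = nuke.nodes.Reformat(type="to box", box_fixed=True,
                           box_width=1024, box_height=1024)
```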
- Download and unzip the latest release from here.
- Copy the extracted Cattery folder to .nuke or your plugins path.
- In the toolbar, choose Cattery > Update or simply restart Nuke.
LivePortrait will then be accessible under the toolbar at Cattery > Stylization > LivePortrait.
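If you prefer a scripted install, copying the folder by hand amounts to the same thing. A minimal sketch, assuming the release was extracted to your Downloads folder:

```python
# Sketch: copy the extracted Cattery folder into ~/.nuke so Nuke finds
# it on the plugin path. The source path is an assumption.
import shutil
from pathlib import Path

src = Path("~/Downloads/Cattery").expanduser()   # extracted release
dst = Path("~/.nuke/Cattery").expanduser()
shutil.copytree(src, dst, dirs_exist_ok=True)    # needs Python 3.8+
print(f"Installed to {dst}; run Cattery > Update or restart Nuke")
```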
LivePortrait requires two inputs:
- Image (target face)
- Video reference (animation to be transferred)
Open the included demo.nk file for a working example.
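To build the same setup in script, the sketch below wires both inputs. The node class name, file paths, and input order are assumptions; verify them against demo.nk:

```python
# Sketch: connect a still target face and a driving video to LivePortrait.
# Node class name, file paths, and input order are assumptions.
import nuke

face = nuke.nodes.Read(file="/path/to/target_face.png")  # image (target face)
drive = nuke.nodes.Read(file="/path/to/reference.mov")   # video reference
lp = nuke.createNode("LivePortrait", inpanel=False)
lp.setInput(0, face)
lp.setInput(1, drive)
```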
A self-contained gizmo will be provided in the next release.
Latest version: 1.0
- Initial release
- Video-to-image animation transfer
- Integrated into Nuke's node graph
- Advanced options for animation control
- Easy installation with Cattery package
LivePortrait.cat is licensed under the MIT License, and is derived from https://github.com/KwaiVGI/LivePortrait.
While the MIT License permits commercial use of LivePortrait, the dataset used for its training and some of the underlying models may be under a non-commercial license.
This license does not cover the underlying pre-trained model, associated training data, and dependencies, which may be subject to further usage restrictions.
Consult https://github.com/KwaiVGI/LivePortrait for more information on associated licensing terms.
Users are solely responsible for ensuring that the underlying model, training data, and dependencies align with their intended usage of LivePortrait.cat.
We would like to thank the contributors of FOMM, Open Facevid2vid, SPADE, InsightFace and X-Pose repositories, for their open research and contributions.
Portrait animation technologies come with social risks, particularly the potential for misuse in creating deepfakes. To mitigate these risks, it’s crucial to follow ethical guidelines and adopt responsible usage practices. At present, the synthesized results contain visual artifacts that may help in detecting deepfakes. Please note that we do not assume any legal responsibility for the use of the results generated by this project.
If you find LivePortrait useful for your research, please 🌟 this repo and cite our work using the following BibTeX:
```bibtex
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
```