Sync Lipsync 2
Lipsync 2.0 generates realistic lip movements that match spoken audio without training. It preserves speaker style across languages and video types, supporting live-action, animation, and AI-generated characters.
Model Overview
A zero-shot model that generates realistic lip movements synchronized with spoken audio. No training or fine-tuning is required, making it quick to deploy.
Best At
- Creating highly realistic and expressive lip movements for video content
- Preserving unique speaking styles across different languages and domains (live-action, animation, AI-generated)
- Enabling post-production edits to dialogue without re-recording
Ideal Use Cases
- Dubbing videos in multiple languages while maintaining original delivery
- Editing dialogue in post-production (changing words, re-animating entire performances)
- Generating AI-driven animations with synchronized speech
Input & Output Format
- Input: a video (mp4) and an audio (wav) file, referenced by URI, with optional parameters for sync mode, temperature, and active-speaker detection
- Output: a generated video file (mp4) with lip movements synchronized to the audio (see the request sketch below)
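A minimal sketch of what a request might look like. The endpoint URL, JSON field names, and the `output` response key are hypothetical placeholders, not the node's actual API; in practice the Nodespell AI node wires these inputs for you.

```python
import requests

# Hypothetical endpoint for illustration only; substitute your deployment's URL.
ENDPOINT = "https://api.example.com/v1/lipsync"

payload = {
    "video": "https://example.com/input/clip.mp4",    # input video URI (.mp4)
    "audio": "https://example.com/input/speech.wav",  # input audio URI (.wav)
    "sync_mode": "loop",       # behavior when audio/video durations differ
    "temperature": 0.5,        # expressiveness of the lipsync, 0-1
    "active_speaker": False,   # detect and lipsync whoever is speaking
}

resp = requests.post(ENDPOINT, json=payload, timeout=600)
resp.raise_for_status()

# Assumed response shape: the generated mp4 is returned by URI; download it.
result_url = resp.json()["output"]
with open("lipsynced.mp4", "wb") as f:
    f.write(requests.get(result_url, timeout=600).content)
```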
Parameters

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| Video | String | Input video file (.mp4) | - |
| Audio | String | Input audio file (.wav) | - |
| Sync Mode | String | Lipsync mode when audio and video durations are out of sync | loop |
| Temperature | Number | How expressive the lipsync can be (0-1) | 0.5 |
| Active Speaker | Boolean | Whether to detect the active speaker (whoever is speaking in the clip is used for lipsync) | false |

Output

| Name | Type |
| --- | --- |
| Output | Inferred |
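Sync mode governs what happens when the audio and video durations differ; the default value of loop suggests repeating the shorter stream until the two align. The sketch below is a conceptual illustration under that assumption only, not the model's actual internal handling.

```python
# Conceptual sketch of a "loop" sync mode: the shorter stream is repeated
# until it covers the longer one. Illustrative only; the model's internal
# duration handling is not documented here.
def loop_to_length(frames: list, target_len: int) -> list:
    """Repeat frames cyclically until the list reaches target_len items."""
    out: list = []
    while len(out) < target_len:
        out.extend(frames[: target_len - len(out)])
    return out

video_frames = list(range(90))   # e.g. 3 s of video at 30 fps
audio_ticks = list(range(150))   # e.g. 5 s of audio at the same rate
video_frames = loop_to_length(video_frames, len(audio_ticks))
assert len(video_frames) == len(audio_ticks)  # streams now align
```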
- Type: Node
- Status: Official
- Package: Nodespell AI
- Category: AI / Video / Sync