The evolution of diffusion models has greatly impacted video generation and understanding.
In particular, text-to-video diffusion models have greatly facilitated the customization of an input video with a target appearance, motion, and other attributes.
Despite these advances, challenges persist in accurately distilling motion information from video frames.
While existing works leverage the consecutive-frame residual as the target motion vector,
this representation inherently lacks global motion context and is vulnerable to frame-wise distortions.
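For concreteness, the residual-based motion target amounts to a simple frame difference. The following is a minimal sketch, assuming a PyTorch video tensor of shape (T, C, H, W); the function name `frame_residuals` is ours, not from the paper.

```python
import torch

def frame_residuals(frames: torch.Tensor) -> torch.Tensor:
    """Consecutive-frame residuals used as per-step motion vectors.

    frames: video tensor of shape (T, C, H, W).
    Returns a (T-1, C, H, W) tensor whose entry t is frames[t+1] - frames[t].
    """
    return frames[1:] - frames[:-1]
```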
To address this, we present Spectral Motion Alignment (SMA), a novel framework that refines and aligns motion vectors using Fourier and wavelet transforms.
SMA learns motion patterns through frequency-domain regularization, which facilitates the learning of whole-frame global motion dynamics and mitigates spatial artifacts.
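To illustrate the idea (not the paper's exact objective), the sketch below builds on `frame_residuals` above and penalizes the discrepancy between the Fourier magnitude spectra of source and generated motion vectors. It covers only the Fourier branch, omitting the wavelet transform; the L1 objective and the name `spectral_motion_loss` are assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def spectral_motion_loss(src_frames: torch.Tensor,
                         gen_frames: torch.Tensor) -> torch.Tensor:
    """Illustrative frequency-domain regularizer on frame residuals.

    Compares the 2D Fourier magnitude spectra of consecutive-frame
    residuals from a source and a generated clip, so alignment is
    driven by global frequency content rather than per-pixel values.
    """
    src_motion = frame_residuals(src_frames)  # (T-1, C, H, W)
    gen_motion = frame_residuals(gen_frames)

    # FFT over the spatial dims; taking the magnitude discards phase,
    # which makes the comparison robust to small frame-wise distortions.
    src_spec = torch.fft.fft2(src_motion).abs()
    gen_spec = torch.fft.fft2(gen_motion).abs()

    return F.l1_loss(gen_spec, src_spec)
```

Operating on the spectrum rather than on raw residuals is what gives such a regularizer a whole-frame view of motion, in line with the global-context motivation above.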
Extensive experiments demonstrate SMA's efficacy in improving motion transfer
while maintaining computational efficiency and compatibility across various video customization frameworks.