SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes
Session: Motion Transfer & Control
Description: Animation retargeting involves applying a sparse motion description (e.g., 2D/3D keypoint sequences) to a given character mesh to produce a semantically plausible and temporally coherent full-body motion, bringing the character to life. Given its practical relevance, it remains a highly desired tool in any digital character workflow. An ideal data-driven solution to this problem should work without templates, without access to corrective keyframes, and still generalize to novel characters and unseen motions. Existing approaches come with a mix of restrictions -- they require annotated training data, assume access to template-based shape priors or artist-designed deformation rigs, suffer from limited generalization to unseen motions and/or shapes, or exhibit motion jitter. We propose Self-supervised Motion Fields (SMF), a self-supervised framework that can be robustly trained with sparse motion representations, without requiring dataset-specific annotations, templates, or rigs. At the heart of our method are Kinetic Codes, a novel autoencoder-based sparse motion encoding that exposes a semantically rich latent space, simplifying large-scale training. Our architecture comprises dedicated spatial and temporal gradient predictors, which are trained end-to-end. The resultant network, regularized by the Kinetic Codes' latent space, generalizes well across shapes and motions. We evaluated our method on unseen motions sampled from AMASS, D4D, Mixamo, and raw monocular video for animation transfer on various characters with varying shapes and topologies. We report a new SoTA on the AMASS dataset in the context of generalization to unseen motion. (Source code will be released.)
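To make the idea of an autoencoder-based sparse motion encoding concrete, here is a minimal, purely illustrative sketch: a linear autoencoder compressing a window of sparse 3D keypoint frames into a compact latent vector. All names, dimensions, and the linear maps are assumptions for illustration; the paper's actual Kinetic Codes architecture and its spatial/temporal gradient predictors are not specified here.

```python
import numpy as np

# Hypothetical sketch: compress a window of sparse keypoint frames into a
# compact latent code, then reconstruct. Dimensions are illustrative only.
rng = np.random.default_rng(0)

T, K = 16, 22            # frames per window, number of 3D keypoints (assumed)
D = T * K * 3            # flattened input dimension
Z = 32                   # latent code dimension (assumed)

# Encoder/decoder weights; in a real system these would be trained
# end-to-end together with the rest of the network.
W_enc = rng.standard_normal((Z, D)) / np.sqrt(D)
W_dec = rng.standard_normal((D, Z)) / np.sqrt(Z)

def encode(keypoints):   # (T, K, 3) -> (Z,)
    return W_enc @ keypoints.reshape(-1)

def decode(z):           # (Z,) -> (T, K, 3)
    return (W_dec @ z).reshape(T, K, 3)

motion = rng.standard_normal((T, K, 3))   # one sparse motion window
z = encode(motion)                        # compact latent code
recon = decode(z)
print(z.shape, recon.shape)               # (32,) (16, 22, 3)
```

The latent vector `z` stands in for the role a learned motion code plays: a low-dimensional, semantically structured summary of the input motion that downstream predictors can be conditioned and regularized on.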

Event Type: Technical Papers
Time: Wednesday, 17 December 2025, 4:51pm - 5:02pm HKT
Location: Meeting Room S426+S427, Level 4

