Presentation
Generating Detailed Character Motion from Blocking Poses
Session: Motion Transfer & Control
Description: We focus on the problem of using generative diffusion models for the task of motion detailing: converting a rough "sketch" of a character animation, represented by a sparse set of coarsely posed, imprecisely timed key poses, into a detailed, natural-looking character animation. Current diffusion models can correct the timing of imprecisely timed key poses, but we find that no good solution exists for leveraging the diffusion prior to enhance a sparse set of key poses with additional pose detail. We overcome this challenge with a simple inference-time trick: at each diffusion step, we blend the output of an unconditioned diffusion model with the input key pose constraints using per-key-pose tolerance weights, and pass the result as the input condition to a pre-existing motion retiming model. We find this approach works significantly better than existing approaches that attempt to add detail by blending model outputs or by expressing key pose constraints as guidance. The result is the first diffusion model that can robustly convert blocking-level key poses into plausible, detailed character animations.
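The abstract describes the per-step procedure only at a high level. As an illustration of one plausible reading, here is a minimal PyTorch-style sketch of the inference loop: blend the unconditioned model's denoised estimate with the key pose constraints using tolerance weights, then feed the blend as the condition to the retiming model. All names and interfaces here (`uncond_model`, `retiming_model`, `scheduler`, the shape of `tolerance`) are assumptions for illustration, not the paper's actual implementation.

```python
import torch


def blend_keypose_constraints(x_pred, keyposes, keypose_mask, tolerance):
    """Blend the model's clean-motion estimate with the key pose constraints.

    x_pred:       (T, D) denoised motion estimate at the current step
    keyposes:     (T, D) key pose values, valid where keypose_mask is 1
    keypose_mask: (T, 1) 1 at key-posed frames, 0 elsewhere
    tolerance:    (T, 1) per-key-pose tolerance in [0, 1];
                  0 pins the frame to its key pose, 1 trusts the model fully
    """
    w = keypose_mask * (1.0 - tolerance)  # constraint strength per frame
    return w * keyposes + (1.0 - w) * x_pred


@torch.no_grad()
def detail_motion(uncond_model, retiming_model, scheduler,
                  keyposes, keypose_mask, tolerance):
    # Start from Gaussian noise over the full motion sequence.
    x_t = torch.randn_like(keyposes)
    for t in scheduler.timesteps:  # assumed generic scheduler interface
        # 1. Unconditioned estimate of the clean motion.
        x0_hat = uncond_model(x_t, t)
        # 2. Blend with the key pose constraints via tolerance weights.
        blended = blend_keypose_constraints(x0_hat, keyposes,
                                            keypose_mask, tolerance)
        # 3. Pass the blend as the condition to the retiming model.
        x0_cond = retiming_model(x_t, t, condition=blended)
        # 4. Standard diffusion update toward the next, less noisy step.
        x_t = scheduler.step(x0_cond, t, x_t)
    return x_t
```

Under this reading, the tolerance weights let tightly posed key frames stay pinned while loosely posed ones leave room for the prior to add detail; how the paper actually parameterizes the blend may differ.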

Event Type: Technical Papers
Time: Wednesday, 17 December 2025, 5:13pm - 5:24pm HKT
Location: Meeting Room S426+S427, Level 4


