Advances in generative models and sequence learning have driven rapid progress in dance motion generation, yet existing methods still offer only coarse semantic control and struggle to remain coherent over long sequences.
In this work, we present LRCM (Listen to Rhythm, Choose Movements), a multimodal-guided diffusion framework that supports both diverse input modalities and autoregressive dance motion generation. We explore a feature-decoupling paradigm for dance datasets and instantiate it on the Motorica Dance dataset, separating motion-capture data, audio rhythm, and professionally annotated global and local text descriptions.
Our diffusion architecture integrates an audio–latent Conformer and a text–latent Cross-Conformer, and incorporates a Motion Temporal Mamba Module (MTMM) to enable smooth, long-duration autoregressive synthesis. Experimental results indicate that LRCM performs strongly in terms of both supported functionality and quantitative metrics, and shows notable promise for multimodal input scenarios and extended-sequence generation.
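To make the two conditioning pathways concrete, the sketch below shows one plausible way the audio–latent and text–latent branches could feed a single denoising step. The module sizes, the vanilla Transformer layers standing in for Conformer blocks, and all dimension choices are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class AudioLatentEncoder(nn.Module):
    """Stand-in for the audio-latent Conformer: encodes frame-aligned audio
    features into a rhythm-conditioning sequence. A vanilla Transformer
    encoder layer is used as a simplified placeholder for Conformer blocks."""

    def __init__(self, audio_dim=35, d_model=256, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(audio_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, nhead=4, dim_feedforward=512, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, audio):                     # (B, T, audio_dim)
        return self.encoder(self.proj(audio))     # (B, T, d_model)


class TextLatentCrossEncoder(nn.Module):
    """Stand-in for the text-latent Cross-Conformer: motion latents attend to
    global-style and local-movement text embeddings via cross-attention."""

    def __init__(self, d_model=256, text_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, motion_latent, text_tokens):  # (B, T, d), (B, N, text_dim)
        kv = self.text_proj(text_tokens)
        attended, _ = self.cross_attn(motion_latent, kv, kv)
        return self.norm(motion_latent + attended)


class DenoiserStep(nn.Module):
    """One conditional denoising step: predicts the clean motion (or the noise)
    from the noisy motion, the diffusion timestep, and both condition streams."""

    def __init__(self, motion_dim=135, d_model=256):
        super().__init__()
        self.motion_in = nn.Linear(motion_dim, d_model)
        self.t_embed = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        self.audio_enc = AudioLatentEncoder(d_model=d_model)
        self.text_enc = TextLatentCrossEncoder(d_model=d_model)
        self.out = nn.Linear(d_model, motion_dim)

    def forward(self, noisy_motion, t, audio, text_tokens):
        h = self.motion_in(noisy_motion) + self.t_embed(t[:, None, None].float())
        h = h + self.audio_enc(audio)          # frame-aligned rhythm conditioning
        h = self.text_enc(h, text_tokens)      # global/local text semantics
        return self.out(h)
```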
We propose a fine-grained semantic decoupling paradigm for multimodal dance datasets, formalizing the separation of dance motion, audio rhythm, and professionally annotated text, with the text organized hierarchically into global style and local movement descriptions; we instantiate this paradigm on the Motorica Dance dataset.
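As a rough illustration of what a decoupled sample could look like after this separation, one might store each clip as a record along the following lines; the field names, shapes, and segmentation scheme are assumptions for exposition, not the released annotation schema.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class DecoupledDanceSample:
    """Illustrative container for one decoupled training sample
    (motion, rhythm, and hierarchical global/local text)."""
    motion: np.ndarray                    # (T, D_pose) motion-capture features per frame
    audio: np.ndarray                     # (T, D_audio) frame-aligned audio/rhythm features
    global_text: str                      # one global style description for the whole clip
    local_texts: list[str]                # per-segment local movement descriptions
    local_spans: list[tuple[int, int]]    # frame range covered by each local description
```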
We design audio–latent Conformers that capture persistent rhythmic cues from the audio and text–latent Cross-Conformers that inject fine-grained semantics from the global and local textual inputs, together with a jerk-based loss function that jointly maintains rhythmic smoothness and semantic consistency.
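As one illustrative finite-difference instantiation of such a jerk term (the exact formulation and weighting in LRCM may differ), the loss can penalize the discrepancy between the third-order temporal differences of the generated and ground-truth poses:
\[
\hat{j}_t = \hat{x}_{t+3} - 3\hat{x}_{t+2} + 3\hat{x}_{t+1} - \hat{x}_t,
\qquad
\mathcal{L}_{\mathrm{jerk}} = \frac{1}{T-3}\sum_{t=1}^{T-3}\bigl\lVert \hat{j}_t - j_t \bigr\rVert_2^2,
\]
where \(\hat{x}_t\) and \(x_t\) denote the generated and ground-truth pose features at frame \(t\), and \(j_t\) is the corresponding ground-truth third-order difference.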
We extend the diffusion framework autoregressively with a state-space model for long-sequence generation: the Motion Temporal Mamba Module (MTMM) enables smooth, long-duration synthesis with linear complexity and a hardware-friendly design via a bidirectional Mamba scan.
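A minimal sketch of such a bidirectional scan is given below, assuming the publicly available `mamba_ssm` package's `Mamba` block; the layer sizes and the fusion scheme are illustrative choices rather than the exact MTMM configuration.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumes the public mamba-ssm package is installed


class BiMambaTemporalBlock(nn.Module):
    """Bidirectional Mamba scan over a motion-latent sequence: one selective
    scan runs forward in time, a second runs over the time-reversed sequence,
    and both are fused back into the residual stream."""

    def __init__(self, d_model=256):
        super().__init__()
        self.fwd = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.bwd = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                   # x: (B, T, d_model)
        h = self.norm(x)
        f = self.fwd(h)                                      # forward-in-time scan
        b = self.bwd(h.flip(dims=[1])).flip(dims=[1])        # backward scan, re-reversed
        return x + self.fuse(torch.cat([f, b], dim=-1))      # residual fusion
```

In an autoregressive setting, each new motion window would then be denoised conditioned on the last frames of the previously generated window, so the linear-time scan keeps extended-sequence synthesis tractable.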