arXiv:2601.03323  ·  Jan 2026  ·  LRCM

Listen to Rhythm, Choose Movements

Autoregressive Multimodal Dance Generation via Diffusion and Mamba
with Decoupled Dance Dataset

¹ Communication University of China   ² Zhipu AI

Advances in generative models and sequence learning have greatly promoted research in dance motion generation, yet current methods still suffer from coarse semantic control and poor coherence in long sequences.

In this work, we present LRCM (Listen to Rhythm, Choose Movements), a multimodal-guided diffusion framework supporting both diverse input modalities and autoregressive dance motion generation. We explore a feature-decoupling paradigm for dance datasets and generalize it to the Motorica Dance dataset, separating motion capture data, audio rhythm, and professionally annotated global and local text descriptions.

Our diffusion architecture integrates an audio–latent Conformer and a text–latent Cross-Conformer, and incorporates a Motion Temporal Mamba Module (MTMM) to enable smooth, long-duration autoregressive synthesis. Experimental results indicate that LRCM delivers strong performance in both functional capability and quantitative metrics, demonstrating notable potential in multimodal input scenarios and extended-sequence generation.

1. Decoupled Multimodal Dance Dataset Paradigm

We propose a fine-grained semantic decoupling paradigm for multimodal dance datasets, formalizing the separation of dance motion, audio rhythm, and professionally annotated textual descriptions into hierarchical global style and local movement levels, instantiated on the Motorica Dance dataset.

Feature Decoupling · Motorica Dataset · Global + Local Text
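To make the decoupling paradigm concrete, the sketch below shows one plausible shape for a decoupled sample record: motion capture frames, per-frame audio rhythm features, a global style caption, and local movement captions tied to frame intervals. All class and field names here are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LocalCaption:
    """A local movement description covering a frame interval (hypothetical)."""
    start_frame: int
    end_frame: int
    text: str

@dataclass
class DecoupledDanceSample:
    """One decoupled sample: motion, audio rhythm, and hierarchical text."""
    motion: List[List[float]]                 # per-frame pose vectors (mocap)
    rhythm: List[float]                       # per-frame audio rhythm feature
    global_text: str                          # global style description
    local_texts: List[LocalCaption] = field(default_factory=list)

    def captions_for_frame(self, frame: int) -> List[str]:
        """Return every local description whose interval covers `frame`."""
        return [c.text for c in self.local_texts
                if c.start_frame <= frame < c.end_frame]

# Example: a 120-frame clip with one global style and two overlapping
# local movement annotations.
sample = DecoupledDanceSample(
    motion=[[0.0]] * 120,
    rhythm=[0.0] * 120,
    global_text="jazz, energetic",
    local_texts=[LocalCaption(0, 60, "arm wave"),
                 LocalCaption(30, 90, "spin")],
)
```

Keeping global and local text in separate fields is what lets the model condition on a persistent style while switching fine-grained movement semantics mid-sequence.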
2. Heterogeneous Multimodal-Guided Diffusion Architecture

Audio–latent Conformers capture persistent rhythmic cues from audio; text–latent Cross-Conformers incorporate fine-grained semantics from global and local textual inputs. A jerk-based loss function jointly maintains rhythmic smoothness and semantic consistency.

Audio–latent Conformer · Text–latent Cross-Conformer · Jerk-based Loss
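Jerk is the third time-derivative of position, so a jerk-based smoothness term can be approximated with third-order finite differences over the motion sequence. The paper does not give its exact loss formula; the following is a minimal sketch of one common formulation (mean squared jerk mismatch between generated and reference motion), with all names assumed.

```python
import numpy as np

def jerk(motion: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Third-order finite-difference estimate of jerk.

    motion: (T, D) array of per-frame pose values.
    Returns a (T-3, D) array of per-frame jerk estimates.
    """
    dt = 1.0 / fps
    return np.diff(motion, n=3, axis=0) / dt ** 3

def jerk_loss(pred: np.ndarray, target: np.ndarray, fps: float = 30.0) -> float:
    """Penalize jerk mismatch between generated and reference motion,
    encouraging rhythmically smooth transitions without over-damping."""
    return float(np.mean((jerk(pred, fps) - jerk(target, fps)) ** 2))
```

A constant-velocity trajectory has zero jerk, so the term only penalizes abrupt accelerations; matching the reference jerk (rather than driving jerk to zero) preserves intentional sharp beats in the choreography.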
3. Motion Temporal Mamba Module (MTMM)

MTMM extends the diffusion framework with a state-space-model-based autoregressive mechanism for long-sequence generation, enabling smooth, long-duration synthesis with linear complexity and a hardware-friendly design via a bidirectional Mamba scan.

Mamba SSM · Bidirectional Scan · Long Sequences
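The linear-complexity claim comes from the scan structure: each step updates a fixed-size hidden state, so cost grows as O(T) rather than the O(T²) of attention. Below is a deliberately simplified sketch of a bidirectional scan with fixed scalar parameters; real Mamba uses input-dependent (selective) parameters and learned projections, so treat this only as an illustration of the recurrence pattern.

```python
import numpy as np

def linear_scan(x: np.ndarray, a: float = 0.9, b: float = 0.1) -> np.ndarray:
    """Sequential linear recurrence h_t = a * h_{t-1} + b * x_t.

    x: (T, D) sequence. Runs in O(T) with a fixed-size state,
    unlike the O(T^2) cost of full attention.
    """
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t in range(len(x)):
        h = a * h + b * x[t]
        out[t] = h
    return out

def bidirectional_scan(x: np.ndarray) -> np.ndarray:
    """Bidirectional scan: a forward pass plus a time-reversed backward
    pass, so each frame aggregates both past and future context."""
    fwd = linear_scan(x)
    bwd = linear_scan(x[::-1])[::-1]
    return fwd + bwd
```

In the autoregressive setting, the forward state can be carried across generated chunks, which is what keeps transitions smooth over long durations.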
Overview
Architecture
Full-Prompts Generation
Local-Clip Generation (Global + Local Text)
Style-Mix Generation
Rendered 3D Videos
@misc{lrcm2026,
  title         = {Listen to Rhythm, Choose Movements: Autoregressive Multimodal Dance Generation via Diffusion and Mamba with Decoupled Dance Dataset},
  author        = {Oran Duan and Yinghua Shen and Yingzhu Lv and Luyang Jie and Yaxin Liu and Qiong Wu},
  year          = {2026},
  eprint        = {2601.03323},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}