Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation

Xingyang Li*, Muyang Li*, Tianle Cai, Haocheng Xi, Shuo Yang, Yujun Lin, Lvmin Zhang, Songlin Yang, Jinbo Hu, Kelly Peng, Maneesh Agrawala, Ion Stoica, Kurt Keutzer, Song Han
MIT, NVIDIA, Princeton, UC Berkeley, Stanford, First Intelligence
(* indicates equal contribution)

Abstract

Recent advances in diffusion models have enabled high-quality video generation, but the additional temporal dimension significantly increases computational costs, making training and inference on long videos prohibitively expensive. In this paper, we identify a phenomenon we term Spatiotemporal Energy Decay in video diffusion models: post-softmax attention scores diminish as the spatial and temporal distance between tokens increases, akin to the physical decay of signals or waves over space and time in nature. Motivated by this, we propose Radial Attention, a scalable sparse attention mechanism with \( \mathcal{O}(n \log n) \) complexity that translates energy decay into exponentially decaying compute density, which is significantly more efficient than standard \( \mathcal{O}(n^2) \) dense attention and more expressive than linear attention. Specifically, Radial Attention employs a simple, static attention mask where each token attends to spatially nearby tokens, with the attention window size shrinking with temporal distance. Moreover, it allows pre-trained video diffusion models to extend their generation length with efficient LoRA-based fine-tuning. Extensive experiments show that Radial Attention maintains video quality across Wan2.1-14B, HunyuanVideo, and Mochi 1, achieving up to a 1.9× speedup over the original dense attention. With minimal tuning, it enables video generation up to 4× longer while reducing training costs by up to 4.4× compared to direct fine-tuning and accelerating inference by up to 3.7× compared to dense-attention inference.

Overview

[Figure: Radial Attention teaser]

We present Radial Attention, a sparse attention mechanism with \( \mathcal{O}(n \log n) \) computational complexity. Radial Attention accelerates pre-trained HunyuanVideo by 1.9× at its default video length while maintaining comparable video quality. When generating 4× longer videos, it reduces tuning costs by up to 4.4× and speeds up inference by up to 3.7× versus dense attention.

Pattern Design

[Figure: Radial Attention pattern design]

(a) The compute density pattern. The attention map is divided into \( 2\lceil\log_2(\max(f, 2))\rceil - 1 \) bands (here, the number of frames \( f = 12 \)) based on the temporal distance between tokens. The central band has full compute density, while each successive outer band has half the density of the previous one. Except for band \( \pm1 \), each band also doubles the diagonal width of its predecessor.

(b) The corresponding attention mask for (a). The compute density is reflected in the diagonal width of each frame-to-frame attention block. When the diagonal width would drop below one token, we instead reduce the frequency of diagonals. We additionally add an attention sink.

(c) An example mask used in HunyuanVideo, illustrating the final sparsity pattern in practice.
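
To make the pattern concrete, the following is a minimal PyTorch sketch of a token-level radial mask, written under stated assumptions rather than as the released kernel: \( f \) frames of \( s \) tokens each, a band index that increases by one each time the temporal distance doubles, a spatial window that halves per band, and an attention sink modeled as a few always-visible leading tokens. The diagonal-thinning step from (b) is omitted by simply clamping the window at one token.

```python
import math

import torch


def radial_mask(f: int, s: int, sink_tokens: int = 1) -> torch.Tensor:
    """Token-level boolean attention mask for f frames of s tokens each.

    Illustrative sketch of the radial pattern, not the official kernel.
    """
    n = f * s
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(f):                              # query frame
        for j in range(f):                          # key frame
            d = abs(i - j)                          # temporal distance
            # Band index: 0 on the diagonal, +1 each time d doubles.
            k = 0 if d == 0 else math.ceil(math.log2(d + 1))
            w = max(1, s >> k)                      # spatial window halves per band
            for q in range(s):                      # spatial position in query frame
                lo = max(0, q - w // 2)
                hi = min(s, q + w // 2 + 1)
                mask[i * s + q, j * s + lo : j * s + hi] = True
    mask[:, :sink_tokens] = True                    # attention sink
    return mask
```

The actual implementation evaluates this pattern block-sparsely inside the attention kernel rather than materializing a dense \( n \times n \) boolean mask, but the attended set follows the same radial rule.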

Performance

[Figure: performance results]

Radial Attention reduces the computational complexity of attention from \( \mathcal{O}(n^2) \) to \( \mathcal{O}(n \log n) \). When generating a 500-frame 720p video with HunyuanVideo, it reduces the attention computation by 9×, achieves 3.7× speedup, and saves 4.6× tuning costs.
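
As a back-of-the-envelope check of that scaling (using the same illustrative window rule as the mask sketch above, which is our assumption, not the production kernel), one can count attended query-key pairs and compare against dense attention:

```python
import math


def radial_entries(f: int, s: int) -> int:
    """Attended (query, key) pairs under the illustrative radial window rule."""
    total = 0
    for i in range(f):
        for j in range(f):
            d = abs(i - j)
            k = 0 if d == 0 else math.ceil(math.log2(d + 1))
            total += s * max(1, s >> k)             # s queries x window width
    return total


for f in (16, 64, 256, 1024):
    s = 64                                          # tokens per frame (illustrative)
    n = f * s
    print(f"n={n:7d}  dense/radial = {n * n / radial_entries(f, s):7.1f}x")
```

Each band's window halves while the band spans twice as many temporal distances, so every one of the \( \mathcal{O}(\log f) \) bands costs \( \mathcal{O}(n) \), and the ratio printed above keeps growing with sequence length, consistent with \( \mathcal{O}(n^2) \) versus \( \mathcal{O}(n \log n) \).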

Visual Results

Accelerating Pre-trained Models

[Figure: results on pre-trained models]

Radial Attention delivers nearly identical quality to Wan2.1-14B at its default video length, while offering a 1.8× speedup.

Long Video Generation

[Figure: long video generation results]

Radial Attention enables 4× longer video generation with LoRA tuning, outperforming dense attention in vision-reward scores while achieving a 3.7× speedup and a 4.4× reduction in tuning cost.
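
The extension recipe updates only low-rank adapters on the attention projections rather than the full model. Below is a minimal, hypothetical sketch of that setup with the peft library; the toy block, rank, and target-module names are illustrative stand-ins, since in practice one would wrap the pre-trained video DiT with the radial mask enabled.

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class AttnBlock(nn.Module):
    """Toy stand-in for one attention block of a video DiT."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)  # dense here; Radial
        return self.to_out(attn @ v)                      # Attention would mask it


config = LoraConfig(                    # hypothetical rank and targets
    r=64, lora_alpha=64,
    target_modules=["to_q", "to_k", "to_v", "to_out"],
)
model = get_peft_model(AttnBlock(), config)
model.print_trainable_parameters()      # only the LoRA adapters are trainable
```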

LoRA Compatibility

[Figure: LoRA compatibility results]

Radial Attention is fully compatible with existing style LoRAs. On HunyuanVideo, the Radial Attention LoRA enables 4× video length extension while preserving visual quality.
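
Stacking a style LoRA on top of a length-extension LoRA could look like the following diffusers sketch; the checkpoint paths, adapter names, and weights are placeholders rather than released artifacts.

```python
import torch
from diffusers import HunyuanVideoPipeline

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
# Placeholder paths: an off-the-shelf style LoRA plus the length-extension LoRA.
pipe.load_lora_weights("path/to/style_lora", adapter_name="style")
pipe.load_lora_weights("path/to/length_lora", adapter_name="length")
pipe.set_adapters(["style", "length"], adapter_weights=[1.0, 1.0])
```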

Citation

@article{li2025radial,
 title={Radial Attention: $\mathcal{O}(n\log n)$ Sparse Attention with Energy Decay for Long Video Generation},
 author={Li*, Xingyang and Li*, Muyang and Cai, Tianle and Xi, Haocheng and Yang, Shuo and Lin, Yujun and Zhang, Lvmin and Yang, Songlin and Hu, Jinbo and Peng, Kelly and Agrawala, Maneesh and Stoica, Ion and Keutzer, Kurt and Han, Song},
 journal={arXiv preprint arXiv:2506.19852},
 year={2025}
}

Acknowledgment

We thank MIT-IBM Watson AI Lab, National Science Foundation, Hyundai, and Amazon for supporting this research.

Team Members