FastVAR Logo

Linear Visual Autoregressive Modeling via Cached Token Pruning

1Tsinghua University, 2ETH Zurich, 3Shenzhen University, 4Peng Cheng Laboratory

ICCV 2025

Teaser Image

FastVAR enables 2K resolution image generation on a single 3090 GPU.

Abstract

Visual Autoregressive (VAR) modeling has gained popularity for its shift toward next-scale prediction. However, existing VAR paradigms process the entire token map at each scale step, causing complexity and runtime to scale dramatically with image resolution.

To address this challenge, we propose FastVAR, a post-training acceleration method for efficient resolution scaling with VARs. Our key finding is that most of the latency arises from the large-scale steps, where most tokens have already converged. Leveraging this observation, we develop a cached token pruning strategy that forwards only pivotal tokens for scale-specific modeling and uses cached tokens from previous scale steps to restore the pruned slots. This significantly reduces the number of forwarded tokens and improves efficiency at larger resolutions.

Experiments show that FastVAR further speeds up FlashAttention-accelerated VAR by 2.7x with a negligible performance drop of less than 1%. We further extend FastVAR to zero-shot generation of higher-resolution images. In particular, FastVAR can generate a 2K image with a 15 GB memory footprint in 1.5 s on a single NVIDIA 3090 GPU.

FastVAR Pipeline

FastVAR introduces cached token pruning, which is training-free and generic across various VAR backbones.

Highlights

1️⃣ Faster VAR Generation without Perceptual Quality Loss

Faster VAR Generation

2️⃣ High-resolution Image Generation (even 2K images on a single 3090 GPU)

High-resolution

3️⃣ Promising Resolution Scalability (near-linear complexity)

Efficiency

Observation

FastVAR builds on three interesting findings observed in pre-trained VAR models:

  1. Large-scale steps are the speed bottleneck, yet they appear robust to token pruning.
  2. High-frequency modeling matters at large-scale steps.
  3. Tokens from different scales are closely related (see the sketch after this list).

More illustrations can be seen in our paper.
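To make the third finding concrete, below is a minimal sketch of how a previous scale's token map could be upsampled to the current scale, so it can later stand in for pruned tokens. The helper name `build_cache` and the bicubic interpolation mode are our illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def build_cache(prev_tokens, prev_hw, cur_hw):
    """Upsample the previous scale's token map to the current scale
    (a sketch of finding 3; names and interpolation mode are assumptions).

    prev_tokens -- (B, h*w, C) token map from the previous scale step
    prev_hw     -- (h, w) grid size of the previous scale
    cur_hw      -- (H, W) grid size of the current scale
    """
    B, _, C = prev_tokens.shape
    h, w = prev_hw
    H, W = cur_hw
    # Reshape the flat token sequence back into a 2D feature grid.
    grid = prev_tokens.transpose(1, 2).reshape(B, C, h, w)
    # Upsample to the current scale's resolution.
    grid = F.interpolate(grid, size=(H, W), mode="bicubic")
    # Flatten back to a (B, H*W, C) token sequence.
    return grid.reshape(B, C, H * W).transpose(1, 2)
```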

Faster VAR Generation

Algorithm

Faster VAR Generation
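In spirit, one scale step with cached token pruning could look like the sketch below. The pivotal-token scoring rule (deviation from the upsampled cache), the `keep_ratio` parameter, and the function names are illustrative assumptions rather than the repository's actual API; see the paper for the exact criterion.

```python
import torch

def cached_token_pruning(block, x, cache, keep_ratio=0.4):
    """One transformer block forward with cached token pruning (sketch).

    block      -- a transformer block mapping (B, N, C) -> (B, N, C)
    x          -- token map at the current scale step, shape (B, N, C)
    cache      -- tokens cached from the previous scale step, already
                  upsampled to N tokens, shape (B, N, C)
    keep_ratio -- fraction of "pivotal" tokens still forwarded
    """
    B, N, C = x.shape
    k = max(1, int(N * keep_ratio))

    # Score tokens by how far they deviate from the cached map; tokens
    # that changed little since the previous scale are treated as converged.
    score = (x - cache).norm(dim=-1)                    # (B, N)
    idx = score.topk(k, dim=1).indices                  # (B, k) pivotal tokens
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, C)    # (B, k, C)

    # Forward only the pivotal tokens through the block.
    pivotal = torch.gather(x, 1, gather_idx)
    pivotal = block(pivotal)

    # Restore pruned slots from the cache, then scatter the refreshed
    # pivotal tokens back into their original positions.
    out = cache.clone()
    out.scatter_(1, gather_idx, pivotal)
    return out
```

Here `cache` would come from something like the `build_cache` sketch above; because pruned slots are filled from the cache rather than recomputed, the per-step cost grows with the number of pivotal tokens instead of the full token count.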

Performance

Our FastVAR achieves a 2.7x speedup with less than 1% performance drop, even on top of FlashAttention-accelerated setups. Detailed results can be found in the paper.

Faster VAR Generation
Faster VAR Generation

Analysis

Our FastVAR is robust even under extreme pruning ratios.

FastVAR Ratio
Images generated under different pruning ratios.
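A ratio sweep like the one in the figure could be scripted on top of the sketches above; the shapes and the `torch.nn.Identity` stand-in for a real transformer block are purely for this toy demo.

```python
import torch

# Toy shapes for a self-contained demo of the sketch above.
B, N, C = 1, 16 * 16, 32
block = torch.nn.Identity()        # stand-in for a transformer block
x = torch.randn(B, N, C)           # current-scale token map
cache = torch.randn(B, N, C)       # upsampled previous-scale tokens

# Hypothetical sweep over pruning ratios.
for keep_ratio in (0.9, 0.75, 0.5, 0.25):
    out = cached_token_pruning(block, x, cache, keep_ratio=keep_ratio)
    print(keep_ratio, out.shape)
```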

Without any training, our FastVAR can find meaningful pruning strategies.

FastVAR Ratio
Pruned tokens under different pruning ratios.

More Results

Faster VAR Generation
Faster VAR Generation

BibTeX

@article{guo2025fastvar,
    title={FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning},
    author={Guo, Hang and Li, Yawei and Zhang, Taolin and Wang, Jiangshan and Dai, Tao and Xia, Shu-Tao and Benini, Luca},
    journal={arXiv preprint arXiv:2503.23367},
    year={2025}
}