ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel
Trajectory Planning in Autonomous Driving

¹Tsinghua University  ²CUHK MMLab  ³Voyager Research, Didi Chuxing

Abstract

Autonomous driving requires generating safe and reliable trajectories from complex multimodal inputs. Traditional modular pipelines separate perception, prediction, and planning, while recent end-to-end (E2E) systems learn them jointly. Vision–language models (VLMs) further enrich this paradigm by introducing cross-modal priors and commonsense reasoning, yet current VLM-based planners face three key challenges: (i) a mismatch between discrete text reasoning and continuous control, (ii) high latency from autoregressive chain-of-thought decoding, and (iii) inefficient or non-causal planners that limit real-time deployment. We propose ColaVLA, a unified vision–language–action framework that transfers reasoning from text to a unified latent space and couples it with a hierarchical, parallel trajectory decoder. The Cognitive Latent Reasoner compresses scene understanding into compact, decision-oriented meta-action embeddings through ego-adaptive selection and only two VLM forward passes. The Hierarchical Parallel Planner then generates multi-scale, causality-consistent trajectories in a single forward pass. Together, these components preserve the generalization and interpretability of VLMs while enabling efficient, accurate and safe trajectory generation. Experiments on the nuScenes benchmark show that ColaVLA achieves state-of-the-art performance in both open-loop and closed-loop settings with favorable efficiency and robustness.

🔥 Highlights

  • Cognitive Latent Reasoning (text CoT → latent reasoning). We relocate chain-of-thought from discrete text to a compact latent space: an ego-adaptive router keeps only safety-critical cues, and a latent “rethink & decide” stage produces a driving strategy without autoregressive text decoding.
  • Hierarchical Parallel Planner (one-pass, causal, multi-scale). A multi-stage intent-to-motion decoder generates coarse-to-fine trajectories in parallel with a causality-preserving hybrid attention mask, enabling efficient and consistent multi-scale planning in a single forward pass.
  • Proxy Attention. We introduce a generalized proxy attention mechanism that selects different proxy tokens according to task requirements while attaining linear computational complexity (see the sketch after this list).
  • State-of-the-art performance on nuScenes. Open-loop: 0.30 m average L2 and 0.23% collision rate. Closed-loop (NeuroNCAP): 3.48 score with a 36.8% collision rate under the realistic top-1 strategy setting.
  • Low latency for real-time deployment. End-to-end inference is 5–10× faster than text-based VLM planners, since ColaVLA avoids autoregressive text CoT and multi-pass decoding.
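The proxy-attention highlight can be made concrete with a small sketch. The paper selects different proxy tokens depending on the task; for simplicity the snippet below uses a bank of learned proxies and standard PyTorch attention, so the module and argument names (`ProxyAttention`, `num_proxies`) are illustrative rather than the released implementation. The point it shows is the two-stage pattern: the full sequence writes into M proxies and then reads back from them, so the cost scales as O(N·M) instead of O(N²).

```python
# Minimal sketch of a proxy-attention block (illustrative, not the paper's exact design).
# The full token sequence first writes into a small set of proxy tokens, then reads the
# summarized context back, so the cost is O(N * M) with M proxies instead of O(N^2).
import torch
import torch.nn as nn


class ProxyAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, num_proxies: int = 16):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_proxies, dim) * 0.02)
        # Proxies gather information from the full token sequence ...
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ... and the tokens then read the summary back from the proxies.
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, N, dim)
        bsz = tokens.size(0)
        proxies = self.proxies.unsqueeze(0).expand(bsz, -1, -1)   # (batch, M, dim)
        summary, _ = self.gather(proxies, tokens, tokens)         # proxies attend to all tokens
        out, _ = self.broadcast(tokens, summary, summary)         # tokens attend to proxies only
        return tokens + out


x = torch.randn(2, 1024, 256)   # e.g. 1024 visual/ego tokens of width 256
y = ProxyAttention(256)(x)      # same shape, cost linear in sequence length
print(y.shape)                  # torch.Size([2, 1024, 256])
```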
Illustration of reasoning paradigm

From text chain-of-thought to cognitive latent reasoning for efficient planning.

Method

ColaVLA is a unified vision–language–action framework consisting of two core components: (1) Cognitive Latent Reasoner and (2) Hierarchical Parallel Planner. Multi-view image sequences are encoded into visual tokens (objects & maps), which are fused with an ego token and a fixed driving prompt. An ego-adaptive router selects safety-critical visual cues to form a compact pruned context. The model then performs a latent rethinking step with a bank of learnable meta-queries and finally outputs a discrete driving strategy.
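To make this data flow concrete, here is a minimal PyTorch sketch of the reasoner described above, assuming a simple top-k router conditioned on the ego token and a single cross-attention "rethink" step over learnable meta-queries. Class and argument names (`CognitiveLatentReasonerSketch`, `keep_ratio`, `num_strategies`) are hypothetical, and the actual reasoner operates inside the VLM; the sketch only illustrates the routing → latent rethinking → discrete strategy pipeline.

```python
# Toy sketch of the reasoner's data flow under the assumptions stated above
# (top-k routing on ego-conditioned scores, one cross-attention "rethink" step).
import torch
import torch.nn as nn


class CognitiveLatentReasonerSketch(nn.Module):
    def __init__(self, dim: int = 256, num_meta_queries: int = 8,
                 num_strategies: int = 6, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.router = nn.Linear(2 * dim, 1)                   # scores each visual token given the ego state
        self.meta_queries = nn.Parameter(torch.randn(num_meta_queries, dim) * 0.02)
        self.rethink = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.strategy_head = nn.Linear(dim, num_strategies)   # discrete driving-strategy logits

    def forward(self, visual_tokens: torch.Tensor, ego_token: torch.Tensor):
        # visual_tokens: (B, N, D), ego_token: (B, D)
        B, N, D = visual_tokens.shape
        ego = ego_token.unsqueeze(1).expand(-1, N, -1)
        scores = self.router(torch.cat([visual_tokens, ego], dim=-1)).squeeze(-1)   # (B, N)
        k = max(1, int(self.keep_ratio * N))
        idx = scores.topk(k, dim=-1).indices                  # keep the top-k safety-critical cues
        pruned = torch.gather(visual_tokens, 1,
                              idx.unsqueeze(-1).expand(-1, -1, D))                  # (B, k, D)
        context = torch.cat([pruned, ego_token.unsqueeze(1)], dim=1)
        queries = self.meta_queries.unsqueeze(0).expand(B, -1, -1)
        latent, _ = self.rethink(queries, context, context)   # latent "rethink" over the pruned scene
        logits = self.strategy_head(latent.mean(dim=1))       # (B, num_strategies)
        return logits, pruned


reasoner = CognitiveLatentReasonerSketch()
logits, pruned = reasoner(torch.randn(2, 512, 256), torch.randn(2, 256))
print(logits.shape, pruned.shape)   # torch.Size([2, 6]) torch.Size([2, 128, 256])
```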

Conditioned on the selected strategy, we retrieve the corresponding meta-action from an action bank and expand it into multi-scale trajectory targets via temporal embeddings and resampling. The Hierarchical Parallel Planner concatenates the pruned context and all scale-wise targets, and performs one-pass parallel decoding with a causality-preserving hybrid mask: global context aggregation to all scales, bidirectional interaction within each scale, and strictly causal information flow from coarser to finer scales. This yields efficient, causal and interpretable multi-scale trajectories.
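The hybrid mask is what makes one-pass decoding causality-consistent, so a small sketch of how such a mask could be assembled may help. The token layout (pruned context first, then scales ordered coarse-to-fine), the function name `hybrid_mask`, and the reading of "global context aggregation" as every scale attending to the shared context are assumptions for illustration; the three rules otherwise follow the description above.

```python
# Illustrative construction of a hybrid attention mask with the three rules described
# above: every token may read the pruned context, tokens within a scale attend
# bidirectionally, and finer scales may read coarser scales but never the reverse.
import torch


def hybrid_mask(num_context: int, scale_lens: list[int]) -> torch.Tensor:
    """Boolean mask where entry [q, k] == True means query q may attend to key k."""
    total = num_context + sum(scale_lens)
    allowed = torch.zeros(total, total, dtype=torch.bool)

    # Rule 1: the pruned context is globally visible, so every token
    # (context or trajectory target) may attend to the context block.
    allowed[:, :num_context] = True

    # Rule 2: bidirectional attention within each scale.
    starts, start = [], num_context
    for length in scale_lens:
        starts.append(start)
        allowed[start:start + length, start:start + length] = True
        start += length

    # Rule 3: strictly causal flow across scales -- finer scales may read
    # coarser ones, but coarser scales never see finer ones.
    for i, (qs, ql) in enumerate(zip(starts, scale_lens)):
        for ks, kl in zip(starts[:i], scale_lens[:i]):
            allowed[qs:qs + ql, ks:ks + kl] = True

    return allowed


# Example: 4 context tokens and three scales with 2, 4, and 8 waypoint targets.
mask = hybrid_mask(num_context=4, scale_lens=[2, 4, 8])
# torch.nn.MultiheadAttention expects True where attention is *blocked*,
# so the negation would be passed as attn_mask:
attn_mask = ~mask
print(mask.int())
```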

ColaVLA Framework

Overall framework of ColaVLA.

Visualization

We visualize representative driving scenes and the predicted trajectories to highlight how ColaVLA couples compact latent reasoning with hierarchical parallel decoding. The qualitative results demonstrate robust behavior under complex multi-agent interactions, long-horizon intent understanding, and safety-critical scenarios.

Visualization Image

Qualitative visualization of planning results.

Experiments

Open-loop (nuScenes). ColaVLA achieves strong accuracy and safety with 0.30m average L2 error and 0.23% collision rate, demonstrating precise trajectory prediction while operating efficiently in the latent action space.

Closed-loop (NeuroNCAP). ColaVLA delivers robust closed-loop driving with a 3.48 NeuroNCAP score and a 36.8% collision rate under the realistic top-1 strategy setting, showing improved safety and generalization in safety-critical scenarios.

For detailed metrics, ablations, and additional analyses, please refer to the paper.

Open-loop Results

Open-loop results on nuScenes.

Closed-loop Results

Closed-loop results on NeuroNCAP.

BibTeX

@misc{peng2025colavlaleveragingcognitivelatent,
  title={ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel Trajectory Planning in Autonomous Driving},
  author={Qihang Peng and Xuesong Chen and Chenye Yang and Shaoshuai Shi and Hongsheng Li},
  year={2025},
  eprint={2512.22939},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.22939},
}