Global Prior Meets Local Consistency: Dual-Memory Augmented Vision-Language-Action Model for Efficient Robotic Manipulation

Zaijing Li1 2, Bing Hu1 , Rui Shao1 3✉, Gongwei Chen1,
Dongmei Jiang2✉, Pengwei Xie4, Jianye Hao4, Liqiang Nie1
1Harbin Institute of Technology, Shenzhen    2Peng Cheng Laboratory, Shenzhen
3Shenzhen Loop Area Institute    4Huawei Noah's Ark Lab
✉ Corresponding author  

Abstract

Hierarchical Vision–Language–Action (VLA) models have rapidly become a dominant paradigm for robotic manipulation. They typically comprise a Vision–Language backbone for perception and understanding, together with a generative policy for action generation. However, their performance is increasingly bottlenecked by the action generation process. (i) Low inference efficiency: a pronounced distributional gap between isotropic noise priors and target action distributions increases the number of denoising steps and the incidence of infeasible samples. (ii) Poor robustness: existing policies condition solely on the current observation, neglecting the constraints of the history sequence and thus lacking awareness of task progress and temporal consistency. To address these issues, we introduce OptimusVLA, a dual-memory VLA framework with a Global Prior Memory (GPM) and a Local Consistency Memory (LCM). GPM replaces Gaussian noise with task-level priors retrieved from semantically similar trajectories, thereby shortening the generative path and reducing the number of function evaluations (NFE). LCM dynamically models the executed action sequence to infer task progress and injects a learned consistency constraint that enforces temporal coherence and trajectory smoothness. Across three simulation benchmarks, OptimusVLA consistently outperforms strong baselines: it achieves a 98.6% average success rate on LIBERO, improves over π0 by 13.5% on CALVIN, and attains a 38% average success rate on RoboTwin 2.0 Hard. In real-world evaluation, OptimusVLA ranks best on the Generalization and Long-horizon suites, surpassing π0 by 42.9% and 52.4%, respectively, while delivering a 2.9× inference speedup.
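The GPM idea described above (replacing an isotropic Gaussian prior with a prior retrieved from a semantically similar past trajectory) can be sketched as a simple nearest-neighbor lookup over task embeddings. This is a minimal illustrative sketch, not the paper's implementation; the class, embedding dimensions, and the Gaussian fallback for an empty memory are assumptions.

```python
import numpy as np

class GlobalPriorMemory:
    """Hypothetical sketch of a GPM: stores (task-embedding, action-prior)
    pairs and retrieves the prior of the most similar past trajectory."""

    def __init__(self):
        self.keys = []    # L2-normalized task embeddings
        self.priors = []  # corresponding action-chunk priors

    def add(self, embedding, action_prior):
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.priors.append(action_prior)

    def retrieve(self, embedding, chunk_shape=(8, 7)):
        """Return the stored prior whose key is most cosine-similar to the
        query; fall back to isotropic Gaussian noise if the memory is empty."""
        if not self.keys:
            return np.random.randn(*chunk_shape)
        q = embedding / np.linalg.norm(embedding)
        sims = np.stack(self.keys) @ q  # cosine similarities
        return self.priors[int(np.argmax(sims))]
```

Because the retrieved prior already lies near the target action distribution, the flow policy that consumes it needs fewer denoising steps than it would starting from pure noise.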

OptimusVLA


Overview of the OptimusVLA framework. Given a task and the current observation, the Vision–Language backbone first encodes the inputs into a multimodal representation. GPM then retrieves a task-level prior based on this representation, while LCM dynamically encodes the historical action sequence to produce a consistency constraint. Finally, the flow policy denoises the initialization with an adaptive NFE schedule to generate the action chunk.
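The final denoising stage can be sketched as Euler integration of a flow policy that starts from the GPM-retrieved prior rather than pure noise, with an optional LCM term nudging each step toward temporal coherence. This is a hedged sketch under assumed interfaces: `velocity_fn` stands in for the learned flow policy and `consistency_grad` for the learned LCM constraint; neither name comes from the paper.

```python
import numpy as np

def denoise_with_prior(velocity_fn, prior, consistency_grad=None, nfe=4):
    """Hypothetical Euler integration of a flow policy.

    Starting from the retrieved prior shortens the generative path, so a
    small number of function evaluations (NFE) suffices; the optional
    consistency gradient (assumed LCM output) biases each step toward
    coherence with recently executed actions.
    """
    x = prior.copy()
    for k in range(nfe):
        t = k / nfe
        v = velocity_fn(x, t)            # predicted flow velocity
        if consistency_grad is not None:
            v = v - consistency_grad(x)  # temporal-consistency correction
        x = x + v / nfe                  # Euler step of size 1/nfe
    return x
```

An adaptive NFE schedule, as in the figure, would choose `nfe` per step, e.g. fewer evaluations when the retrieved prior is already close to the target distribution.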

Experiment

Real-World Evaluation

Table 1: Main results of OptimusVLA on the Generalization Tasks and Long-horizon Tasks suites.


Conclusion

In this paper, we propose OptimusVLA, a dual-memory VLA framework for robotic manipulation that comprises a Global Prior Memory (GPM) and a Local Consistency Memory (LCM). GPM replaces Gaussian noise with task-level priors retrieved from semantically similar trajectories, thereby shortening the generative path and reducing invalid samples without sacrificing generalization. LCM models short histories of executed actions to infer task progress and injects a learned consistency constraint that enforces temporal coherence. Together, GPM and LCM improve the efficiency and robustness of OptimusVLA. Extensive experiments on both simulation platforms and in the real world demonstrate the superior performance of OptimusVLA, together with substantially higher inference efficiency.