Building an agent that can mimic human behavior patterns to accomplish various open-world tasks is a long-standing goal. To enable agents to effectively learn behavioral patterns across diverse tasks, a key challenge lies in modeling the intricate relationships among observations, actions, and language. To this end, we propose Optimus-2, a novel Minecraft agent that incorporates a Multimodal Large Language Model (MLLM) for high-level planning, alongside a Goal-Observation-Action Conditioned Policy (GOAP) for low-level control. GOAP contains (1) an Action-guided Behavior Encoder that models causal relationships between observations and actions at each timestep, then dynamically interacts with the historical observation-action sequence, consolidating it into fixed-length behavior tokens, and (2) an MLLM that aligns behavior tokens with open-ended language instructions to predict actions auto-regressively. Moreover, we introduce a high-quality Minecraft Goal-Observation-Action (MGOA) dataset, which contains 25,000 videos across 8 atomic tasks, providing about 30M goal-observation-action pairs. The automated construction method, along with the MGOA dataset itself, can support the community's efforts to train Minecraft agents. Extensive experimental results demonstrate that Optimus-2 achieves superior performance across atomic tasks, long-horizon tasks, and open-ended instruction tasks in Minecraft.
In this paper, we propose a novel agent, Optimus-2, which excels at various tasks in the open-world environment of Minecraft. Optimus-2 integrates an MLLM for high-level planning and a Goal-Observation-Action conditioned Policy (GOAP) for low-level control. As a core contribution of this paper, GOAP includes an Action-guided Behavior Encoder to model the observation-action sequence and an MLLM to align the goal with the observation-action sequence for predicting subsequent actions. Extensive experimental results demonstrate that GOAP has mastered various atomic tasks and can comprehend open-ended language instructions. This enables Optimus-2 to achieve superior performance on long-horizon tasks, surpassing existing state-of-the-art methods. Moreover, we introduce a Minecraft Goal-Observation-Action (MGOA) dataset to provide the community with large-scale, high-quality data for training Minecraft agents.
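The GOAP data flow described above can be sketched in toy form: an action-guided encoder fuses each observation-action pair, a consolidation step pools the variable-length history into a fixed number of behavior tokens, and a policy head predicts the next action conditioned on the goal and those tokens. Everything below (names, shapes, the gating, pooling, and scoring rules) is an illustrative assumption for exposition, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

DIM = 4          # toy embedding size (assumption)
NUM_TOKENS = 2   # fixed number of behavior tokens (assumption)

@dataclass
class Step:
    observation: List[float]  # toy observation embedding
    action: List[float]       # toy action embedding

def encode_step(step: Step) -> List[float]:
    # Toy stand-in for action-guided fusion: the action gates the observation.
    return [o * a for o, a in zip(step.observation, step.action)]

def mean(vectors: List[List[float]]) -> List[float]:
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]

def behavior_tokens(history: List[Step]) -> List[List[float]]:
    # Consolidate a variable-length history into NUM_TOKENS fixed-length
    # tokens by mean-pooling contiguous chunks -- a crude proxy for the
    # dynamic interaction with the observation-action sequence.
    fused = [encode_step(s) for s in history]
    chunk = max(1, (len(fused) + NUM_TOKENS - 1) // NUM_TOKENS)
    return [mean(fused[i:i + chunk]) for i in range(0, len(fused), chunk)][:NUM_TOKENS]

def predict_action(goal: List[float], tokens: List[List[float]]) -> str:
    # Stub policy head: score hypothetical candidate actions by similarity
    # to a goal-conditioned summary of the behavior tokens.
    candidates: Dict[str, List[float]] = {
        "attack": [1, 0, 0, 0],
        "move":   [0, 1, 0, 0],
        "jump":   [0, 0, 1, 0],
    }
    summary = mean([goal] + tokens)
    return max(candidates,
               key=lambda a: sum(x * y for x, y in zip(summary, candidates[a])))

# Usage: a 5-step history is compressed to exactly NUM_TOKENS tokens,
# then an action is predicted for a goal embedding.
history = [Step([1.0] * DIM, [0.5] * DIM) for _ in range(5)]
tokens = behavior_tokens(history)
print(len(tokens), predict_action([1.0, 0.0, 0.0, 0.0], tokens))  # → 2 attack
```

The fixed-length consolidation is the key property illustrated here: no matter how long the episode grows, the policy always conditions on a bounded set of behavior tokens plus the goal.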
@inproceedings{li2025optimus2,
title={Optimus-2: Multimodal Minecraft Agent with Goal-Observation-Action Conditioned Policy},
author={Li, Zaijing and Xie, Yuquan and Shao, Rui and Chen, Gongwei and Jiang, Dongmei and Nie, Liqiang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025},
organization={IEEE}
}