Multimodal large language models (MLLMs) have made significant advances in visual understanding and reasoning. However, the autoregressive Transformer architecture used by MLLMs requires tokenizing input images, which limits their ability to accurately ground objects in the 2D image plane. This raises an important question: how can sequential tokens better ground objects in 2D space for MLLMs? To address this, we present GETok, a spatial representation method for grounding objects that integrates a specialized vocabulary of learnable tokens into MLLMs. GETok first uses grid tokens to partition the image plane into structured spatial anchors, and then exploits offset tokens to enable precise, iterative refinement of localization predictions. By embedding spatial relationships directly into tokens, GETok significantly advances native 2D spatial reasoning in MLLMs without modifying the autoregressive Transformer architecture. Extensive experiments demonstrate that GETok outperforms state-of-the-art methods across various referring tasks in both supervised fine-tuning and reinforcement learning settings.
GETok addresses a core bottleneck of MLLMs: image tokenization often discards 2D spatial topology, making precise localization hard. We introduce a learnable spatial vocabulary, with grid tokens that form a 2D anchor lattice and offset tokens that refine locations, so sequential tokens map smoothly to 2D image space.
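To make the token-to-coordinate mapping concrete, here is a minimal Python sketch of how a grid vocabulary plus quantized offset tokens could decode to pixel coordinates. The grid size `G`, the offset step, and the function names are illustrative assumptions for this sketch, not the paper's actual vocabulary design.

```python
# Illustrative decoding of grid + offset tokens to 2D pixel coordinates.
# G and OFFSET_STEP are hypothetical values chosen for the example.

G = 32            # assumed number of grid cells per side
OFFSET_STEP = 4   # assumed offset quantization step, in pixels

def grid_token_to_xy(token_id: int, img_w: int, img_h: int) -> tuple[float, float]:
    """Decode a grid token (row-major index into a GxG lattice) to the
    center of its cell in pixel coordinates."""
    row, col = divmod(token_id, G)
    x = (col + 0.5) * img_w / G
    y = (row + 0.5) * img_h / G
    return x, y

def apply_offset(x: float, y: float, dx_idx: int, dy_idx: int) -> tuple[float, float]:
    """Refine a grid anchor with signed, quantized offset token indices."""
    return x + dx_idx * OFFSET_STEP, y + dy_idx * OFFSET_STEP

# Example: decode token 40 on a 512x512 image, then nudge one step right and up.
x, y = grid_token_to_xy(40, 512, 512)
x, y = apply_offset(x, y, dx_idx=1, dy_idx=-1)
```

Because every token has a fixed geometric meaning, a sequence of grid and offset tokens maps deterministically back to image coordinates, which is what lets the autoregressive decoder reason about 2D locations without architectural changes.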
We propose a training-free greedy procedure to convert dense masks into discrete tokens. We prompt SAM at each grid point to obtain mask proposals and iteratively add the proposal with the largest IoU gain until the union's IoU with the ground-truth mask reaches a threshold τ, yielding a minimal token set that approximates the ground truth. The result is compact and unambiguous, avoids redundancy on multiply-connected regions, and requires no architectural changes.
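A sketch of this greedy selection is below, assuming the SAM proposals have already been computed as boolean NumPy masks keyed by grid token id (the SAM prompting itself is abstracted away); the function name and the default τ are placeholders.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def greedy_tokenize_mask(gt_mask: np.ndarray, proposals: dict, tau: float = 0.9):
    """Greedily add the grid-point proposal with the largest IoU gain until
    the union's IoU with gt_mask reaches tau (or no proposal still helps)."""
    selected = []
    current = np.zeros_like(gt_mask, dtype=bool)
    remaining = dict(proposals)          # grid token id -> boolean SAM mask
    base = 0.0
    while remaining and base < tau:
        best_id, best_gain = None, 0.0
        for tid, m in remaining.items():
            gain = iou(current | m, gt_mask) - base
            if gain > best_gain:
                best_id, best_gain = tid, gain
        if best_id is None:              # no remaining proposal improves IoU
            break
        current |= remaining.pop(best_id)
        base = iou(current, gt_mask)
        selected.append(best_id)
    return selected, current
```

Stopping when no proposal yields a positive gain is what keeps the token set minimal: a disconnected or multiply-connected region simply ends up covered by several non-redundant proposals.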
An offset-aware dataset is constructed by classifying grid points with morphological operations scaled to the offset step: Inside (zero offset), Ring (non-zero offsets near boundaries), Far, and Hard-Delete (mapped to <DELETE>). An ordered rule assigns each point to exactly one region, and sampling prioritizes Inside and Ring points, producing a per-image set of K grid–offset pairs that concentrates learning on boundary-proximal corrections. This simulated supervision is more informative than auto-generated labels and yields stronger refinement; a sketch of the region logic follows.
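The sketch below uses Euclidean distance transforms as a stand-in for the paper's morphology; the region thresholds, the `far_px` cutoff, and the one-step offset quantization are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_offset_labels(mask: np.ndarray, grid_pts, offset_step: int, far_px: int):
    """Assign each (row, col) grid point to exactly one region via an ordered
    if/elif rule: Inside -> zero offset, Ring -> offset toward the mask,
    Far -> no offset target, Hard-Delete -> <DELETE>. Thresholds are assumed."""
    d_in = distance_transform_edt(mask)                        # depth inside the mask
    d_out, near = distance_transform_edt(~mask, return_indices=True)
    labels = []
    for (r, c) in grid_pts:
        if mask[r, c] and d_in[r, c] > offset_step:
            labels.append(((r, c), "inside", (0, 0)))          # keep, zero offset
        elif d_out[r, c] <= offset_step or mask[r, c]:
            # Ring: supervise a quantized offset toward the nearest mask pixel,
            # clipped to one offset step per axis for simplicity.
            dr = int(np.clip(near[0][r, c] - r, -1, 1)) * offset_step
            dc = int(np.clip(near[1][r, c] - c, -1, 1)) * offset_step
            labels.append(((r, c), "ring", (dr, dc)))
        elif d_out[r, c] <= far_px:
            labels.append(((r, c), "far", None))               # deprioritized in sampling
        else:
            labels.append(((r, c), "hard_delete", "<DELETE>"))
    return labels
```

Because the if/elif chain is evaluated in a fixed order, each grid point receives exactly one label, and sampling weighted toward Inside and Ring points concentrates the K pairs where boundary corrections matter most.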
The 2D lattice organization of GETok makes it an ideal framework for reinforcement learning, inducing a geometrically grounded action space rich in spatial semantics. We introduce a self-improving reinforcement learning framework that exploits the grid-offset hierarchy of GETok to enable iterative self-correction. This mechanism mitigates the brittleness of one-shot predictions in prior work and enables fine-grained, boundary-accurate mask refinement.
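As a conceptual sketch only (not the paper's training recipe), the self-correction loop could be rolled out as below. Here `model.generate`, `decode_tokens`, `iou_fn`, and the improvement-based reward are hypothetical stand-ins, and the policy-gradient update itself is abstracted away.

```python
def self_correct_rollout(model, decode_tokens, iou_fn, image, query,
                         gt_mask, max_rounds=3, stop_iou=0.95):
    """Roll out up to max_rounds of grid prediction plus offset refinement,
    rewarding the per-round IoU *improvement* so the policy learns to
    correct its own previous output rather than predict once and stop."""
    trajectory, prev_iou = [], 0.0
    context = [query]
    for _ in range(max_rounds):
        tokens = model.generate(image, context)   # grid + offset (+ <DELETE>) tokens
        mask = decode_tokens(tokens)              # map tokens back to a binary mask
        cur_iou = iou_fn(mask, gt_mask)
        trajectory.append((tokens, cur_iou - prev_iou))  # improvement as reward
        if cur_iou >= stop_iou:                   # accurate enough; stop early
            break
        prev_iou = cur_iou
        context.append(tokens)                    # condition the next round on
                                                  # the model's own prediction
    return trajectory
```

Rewarding the improvement rather than the absolute IoU is one natural choice here: it gives later rounds credit specifically for the boundary corrections they contribute.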
@article{ren2025grounding,
title={Grounding Everything in Tokens for Multimodal Large Language Models},
author={Xiangxuan Ren and Zhongdao Wang and Liping Hou and Pin Tang and Guoqing Wang and Chao Ma},
journal={arXiv preprint arXiv:2512.10554},
year={2025}
}