Abstract

Multimodal large language models (MLLMs) have made significant advances in visual understanding and reasoning. However, the autoregressive Transformer architecture used by MLLMs requires tokenizing input images, which limits their ability to accurately ground objects within the 2D image space. This raises an important question: how can sequential tokens better ground objects in the 2D image space for MLLMs? To address this, we present GETok, a spatial representation method for grounding objects that integrates a specialized vocabulary of learnable tokens into MLLMs. GETok first uses grid tokens to partition the image plane into structured spatial anchors, and then exploits offset tokens to enable precise, iterative refinement of localization predictions. By embedding spatial relationships directly into tokens, GETok significantly advances MLLMs' native 2D spatial reasoning without modifying the autoregressive Transformer architecture. Extensive experiments demonstrate that GETok outperforms state-of-the-art methods across various referring tasks in both supervised fine-tuning and reinforcement learning settings.

Why GETok?

Overview of GETok for spatial referencing
Fig. 1. Left: Comparison of token-based representations for grounding objects in MLLMs. 2D grid tokens preserve spatial topology with shorter sequences than coordinate-, patch-, or 1D bin-based formulations. Right: (e) GETok establishes direct 2D spatial correspondences, yielding focused and consistent attention maps; (f) the grid representation forms an RL-friendly action space for GRPO, enabling stable policy learning and faster convergence.

GETok addresses a core bottleneck of MLLMs: image tokenization often discards 2D spatial topology, making precise localization difficult. We introduce a learnable spatial vocabulary in which grid tokens form a 2D anchor lattice and offset tokens refine locations, so that sequential tokens map smoothly onto the 2D image space (see the decoding sketch after the list below).

  • Unified Referring Representation: A single representation spanning points to masks within a standard autoregressive framework, eliminating task-specific modules without sacrificing precision.
  • Self-Correction: Iterative refinement with offsets enables the model to adjust spatial predictions and correct early mistakes.
  • RL-friendly: Token shifts correspond to smooth spatial changes, yielding a low-entropy action space and stable reward landscapes for efficient policy optimization.
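To make the token-to-coordinate mapping concrete, here is a minimal decoding sketch. It assumes a square grid_size × grid_size anchor lattice indexed in row-major order and a small square offset vocabulary; the grid size, offset pattern, and function name are illustrative assumptions, not the paper's exact scheme.

def decode_grid_offset(grid_id, offset_id, grid_size=32,
                       num_offsets=9, image_wh=(640, 480)):
    """Map a (grid token, offset token) pair to a 2D image point.

    Illustrative assumptions: grid_id indexes a row-major grid_size x
    grid_size anchor lattice; offset_id indexes a 3x3 pattern of
    sub-cell displacements.
    """
    w, h = image_wh
    cell_w, cell_h = w / grid_size, h / grid_size

    # Anchor: the center of the grid cell addressed by the grid token.
    row, col = divmod(grid_id, grid_size)
    x, y = (col + 0.5) * cell_w, (row + 0.5) * cell_h

    # Offset: a quantized sub-cell displacement around the anchor.
    side = int(round(num_offsets ** 0.5))   # 3 for a 9-token offset vocabulary
    drow, dcol = divmod(offset_id, side)
    x += (dcol - side // 2) * cell_w / side
    y += (drow - side // 2) * cell_h / side
    return x, y

Because neighboring grid or offset tokens decode to neighboring points, a one-token change moves the prediction by a bounded spatial step, which is what makes the vocabulary RL-friendly.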

Data Curation for Supervised Fine-Tuning with GETok

Greedy mask-to-token pipeline

We propose a training-free greedy procedure to convert dense masks into discrete tokens. We prompt SAM at each grid point to obtain mask proposals and iteratively add the proposal with the largest IoU gain until the union's IoU with the ground-truth mask reaches a threshold τ, yielding a minimal token set that approximates the ground truth. The result is compact and unambiguous, avoids redundancy on multiply-connected regions, and requires no architectural changes.
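A minimal sketch of this greedy selection follows. The proposals dictionary (grid id → boolean SAM mask) and the default threshold value are assumptions for illustration; only the IoU-gain selection rule comes from the description above.

import numpy as np

def iou(mask_a, mask_b):
    """IoU between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / max(union, 1)

def greedy_token_set(gt_mask, proposals, tau=0.9):
    """Greedily select grid tokens whose SAM proposals best cover gt_mask.

    proposals: dict mapping grid_id -> boolean mask obtained by prompting
    SAM at that grid anchor (hypothetical interface). Stops once the union
    of selected proposals reaches IoU >= tau with the ground truth.
    """
    selected = []
    union = np.zeros_like(gt_mask, dtype=bool)
    remaining = dict(proposals)

    while iou(union, gt_mask) < tau and remaining:
        base = iou(union, gt_mask)
        # Pick the proposal giving the largest IoU gain over the current union.
        gid = max(remaining, key=lambda g: iou(union | remaining[g], gt_mask))
        if iou(union | remaining[gid], gt_mask) - base <= 0:
            break              # no proposal improves the approximation; stop
        union |= remaining.pop(gid)
        selected.append(gid)
    return selected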

Offset-aware dataset pipeline

An offset-aware dataset is constructed by classifying grid points with morphological operations scaled to the offset step: Inside (zero offset), Ring (non-zero offsets near boundaries), Far, and Hard-Delete (mapped to <DELETE>). An ordered rule assigns each point to exactly one region, and sampling prioritizes Inside/Ring, producing a per-image set of K grid–offset pairs that concentrates learning on boundary-proximal corrections. This simulated supervision carries higher informational value than automatically generated labels and yields stronger refinement.
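The classification could be sketched as below. The specific erosion/dilation radii, the priority order of the outer regions, and the use of scipy are assumptions; the source only specifies morphology scaled to the offset step and an ordered, mutually exclusive assignment.

import numpy as np
from scipy import ndimage

def classify_grid_points(mask, points, step):
    """Assign each grid point to exactly one region via an ordered rule.

    mask:   H x W boolean ground-truth mask
    points: iterable of (row, col) integer grid coordinates
    step:   offset step in pixels; morphology radii scale with it

    Region radii below are illustrative assumptions.
    """
    inside = ndimage.binary_erosion(mask, iterations=step)            # deep interior
    ring = ndimage.binary_dilation(mask, iterations=step) & ~inside   # boundary band
    near = ndimage.binary_dilation(mask, iterations=2 * step) & ~ring & ~inside

    labels = []
    for r, c in points:
        if inside[r, c]:
            labels.append("inside")        # target: zero offset
        elif ring[r, c]:
            labels.append("ring")          # target: non-zero offset toward boundary
        elif near[r, c]:
            labels.append("hard_delete")   # target: <DELETE>
        else:
            labels.append("far")
    return labels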

Self-Improving Reinforcement Learning

Self-improving RL overview

GETok's 2D lattice organization makes it well suited to reinforcement learning: it yields a geometrically grounded action space rich in spatial semantics. We introduce a novel self-improving reinforcement learning framework that exploits GETok's grid–offset hierarchy to enable iterative self-correction. This mechanism mitigates the brittleness of one-shot predictions in prior work and enables fine-grained, boundary-accurate mask refinement.
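Fig. 1 names GRPO as the policy optimizer, so a group-relative advantage over IoU-scored rollouts is one natural instantiation. The sketch below is a hedged illustration: the IoU reward and the helper names are assumptions, while the mean/std group normalization is standard GRPO.

import numpy as np

def iou_reward(pred_mask, gt_mask):
    """Illustrative reward: IoU between the mask decoded from the sampled
    grid/offset tokens and the ground truth (an assumption; the paper's
    reward may also score the self-correction steps)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / max(union, 1)

def grpo_advantages(rewards):
    """Standard GRPO group-relative advantages: normalize each rollout's
    reward by the group's mean and std; no value network is needed."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Usage: sample a group of rollouts per query, decode each token sequence
# into a mask, score it, and feed the advantages to the clipped policy update.
# advantages = grpo_advantages([iou_reward(m, gt) for m in decoded_masks])

Because one-token changes correspond to small spatial moves, neighboring rollouts earn similar rewards, which is the low-entropy action space and stable reward landscape described above.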

Experimental Results

Qualitative results of segmentation task. GETok demonstrates adaptive corrections, achieving precise localization across diverse scenarios, including small objects and complex shapes.
Qualitative results of the self-improving mechanism. Additional examples demonstrate how GETok establishes initial spatial proposals through grid tokens (red dots) and enables fine-grained adjustments via offset tokens (blue arrows), showing effective handling of objects across scales with enhanced precision on small targets.

Across diverse referring tasks, GETok consistently outperforms state-of-the-art methods in both supervised fine-tuning and reinforcement learning settings.

BibTeX

@article{ren2025grounding,
  title={Grounding Everything in Tokens for Multimodal Large Language Models},
  author={Xiangxuan Ren and Zhongdao Wang and Liping Hou and Pin Tang and Guoqing Wang and Chao Ma},
  journal={arXiv preprint arXiv:2512.10554},
  year={2025}
}