OmniSAT: Compact Action Token, Faster Auto Regression

Anonymous Authors

Comparison between Existing Approaches and OmniSAT

Teaser
(a) Diffusion-based policies require iterative denoising, limiting training efficiency and scalability. (b) AR policies train efficiently and support flexible sequence construction, but sacrifice fine-grained accuracy in continuous control. (c) OmniSAT amplifies AR efficiency through high-rate compression that preserves reconstruction fidelity, while providing a unified token space for integrating heterogeneous datasets.

Abstract

Existing Vision-Language-Action (VLA) models can be broadly categorized into diffusion-based and auto-regressive (AR) approaches: diffusion models capture continuous action distributions but rely on computationally heavy iterative denoising, whereas AR models enable efficient optimization and flexible sequence construction, making them better suited for large-scale pretraining. To further improve AR efficiency, particularly when action chunks induce long, high-dimensional sequences, prior work applies entropy-guided and token-frequency techniques to shorten the sequence length; however, these schemes suffer from either poor reconstruction or limited compression. Motivated by this, we introduce the Omni Swift Action Tokenizer (OmniSAT), which learns a compact, transferable action representation. Specifically, we first normalize value ranges and temporal horizons to obtain a consistent representation through B-spline encoding. We then apply multi-stage residual quantization to the position, rotation, and gripper subspaces, producing compact discrete tokens with coarse-to-fine granularity for each part. After pre-training on the large-scale Droid dataset, the resulting discrete tokenization shortens training sequences by 6.8× and lowers the target entropy. To further explore the potential of OmniSAT, we develop a cross-embodiment learning strategy that builds on the unified action-pattern space and jointly leverages robot and human demonstrations, enabling scalable auxiliary supervision from heterogeneous egocentric videos. Across diverse real-robot and simulation experiments, OmniSAT achieves higher compression while preserving reconstruction quality, enabling faster AR training convergence and stronger model performance.
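To make the consistency-encoding step concrete, the sketch below fits a variable-length action trajectory to a fixed number of B-spline control points on a normalized time axis. The control-point count, spline degree, and use of SciPy's `make_lsq_spline` are illustrative assumptions, not the exact implementation.

```python
# Minimal sketch of consistency encoding, assuming a cubic least-squares B-spline
# fit with a fixed number of control points (SciPy-based; not the authors' exact code).
import numpy as np
from scipy.interpolate import make_lsq_spline

def encode_trajectory(actions: np.ndarray, n_ctrl: int = 16, degree: int = 3) -> np.ndarray:
    """Map a variable-length action trajectory (T, D), with value ranges assumed
    pre-normalized, to a fixed-length control-point representation (n_ctrl, D)."""
    T, _ = actions.shape
    assert T > n_ctrl, "trajectory must be longer than the control-point count"
    t = np.linspace(0.0, 1.0, T)                          # normalized temporal horizon
    # Clamped knot vector that yields exactly n_ctrl control points for this degree.
    interior = np.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
    knots = np.concatenate(([0.0] * (degree + 1), interior, [1.0] * (degree + 1)))
    spline = make_lsq_spline(t, actions, knots, k=degree)  # least-squares fit per dimension
    return spline.c                                        # (n_ctrl, D) control points
```

Evaluating the fitted spline back on a dense time grid recovers the continuous trajectory, which is one way to check reconstruction quality after compression.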

Real-world Experiments

PlaceObj

TubeRack

ZipSeal

OmniSAT Tokenization Pipeline

SAT Diagram
Consistency Encoding converts variable-length trajectories into temporally aligned, fixed-length control-point representations via B-spline fitting. Quantization Compression splits control-point features into part groups (position, rotation, gripper) and applies residual vector quantization to obtain layerwise codebook indices. The selected indices are then flattened into the final compact action-pattern tokens.
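The sketch below illustrates the residual-quantization step on a single part group: each stage quantizes the remaining residual against its own codebook, yielding coarse-to-fine indices that are flattened into action-pattern tokens. The codebook sizes and random codewords are placeholders; in OmniSAT the codebooks are learned during tokenizer pretraining.

```python
# Minimal sketch of multi-stage residual vector quantization over one part group
# (position, rotation, or gripper). Codebooks here are random stand-ins.
import numpy as np

def rvq_encode(x: np.ndarray, codebooks: list[np.ndarray]) -> list[int]:
    """Quantize a control-point feature vector with L residual stages.
    Each stage picks the codeword nearest to the remaining residual."""
    residual = x.copy()
    indices = []
    for codebook in codebooks:                       # coarse-to-fine stages
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        residual = residual - codebook[idx]          # pass residual to the next stage
    return indices

def rvq_decode(indices: list[int], codebooks: list[np.ndarray]) -> np.ndarray:
    """Reconstruct by summing the selected codewords across stages."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

# Usage: three stages of 256 codes over a 32-d feature (shapes are illustrative).
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 32)) for _ in range(3)]
feature = rng.normal(size=32)
tokens = rvq_encode(feature, codebooks)              # e.g. [17, 203, 64]
recon = rvq_decode(tokens, codebooks)
```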

OmniSAT for Cross-Embodiment Manipulation Learning

Method Diagram
The training pipeline has two phases: (i) Tokenizer Pretraining: OmniSAT is pretrained on heterogeneous human–robot datasets to learn a unified and compressed (6.8×) action token space; (ii) Cross-Embodiment Fine-Tuning: we construct mixed visual-action auto-regressive sequences over the OmniSAT token space, enabling efficient and scalable fine-tuning through shorter sequences and lower target entropy.
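A minimal sketch of the mixed sequence construction used in fine-tuning is shown below. The special tokens and vocabulary layout are hypothetical; the point is that robot and human demonstrations map into the same compact action-pattern vocabulary, so heterogeneous clips can be interleaved in a single auto-regressive batch.

```python
# Minimal sketch of building a mixed visual-action auto-regressive sequence.
# Token ids, special-token names, and vocabulary layout are illustrative assumptions,
# not the exact sequence format.
BOS, BOA, EOA = 0, 1, 2          # hypothetical special tokens: begin-sequence, begin/end-action

def build_sequence(obs_tokens: list[int], action_pattern_tokens: list[int]) -> list[int]:
    """Concatenate visual tokens and compressed OmniSAT action tokens into one
    sequence; the AR loss would be applied on the action span."""
    return [BOS] + obs_tokens + [BOA] + action_pattern_tokens + [EOA]

# Robot and human demonstrations share the same action-pattern vocabulary,
# so heterogeneous clips can be mixed in the same training batch.
robot_seq = build_sequence(obs_tokens=[101, 102, 103], action_pattern_tokens=[907, 645, 512])
human_seq = build_sequence(obs_tokens=[230, 231, 232], action_pattern_tokens=[418, 733, 512])
batch = [robot_seq, human_seq]
```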