[Paper] A Pragmatic VLA Foundation Model

Published: January 26, 2026 at 12:08 PM EST
2 min read
Source: arXiv - 2601.18692v1

Overview

Offering great potential in robotic manipulation, a capable Vision-Language-Action (VLA) foundation model is expected to generalize faithfully across tasks and platforms while remaining cost-efficient (e.g., in the data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. Through a systematic assessment on 3 robotic platforms, each completing 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competitors, showcasing its strong performance and broad generalizability. We have also built an efficient codebase that delivers a throughput of 261 samples per second per GPU in an 8-GPU training setup, a 1.5–2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. These features make our model well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.
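
To put the quoted figures in perspective, here is a minimal back-of-the-envelope calculation using only the numbers stated above; the variable names are illustrative and do not come from the paper's codebase.

```python
# Back-of-the-envelope figures derived from the numbers quoted in the overview.
# All names below are illustrative, not taken from the paper's code.

gpus = 8                       # size of the reported training setup
samples_per_sec_per_gpu = 261  # reported per-GPU training throughput

platforms = 3                  # evaluation platforms
tasks_per_platform = 100       # tasks completed per platform
episodes_per_task = 130        # post-training episodes per task

aggregate_throughput = gpus * samples_per_sec_per_gpu
total_post_training_episodes = platforms * tasks_per_platform * episodes_per_task

print(f"Aggregate training throughput: {aggregate_throughput} samples/s")        # 2088
print(f"Post-training episodes across the benchmark: {total_post_training_episodes}")  # 39000
```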

Key Contributions

  • LingBot-VLA, a VLA foundation model trained on roughly 20,000 hours of real-world data spanning 9 popular dual-arm robot configurations.
  • A systematic evaluation on 3 robotic platforms, each covering 100 tasks with 130 post-training episodes per task, in which the model clearly outperforms competing VLA models.
  • An efficient training codebase reaching 261 samples per second per GPU on 8 GPUs, a 1.5–2.8× speedup over existing VLA-oriented codebases depending on the underlying VLM base model.
  • Open release of the code, base model, and benchmark data to enable more challenging tasks and promote sound evaluation standards.

Methodology

Please refer to the full paper for detailed methodology.

Practical Implications

By combining broad cross-platform generalization with modest adaptation cost (in both post-training data and GPU hours) and an openly released codebase, base model, and benchmark, this work lowers the barrier to deploying VLA foundation models for real-world robotic manipulation.

Authors

  • Wei Wu
  • Fan Lu
  • Yunnan Wang
  • Shuai Yang
  • Shi Liu
  • Fangjing Wang
  • Qian Zhu
  • He Sun
  • Yong Wang
  • Shuailei Ma
  • Yiyu Ren
  • Kejia Zhang
  • Hui Yu
  • Jingmei Zhao
  • Shuai Zhou
  • Zhenqi Qiu
  • Houlong Xiong
  • Ziyu Wang
  • Zechen Wang
  • Ran Cheng
  • Yong‑Lu Li
  • Yongtao Huang
  • Xing Zhu
  • Yujun Shen
  • Kecheng Zheng

Paper Information

  • arXiv ID: 2601.18692v1
  • Categories: cs.RO, cs.CV
  • Published: January 26, 2026
