[Paper] TraceGen: World Modeling in 3D Trace Space Enables Learning from Cross-Embodiment Videos

Published: November 26, 2025
Source: arXiv (2511.21690v1)

Overview

Learning new robot tasks on new platforms and in new scenes from only a handful of demonstrations remains challenging. While videos of other embodiments—humans and different robots—are abundant, differences in embodiment, camera, and environment hinder their direct use. We address the small-data problem by introducing a unifying, symbolic representation—a compact 3D trace‑space of scene‑level trajectories—that enables learning from cross‑embodiment, cross‑environment, and cross‑task videos.

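To make the trace-space idea concrete, here is a minimal sketch, in Python, of how a scene-level 3D trace might be stored and split into past context and future motion. The container name, array shapes, and the choice of tracked points are illustrative assumptions for this post, not the paper's actual parameterization.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class SceneTrace:
    """Illustrative container for a scene-level 3D trace.

    Assumption: a trace follows K scene points over T timesteps as metric
    XYZ coordinates in a shared frame; the paper may parameterize this
    differently.
    """
    points: np.ndarray  # shape (T, K, 3)

    def segment(self, start: int, end: int) -> np.ndarray:
        """Slice the trace in time, e.g. to split observed context from future motion."""
        return self.points[start:end]


# Example: a 16-step trace of 32 tracked points (random placeholder data).
trace = SceneTrace(points=np.random.randn(16, 32, 3))
future_motion = trace.segment(8, 16)  # the part a world model would be asked to predict
```
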
We present TraceGen, a world model that predicts future motion in trace-space rather than pixel space, abstracting away appearance while retaining the geometric structure needed for manipulation. To train TraceGen at scale, we develop TraceForge, a data pipeline that transforms heterogeneous human and robot videos into consistent 3D traces, yielding a corpus of 123K videos and 1.8M observation-trace-language triplets.

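The sketch below suggests what an observation-trace-language triplet and a TraceGen-style prediction interface could look like. The field names, shapes, and the `predict` signature are assumptions made for illustration; they are not the released data format or API.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class TraceTriplet:
    """One observation-trace-language example, mirroring the corpus description
    in the abstract; field shapes and names here are illustrative assumptions."""
    observation: np.ndarray  # e.g. an RGB frame, shape (H, W, 3)
    trace: np.ndarray        # future scene motion in trace space, shape (T, K, 3)
    language: str            # task instruction, e.g. "place the cup on the shelf"


class TraceWorldModel:
    """Hypothetical interface for a TraceGen-style model: predict future motion
    in trace space from the current observation and an instruction, rather than
    generating future pixels."""

    def predict(self, observation: np.ndarray, instruction: str,
                horizon: int, num_points: int) -> np.ndarray:
        # Placeholder for the learned predictor; would return a
        # (horizon, num_points, 3) array of future 3D point positions.
        raise NotImplementedError
```
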
Pretraining on this corpus produces a transferable 3D motion prior that adapts efficiently: with just five target robot videos, TraceGen attains 80% success across four tasks while offering 50–600× faster inference than state-of-the-art video-based world models. In the more challenging case where only five uncalibrated human demonstration videos captured on a handheld phone are available, it still reaches 67.5% success on a real robot, highlighting TraceGen's ability to adapt across embodiments without relying on object detectors or heavy pixel-space generation.

Authors

  • Seungjae Lee
  • Yoonkyo Jung
  • Inkook Chun
  • Yao‑Chih Lee
  • Zikui Cai
  • Hongjia Huang
  • Aayush Talreja
  • Tan Dat Dao
  • Yongyuan Liang
  • Jia‑Bin Huang
  • Furong Huang

Categories

  • cs.RO
  • cs.CV
  • cs.LG

Paper Information

  • arXiv ID: 2511.21690v1
  • Categories: cs.RO, cs.CV, cs.LG
  • Published: November 27, 2025
  • PDF: https://arxiv.org/pdf/2511.21690v1