[Paper] ShadowDraw: From Any Object to Shadow-Drawing Compositional Art

Published: December 4, 2025 at 01:59 PM EST
4 min read
Source: arXiv


Overview

The paper presents ShadowDraw, a novel framework that turns any 3D object—whether a CAD model, a 3D scan, or a procedurally generated asset—into a piece of “shadow‑drawing” art. By automatically selecting camera pose, lighting, and a sparse line sketch, the system makes the object’s cast shadow complete the drawing, yielding a recognizable illustration that feels hand‑drawn yet mathematically precise.

Key Contributions

  • End‑to‑end pipeline that takes a raw 3D mesh and outputs a compositional line drawing plus a complementary shadow that together form a coherent illustration.
  • Joint optimization of pose, lighting, and sketch to maximize the semantic relevance of the shadow while keeping the line strokes simple and aesthetically pleasing.
  • Shadow‑stroke guidance: the shadow geometry is used as a scaffold for generating the line drawing, ensuring that the two modalities reinforce each other.
  • Automatic quality metrics for shadow‑drawing coherence, enabling large‑scale evaluation without human labeling.
  • Extensibility to multi‑object scenes, animated sequences, and real‑world physical setups (e.g., projecting the sketch onto a tabletop and casting a real shadow).

Methodology

  1. Input preprocessing – The system ingests a 3D mesh, normalizes its scale, and optionally simplifies it for faster rendering.
  2. Scene parameter search – A differentiable renderer explores camera positions and light directions. The objective balances two terms: (a) shadow saliency (the shadow should convey recognizable shape cues) and (b) line simplicity (the sketch should use as few strokes as possible).
  3. Shadow‑stroke coupling – Once a promising pose is found, the silhouette of the shadow is rasterized into a set of “shadow strokes.” These strokes are fed into a lightweight sketch generator (a CNN‑based edge extractor fine‑tuned on artistic line drawings) that produces the complementary line art.
  4. Coherence enforcement – An automatic evaluator measures alignment between the shadow strokes and the generated line drawing (e.g., overlap, angular consistency). The optimizer iterates until the coherence score plateaus.
  5. Output rendering – The final composition is rendered as a vector graphic (SVG) for easy scaling, together with a depth‑aware shadow map that can be reused for animation or physical projection.
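Step 2's objective — trading shadow saliency against line simplicity — can be sketched as a scoring function over candidate camera/light configurations. The paper does not publish its formulation, so the saliency and simplicity terms below (and the weights) are illustrative placeholders, not the authors' actual objective:

```python
import numpy as np

def shadow_saliency(shadow_mask: np.ndarray) -> float:
    """Toy saliency: reward shadows covering a moderate, well-defined area.
    (The paper's saliency term is semantic; this is a stand-in.)"""
    coverage = float(shadow_mask.mean())
    # Peaks at 50% coverage; near-empty or frame-filling shadows score ~0.
    return 4.0 * coverage * (1.0 - coverage)

def line_simplicity(num_strokes: int, max_strokes: int = 50) -> float:
    """Toy simplicity term: fewer sketch strokes score higher."""
    return 1.0 - min(num_strokes, max_strokes) / max_strokes

def score_configuration(shadow_mask, num_strokes,
                        w_saliency=1.0, w_simplicity=0.5):
    """Combined objective for ranking candidate camera/light configurations."""
    return (w_saliency * shadow_saliency(shadow_mask)
            + w_simplicity * line_simplicity(num_strokes))

# Rank two hypothetical candidates: a shadow covering half the frame with a
# 10-stroke sketch, versus a nearly empty shadow needing 40 strokes.
mask_a = np.zeros((64, 64)); mask_a[:, :32] = 1.0
mask_b = np.zeros((64, 64)); mask_b[:2, :2] = 1.0
score_a = score_configuration(mask_a, 10)
score_b = score_configuration(mask_b, 40)
```

In the actual system this score would be maximized by a differentiable renderer over continuous pose and light parameters rather than evaluated on fixed masks.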

The whole pipeline runs in a few seconds on a modern GPU, making it practical for interactive design tools.
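Step 4's coherence check — measuring overlap and angular consistency between the shadow strokes and the generated line drawing — could be approximated as below. The IoU-plus-angle formulation and all function names are assumptions for illustration; the paper only names the two criteria:

```python
import numpy as np

def stroke_overlap(shadow_raster: np.ndarray, line_raster: np.ndarray) -> float:
    """Intersection-over-union of the two rasterized stroke sets."""
    inter = np.logical_and(shadow_raster, line_raster).sum()
    union = np.logical_or(shadow_raster, line_raster).sum()
    return float(inter) / float(union) if union else 0.0

def angular_consistency(shadow_angles, line_angles) -> float:
    """Mean cosine similarity between matched stroke directions (radians).
    Stroke directions are unsigned, so differences are taken modulo pi."""
    diffs = np.abs(np.asarray(shadow_angles) - np.asarray(line_angles)) % np.pi
    diffs = np.minimum(diffs, np.pi - diffs)
    return float(np.cos(diffs).mean())

def coherence_score(shadow_raster, line_raster, shadow_angles, line_angles,
                    w_overlap=0.5, w_angle=0.5) -> float:
    """Score the optimizer iterates on until it plateaus (step 4)."""
    return (w_overlap * stroke_overlap(shadow_raster, line_raster)
            + w_angle * angular_consistency(shadow_angles, line_angles))
```

A perfectly aligned pair (identical rasters, identical stroke directions) scores 1.0; disjoint rasters with perpendicular strokes score near 0.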

Results & Findings

  • Visual quality – Across a benchmark of 500 diverse objects (real scans, ShapeNet models, and AI‑generated assets), 92% of the outputs passed a blind human study for recognizability and artistic appeal.
  • Multi‑object scenes – The optimizer can simultaneously position several objects so that their shadows intertwine, creating complex narrative compositions (e.g., a city skyline formed by the shadows of individual buildings).
  • Animation – By smoothly varying the light direction while keeping the sketch fixed, the system produces animated shadow‑drawing videos that maintain visual coherence frame‑to‑frame.
  • Physical deployment – The authors demonstrated a tabletop setup where a printed sketch and a small LED light source recreated the digital result in the real world, confirming that the generated parameters are physically realizable.
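The animation result above depends on smoothly varying the light direction while the sketch stays fixed. The paper does not specify its interpolation scheme; a minimal way to generate such a sweep is normalized linear interpolation between two unit light directions:

```python
import numpy as np

def light_sweep(start_dir, end_dir, num_frames):
    """Yield unit light-direction vectors interpolated from start to end.
    Normalized lerp suffices for modest angular sweeps; a spherical
    interpolation (slerp) would be safer for near-opposite directions."""
    start = np.asarray(start_dir, dtype=float)
    start /= np.linalg.norm(start)
    end = np.asarray(end_dir, dtype=float)
    end /= np.linalg.norm(end)
    for t in np.linspace(0.0, 1.0, num_frames):
        d = (1.0 - t) * start + t * end
        yield d / np.linalg.norm(d)

# Sweep the light from upper-left to upper-right over 24 frames; each
# direction would be handed back to the renderer with the sketch unchanged.
frames = list(light_sweep([-1.0, 2.0, 0.5], [1.0, 2.0, 0.5], 24))
```

Frame-to-frame coherence then follows from the continuity of the sweep: adjacent frames differ by a small rotation of the light, so the cast shadow deforms gradually.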

Practical Implications

  • Rapid prototyping for visual designers – Artists can feed a 3D asset into ShadowDraw and instantly obtain a stylized illustration ready for branding, storyboarding, or UI icons, cutting weeks of manual sketching.
  • Game and AR content creation – Developers can generate low‑poly line‑art assets with dynamic shadows for stylized game UI, tutorial overlays, or AR filters that react to real‑world lighting.
  • Educational tools – The system can illustrate geometry concepts (e.g., how light interacts with shape) by showing the minimal line drawing that, together with a shadow, fully describes an object.
  • Automated asset pipelines – Integration with existing 3D asset management (e.g., Blender, Unity) enables batch processing of libraries to produce consistent visual language across a product line.

Limitations & Future Work

  • Dependence on clean geometry – Highly noisy scans or meshes with non‑manifold edges can lead to unstable shadow silhouettes, requiring pre‑processing.
  • Simplistic lighting model – The current pipeline assumes a single directional light; complex indoor lighting or colored shadows are not yet supported.
  • Sketch style rigidity – While the line generator produces clean strokes, it does not yet allow user‑controlled artistic styles (e.g., hatching, cross‑hatching).
  • Scalability to ultra‑high‑poly models – Though the optimizer is fast, rendering extremely dense meshes can become a bottleneck; future work could incorporate mesh simplification or multi‑resolution strategies.

The authors suggest extending the framework to multi‑light environments, learning style‑transfer modules for diverse sketch aesthetics, and tighter integration with physical fabrication pipelines (e.g., laser‑cut shadow masks).


ShadowDraw opens a fresh avenue where algorithmic geometry meets handcrafted illustration, giving developers a powerful new tool to turn 3D data into compelling visual stories.

Authors

  • Rundong Luo
  • Noah Snavely
  • Wei-Chiu Ma

Paper Information

  • arXiv ID: 2512.05110v1
  • Categories: cs.CV, cs.AI, cs.GR
  • Published: December 4, 2025
