I built a custom Deep Learning framework in pure Rust just to simulate Arknights: Endfield gacha luck (Talos-XII)

Published: February 4, 2026 at 04:08 AM EST
1 min read
Source: Dev.to

Introduction

I built Talos‑XII, a custom deep‑learning framework in pure Rust to simulate gacha pulls for Arknights: Endfield. The project started as a simple pull‑simulation tool but quickly evolved into a full‑blown reinforcement‑learning (RL) engine.

Technical Implementation (for Rustaceans)

No Python

  • The core engine is written entirely in Rust.
  • I implemented a custom reverse‑mode autograd system that mimics PyTorch’s API without the extra bloat.
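To make the autograd idea concrete, here is a minimal tape-based reverse-mode sketch in plain Rust. All names (`Tape`, `leaf`, `backward`) are illustrative, not Talos-XII's actual API, and it handles scalars only; a real framework would work on tensors.

```rust
// Minimal tape-based reverse-mode autograd (scalars only, illustrative names).
#[derive(Clone, Copy)]
enum Op {
    Leaf,
    Add(usize, usize),
    Mul(usize, usize),
}

struct Tape {
    vals: Vec<f64>,
    ops: Vec<Op>,
}

impl Tape {
    fn new() -> Self {
        Tape { vals: Vec::new(), ops: Vec::new() }
    }

    fn push(&mut self, v: f64, op: Op) -> usize {
        self.vals.push(v);
        self.ops.push(op);
        self.vals.len() - 1
    }

    fn leaf(&mut self, v: f64) -> usize {
        self.push(v, Op::Leaf)
    }

    fn add(&mut self, a: usize, b: usize) -> usize {
        let v = self.vals[a] + self.vals[b];
        self.push(v, Op::Add(a, b))
    }

    fn mul(&mut self, a: usize, b: usize) -> usize {
        let v = self.vals[a] * self.vals[b];
        self.push(v, Op::Mul(a, b))
    }

    // Reverse pass: walk the tape backwards, accumulating adjoints.
    fn backward(&self, out: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.vals.len()];
        grads[out] = 1.0;
        for i in (0..=out).rev() {
            match self.ops[i] {
                Op::Leaf => {}
                Op::Add(a, b) => {
                    grads[a] += grads[i];
                    grads[b] += grads[i];
                }
                Op::Mul(a, b) => {
                    grads[a] += grads[i] * self.vals[b];
                    grads[b] += grads[i] * self.vals[a];
                }
            }
        }
        grads
    }
}

fn main() {
    // f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
    let mut t = Tape::new();
    let x = t.leaf(3.0);
    let y = t.leaf(4.0);
    let xy = t.mul(x, y);
    let f = t.add(xy, x);
    let g = t.backward(f);
    println!("f = {}, df/dx = {}, df/dy = {}", t.vals[f], g[x], g[y]);
}
```

The tape records each operation during the forward pass, then replays it in reverse, which is the same shape of machinery PyTorch's `autograd` uses under the hood.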

Performance

  • Parallel tensor operations are handled with Rayon.
  • Hand‑written SIMD kernels (AVX2 for x86, NEON for ARM) accelerate the critical paths.
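As a dependency-free illustration of the parallel-kernel idea, here is a chunked elementwise op using `std::thread::scope`; with Rayon (which the post actually uses) the whole scope block collapses to a one-line `par_iter_mut`, and the hand-written SIMD would replace the inner loop. The function name `par_saxpy` is mine, not Talos-XII's.

```rust
use std::thread;

// Chunked parallel y += alpha * x. With Rayon this body becomes:
//   y.par_iter_mut().zip(x).for_each(|(yi, &xi)| *yi += alpha * xi);
// The inner loop is where an AVX2/NEON kernel would slot in; here we
// leave vectorisation to the compiler.
fn par_saxpy(alpha: f32, x: &[f32], y: &mut [f32]) {
    let n_threads = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk = ((y.len() + n_threads - 1) / n_threads).max(1);
    thread::scope(|s| {
        for (yc, xc) in y.chunks_mut(chunk).zip(x.chunks(chunk)) {
            s.spawn(move || {
                for (yi, &xi) in yc.iter_mut().zip(xc) {
                    *yi += alpha * xi;
                }
            });
        }
    });
}

fn main() {
    let x = vec![1.0f32; 1_000_000];
    let mut y = vec![2.0f32; 1_000_000];
    par_saxpy(3.0, &x, &mut y);
    println!("y[0] = {}", y[0]);
}
```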

Model Architecture

  • Deep Belief Network (DBN) for environment noise simulation.
  • Transformer backend for the RL agent.

Optimisation

  • Integrated ideas from the DeepSeek mHC (Manifold‑Constrained Hyper‑Connections) paper for the optimiser design.
  • The optimiser was ported to Rust as a fun challenge.
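The mHC-specific update rule lives in the paper; what porting an optimiser to Rust looks like structurally is easier to show with a stand-in. Below is a plain SGD-with-momentum step (not the mHC scheme, and not Talos-XII's actual code) to illustrate the step-loop skeleton such a port slots into.

```rust
// Generic optimiser-step skeleton: plain SGD with momentum as a stand-in
// for the mHC update rule. All names here are illustrative.
struct Sgd {
    lr: f64,
    momentum: f64,
    velocity: Vec<f64>, // one velocity slot per parameter
}

impl Sgd {
    fn new(lr: f64, momentum: f64, n_params: usize) -> Self {
        Sgd { lr, momentum, velocity: vec![0.0; n_params] }
    }

    fn step(&mut self, params: &mut [f64], grads: &[f64]) {
        for ((p, &g), v) in params.iter_mut().zip(grads).zip(&mut self.velocity) {
            *v = self.momentum * *v + g; // accumulate velocity
            *p -= self.lr * *v;          // descend along it
        }
    }
}

fn main() {
    let mut opt = Sgd::new(0.1, 0.9, 2);
    let mut params = vec![1.0, -1.0];
    let grads = vec![0.5, -0.5];
    opt.step(&mut params, &grads);
    println!("params after one step: {:?}", params);
}
```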

Purpose

The system simulates millions of pulls to estimate the probability of obtaining the rate‑up (UP) character using only free resources (the “Neural Luck Optimiser”).
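The Monte Carlo core of such a simulation can be sketched in a few dozen lines of Rust. The rates below (0.8% base 6★ rate, a soft-pity ramp after pull 65, 50% rate-up share) are illustrative placeholders, not Endfield's actual published rates, and the RNG is a tiny xorshift to keep the example dependency-free.

```rust
// Monte Carlo pull-simulation sketch. Rates are PLACEHOLDERS, not
// Endfield's published values. xorshift64* RNG, no external crates.
struct Rng(u64);

impl Rng {
    fn next_f64(&mut self) -> f64 {
        // xorshift64* step, then take the top 53 bits as a float in [0, 1)
        self.0 ^= self.0 >> 12;
        self.0 ^= self.0 << 25;
        self.0 ^= self.0 >> 27;
        (self.0.wrapping_mul(0x2545F4914F6CDD1D) >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// One banner session with `budget` free pulls; true if the rate-up dropped.
fn session(rng: &mut Rng, budget: u32) -> bool {
    let mut pity = 0u32;
    for _ in 0..budget {
        pity += 1;
        // placeholder rates: 0.8% base 6*, ramping +5% per pull after 65
        let p6 = if pity <= 65 {
            0.008
        } else {
            0.008 + 0.05 * (pity - 65) as f64
        };
        if rng.next_f64() < p6 {
            pity = 0;
            // placeholder: 50% chance the 6* is the rate-up character
            if rng.next_f64() < 0.5 {
                return true;
            }
        }
    }
    false
}

fn main() {
    let mut rng = Rng(0x9E37_79B9_7F4A_7C15);
    let trials = 1_000_000;
    let hits = (0..trials).filter(|_| session(&mut rng, 120)).count();
    println!("P(rate-up within 120 free pulls) ~= {:.4}", hits as f64 / trials as f64);
}
```

Swap in the real published rates and pity rules and the same loop yields the probability estimate the post describes; the RL layer then sits on top, deciding *when* to spend the free pulls.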

Usage

  • Currently CLI‑only; no graphical interface is provided yet.

Repository

  • Source code:

References

  • DeepSeek mHC (Manifold‑Constrained Hyper‑Connections) paper: (thanks to the DeepSeek team for this reference).