Coding Rust with Claude Code and Codex
Source: Dev.to
Why Rust Makes AI‑Assisted Development Feel Like a Real‑Time Code Review
For a while now, I’ve been experimenting with AI coding tools, and there’s something fascinating happening when you combine Rust with agents such as Claude Code or OpenAI’s Codex. The experience is fundamentally different from working with Python or JavaScript – and it comes down to one simple fact: Rust’s compiler acts as an automatic expert reviewer for each edit the AI makes.
If it compiles, it probably works – that’s not just a Rust motto; it’s becoming the foundation for reliable AI‑assisted development.
When you let Claude Code or Codex loose on a Python codebase, you essentially trust the AI to get things right on its own. Sure, you have linters and type hints (if you’re lucky), but there is no strict enforcement: the AI can generate code that looks reasonable, passes a quick review, and then blows up in production because of an edge case nobody thought about.
With Rust, the compiler catches these issues before anything runs.
| Problem type | What Rust does |
|---|---|
| Memory‑safety incidents | Caught at compile time |
| Data races | Caught at compile time |
| Lifetime issues | Caught at compile time |
This creates a remarkably tight feedback loop that AI coding tools can actually learn from in real time.
What Makes Rust Special for AI Coding?
The compiler doesn’t just say “Error” and leave you guessing. It tells you exactly what went wrong, where it went wrong, and often suggests how to fix it – absolute gold for AI tools like Codex or Claude Code.
Example 1 – Returning a Reference from an Owned String
fn get_first_word(s: String) -> &str {
    let bytes = s.as_bytes();
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }
    &s[..]
}
Compiler output
error[E0106]: missing lifetime specifier
--> src/main.rs:1:36
|
1 | fn get_first_word(s: String) -> &str {
| - ^ expected named lifetime parameter
|
= help: this function's return type contains a borrowed value,
but there is no value for it to be borrowed from
help: consider using the `'static` lifetime
|
1 | fn get_first_word(s: String) -> &'static str {
| ~~~~~~~~
The compiler is literally explaining the ownership model to the AI: “Hey, you’re trying to return a reference but the thing you’re referencing will be dropped when this function ends – that’s not going to work.”
- Structured, deterministic feedback – error code E0106, exact location, clear explanation, and even a suggested fix.
- The real fix, of course, is to change the function signature to borrow instead of taking ownership.
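Applying that real fix, a sketch of the working version – the same logic, but borrowing the input so the returned slice points into data the caller still owns:

```rust
/// Borrow the input instead of taking ownership; the returned &str
/// slices into the caller's string, so no lifetime problem arises.
fn get_first_word(s: &str) -> &str {
    let bytes = s.as_bytes();
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            // Return the slice up to (not including) the first space.
            return &s[0..i];
        }
    }
    // No space found: the whole string is one word.
    s
}
```

With `&str` as both input and output, lifetime elision infers that the return value borrows from `s`, so no explicit annotation is needed.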
Example 2 – Concurrency Mistake
use std::thread;
fn main() {
let data = vec![1, 2, 3];
let handle = thread::spawn(|| {
println!("{:?}", data);
});
handle.join().unwrap();
}
Compiler output
error[E0373]: closure may outlive the current function, but it borrows `data`
--> src/main.rs:6:32
|
6 | let handle = thread::spawn(|| {
| ^^ may outlive borrowed value `data`
7 | println!("{:?}", data);
| ---- `data` is borrowed here
|
note: function requires argument type to outlive `'static`
--> src/main.rs:6:18
|
6 | let handle = thread::spawn(|| {
| ^^^^^^^^^^^^^
help: to force the closure to take ownership of `data`, use the `move` keyword
|
6 | let handle = thread::spawn(move || {
| ++++
The compiler literally tells the AI: “Add move here.” Claude Code or Codex can parse this, apply the fix, and move on – no guesswork, no hoping for the best, no runtime data races that crash your production system at 3 AM.
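A minimal sketch of the corrected pattern after applying the compiler's suggestion – the closure returns a value here so the effect is observable, but the fix is the same `move` keyword:

```rust
use std::thread;

// `move` transfers ownership of `data` into the closure, satisfying
// the `'static` bound that `thread::spawn` puts on its argument.
fn sum_in_thread(data: Vec<i32>) -> i32 {
    let handle = thread::spawn(move || -> i32 { data.iter().sum() });
    // Wait for the thread and propagate its result.
    handle.join().unwrap()
}
```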
How This Differs from Python / JavaScript
When an AI produces buggy concurrent code in those languages, you might not even know there’s a problem until you hit a race condition under specific load conditions. With Rust, the bug never makes it past the compiler.
“Rust is great for Claude Code to work unsupervised on larger tasks. The combination of a powerful type system with strong security checks acts like an expert code reviewer, automatically rejecting incorrect edits and preventing bugs.”
— Julian Schrittwieser, Anthropic
This matches our experience at Sayna, where we built our entire voice‑processing infrastructure in Rust. When Claude Code (or any AI tool) makes a change, the compiler immediately tells it what went wrong. There is no waiting for runtime errors, no debugging sessions to figure out why the audio stream randomly crashes – the errors are clear and actionable.
Typical Workflow
# AI generates code
cargo check
# Compiler output:
error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
--> src/main.rs:4:5
|
3 | let r1 = &x;
| -- immutable borrow occurs here
4 | let r2 = &mut x;
| ^^^^^^ mutable borrow occurs here
5 | println!("{}, {}", r1, r2);
| -- immutable borrow later used here
- AI sees this, understands the borrowing conflict, and restructures the code.
- AI makes the changes and runs cargo check again:
cargo check
# No errors – we’re good
The beauty here is that every single error has a unique code (E0502 in this case). Running rustc --explain E0502 yields a full explanation with examples. AI tools can use this to understand not only what went wrong but also why Rust’s ownership model prevents the pattern, because the compiler essentially teaches the AI as it codes.
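As a sketch of what such a restructuring might look like for the E0502 conflict above: the immutable borrow simply ends before the mutable one begins, which the compiler's non‑lexical lifetimes can verify.

```rust
// Resolving E0502 by ending the immutable borrow (last use of `r1`)
// before taking the mutable borrow of `x`.
fn restructure() -> String {
    let mut x = 10;
    let r1 = &x;
    let first = format!("{}", r1); // last use of the immutable borrow
    let r2 = &mut x;               // now the mutable borrow is allowed
    *r2 += 1;
    format!("{}, {}", first, r2)
}
```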
Bottom Line
The margin for error becomes extremely small when the compiler provides structured, deterministic feedback that the AI can parse and act on. Compare this to what you get in languages without such a strong compile‑time guarantee, and the advantage of Rust for AI‑assisted development is crystal clear.
What a C++ Compiler Gives You When Templates Go Wrong
error: no matching function for call to 'std::vector<std::string>::push_back(int)'
    vector<std::string> v; v.push_back(42);
                             ^
Sure, it tells you that there’s a type mismatch, but imagine this error buried in a 500‑line template backtrace, and an AI having to parse that accurately.
Rust’s error messages are designed to be human‑readable, which accidentally makes them perfect for AI consumption: each error contains the exact source location (line & column numbers), an explanation of which rule was violated, suggestions for how to fix it (when possible), and links to detailed documentation.
When Claude Code or Codex runs cargo check it receives a structured error on which it can directly act. The feedback loop is measured in seconds, not debugging sessions.
Why a CLAUDE.md File Helps
One thing that made our development workflow significantly better at Sayna was investing in a well‑maintained CLAUDE.md file – a guideline document that lives in your repository and gives AI coding tools context about your project structure, conventions, and best practices.
Specifically for Rust projects you want to include:
- Cargo workspace structure – How your crates are organized.
- Error‑handling patterns – Do you use anyhow, thiserror, or custom error types?
- Async runtime – Are you on Tokio, async‑std, or something else?
- Testing conventions – Integration‑test locations, mocking patterns.
- Memory‑management guidelines – When to use Arc, Rc, or plain references.
The combination of Rust’s strict compiler with well‑documented project guidelines creates an environment where AI tools can operate with high confidence; they know the rules and the compiler enforces them.
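As a hypothetical illustration of the error‑handling convention such a file might pin down, here is a hand‑rolled crate‑level error enum (`AudioError` and its variants are invented for this sketch; a real project might derive the same boilerplate with thiserror):

```rust
use std::fmt;

// Hypothetical project convention: one error enum per crate, with a
// Display impl that produces operator-readable messages.
#[derive(Debug)]
enum AudioError {
    BufferUnderrun,
    ProviderUnavailable(String),
}

impl fmt::Display for AudioError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AudioError::BufferUnderrun => write!(f, "audio buffer underrun"),
            AudioError::ProviderUnavailable(name) => {
                write!(f, "provider unavailable: {}", name)
            }
        }
    }
}

// Implementing std::error::Error lets the type compose with `?` and
// boxed-error returns.
impl std::error::Error for AudioError {}
```

Documenting a convention like this in CLAUDE.md means the AI extends the existing enum instead of inventing a new error type per module.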
Real‑World Example at Sayna
At Sayna—WebSocket handling, audio‑processing pipelines, real‑time STT/TTS provider abstraction—we use Rust for all the heavy lifting. These are exactly the kind of systems where memory safety and concurrency guarantees matter.
When Claude Code refactors our WebSocket message handlers, it can’t accidentally corrupt shared state; when it changes our audio‑buffer management, it can’t create a use‑after‑free bug, because the language simply does not allow it.
pub async fn process_audio_chunk(&self, chunk: Bytes) -> Result<()> {
    let mut processor = self.processor.lock().await;
    processor.feed(chunk)?;
    while let Some(result) = processor.next_result().await {
        self.tx.send(result).await?;
    }
    Ok(())
}
An AI tool might need several iterations to get the borrowing and lifetimes right, but each iteration is guided by specific compiler errors: no guessing, no hoping for the best.
Rust Powers OpenAI’s Codex CLI
OpenAI recently rewrote their Codex CLI entirely in Rust. It wasn’t just about performance—though that was definitely a factor—they explicitly mentioned that Rust eliminates entire classes of bugs at compile time. If OpenAI is betting on Rust for their own AI‑coding infrastructure, it tells you something about where this is headed.
The security implications are massive: Codex now runs in sandboxed environments using Rust safety guarantees combined with OS isolation (Landlock on Linux, Sandbox‑exec on macOS). When you have AI‑generated code running on your machine, having compile‑time security guarantees is not optional.
Using AI to Tame Lifetimes
I won’t pretend that Rust is easy to learn: the ownership model takes time to internalize, and lifetimes can be frustrating when you’re starting out. But AI coding tools are actually quite good at dealing with Rust’s sharp edges.
My favorite trick is to tell Claude Code to “fix the lifetimes” and let it figure out which combination of &, ref, as_ref(), and explicit lifetime annotations makes my code compile while I concentrate on the actual logic and architecture.
// Before: Claude, fix this
fn process(&self, data: Vec<String>) -> &str {
    &data[0] // Won’t compile – returning a reference to local data
}

// After: Claude’s solution
fn process<'a>(&self, data: &'a [String]) -> &'a str {
    &data[0] // Works – borrowing from the input parameter
}
This is actually a better way to learn Rust than struggling alone through compiler errors: you see patterns, you understand why certain approaches work, and the AI explains its reasoning when you ask.
Recommendations for Rust + AI Development
- Invest in your CLAUDE.md – Document your patterns, conventions, and architectural decisions. The AI will follow them.
- Use cargo clippy aggressively – Enable all lints. More feedback means better AI output.
- CI with strict checks – Ensure cargo test, cargo clippy, and cargo fmt run on every change; AI tools can verify their work before you even look at it.
- Start with well‑defined tasks – Rust’s type system shines when the boundaries are clear: define your traits and types first, then let AI implement the logic.
- Verify but trust – The compiler catches a lot, but not everything: logic errors still slip through, so code review remains essential.
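As a hedged sketch, the “enable all lints” recommendation can be encoded as crate‑level attributes at the top of lib.rs or main.rs (the exact lint groups and denials are a per‑project choice, not a fixed rule):

```rust
// Crate-level lint configuration: warn on all standard and pedantic
// clippy lints, and refuse any `unsafe` block outright.
#![warn(clippy::all, clippy::pedantic)]
#![deny(unsafe_code)]

fn main() {
    // With these attributes in place, `cargo clippy` emits the extra
    // feedback that AI tools can act on before a human reviews the diff.
    println!("lints configured");
}
```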
The Future of AI‑Assisted Systems Programming
We’re at an interesting inflection point: Rust is growing quickly in systems programming, and AI coding tools are becoming useful for production work. The combination creates something more than the sum of its parts.
At Sayna, our voice‑processing infrastructure handles real‑time audio streams, multiple provider integrations, and complex state management—all built in Rust, with significant AI assistance. This means we can move faster without constantly worrying about memory bugs or race conditions.
If you’ve tried Rust and found the learning curve steep, give it another try with Claude Code or Codex as your pair programmer. The experience is different when you have an AI that can navigate ownership and borrowing patterns while you focus on building things.
The tools are finally catching up to the promise of the language.