ChatGPT’s free tier gets GPT-5.4 mini model with improved coding capabilities

Published: March 17, 2026 at 05:15 PM EDT
2 min read
Source: 9to5Google

[Image: ChatGPT Android app]

OpenAI just announced its latest models, GPT-5.4 mini and nano, with the former now available to free ChatGPT users.

Earlier this month, OpenAI launched its GPT-5.4 model in its higher-priced tiers, but the new mini and nano variants are now arriving for the masses. Available now, the new model brings improvements across the board, especially in coding, reasoning, multimodal understanding, and tool/computer use.

GPT‑5.4 mini significantly improves over GPT‑5 mini across coding, reasoning, multimodal understanding, and tool use, while running more than 2× faster. It also approaches the performance of the larger GPT‑5.4 model on several evaluations, including SWE‑Bench Pro and OSWorld‑Verified. — OpenAI

These models are built for workloads where latency directly shapes the product experience: coding assistants that need to feel responsive, sub‑agents that quickly complete supporting tasks, computer‑using systems that capture and interpret screenshots, and multimodal applications that can reason over images in real time. In these settings, the best model is often not the largest one—it’s the one that can respond quickly, use tools reliably, and still perform well on complex professional tasks.

With the rise of “vibe coding,” the focus on coding capabilities is especially relevant. OpenAI says that GPT-5.4 mini and nano can both handle coding workflows, including targeted edits, codebase navigation, front-end generation, and debugging loops, with low latency.

Beyond being part of ChatGPT’s free and “Go” tiers, GPT-5.4 mini is now live in OpenAI’s API and Codex, while nano is available only through the API, both at much lower cost than GPT-5.4.
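For developers, trying the new model via the API usually comes down to swapping the model identifier in an existing request. As a minimal sketch, assuming the model is exposed under an identifier like "gpt-5.4-mini" (the exact string is not confirmed in the announcement), a chat-completions request body would look like this:

```python
import json

# Hypothetical model identifier -- an assumption, not confirmed by OpenAI.
MODEL = "gpt-5.4-mini"

def build_chat_request(prompt: str) -> dict:
    """Build a chat-completions-style request body for the OpenAI API."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Refactor this function to remove the nested loop.")
print(json.dumps(payload, indent=2))
```

Since mini is priced well below GPT-5.4, pointing latency-sensitive workloads like coding assistants at it is mostly a one-line config change like the one above.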
