New Gemini API updates for Gemini 3


Source: Google Developers Blog

NOV. 25, 2025

Gemini 3, our most intelligent model, is now available for developers via the Gemini API. To support its state‑of‑the‑art reasoning, autonomous coding, multimodal understanding, and powerful agentic capabilities, we’ve rolled out several updates. These changes give you more control over how the model reasons, processes media, and interacts with the outside world.

What’s new in the Gemini API for Gemini 3

  • Simplified parameters for thinking control
    Starting with Gemini 3, a new thinking_level parameter lets you control the maximum depth of the model’s internal reasoning before it produces a response. The levels are relative guidelines rather than strict token guarantees.

    • Set to "high" for complex tasks that require optimal thinking (e.g., strategic business analysis, scanning code for vulnerabilities).
    • Set to "low" for latency‑ and cost‑sensitive applications such as structured data extraction or summarization.
      Read more here.
  • Granular control over multimodal vision processing
    The media_resolution parameter lets you configure how many tokens are used for image, video, and document inputs, balancing visual fidelity with token usage. Options are media_resolution_low, media_resolution_medium, or media_resolution_high, applied per media part or globally. If unspecified, the model uses optimal defaults based on the media type. Higher resolutions improve the ability to read fine text or identify small details but increase token usage and latency. A request sketch covering both thinking_level and media_resolution follows this list.

  • Thought signatures to improve function calling and image generation performance
    Gemini 3 now returns Thought Signatures, encrypted representations of the model’s internal thought process. Passing these signatures back in subsequent API calls preserves the chain of reasoning across a conversation, which is critical for complex, multi‑step agentic workflows; a manual function‑calling sketch follows this list.

    • When using the official SDKs and standard chat history, thought signatures are handled automatically.
    • Function calling: signatures are strictly validated on the current turn; missing signatures result in a 400 error. See details here.
    • Text/chat generation: validation is not strictly enforced, but omitting signatures degrades reasoning and answer quality.
    • Image generation/editing: strict validation applies to all model parts, each of which must include its thoughtSignature. Missing signatures also return a 400 error.
  • Grounding and URL context with structured outputs
    You can now combine Gemini‑hosted tools, specifically Grounding with Google Search and URL context, with structured outputs. This is powerful for agents that need to fetch live information from the web or specific webpages and extract it into precise JSON for downstream tasks. Learn more here. A structured‑output sketch follows this list.

  • Updates to Grounding with Google Search pricing
    To better support dynamic agentic workflows, pricing shifts from a flat US$35 per 1,000 prompts to usage‑based US$14 per 1,000 search queries.
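
To make the thinking and media controls concrete, here is a minimal request sketch assuming the google-genai Python SDK. The thinking_level and media_resolution field names are assumed to mirror the API parameters described above, and the gemini-3-pro-preview model id and invoice.png file are placeholders, not confirmed values.

```python
# Sketch of a single request that sets both controls. Field names are assumed
# to mirror the API parameters named in this post; the model id and file are
# placeholders.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

with open("invoice.png", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Extract the invoice number and the total amount.",
    ],
    config=types.GenerateContentConfig(
        # Low thinking keeps latency and cost down for simple extraction.
        thinking_config=types.ThinkingConfig(thinking_level="low"),
        # High resolution helps the model read fine print; applied globally here.
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
    ),
)
print(response.text)
```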
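
The automatic handling of thought signatures applies when you replay the SDK’s own chat history. If you assemble the contents list yourself, the safest pattern is to append the model’s returned Content object unmodified so its parts keep their signatures. The sketch below illustrates that loop under the same assumptions as before; the get_weather function is invented purely for illustration.

```python
# Sketch of a manual function-calling loop that preserves thought signatures.
# Assumes the google-genai Python SDK and a "gemini-3-pro-preview" model id;
# get_weather is a made-up tool.
from google import genai
from google.genai import types

client = genai.Client()

weather_fn = types.FunctionDeclaration(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"city": types.Schema(type=types.Type.STRING)},
        required=["city"],
    ),
)
config = types.GenerateContentConfig(
    tools=[types.Tool(function_declarations=[weather_fn])]
)

history = [types.Content(role="user", parts=[types.Part(text="Weather in Zurich?")])]

response = client.models.generate_content(
    model="gemini-3-pro-preview", contents=history, config=config
)

# Append the model turn exactly as returned: its parts carry the thought
# signatures that must be sent back on the next call.
history.append(response.candidates[0].content)

# Run the requested function (assuming the model did call it) and append the result.
call = response.function_calls[0]
history.append(
    types.Content(
        role="user",
        parts=[
            types.Part.from_function_response(
                name=call.name,
                response={"temp_c": 7, "condition": "cloudy"},
            )
        ],
    )
)

final = client.models.generate_content(
    model="gemini-3-pro-preview", contents=history, config=config
)
print(final.text)
```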
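
Combining Search grounding with a response schema might look like the following sketch, again assuming the google-genai Python SDK; the Headline model, the query, and the model id are illustrative.

```python
# Sketch of combining Grounding with Google Search and a structured-output schema.
from google import genai
from google.genai import types
from pydantic import BaseModel


class Headline(BaseModel):
    title: str
    source: str
    url: str


client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Find three recent headlines about renewable energy.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
        response_mime_type="application/json",
        response_schema=list[Headline],  # JSON constrained to this schema
    ),
)
print(response.text)
```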

Best practices for using Gemini 3 Pro through our APIs

Gemini 3 Pro has generated excitement for use cases such as vibe coding, zero‑shot generation, mathematical problem solving, and complex multimodal challenges. Follow these guidelines to get the best results:

  • Temperature – Keep the temperature parameter at its default value of 1.0.
  • Consistency & defined parameters – Maintain a uniform structure throughout prompts (e.g., standardized XML tags) and explicitly define ambiguous terms.
  • Output verbosity – Gemini 3 defaults to concise answers. Request a more conversational tone explicitly if needed.
  • Multimodal coherence – Treat text, images, audio, and video as equal‑class inputs. Reference specific modalities clearly so the model synthesizes across them rather than analyzing them in isolation.
  • Constraint placement – Put behavioral constraints and role definitions in the System Instruction or at the very top of the prompt to anchor the model’s reasoning.
  • Long‑context structure – When working with large contexts (books, codebases, long videos), place your specific instructions at the end of the prompt, after the data context. The sketch below applies this together with the temperature and system‑instruction guidance.
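
As a rough sketch of the placement guidance above, assuming the google-genai Python SDK: role and constraints go in the system instruction, temperature stays at the default 1.0, and the task instruction comes after the long data context. The file name, prompt text, and model id are placeholders.

```python
# Sketch applying the best practices: constraints in the system instruction,
# default temperature, instruction placed after the large data context.
from google import genai
from google.genai import types

client = genai.Client()

with open("repo_dump.txt") as f:  # hypothetical large context (e.g., a codebase dump)
    long_document = f.read()

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents=[
        long_document,  # data context first
        "List every public function that is missing a docstring.",  # instruction last
    ],
    config=types.GenerateContentConfig(
        system_instruction="You are a meticulous code reviewer. Answer concisely.",
        temperature=1.0,  # keep the default; do not lower it for Gemini 3
    ),
)
print(response.text)
```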

Gemini 3 Pro is our most advanced model for agentic coding. To help developers maximize its capabilities, we’ve collaborated with our research team to create a System Instructions template that improves performance on several agentic benchmarks.

To start building with these new features, explore the Gemini 3 documentation and read the Developer Guide for technical implementation details.
