What's Coming with LangChainJS and Gemini?

Published: January 8, 2026 at 05:53 PM EST
6 min read
Source: Dev.to

📦 Current Gemini‑LangChainJS Packages

A dizzying number of packages have existed to use Gemini with LangChainJS:

| Package | Purpose | Status |
| --- | --- | --- |
| @langchain/google-genai | Based on the older Google Generative AI package; works only with the AI Studio API (often called the Gemini API). | Unmaintained (≈ 1 year) |
| @langchain/google-gauth | REST‑based library for Google‑hosted environments or other Node‑like systems; can access AI Studio or Vertex AI. | |
| @langchain/google-webauth | Same as google-gauth but designed for environments without a file system. | |
| @langchain/google-vertexai / @langchain/google-vertexai-web | Default to Vertex AI (still able to use AI Studio). | |
| @langchain/google-common | Core helper used by all the REST‑based packages. | |

Why so many?

The original goal was a single REST‑based package that could work on both Node and the browser while supporting Google Cloud’s Application Default Credentials (ADC). In practice we ran into two major hurdles:

  1. Platform differences – Node vs. web required separate handling for file‑system access and credential loading.
  2. API divergence – At the time (late 2023 / early 2024) Google offered two distinct APIs (AI Studio and Vertex AI), each with its own authentication flow.

By January 2024 we finally had a working solution, but Google released its own AI Studio‑only library, adding to the confusion.

🛠️ Lessons Learned

  • Cross‑platform compatibility – Our libraries supported both Node and the browser before Google’s official JS SDK caught up.
  • Rapid Gemini 2.0 support – We added compatibility within days, while Google took over a month.
  • Experimental features – We tried a Security Manager, Media Manager, and support for non‑Gemini models (e.g., Gemma on AI Studio, Anthropic on Vertex AI).

Despite these wins, the package landscape was confusing and one of the libraries had become outdated. It was time for a better solution.

🎉 Introducing a Single Unified Package

Going forward, we’ll support only one package: @langchain/google
Cue cheers and applause 🎉

You can install it with any package manager you prefer:

# npm
npm install @langchain/google

# yarn
yarn add @langchain/google

# pnpm
pnpm add @langchain/google

📚 Using the New Library

The API feels familiar. If you were using ChatGoogle before, you’ll continue to do so, just importing it from the new package:

import { ChatGoogle } from "@langchain/google";

const llm = new ChatGoogle({
  model: "gemini-3-pro-preview",
});

Creating Agents (LangChainJS 1 style)

LangChainJS 1 introduced a new createAgent() helper. It works out‑of‑the‑box with the unified library:

import { createAgent } from "@langchain/google";

const agent = createAgent({
  model: "gemini-3-flash-preview",
  tools: [], // add your tools here
});

Note: Until the new library is fully released, createAgent() may not behave exactly as expected. Keep an eye on the release notes.

🔐 Authentication & Credentials

The new library continues to support:

  • API keys for both AI Studio and Vertex AI (including Express mode).
  • Google Cloud credentials for service accounts and individual users.

Credentials can be supplied in three ways:

  1. Explicitly in code – pass the key/JSON directly to the constructor.
  2. Environment variables – GOOGLE_API_KEY, GOOGLE_APPLICATION_CREDENTIALS, etc.
  3. Application Default Credentials (ADC) – let Google’s SDK locate credentials automatically.
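Illustratively, the precedence among these three options can be sketched as below. This is a sketch only: `resolveCredentials` and its return shape are hypothetical names for illustration, not the library's actual resolver.

```typescript
// Sketch of the precedence described above:
// explicit constructor value → environment variable → ADC.
type CredentialSource =
  | { kind: "explicit"; apiKey: string }
  | { kind: "env"; apiKey: string }
  | { kind: "adc" };

function resolveCredentials(
  explicitKey?: string,
  env: Record<string, string | undefined> = process.env
): CredentialSource {
  if (explicitKey) return { kind: "explicit", apiKey: explicitKey };
  if (env.GOOGLE_API_KEY) return { kind: "env", apiKey: env.GOOGLE_API_KEY };
  // Nothing supplied: defer to Application Default Credentials
  return { kind: "adc" };
}
```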

All communication happens over REST, so the library does not depend on any Google client library.
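For the curious, a REST-only call is just plain HTTPS. The sketch below builds (but does not send) a request against the public AI Studio `generateContent` endpoint; the helper name is hypothetical and this is not the library's internal code.

```typescript
// Sketch: what a bare REST call to the AI Studio API looks like,
// with no Google client library involved. Builds the request only.
function buildGenerateContentRequest(model: string, apiKey: string, prompt: string) {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // API-key auth; ADC would instead send an OAuth bearer token
        "x-goog-api-key": apiKey,
      },
      body: JSON.stringify({
        contents: [{ role: "user", parts: [{ text: prompt }] }],
      }),
    },
  };
}
```

The request object can then be passed straight to `fetch(req.url, req.init)`.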

🚀 What LangChainJS 1 & Gemini 3 Bring

LangChainJS 0 was primarily text‑oriented. As multimodal models emerged, handling images, audio, and video became ad‑hoc and model‑specific. LangChainJS 1 standardizes multimodal support, and the new @langchain/google package is built to leverage those capabilities.

Highlight: Unified response.content

Previously: response.content could be a string or an array, depending on the model and request.

Now: The field follows a consistent schema (see the LangChainJS 1 docs), making it easier to write downstream code that works with any Gemini model—text‑only or multimodal.

📅 What’s Coming “Real Soon Now”

  • Full multimodal agent support (image, audio, video inputs).
  • Integrated Security Manager and Media Manager (re‑imagined for the new REST core).
  • Compatibility layers for any future Gemini releases beyond Gemini 3.

Stay tuned for the upcoming release notes and blog posts that will dive deeper into each feature.

TL;DR

  • One package – @langchain/google.
  • Install with npm/yarn/pnpm.
  • Existing code (ChatGoogle, API‑key auth, ADC) works unchanged.
  • New LangChainJS 1 patterns (createAgent) are ready to use.
  • Multimodal support is now standardized and will keep expanding.

Thanks for following along, and happy building! 🚀

Overview

LangChainJS 1 keeps response.content for backwards compatibility, but the preferred way to get the textual part of a response is now response.text. This guarantees a string value.

Getting Text from a Response

import { ChatGoogle } from "@langchain/google";
import { AIMessage } from "@langchain/core/messages";

const llm = new ChatGoogle({
  model: "gemini-3-flash-preview",
});

const result: AIMessage = await llm.invoke("Why is the sky blue?");
const answer: string = result.text; // guaranteed string

Working with Content Blocks

If you need to differentiate between “thinking” and actual content, use the response.contentBlocks field. It is always an array of the new, consistent ContentBlock.Standard objects.

import { ChatGoogle } from "@langchain/google";
import { AIMessage, ContentBlock } from "@langchain/core/messages";

const llm = new ChatGoogle({
  model: "gemini-3-pro-image-preview",
});

const prompt = "Draw a parrot sitting on a chain‑link fence.";
const result: AIMessage = await llm.invoke(prompt);

result.contentBlocks.forEach((block: ContentBlock.Standard) => {
  if (!("text" in block)) {
    // Non‑text block (e.g., image, audio, video)
    saveToFile(block); // saveToFile: your own persistence helper, not part of the library
  }
});

Sending Media to Gemini

ContentBlock.Standard also works for sending data (images, audio, video) to Gemini.

import { ChatGoogle } from "@langchain/google";
import { HumanMessage, AIMessage, ContentBlock } from "@langchain/core/messages";
import * as fs from "node:fs/promises"; // promise-based API so readFile can be awaited

const llm = new ChatGoogle({
  model: "gemini-3-flash-preview",
});

const dataPath = "src/chat_models/tests/data/blue-square.png";
const dataType = "image/png";
const data = await fs.readFile(dataPath);
const data64 = data.toString("base64");

const content: ContentBlock.Standard[] = [
  {
    type: "text",
    text: "What is in this image?",
  },
  {
    type: "image",
    data: data64,
    mimeType: dataType,
  },
];

const message = new HumanMessage({
  contentBlocks: content,
});

const result: AIMessage = await llm.invoke([message]);
console.log(result.text);

Similar patterns work for audio and video inputs.
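As a sketch of that pattern, a small helper can wrap any media file's bytes into a block. The `{ type, data, mimeType }` shape is carried over from the image example above and assumed to apply to audio and video as well; `toMediaBlock` is a hypothetical name.

```typescript
// Sketch only: assumes the { type, data, mimeType } block shape from
// the image example above carries over to audio and video.
function toMediaBlock(bytes: Uint8Array, mimeType: string) {
  return {
    type: mimeType.split("/")[0], // "image" | "audio" | "video"
    data: Buffer.from(bytes).toString("base64"),
    mimeType,
  };
}

// Usage sketch (hypothetical path):
// const bytes = await fs.readFile("clip.mp3");
// const block = toMediaBlock(bytes, "audio/mpeg");
```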

Release Roadmap

  • Alpha: Early January 2026
  • Final version: Within a month after the alpha

The new package will be @langchain/google. Older versions will receive a version bump and be marked deprecated, acting as thin veneers that delegate all functionality to the new package. This gives developers extra time to migrate without extensive code changes.

Compatibility

  • Full backwards compatibility cannot be guaranteed, but breaking changes should be minimal.
  • The first release of @langchain/google will lack some features; community feedback will guide priorities.

Features Likely Missing at Launch

  • Embedding support
  • Batch support
  • Media manager
  • Security manager
  • Support for non‑Gemini models (which are most important to you?)
  • Support for Veo and Imagen (how would you like to see these?)
  • Google’s Gemini Deep Thinking model and the Interactions API

If any of these (or other) features are critical for you, please let us know. Contributions are welcome—if you’re willing to help integrate them, let’s talk!

Call for Feedback

Your input will shape the roadmap. Please share:

  • Which missing features matter most to you?
  • Any other functionality you’d like to see.
  • Interest in contributing code or documentation.

Acknowledgements

  • Team & Community: Thanks to the LangChain team and the broader community for support.
  • Special thanks: Denis, Linda, Steven, Noble, and Mark—for technical and editorial advice and a friendly voice during tough times.
  • Family: My family, for unwavering support.

About the Author

I’m a Google Developer Expert (GDE) and a LangChain Champion, though I work for neither company. Over the past two years, I’ve contributed to this project out of love for both Google’s and LangChain’s products, aiming to make them better together. I’ll continue this work and hope you’re also building tools that improve the world in your own way.

Feel free to reach out with feedback, feature requests, or collaboration ideas!
