Indian AI lab Sarvam’s new models are a major bet on the viability of open-source AI

Published: February 18, 2026 at 07:55 AM EST
2 min read
Source: TechCrunch

Indian AI lab Sarvam unveiled a new generation of large language models, betting that smaller, efficient open‑source AI models can capture market share from larger U.S. and Chinese rivals.

The launch, announced at the India AI Impact Summit in New Delhi, aligns with the Indian government’s push to reduce reliance on foreign AI platforms and to tailor models to local languages and use cases.

New model lineup

  • 30‑billion‑parameter model – mixture‑of‑experts architecture, 32,000‑token context window, optimized for real‑time conversational use.
  • 105‑billion‑parameter model – mixture‑of‑experts architecture, 128,000‑token context window, suited for complex, multi‑step reasoning tasks.
  • Text‑to‑speech model
  • Speech‑to‑text model
  • Vision model for document parsing
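The two context windows above (32,000 tokens for the 30B model, 128,000 for the 105B) determine how much input a model can process at once. A minimal sketch of how an application might check a prompt against those budgets, assuming the common heuristic of roughly 4 characters per token (actual counts depend on each model's tokenizer):

```python
# Rough token-budget check. The ~4-chars-per-token ratio is a
# heuristic assumption, not a property of Sarvam's tokenizers.
CHARS_PER_TOKEN = 4

def fits_in_context(text: str, context_window: int,
                    reserved_for_output: int = 1024) -> bool:
    """Estimate whether `text` fits in a model's context window,
    leaving `reserved_for_output` tokens for the generated reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= context_window - reserved_for_output

# A ~200,000-character document is ~50,000 estimated tokens:
doc = "x" * 200_000
print(fits_in_context(doc, 32_000))   # False - exceeds a 32k window
print(fits_in_context(doc, 128_000))  # True - fits in a 128k window
```

In practice this is why the longer-window model is the one positioned for multi-step reasoning over large inputs, while the 30B model targets real-time conversation.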

These models represent a sharp upgrade from Sarvam’s 2‑billion‑parameter Sarvam 1 model released in October 2024.

Chart: Sarvam’s 30B model compared with Google’s Gemma 27B and OpenAI’s GPT‑OSS‑20B

The 30B model was pre‑trained on roughly 16 trillion tokens of text, while the 105B model was trained on trillions of tokens spanning multiple Indian languages. Both were trained from scratch, not fine‑tuned on existing open‑source systems.

Chart: Sarvam’s 105B model positioned against OpenAI’s GPT‑OSS‑120B and Alibaba’s Qwen‑3‑Next‑80B

Training infrastructure

Sarvam leveraged computing resources provided under India’s government‑backed IndiaAI Mission, with infrastructure support from data‑center operator Yotta and technical assistance from Nvidia.

Strategy and roadmap

Sarvam plans a measured approach to scaling, focusing on real‑world applications rather than raw size. Co‑founder Pratyush Kumar emphasized:

“We want to be mindful in how we do the scaling. We don’t want to do the scaling mindlessly. We want to understand the tasks which really matter at scale and go and build for them.”

The company intends to open‑source the 30B and 105B models, though details on releasing training data or full training code have not been disclosed.

Upcoming products

  • Sarvam for Work – specialized AI systems, including coding‑focused models and enterprise tools.
  • Samvaad – a conversational AI agent platform.

Funding and investors

Founded in 2023, Sarvam has raised more than $50 million in funding and counts Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners (formerly Sequoia Capital India) among its investors.

