The enterprise AI land grab is on. Glean is building the layer beneath the interface.
Source: TechCrunch
Glean’s shift to the intelligence layer
Seven years ago, Glean set out to be the “Google for enterprise”—an AI‑powered search tool that indexes and searches across a company’s SaaS ecosystem (Slack, Jira, Google Drive, Salesforce, etc.). Today, the company’s strategy has moved from building a better enterprise chatbot to becoming the connective tissue between large language models (LLMs) and enterprise systems.
“The layer we built initially – a good search product – required us to deeply understand people and how they work and what their preferences are,” Arvind Jain told TechCrunch on the recent Equity podcast. “All of that is now becoming foundational in terms of building high‑quality agents.”
Jain emphasizes that while LLMs are powerful, they are generic:
“The AI models themselves don’t really understand anything about your business. They don’t know who the different people are, what kind of work you do, or what products you build. So you have to connect the reasoning and generative power of the models with the context inside your company.”
Glean’s pitch is that it already maps that context and can sit between the model and the enterprise data.
How Glean builds the layer
Model access
Glean acts as an abstraction layer that lets enterprises switch between or combine models (ChatGPT, Gemini, Claude, and open‑source alternatives) as capabilities evolve. This approach positions LLM providers as partners rather than competitors:
“Our product gets better because we’re able to leverage the innovation that they are making in the market,” Jain said.
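Glean has not published how this abstraction works internally. As a rough illustration only, a model-abstraction layer of this kind typically registers each provider behind one common interface so applications can switch models without code changes; all names below (`ModelRouter`, the stub providers) are hypothetical:

```python
from typing import Callable, Dict

# Hypothetical sketch of a model-abstraction layer: each provider is
# registered behind a common "complete" interface, so callers can swap
# or combine models without changing application code.
class ModelRouter:
    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        self._providers[name] = complete

    def complete(self, prompt: str, provider: str) -> str:
        if provider not in self._providers:
            raise KeyError(f"unknown provider: {provider}")
        return self._providers[provider](prompt)

# Stubs stand in for real API clients (OpenAI, Google, Anthropic, etc.).
router = ModelRouter()
router.register("gpt", lambda p: f"[gpt] {p}")
router.register("gemini", lambda p: f"[gemini] {p}")

print(router.complete("Summarize Q3 sales", provider="gemini"))
```

The point of the indirection is that the provider choice becomes a runtime parameter rather than a hard dependency, which is what lets Glean treat LLM vendors as interchangeable partners.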
Connectors
Deep integrations with tools such as Slack, Jira, Salesforce, and Google Drive allow Glean to map information flow across systems and enable agents to act directly within those tools.
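The shape of such a connector can be sketched as an interface with two capabilities: reading content out of a tool and acting inside it. This is a hypothetical sketch, not Glean's API; the `Connector` protocol and `JiraConnector` stub are invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Document:
    source: str   # e.g. "jira", "slack", "drive"
    doc_id: str
    text: str

class Connector(Protocol):
    """Hypothetical connector interface: each integrated tool exposes
    a way to read its content and to take actions on the user's behalf."""
    def fetch(self, query: str) -> List[Document]: ...
    def act(self, action: str, payload: dict) -> bool: ...

class JiraConnector:
    def fetch(self, query: str) -> List[Document]:
        # A real connector would call the tool's REST API here.
        return [Document("jira", "PROJ-1", f"ticket matching {query!r}")]

    def act(self, action: str, payload: dict) -> bool:
        # e.g. add a comment or transition a ticket via the API.
        return action in {"comment", "transition"}

connector = JiraConnector()
docs = connector.fetch("login bug")
print(docs[0].source, docs[0].doc_id)
```

Separating `fetch` from `act` matters for agents: indexing requires only read access, while acting in the tool requires write scopes that can be granted per integration.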
Governance and security
A permissions‑aware governance and retrieval layer ensures that the right information is returned to the right user:
- It filters results based on the requester’s access rights.
- It verifies model outputs against source documents, generates line‑by‑line citations, and respects existing permissions.
- It mitigates hallucinations by grounding responses in verified internal data.
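The first and third of these steps can be sketched together: filter the candidate documents by the requester's existing permissions, then build the answer only from the documents that survive, with per-source citations. All names here (`Doc`, `allowed_users`, `grounded_answer`) are assumptions for illustration, not Glean's implementation:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_users: Set[str]   # mirrors the source system's ACL

def permission_filter(docs: List[Doc], user: str) -> List[Doc]:
    """Keep only documents the requesting user may already see."""
    return [d for d in docs if user in d.allowed_users]

def grounded_answer(docs: List[Doc], user: str) -> str:
    """Answer only from permitted documents, citing each source."""
    visible = permission_filter(docs, user)
    if not visible:
        return "No accessible sources."
    citations = "; ".join(f"[{d.doc_id}]" for d in visible)
    return f"Answer drawn from {len(visible)} source(s): {citations}"

corpus = [
    Doc("hr-42", "salary bands", {"alice"}),
    Doc("wiki-7", "deploy guide", {"alice", "bob"}),
]
print(grounded_answer(corpus, "bob"))   # bob sees only wiki-7
```

The key design choice is that filtering happens before the model ever sees the text: a restricted document never enters the prompt, so no post-hoc output scrubbing is needed.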
In large organizations, this layer can be the difference between piloting AI solutions and deploying them at scale. Enterprises cannot simply dump all internal data into a model and rely on a post‑hoc wrapper to sort out security and relevance.
Market dynamics and competition
Microsoft and Google already control much of the enterprise workflow surface and are eager to deepen their AI integration. If Copilot or Gemini can access the same internal systems with identical permissions, the question arises: does a standalone intelligence layer still matter?
Jain argues that enterprises prefer a neutral infrastructure layer over a vertically integrated assistant, avoiding lock‑in to a single model or productivity suite.
Funding and outlook
Glean raised a $150 million Series F in June 2025, nearly doubling its valuation to $7.2 billion (TechCrunch: https://techcrunch.com/2025/06/10/enterprise-ai-startup-glean-lands-a-7-2b-valuation/). Unlike frontier AI labs, Glean does not require massive compute budgets.
“We have a very healthy, fast‑growing business,” Jain said.
The enterprise AI land grab is on, and Glean is positioning itself as the essential layer beneath the interface.