Aperture: Stop choosing between safe AI and fast AI

Published: February 17, 2026 at 08:45 AM EST
6 min read

Source: Tailscale Blog

Building a Gateway — Why We Chose Aperture

“Building a gateway is the obvious choice,” a tech founder told me last summer while we were discussing the proliferation of MCP servers, fast‑and‑loose AI network security, and identity management.

They were right, but I didn’t think the world needed yet another gateway. Instead, my team and I encouraged others to build on top of Tailscale, because it gives connectivity, security, and identity out of the box.


The Problem with API Keys

Even with many gateway products on the market, CISOs and CTOs keep running into the same pain points:

  • Key sprawl – keys get traded, stolen, or accidentally checked into repositories.
  • Lack of visibility – security teams can’t see who is using which AI product or how much they’re spending.
  • Revocation headaches – revoking a key often breaks existing integrations.

These issues create a tough spot: teams want the bells and whistles of modern AI services, but they can’t manage the underlying keys safely.


Introducing Aperture

In November we decided to showcase a solution that lives inside your Tailnet:

  • Aperture – a gateway built with tsnet that appears as a regular node on your Tailscale network.
  • It implicitly knows the identity of every client, so you can store all your API keys inside Aperture and have everyone point their coding agents to it.
  • Every API call is logged with an attached identity, giving you security, visibility, and ease‑of‑use.
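Aperture's internals aren't published here, but the pattern behind those three bullets is easy to sketch. In this illustrative Python sketch (all names, keys, and structures are invented for the example), the network layer has already authenticated the caller, so the gateway can inject a centrally stored provider key and record each call with the caller's identity attached:

```python
# Illustrative sketch only: real gateway internals are more involved.
# The network (Tailscale, in Aperture's case) has already told us who the
# caller is, so no per-user API keys ever leave the gateway.

import datetime

# Central key store: provider keys live only inside the gateway.
PROVIDER_KEYS = {
    "openai": "sk-stored-centrally",
    "bedrock": "aws-credentials-stored-centrally",
}

AUDIT_LOG = []

def forward_headers(client_identity: str, provider: str) -> dict:
    """Build upstream headers: inject the stored key, never a client key."""
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "who": client_identity,   # known implicitly from the network identity
        "provider": provider,
    })
    return {"Authorization": f"Bearer {PROVIDER_KEYS[provider]}"}

headers = forward_headers("alice@example.com", "openai")
print(headers["Authorization"])   # the stored key, not a per-user key
print(AUDIT_LOG[0]["who"])        # every call is attributable
```

The key point is the direction of trust: the client never holds a provider credential, and every upstream request is attributable to a person by construction.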

“Aperture takes the friction out of our GenAI workflows, eliminating API key management, centralizing access to model providers including Bedrock, and giving us clear, granular visibility into usage and spend.”
Louis Gardner, Director of Security, Infrastructure and IT at Corelight (early Aperture user & partner)

Aperture diagram


Real‑World Impact

  • Tailscale teams are already seeing smoother workflows. One product leader can now craft and deploy LLM‑powered internal tools without requesting API keys; a two‑line config change is all that’s needed.
  • The result? A “huge unlock” for rapid development while staying secure.

Our goal is simple: make AI usage so easy and secure that teams never have to choose between moving fast and staying safe. Early customers love Aperture and are pushing us to expand its coverage and capabilities.


Learn More

Feel free to try it out and let us know how it helps your organization!

Customer‑led upgrades

We’ve had so much fun building and dog‑fooding Aperture that we couldn’t wait to get it into users’ hands. In early January we started inviting a few customers to share in the fun, and later that month we quietly launched a waitlist (blog post). These early adopters have pushed us in three broad directions.


More providers: Bedrock and Vertex support

I’m happy to announce that we’ve recently added support for AWS Bedrock and Google Vertex. These enterprise‑grade providers are heavily used by many of Tailscale’s larger customers and can be tricky to set up and deploy. Aperture encapsulates much of that complexity, drastically simplifying the authentication process (honestly, I was shocked).

Aperture already supports:

  • All major LLM providers natively
  • LLM inference providers compliant with OpenAI’s v1 API (reference)
  • Self‑hosted LLMs
  • Most major cloud‑AI endpoints

We’ll continue to add more providers while ensuring Aperture manages the complexity and presents a simple, clean interface to its users.
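Compatibility with OpenAI's v1 API is what makes the switch so small for clients: typically only the base URL changes (for many agents, that's the two-line config change mentioned above), while the request path and body stay the same. A minimal stdlib-only sketch, using a hypothetical tailnet hostname:

```python
# Sketch of why OpenAI v1 compatibility matters: only base_url distinguishes
# talking to api.openai.com from talking to a gateway on the tailnet.
# The hostname below is hypothetical, not a real Aperture endpoint.
import json

def chat_completions_request(base_url: str, model: str, prompt: str):
    """Build an OpenAI v1-style chat-completions request (URL + JSON body)."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = chat_completions_request(
    "https://aperture.your-tailnet.ts.net/v1", "gpt-4o-mini", "hello"
)
print(url)
```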

Aperture provider diagram


More integrations: real‑time analysis with partners

An S3 integration was one of the first feature requests we built. After all, coding agents generate a lot of logs, and security teams want to analyze these for PII, keys, and other credentials being improperly shared. There are many tools and services for this, ranging from post‑hoc log analysis to the real‑time analysis and dynamic access‑policy management that CISOs are now demanding.
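As a toy illustration of the post-hoc case, a scanner only needs the logs in one place to start flagging leaked credentials. The pattern below is deliberately simplistic (real scanners use far richer detection than two regexes):

```python
# Toy post-hoc log scan: flag lines containing something shaped like a secret.
# The two patterns (an "sk-..." token and an AWS access key ID prefix) are
# illustrative; production tools detect many more credential formats.
import re

KEY_PATTERN = re.compile(r"\b(sk-[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16})\b")

def find_leaks(log_lines):
    """Return the log lines that appear to contain a credential."""
    return [line for line in log_lines if KEY_PATTERN.search(line)]

leaks = find_leaks([
    "agent: ran tests, all green",
    "agent: export OPENAI_API_KEY=sk-aaaaaaaaaaaaaaaaaaaaaaaa",
])
print(len(leaks))  # 1
```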

Aperture is the natural integration point for many of these services, so we’re actively building APIs and integrations with companies such as:

  • Cribl – data‑engine platform for routing audit logs. “We get detailed audit logs out of the box, route them through Cribl Stream, and land them wherever they’re most useful. That means our security team gets real observability and control over AI usage without slowing developers down.” – Clint Sharp, CEO
  • Oso – policy engine that protects prompts, agents, and data. “You can start with visibility, risk scoring, and alerting… then progressively layer deterministic controls and move toward automated least‑privilege.” – Graham Neray, Co‑founder & CEO
  • Apollo Research – watcher tool that scans coding‑agent logs for high‑level overviews and real‑time data. “For everyone using coding agents, it should be easy and effortless to know exactly what the agent is doing and when it fails.” – Marius Hobbhahn, CEO & Co‑founder
  • Cerbos – enterprise authorization platform that evaluates tool calls before execution. “Visibility into agent activity is necessary but insufficient on its own. Alerts only fire after a tool call has already been made. Cerbos evaluates every request before execution and returns a binding decision that is deterministic, auditable, and traceable to a specific policy.” – Emre Baran, CEO
These integrations enable feedback loops for dynamic network security and access control—capabilities that will become increasingly critical as AI proliferates.

Expect to see us doing a lot more work to enable even more partners and integrations.

Aperture integration diagram

More visibility and control: know where your tokens are going

Configuring Tailscale access policies got a major upgrade with the visual editor, and we’ve brought the same experience to Aperture.

What’s new

  • Per‑user and per‑role model access – fine‑tune which models each user or role can call.
  • Rich visualizations – see token consumption at a glance, broken down by:
    • User
    • User agent
    • Model
  • Actionable insights – use the data for dynamic cost‑optimization and future feature ideas.
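Aperture's policy format isn't shown in this post, so as a purely hypothetical sketch, per-role model access boils down to a check like this on every request (role and model names are invented):

```python
# Hypothetical shape of a per-role model-access check; Aperture's actual
# policy language and editor are not reproduced here.
MODEL_ACCESS = {
    "engineering": {"gpt-4o", "claude-sonnet"},
    "support": {"gpt-4o-mini"},
}

def can_call(role: str, model: str) -> bool:
    """Return True if the role's policy allows calling the model."""
    return model in MODEL_ACCESS.get(role, set())

print(can_call("support", "gpt-4o"))       # False: support gets the mini model only
print(can_call("engineering", "gpt-4o"))   # True
```

Because every request already carries an identity, the same check point is where the token-consumption breakdowns by user, user agent, and model come from.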

Why it matters

Aperture is already a critical piece of our internal infrastructure and is quickly becoming essential for our earliest customers. Even in these early days we’re delivering:

  • Safer AI deployments
  • Faster time‑to‑value for coding agents
  • Real‑time cost control for security, platform, and engineering teams

What’s coming next

  • Advanced cost‑control mechanisms
  • Deeper partner integrations
  • Real‑time security‑team controls over AI usage

Ready to try it?
If you’re a security, platform, or engineering team working with coding agents, sign up for Aperture and we’ll get you started.

Already using Aperture? We’d love to hear your feedback and learn what you’d like to see next.
