Token Compression Gateway for your agents

Edgee compresses prompts before they reach LLM providers.
Same code, fewer tokens, lower bills.

Up to 50% cost reduction
< 1 min to install

How to use Edgee

Whether you’re using a coding agent or building an app, Edgee compresses your LLM traffic in minutes.

For coding agents

Start saving tokens in 1 minute

Install Edgee CLI and connect it to your coding agent. No code changes required.

  • No code changes: works as a transparent proxy for your agent
  • Instant savings: token compression kicks in on the first request
  • Works with any agent: Claude Code, Codex, Cursor and more

Configure your coding agent

Connect Edgee to your AI coding assistant and start saving tokens in 1 minute.

1. Choose your coding agent
2. Install the Edgee CLI:
   curl -fsSL https://edgee.ai/install.sh | bash
3. Start saving tokens:
   edgee launch claude

Why Edgee AI Gateway?

An edge intelligence layer for your AI traffic

Edgee sits between your AI agents and LLM providers, behind a single OpenAI-compatible API. It adds edge-level intelligence — token compression, team management, observability, retries and fallback, BYOK — so you cut token costs and extend context windows without changing a line of application code.
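Because Edgee exposes a single OpenAI-compatible API, most SDKs and agents can be pointed at it by overriding the standard base-URL environment variables. A minimal sketch — the gateway URL below is a placeholder, not the documented endpoint; use the one from your Edgee dashboard:

```shell
# Point any OpenAI-compatible SDK or agent at the Edgee gateway.
# NOTE: the base URL is illustrative -- substitute your actual Edgee endpoint.
export OPENAI_BASE_URL="https://<your-edgee-gateway>/v1"
export OPENAI_API_KEY="<your-edgee-api-key>"   # or your own provider key with BYOK
```

With these set, existing applications send their traffic through Edgee unchanged; the gateway compresses it and forwards requests to the underlying provider.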

Token compression

Cut tool-result payloads by 60–90% at the edge, with sub-15 ms P50 latency. Semantically lossless for coding tasks: same model output, fewer tokens billed.


Team Management

Get full visibility into how your team uses coding agents. Track cost per repo and PR, manage team seats, and keep your team unblocked with automatic OSS model fallback.


Bring Your Own Keys

Use Edgee’s keys for convenience, or plug in your own provider keys for billing control and custom models.


Observability

Monitor latency, errors, usage, and cost per model, per app, and per environment.


Retry & Fallback

When a provider request fails, Edgee automatically retries and falls back to the next available provider, with no changes to your code.


The vision behind Edgee

Every technological shift creates a new foundation: the web had bandwidth, the cloud had compute, and AI has tokens. In a world powered by models, intelligence has a cost: tokens flow through every interaction, decision, and response.

At Edgee, we believe intelligence should move efficiently, closer to users, intent, and action. It should be compressed, routed, and optimized so decisions happen instantly. Hear from Sacha, Edgee’s co-founder, on how AI scales by mastering how intelligence moves.