How much can I save with Edgee?
Edgee reduces LLM costs through two mechanisms:

Token Compression (up to 50% input token reduction):
- RAG pipelines: 40-50% reduction on document-heavy contexts
- Long contexts: 30-45% reduction on conversation histories
- Document analysis: 35-50% reduction on summarization tasks
- Multi-turn agents: 25-40% reduction as conversations grow
Intelligent Routing:
- Automatically routes to cheaper models when quality thresholds are met
- Combines with compression for 60-70% total cost reduction
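As a rough sketch of how the two mechanisms compound (the rates below are illustrative, not guaranteed): routing savings apply to the token bill that remains after compression, so the effects multiply rather than add.

```python
def combined_savings(compression_rate: float, routing_rate: float) -> float:
    """Total savings when routing applies to the already-compressed bill."""
    remaining = (1 - compression_rate) * (1 - routing_rate)
    return 1 - remaining

# Example: 50% compression, plus routing that trims the remaining bill by 30%
print(f"{combined_savings(0.50, 0.30):.0%}")  # → 65%
```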
How does token compression work?
Token compression happens automatically at the edge on every request through a four-step process:
- Semantic Analysis: Identify redundant context and compressible sections
- Context Optimization: Compress repeated context (common in RAG) and remove unnecessary formatting
- Instruction Preservation: Keep critical instructions, few-shot examples, and task requirements intact
- Quality Verification: Ensure compressed prompts maintain semantic equivalence
Compression is most effective on:
- Prompts with repeated context (RAG document chunks)
- Long system instructions with verbose formatting
- Multi-turn conversations with growing history
- Document analysis with redundant information
Every response includes a compression field with metrics (input_tokens, saved_tokens, rate) so you can track your savings in real time.
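A small sketch of reading those metrics. The field names (input_tokens, saved_tokens, rate) come from this FAQ; the rest of the response shape is an assumption for illustration.

```python
# Hypothetical response body; only the `compression` keys are from the FAQ.
response = {
    "model": "gpt-4o",
    "compression": {"input_tokens": 1200, "saved_tokens": 540, "rate": 0.45},
}

comp = response["compression"]
billed_tokens = comp["input_tokens"] - comp["saved_tokens"]
print(f"saved {comp['rate']:.0%} ({comp['saved_tokens']} tokens), "
      f"billed for {billed_tokens}")
```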
What is Edgee?
Edgee is an edge-native AI Gateway that reduces LLM costs by up to 50% through token compression. It sits between your application and LLM providers like OpenAI, Anthropic, Google, and Mistral, providing a single API to access 200+ models with built-in intelligent routing, cost tracking, automatic failovers, and full observability.
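A hedged sketch of what "a single API" means in practice: one payload shape for every provider, so switching models is a one-field change. The endpoint URL and the model identifiers below are assumptions modeled on OpenAI-compatible chat APIs, not Edgee's documented values.

```python
import json

# Hypothetical gateway endpoint; Edgee's real base URL may differ.
EDGEE_URL = "https://api.edgee.example/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Same payload shape regardless of provider; only `model` changes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

openai_req = build_request("openai/gpt-4o", "Summarize this document.")
anthropic_req = build_request("anthropic/claude-3-5-sonnet", "Summarize this document.")

# Only the model field differs between the two requests.
print(json.dumps(openai_req, indent=2))
```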
How is Edgee different from using LLM APIs directly?
When you use LLM APIs directly, you're locked into a single provider's API format, you have no visibility into costs until your bill arrives, no automatic failovers when providers go down, and your logs are scattered across multiple dashboards.

Edgee gives you:
- Up to 50% cost reduction — automatic token compression at the edge
- Real-time savings tracking — see exactly how many tokens and dollars you’ve saved
- One API for all providers — switch models with a single line change
- Real-time cost tracking — know exactly what each request costs
- Automatic failovers — when OpenAI is down, Claude takes over seamlessly
- Unified observability — all your AI logs in one place
- Intelligent routing — optimize for cost or performance automatically
Which LLM providers does Edgee support?
Edgee supports all major LLM providers:
- OpenAI
- Anthropic
- Mistral
- Meta
- Cohere
- AWS Bedrock
- Azure OpenAI
- And more
How much latency does Edgee add?
Edgee adds less than 10ms of latency at the p99 level. Our edge network processes requests at the point of presence closest to your application, minimizing round-trip time.

For most AI applications, where LLM inference takes 500ms-5s, this overhead is negligible — typically less than 1-2% of total request time.
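The overhead claim can be sanity-checked with quick arithmetic, using the fast end of the inference range quoted above:

```python
gateway_overhead_ms = 10   # p99 latency added at the edge
inference_ms = 500         # fast end of typical LLM inference

overhead_pct = gateway_overhead_ms / (gateway_overhead_ms + inference_ms)
print(f"{overhead_pct:.1%}")  # → 2.0% (and far less for slower inferences)
```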
How does intelligent routing work?
Edgee’s routing engine analyzes each request and selects the optimal model based on your configuration:
- Cost strategy: Routes to the cheapest model capable of handling the request
- Performance strategy: Always uses the fastest, most capable model
- Balanced strategy: Finds the optimal trade-off within your latency and cost budgets
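The three strategies above can be sketched as a toy selection function. The model names, prices, and latencies are invented, and Edgee's actual scoring is internal; this only illustrates the shape of the decision.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k: float  # USD per 1K tokens (invented numbers)
    latency_ms: int

MODELS = [
    Model("small-fast", 0.15, 300),
    Model("mid", 0.50, 600),
    Model("frontier", 3.00, 1200),
]

def route(strategy: str, max_latency_ms: int = 800, max_cost: float = 1.0) -> Model:
    if strategy == "cost":
        return min(MODELS, key=lambda m: m.cost_per_1k)
    if strategy == "performance":
        # crude proxy: treat the priciest model as the most capable
        return max(MODELS, key=lambda m: m.cost_per_1k)
    # balanced: cheapest model that fits both the latency and cost budgets
    candidates = [m for m in MODELS
                  if m.latency_ms <= max_latency_ms and m.cost_per_1k <= max_cost]
    return min(candidates, key=lambda m: m.cost_per_1k)

print(route("balanced").name)
```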
What happens when a provider goes down?
Edgee automatically handles provider failures:
- Detection: We detect issues within seconds through health checks and error monitoring
- Retry: For transient errors, we retry with exponential backoff
- Failover: For persistent issues, we route to your configured backup models
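The retry-then-failover flow above can be sketched as a loop. This is a minimal illustration, not Edgee's implementation: call_model stands in for the real provider call, and the error classification is simplified to a single exception type.

```python
import time

class TransientError(Exception):
    """Stand-in for retryable provider errors (timeouts, 429s, 5xx)."""

def call_with_failover(call_model, models, max_retries=3, base_delay=0.5):
    for model in models:                 # primary first, then configured backups
        for attempt in range(max_retries):
            try:
                return call_model(model)
            except TransientError:
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        # retries exhausted: treat as persistent, fail over to the next model
    raise RuntimeError("all configured models failed")
```

In practice the persistent-failure signal would come from health checks rather than exhausted retries, but the control flow is the same.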
How does cost tracking work?
Every response from Edgee includes a cost field showing exactly how much that request cost in USD. You can also:
- View aggregated costs by model, project, or time period in the dashboard
- Set budget alerts at 80%, 90%, 100% of your limit
- Receive webhook notifications when thresholds are crossed
- Export usage data for your own analysis
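A small sketch of the threshold logic behind budget alerts. The 80/90/100% thresholds come from this FAQ; the function itself is illustrative, not Edgee's API.

```python
# Alert thresholds from the FAQ: 80%, 90%, 100% of the budget.
ALERT_THRESHOLDS = (0.80, 0.90, 1.00)

def crossed_thresholds(spend: float, budget: float) -> list[float]:
    """Return every alert threshold the current spend has crossed."""
    usage = spend / budget
    return [t for t in ALERT_THRESHOLDS if usage >= t]

print(crossed_thresholds(92.0, 100.0))  # → [0.8, 0.9]
```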
Can I use my own API keys for LLM providers?
Yes! Edgee supports two modes:
- Edgee-managed keys: We handle provider accounts and billing. Simple, but you pay our prices (with volume discounts available).
- Bring Your Own Key (BYOK): Use your existing provider API keys. You get your negotiated rates, we just route and observe.
Is Edgee compliant with GDPR, SOC 2?
Yes. Edgee is designed for compliance-sensitive workloads:
- SOC 2 Type II certified
- GDPR compliant with DPA available
- Regional routing to keep data in specific jurisdictions
How can I contact support?
We’re here to help:
- Email: [email protected]
- Discord: Join our community
- GitHub: Open an issue