Edgee’s OpenAI-compatible API works seamlessly with Cursor, letting you route AI requests through Edgee to gain observability, cut costs, and access any provider’s models from one place.
Setup
Configure your API key
In Cursor’s settings, scroll down to the API Keys section and do the following:
- Enable OpenAI API Key
- Paste in your Edgee API key
- Enable Override OpenAI Base URL
- Set the base URL to:
You can find your Edgee API key in the Edgee Console.
Add a model
Scroll up to Add or search model and enter any model from the Edgee model list. Click Add, then toggle it on.
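Outside Cursor, the same routing works with any OpenAI-compatible client. The sketch below shows the shape of a chat completion request sent through Edgee; the base URL and model name here are illustrative placeholders, not confirmed values (use the base URL and a model from the Edgee model list, as configured above):

```python
import json

# Illustrative placeholders: substitute the real Edgee base URL and your API key.
EDGEE_BASE_URL = "https://api.edgee.example/v1"
API_KEY = "your-edgee-api-key"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat completion request, as any
    OpenAI SDK client would send it once its base URL points at Edgee."""
    return {
        "url": f"{EDGEE_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # any model from the Edgee model list
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("openai/gpt-4o", "Hello")
print(req["url"])
```

Because the endpoint follows the OpenAI wire format, existing OpenAI SDKs work unchanged: only the base URL and API key differ.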
Token compression is only available when routing through Edgee. This includes BYOK: you can bring your own provider keys, and compression still applies as long as requests go through Edgee.
Benefits
Any model, one key
Access models from OpenAI, Anthropic, Mistral, and more using a single Edgee API key.
Cost reduction
Edgee’s token compression reduces costs by up to 30% on supported models with no change to output quality.
Observability
Every request is logged in the Edgee Console with token counts, latency, and cost breakdowns.
Reliability
Automatic retry and fallback across providers keeps your Cursor sessions running even when a provider has issues.
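Edgee handles retry and fallback server-side, so none of this is needed in your own code. Purely as a conceptual sketch (provider names and the retry count are illustrative assumptions, not Edgee’s documented policy), the routing logic resembles:

```python
# Conceptual sketch of retry-with-fallback routing. Provider order and
# retry count are illustrative assumptions, not Edgee's actual behavior.
def route_with_fallback(send, providers, retries_per_provider=2):
    """Try each provider in order, retrying transient failures,
    and return the first successful response."""
    last_error = None
    for provider in providers:
        for _attempt in range(retries_per_provider):
            try:
                return send(provider)
            except ConnectionError as exc:  # transient provider failure
                last_error = exc
    raise RuntimeError("all providers failed") from last_error

# Example: the first provider is down, the second succeeds.
def fake_send(provider):
    if provider == "provider-a":
        raise ConnectionError("provider-a down")
    return f"response from {provider}"

print(route_with_fallback(fake_send, ["provider-a", "provider-b"]))
# → response from provider-b
```

The effect for a Cursor session is that a single provider outage surfaces as slightly higher latency rather than a failed request.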
Next Steps
- Browse the model list to see all available models
- Set up observability to monitor usage and costs
- Explore retry and fallback for resilient routing