Estimate token count for a set of messages without making an LLM call

curl --request POST \
  --url https://api.edgee.ai/v1/count_tokens \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'

Response:

{
  "input_tokens": 42
}
Estimates the number of input tokens for a set of messages without sending the request to an LLM provider. Useful for pre-flight cost estimation, rate-limit planning, and prompt optimization.

Documentation Index
Fetch the complete documentation index at: https://www.edgee.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Array of message objects to count tokens for. Accepts both OpenAI chat format (with system, user, and assistant roles) and Anthropic Messages format; the format is auto-detected from the message structure. Provide a tokenizer explicitly to override auto-detection.
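The gateway's detection logic isn't documented here, but the structural difference between the two accepted formats can be illustrated with a toy heuristic (an assumption for illustration, not the gateway's implementation): Anthropic Messages content is typically a list of typed content blocks, while OpenAI chat content is a plain string.

```python
def detect_format(messages: list) -> str:
    # Toy heuristic only, not the gateway's actual logic:
    # Anthropic Messages content is usually a list of typed blocks
    # ({"type": "text", ...}), OpenAI chat content is a plain string.
    for msg in messages:
        if isinstance(msg.get("content"), list):
            return "anthropic"
    return "openai"

openai_msgs = [
    {"role": "user", "content": "What is the capital of France?"},
]
anthropic_msgs = [
    {"role": "user",
     "content": [{"type": "text", "text": "What is the capital of France?"}]},
]
```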
Optional model hint to improve tokenizer selection. When provided, the gateway uses this to choose the most appropriate tokenizer for the target model.
Example: "openai/gpt-5.2"
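Passing the model hint just adds one more top-level field to the request body. A sketch; the field name "model" is an assumption based on this page's "model hint" description and example value:

```python
import json

# Request body with the optional model hint included. The "model"
# field name is assumed from the parameter description above;
# the value mirrors this page's example.
payload = {
    "model": "openai/gpt-5.2",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}
body = json.dumps(payload)
```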
Token count estimated successfully
Estimated number of input tokens for the provided messages. This is an approximation; counts may differ from provider-native tokenizers. Use it for estimation and budgeting, not exact billing.
Required range: x >= 0
Example: 42
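Because the count is an approximation, budgeting code should pad it rather than trust it exactly. A sketch with a hypothetical per-million-token price and safety margin (both numbers are placeholders, not edgee.ai or provider pricing):

```python
def estimate_cost(input_tokens: int,
                  usd_per_million_tokens: float = 3.0,
                  safety_margin: float = 1.10) -> float:
    # Pad the estimate, since the gateway's count may differ from
    # the provider's native tokenizer. Price and margin are
    # illustrative placeholders.
    padded = input_tokens * safety_margin
    return padded * usd_per_million_tokens / 1_000_000

cost = estimate_cost(42)  # 42 is the input_tokens value from the example response
```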