Create chat completions using the Edgee AI Gateway API

Request example:

curl --request POST \
  --url https://api.edgee.ai/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "openai/gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "<string>",
      "name": "<string>",
      "tool_call_id": "<string>",
      "refusal": "<string>",
      "tool_calls": [
        {
          "id": "<string>",
          "type": "function",
          "function": {
            "name": "<string>",
            "arguments": "<string>"
          }
        }
      ]
    }
  ],
  "max_tokens": 2,
  "stream": false,
  "stream_options": {
    "include_usage": true
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "<string>",
        "description": "<string>",
        "parameters": {}
      }
    }
  ],
  "tool_choice": "none",
  "tags": [
    "<string>"
  ]
}
'

Response example:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 10,
    "total_tokens": 20,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens_details": {
      "reasoning_tokens": 0
    }
  }
}
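For orientation, here is a minimal concrete request built from the fields above; the user-role message, the token environment variable, and the max_tokens value are illustrative rather than taken from the reference.

curl --request POST \
  --url https://api.edgee.ai/v1/chat/completions \
  --header "Authorization: Bearer $EDGEE_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "openai/gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "max_tokens": 100
}
'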
Body parameters

model
ID of the model to use. Format: {author_id}/{model_id} (e.g. openai/gpt-4o)
Example: "openai/gpt-4o"
messages
A list of messages comprising the conversation so far. Each message carries a role and content, plus optional name, tool_call_id, refusal, and tool_calls fields (see the request example above).
max_tokens
The maximum number of tokens that can be generated in the chat completion.
Required range: x >= 1

stream
If set, partial message deltas will be sent, as in OpenAI. Streamed chunks are sent as Server-Sent Events (SSE).
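As a sketch of how streaming is consumed, the request below enables stream; the chunk payloads shown after it are an assumption based on the OpenAI-style chat.completion.chunk format this description points to.

curl --no-buffer --request POST \
  --url https://api.edgee.ai/v1/chat/completions \
  --header "Authorization: Bearer $EDGEE_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "openai/gpt-4o",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true
}
'

Each event arrives on a data: line and the stream ends with a [DONE] sentinel, for example:

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: [DONE]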
stream_options
Options for streaming response. Supports include_usage: when true, usage statistics are included in the final streamed chunk.
tools
A list of tools the model may call. Currently, only the function type is supported. Each entry provides the function's name, description, and a parameters object (see the request example above).
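To make the tools shape concrete, the body fragment below defines a single hypothetical get_weather function; the name, description, and JSON Schema parameters are invented for illustration.

"tools": [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a given city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "City name" }
        },
        "required": ["city"]
      }
    }
  }
]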
tool_choice
Controls which tool is called by the model.
Available options: none, auto

tags
Optional tags to categorize and label the request. Useful for filtering and grouping requests in analytics and logs. Can also be sent via the x-edgee-tags header as a comma-separated string.
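The two ways of sending tags are equivalent; the tag values below are placeholders.

curl --request POST \
  --url https://api.edgee.ai/v1/chat/completions \
  --header "Authorization: Bearer $EDGEE_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --header 'x-edgee-tags: team-search,production' \
  --data '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'

Or, in the request body:

"tags": ["team-search", "production"]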
Response (200)
Chat completion created successfully.
id
A unique identifier for the chat completion.
Example: "chatcmpl-123"
object
The object type, which is always chat.completion.

created
The Unix timestamp (in seconds) of when the chat completion was created.
Example: 1677652288
model
The model used for the chat completion.
Example: "openai/gpt-4o"
choices
A list of chat completion choices. Can be more than one if n is greater than 1. Each choice contains an index, a message, and a finish_reason (see the response example above).
usage
Usage statistics for the completion: prompt_tokens, completion_tokens, total_tokens, plus input and output token details. In streaming responses, this is only present in the final chunk when stream_options.include_usage is true.
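On a streamed request with stream_options.include_usage set to true, the usage block would appear in the final chunk, roughly as below; the exact chunk layout is assumed from the OpenAI-compatible format rather than stated in this reference.

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[],"usage":{"prompt_tokens":10,"completion_tokens":10,"total_tokens":20}}
data: [DONE]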