POST /v1/chat/completions

Create a chat completion

Example request:
curl --request POST \
  --url http://localhost:3000/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "<string>",
      "content": "<unknown>"
    }
  ],
  "model": "<string>",
  "stream": false,
  "tools": [
    {}
  ],
  "tool_choice": "<unknown>",
  "response_format": "<unknown>",
  "metadata": {}
}
'
Example response:

{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "choices": [
    {}
  ],
  "usage": {},
  "astrolabe": {}
}
POST /v1/chat/completions is the OpenAI-compatibility endpoint. Use it when you have an existing OpenAI chat-completions client that you do not want to change yet.
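Pointing such a client at this endpoint amounts to sending the same JSON body with a Bearer token. A minimal stdlib-only sketch, assuming the base URL and token placeholders from the curl example above:

```python
import json
from urllib import request


def build_chat_request(base_url: str, token: str, payload: dict) -> request.Request:
    """Build a POST request for the compatibility endpoint.

    base_url and token are placeholders; substitute your deployment's
    values (the curl example above uses http://localhost:3000).
    """
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request(
    "http://localhost:3000",
    "<token>",
    {"model": "astrolabe/auto", "messages": [{"role": "user", "content": "Hi"}]},
)
# Sending it is then request.urlopen(req) -- not executed here.
```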

Important behavior

  • Astrolabe still routes through the same engine used by /v1/responses.
  • Virtual models like astrolabe/auto are supported.
  • Non-stream JSON responses include inline Astrolabe metadata.
  • Streaming responses are passed through as SSE.
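Since streaming responses are passed through as SSE, a client needs to unframe the event stream itself. A minimal parser sketch, assuming the OpenAI-style `data: <json>` framing with a final `data: [DONE]` sentinel (an assumption; verify against your deployment's actual stream):

```python
import json


def parse_sse_chunks(raw: str) -> list[dict]:
    """Parse OpenAI-style SSE lines into chunk dicts.

    Assumes `data: <json>` framing terminated by `data: [DONE]`,
    as emitted by OpenAI-compatible streaming endpoints.
    """
    chunks = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunks.append(json.loads(payload))
    return chunks


# Hypothetical two-chunk stream for illustration.
sample = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "lo"}}]}\n\n'
    "data: [DONE]\n"
)
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_chunks(sample))
```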

Prefer /v1/responses for new OpenClaw setups

OpenClaw integrations should prefer api: openai-responses and POST /v1/responses.

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body (application/json)

messages         object[]   required
model            string
stream           boolean    default: false
tools            object[]
tool_choice      any
response_format  any
metadata         object
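The fields above compose into a single JSON body. A sketch of a request exercising the optional fields, assuming OpenAI-style tool objects (this page leaves the inner shapes of `tools`, `tool_choice`, and `response_format` unspecified, so those details are assumptions; the tool name is hypothetical):

```python
import json

# A request body exercising the fields listed above. The tool schema
# follows the OpenAI convention; this endpoint's exact expectations
# for tools/tool_choice are assumptions here.
body = {
    "model": "astrolabe/auto",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "stream": False,  # default per the schema above
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
    "metadata": {"trace_id": "example-123"},  # free-form object
}
encoded = json.dumps(body)
```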

Response

Successful JSON or stream response

id         string
object     string
created    integer
choices    object[]
usage      object
astrolabe  object
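Reading a non-stream response can be sketched as below. The response dict is hypothetical: this page only fixes the top-level keys, so the inner shapes (an OpenAI-style message in `choices`, token counts in `usage`, and the contents of the inline `astrolabe` metadata) are assumptions to verify against your deployment:

```python
# Hypothetical non-stream response for illustration; inner shapes assumed.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1700000000,
    "choices": [{"index": 0, "message": {"role": "assistant", "content": "Hi!"}}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
    "astrolabe": {},  # inline Astrolabe metadata (contents deployment-specific)
}

reply = response["choices"][0]["message"]["content"]
astrolabe_meta = response.get("astrolabe", {})
```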