API Parameters
Complete reference for all parameters supported by ParrotRouter's OpenAI-compatible API
Common Parameters
These parameters are supported across both chat completions and legacy completions endpoints.
model
ID of the model to use. Examples: "gpt-4", "claude-3-opus", "gemini-pro"
"model": "gpt-4"
temperature
Controls randomness. Lower values make output more focused and deterministic. Higher values make output more creative and varied. Default: 1
"temperature": 0.7
max_tokens
Maximum number of tokens to generate. Defaults vary by model. The total token count (prompt + completion) cannot exceed the model's context window.
"max_tokens": 1000
top_p
Nucleus sampling: the model considers only the tokens whose cumulative probability mass falls within top_p. An alternative to temperature; it is recommended to alter one or the other, not both. Default: 1
"top_p": 0.9
frequency_penalty
Penalizes tokens based on their frequency in the text so far. Positive values decrease repetition. Default: 0
"frequency_penalty": 0.5
presence_penalty
Penalizes tokens based on whether they appear in the text so far. Positive values encourage the model to talk about new topics. Default: 0
"presence_penalty": 0.6
stop
Up to 4 sequences where the API will stop generating further tokens.
"stop": ["\n", "Human:", "AI:"]
n
How many completions to generate for each prompt. Default: 1
"n": 2
seed
If specified, the system will make a best effort to sample deterministically. Not guaranteed to be deterministic across all models.
"seed": 12345
user
A unique identifier representing your end-user, for monitoring and rate limiting.
"user": "user-123456"
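Putting the common parameters together, a request body might be assembled like this. This is a minimal sketch: the model name and parameter values are illustrative, and top_p is omitted because it should not be combined with temperature.

```python
import json

# Hypothetical chat completions request body combining the common
# parameters documented above. Values are illustrative only.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,        # lower = more focused and deterministic
    "max_tokens": 1000,        # cap on generated tokens
    "frequency_penalty": 0.5,  # discourage verbatim repetition
    "presence_penalty": 0.6,   # encourage new topics
    "stop": ["Human:", "AI:"], # halt generation at these sequences
    "seed": 12345,             # best-effort determinism
    "user": "user-123456",     # end-user identifier for monitoring
}

body = json.dumps(payload)
print(len(body) > 0)
```

The serialized body would then be POSTed to the chat completions endpoint with a JSON content type.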
Chat-Specific Parameters
These parameters are only available for the chat completions endpoint.
messages
A list of messages comprising the conversation so far.
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
tools
List of functions the model may call. Only supported by certain models.
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
}
}
}
}
]
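When the model decides to call a tool, the response contains the function name and its arguments serialized as a JSON string, which must be parsed before dispatching. A sketch, using the get_weather definition above; the tool_call dict mimics the OpenAI-compatible response shape and is not captured API output:

```python
import json

# Hypothetical tool call as it would appear in a response message.
# The call id and argument values are illustrative.
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"location": "Paris"}',  # arrives as a JSON string
    },
}

# Parse the argument string before invoking your own implementation.
args = json.loads(tool_call["function"]["arguments"])
print(args["location"])
```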
tool_choice
Controls which (if any) function is called by the model. Options: "none", "auto", or {"type": "function", "function": {"name": "my_function"}}
response_format
Specifies the format of the output. Only supported by certain models. When requesting "json_object", you should also instruct the model to produce JSON in a system or user message.
"response_format": {
"type": "json_object"
}
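With "json_object" set, the message content is constrained to valid JSON, so it can be parsed directly. A sketch; the content string below is illustrative, not a real API response:

```python
import json

# Content returned under response_format {"type": "json_object"}
# can be handed straight to a JSON parser.
content = '{"sentiment": "positive", "confidence": 0.92}'
result = json.loads(content)
print(result["sentiment"])
```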
Streaming Parameters
stream
If true, partial message deltas will be sent as Server-Sent Events. Tokens will be sent as they become available. Default: false
"stream": true
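With streaming enabled, each Server-Sent Event line carries a JSON chunk containing a partial delta, and the stream ends with a "data: [DONE]" sentinel. A minimal parsing sketch; the sample lines are illustrative, not captured API output:

```python
import json

# Illustrative SSE lines as they would arrive over a streamed response.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]

text = ""
for line in sse_lines:
    data = line.removeprefix("data: ")
    if data == "[DONE]":          # end-of-stream sentinel
        break
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")  # delta may omit "content"

print(text)
```

In a real client the lines would come from the HTTP response body rather than a list.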
Legacy Completion Parameters
These parameters are specific to the legacy completions endpoint.
prompt
The prompt(s) to generate completions for.
"prompt": "Once upon a time"
suffix
The suffix that comes after a completion of inserted text; useful for fill-in-the-middle use cases.
echo
Echo back the prompt in addition to the completion. Default: false
best_of
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Default: 1
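A legacy completions request combining the parameters above might look like this. A sketch only: the model name is an assumption, and when best_of is used together with n, best_of should not be less than n.

```python
import json

# Hypothetical legacy completions request body. The model name is an
# assumed example of an instruct-style model, not part of the reference.
payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Once upon a time",
    "max_tokens": 50,
    "echo": False,   # do not repeat the prompt in the output
    "best_of": 3,    # sample 3 completions server-side, return the best
    "n": 1,          # return a single completion
}
print(json.dumps(payload))
```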