API Parameters

Complete reference for all parameters supported by ParrotRouter's OpenAI-compatible API

Common Parameters

These parameters are supported across both chat completions and legacy completions endpoints.

model
required
string

ID of the model to use. Example: "gpt-4", "claude-3-opus", "gemini-pro"

"model": "gpt-4"
temperature
optional
number (0-2)

Controls randomness. Lower values make output more focused and deterministic. Higher values make output more creative and varied. Default: 1

"temperature": 0.7
max_tokens
optional
integer

Maximum number of tokens to generate. Defaults vary by model. The total token count (prompt + completion) cannot exceed the model's context window.

"max_tokens": 1000
top_p
optional
number (0-1)

Nucleus sampling: the model only considers tokens whose cumulative probability is within top_p. An alternative to temperature; it is recommended to alter either temperature or top_p, but not both. Default: 1

"top_p": 0.9
frequency_penalty
optional
number (-2.0 to 2.0)

Penalizes new tokens based on their frequency in the text so far. Positive values decrease the likelihood of repeating the same lines verbatim. Default: 0

"frequency_penalty": 0.5
presence_penalty
optional
number (-2.0 to 2.0)

Penalizes tokens based on whether they appear in the text so far. Positive values encourage the model to talk about new topics. Default: 0

"presence_penalty": 0.6
stop
optional
string or array

Up to 4 sequences where the API will stop generating further tokens.

"stop": ["\n", "Human:", "AI:"]
n
optional
integer

How many completions to generate for each prompt. Note that you are billed for the tokens of all generated completions. Default: 1

"n": 2

seed
optional
integer

If specified, the system will make a best effort to sample deterministically. Not guaranteed to be deterministic across all models.

"seed": 12345
user
optional
string

A unique identifier representing your end-user, for monitoring and rate limiting.

"user": "user-123456"

Chat-Specific Parameters

These parameters are only available for the chat completions endpoint.

messages
required
array

A list of messages comprising the conversation so far.

"messages": [
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Hello!"}
]
tools
optional
array

A list of tools (functions) the model may call. Only supported by certain models.

"tools": [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string"}
        }
      }
    }
  }
]
tool_choice
optional
string or object

Controls which (if any) function is called by the model. Options: "none", "auto", or a specific function via {"type": "function", "function": {"name": "my_function"}}

"tool_choice": "auto"
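A typical tool-calling round trip sends the tools and tool_choice parameters, then reads any tool calls out of the response. The sketch below uses the OpenAI-compatible response shape (choices → message → tool_calls); the sample response is fabricated for illustration:

```python
import json

# Chat completion request using the tools and tool_choice parameters.
chat_request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

def extract_tool_calls(response: dict) -> list:
    """Pull tool calls (if any) out of an OpenAI-style response."""
    message = response["choices"][0]["message"]
    return message.get("tool_calls", [])

# Abridged, fabricated response for illustration of the expected shape:
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": None,
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "get_weather",
                            "arguments": json.dumps({"location": "Paris"}),
                        },
                    }
                ],
            }
        }
    ]
}

calls = extract_tool_calls(sample_response)
```

Note that the function arguments arrive as a JSON string, so they need a second json.loads before use.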

response_format
optional
object

Specifies the format of the output. Only supported by certain models. When using "json_object", the prompt should also instruct the model to produce JSON.

"response_format": {
  "type": "json_object"
}
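A request using json_object mode looks like the sketch below. The sample model output is fabricated; the point is that in this mode the returned message content should parse directly as JSON:

```python
import json

# Request body asking for JSON output. OpenAI-compatible APIs generally
# require the prompt itself to mention JSON when json_object is used;
# confirm this requirement for your chosen model.
json_request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "Name three primary colors."},
    ],
    "response_format": {"type": "json_object"},
}

# With json_object, the message content should be directly parseable.
# Fabricated sample content for illustration:
sample_content = '{"colors": ["red", "yellow", "blue"]}'
parsed = json.loads(sample_content)
```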

Streaming Parameters

stream
optional
boolean

If true, partial message deltas will be sent as Server-Sent Events. Tokens will be sent as they become available. Default: false

"stream": true

Legacy Completion Parameters

These parameters are specific to the legacy completions endpoint.

prompt
required
string or array

The prompt(s) to generate completions for.

"prompt": "Once upon a time"
suffix
optional
string

Text appended after the completion, for insertion-style tasks where the model fills in between the prompt and the suffix.

echo
optional
boolean

Echo back the prompt in addition to the completion. Default: false

"echo": true

best_of
optional
integer

Generates best_of completions server-side and returns the one with the highest log probability per token. When used with n, best_of must be greater than or equal to n. Default: 1

"best_of": 3
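The legacy parameters combine into a request body like the sketch below; values are illustrative, and the best_of >= n constraint follows the OpenAI convention, which this API emulates:

```python
import json

# Legacy completions request body. best_of samples candidates server-side
# and returns the best; when combined with n, best_of should be >= n.
legacy_request = {
    "model": "gpt-4",
    "prompt": "Once upon a time",
    "max_tokens": 50,
    "echo": True,    # include the prompt in the returned text
    "best_of": 3,    # sample 3 candidates server-side
    "n": 1,          # return 1 completion
}

body = json.dumps(legacy_request)
```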
