Get Chat Completions
Creates a model response for the given chat conversation.
POST
https://withmartian.com/api/openai/v1/chat/completions
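A minimal sketch of the raw request in Python, assuming the withmartian.com host shown above and a standard Authorization: Bearer header (the environment-variable name and model ID are placeholders; use the values from your own account):

```python
# Raw request sketch. The base URL, Bearer-token auth scheme, env-var name,
# and model ID are assumptions, not guaranteed values from these docs.
import os
import requests

API_URL = "https://withmartian.com/api/openai/v1/chat/completions"
API_KEY = os.environ["MARTIAN_API_KEY"]  # hypothetical env-var name

payload = {
    "model": "gpt-4-turbo",  # placeholder; see the request-body table below
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```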
Request Body
Name | Type | Description |
---|---|---|
model* | string, List[string] | ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API. |
n | number | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. |
temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. |
messages* | array | A list of messages comprising the conversation so far. |
max_tokens | number | The maximum number of tokens to generate in the chat completion. |
presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
user | string | NOT SUPPORTED |
top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. |
logit_bias | map | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. |
response_format | object | An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON (see the JSON-mode sketch after this table). Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason is "length", which indicates the generation exceeded max_tokens or the conversation exceeded the maximum context length. |
seed | null, integer | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. |
stop | string, array, null | Up to 4 sequences where the API will stop generating further tokens. |
stream | boolean, null | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the streaming sketch after this table. |
tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. See the function-calling sketch after this table. |
tool_choice | string or object | Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present. |
functions | array | Deprecated in favor of tools. A list of functions the model may generate JSON inputs for. |
function_call | string or object | Deprecated in favor of tool_choice. Controls which (if any) function is called by the model. |
extra_body | List[object] | Sets metadata on the request via a list of objects. |
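Because this endpoint mirrors the OpenAI Chat Completions API, the official OpenAI Python SDK can be pointed at it by overriding base_url. A minimal non-streaming sketch, assuming the base URL above, a MARTIAN_API_KEY environment variable, and a gpt-4-turbo model ID (substitute any model supported for your account):

```python
# Sketch: reusing the OpenAI Python SDK against the Martian endpoint.
# The base_url, env-var name, and model ID are assumptions; adjust to your account.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MARTIAN_API_KEY"],             # hypothetical env-var name
    base_url="https://withmartian.com/api/openai/v1",  # assumed OpenAI-compatible base URL
)

completion = client.chat.completions.create(
    model="gpt-4-turbo",  # per the model row above, a list of model IDs is also accepted
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a chat completion is in one sentence."},
    ],
    temperature=0.2,  # low temperature for a focused, deterministic answer
    max_tokens=100,
    n=1,              # keep n at 1 to minimize token charges
)

print(completion.choices[0].message.content)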
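When stream is set, tokens arrive as data-only server-sent events, and the SDK exposes them as an iterator of chunks that ends at the data: [DONE] marker. A streaming sketch under the same base-URL and model-ID assumptions:

```python
# Streaming sketch: prints content deltas as they arrive.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MARTIAN_API_KEY"],             # hypothetical env-var name
    base_url="https://withmartian.com/api/openai/v1",  # assumed base URL
)

stream = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed model ID
    messages=[{"role": "user", "content": "Write a haiku about model routing."}],
    stream=True,
    stop=["\n\n\n"],  # optional: up to 4 stop sequences
)

for chunk in stream:
    # Some chunks may carry no content delta; guard before printing.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```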
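JSON mode (response_format) requires that the prompt itself asks for JSON, as noted in the table above. A sketch, again assuming the same base URL and a JSON-mode-capable model:

```python
# JSON-mode sketch: the system message explicitly requests JSON, as required.
import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MARTIAN_API_KEY"],             # hypothetical env-var name
    base_url="https://withmartian.com/api/openai/v1",  # assumed base URL
)

completion = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed; must be a JSON-mode-capable model
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object containing a 'summary' field."},
        {"role": "user", "content": "Describe nucleus sampling."},
    ],
)

print(json.loads(completion.choices[0].message.content))
```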
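The tools and tool_choice parameters follow the OpenAI function-calling shape: you describe functions with JSON Schema, and the model returns JSON arguments for them rather than executing anything itself. A sketch with a single hypothetical get_weather tool:

```python
# Function-calling sketch. The get_weather tool is hypothetical; inspect
# message.tool_calls to see which function the model chose and with what arguments.
import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MARTIAN_API_KEY"],             # hypothetical env-var name
    base_url="https://withmartian.com/api/openai/v1",  # assumed base URL
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed model ID
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the function
)

for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```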