Chat
The chat completion object
Represents a chat completion response returned by the model, based on the provided input.
id (string): A unique identifier for the chat completion.

object (string): The object type, which is always chat.completion.

created (string): A Unix timestamp of when the chat completion was created.

model (string): The model used for the chat completion.

choices (array): A list of chat completion choices. Can be more than one if n is greater than 1.

choices[].index (integer): The index of the choice in the list of choices.

choices[].message (object): A chat completion message generated by the model.

choices[].message.role (string): The role of the author of this message.

choices[].message.content (string or null): The contents of the message.

choices[].finish_reason (string): The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

usage (object): Usage statistics for the completion request.

usage.prompt_tokens (integer): Number of tokens in the prompt.

usage.completion_tokens (integer): Number of tokens in the generated completion.

usage.total_tokens (integer): Total number of tokens used in the request (prompt + completion).
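To illustrate the shape described above, here is a minimal sketch of reading fields from a parsed chat completion response. All literal values (the id, model name, timestamp, and message text) are hypothetical examples, not documented defaults:

```python
# A parsed chat completion response, shaped as documented above.
# All literal values here are hypothetical examples.
completion = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": "1700000000",
    "model": "example-model",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16},
}

# The generated text lives in choices[].message.content.
reply = completion["choices"][0]["message"]["content"]

# total_tokens is the sum of prompt and completion tokens.
usage = completion["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```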
The chat completion chunk object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
id (string): A unique identifier for the chat completion.

object (string): The object type, which is always chat.completion.

created (string): A Unix timestamp of when the chat completion was created.

model (string): The model used for the chat completion.

choices (array): A list of chat completion choices. Can be more than one if n is greater than 1.

choices[].index (integer): The index of the choice in the list of choices.

choices[].delta (object): A chat completion delta generated by streamed model responses.

choices[].delta.role (string): The role of the author of this message.

choices[].delta.content (string or null): The contents of the chunk message.

choices[].finish_reason (string or null): The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

usage (object): Usage statistics for the completion request.

usage.prompt_tokens (integer): Number of tokens in the prompt.

usage.completion_tokens (integer): Number of tokens in the generated completion.

usage.total_tokens (integer): Total number of tokens used in the request (prompt + completion).
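A consumer typically reconstructs the full message by concatenating choices[].delta.content across chunks; the role arrives once, in the first delta. A minimal sketch, using hypothetical chunk values shaped as documented above:

```python
# Hypothetical sequence of streamed chunks, shaped as documented above.
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "Hel"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "lo!"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]

# Accumulate delta.content; remember the role from the first delta that carries it.
role = None
parts = []
for chunk in chunks:
    delta = chunk["choices"][0]["delta"]
    role = delta.get("role", role)
    if delta.get("content"):
        parts.append(delta["content"])

# The reassembled message matches the non-streamed choices[].message shape.
message = {"role": role, "content": "".join(parts)}
```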
Creates a model response for the given chat conversation.
POST https://api.Slier.ai/v1/chat/completions
Available parameters
model (string, Required): ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

messages (array, Required): A list of messages comprising the conversation so far.

messages[].role (string, Required): The role of the message's author. One of system, user, or assistant.

messages[].content (string, Required): The contents of the message. content is required for all messages.

stream (boolean or null, Optional, defaults to false): If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

model_params (object, Optional, varies by model): Used to set various parameters of the model.

model_params.temperature (number or null, Optional): Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

model_params.top_p (number or null, Optional): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

Obtain the other configurable parameters from the model interface.
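When stream is set, each event arrives on a data: line whose payload is a JSON chunk, and the stream ends with the data: [DONE] sentinel described above. A minimal sketch of splitting payloads out of such a stream (the raw lines are hypothetical examples):

```python
import json

# Hypothetical raw server-sent-event lines from a streamed response.
raw_lines = [
    'data: {"choices": [{"index": 0, "delta": {"content": "Hi"}}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": " there"}}]}',
    "data: [DONE]",
]

def iter_chunks(lines):
    """Yield parsed JSON chunks, stopping at the data: [DONE] sentinel."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and SSE comments/keep-alives
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end of stream
        yield json.loads(payload)

# Stitch the delta contents back into the full generated text.
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for chunk in iter_chunks(raw_lines)
)
```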
Example Request

POST /v1/chat/completions
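A sketch of issuing the request with Python's standard library, assuming bearer-token authentication. The API key, model ID, and model_params values are placeholders, not real credentials or documented defaults:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; supply your own key

# Request body following the parameters documented above.
payload = {
    "model": "example-model",  # hypothetical model ID
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
    "model_params": {"temperature": 0.2},
}

request = urllib.request.Request(
    "https://api.Slier.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Sending the request (uncomment to actually call the API):
# with urllib.request.urlopen(request) as response:
#     completion = json.load(response)
#     print(completion["choices"][0]["message"]["content"])
```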