Chat
The chat completion object
Represents a chat completion response returned by the model, based on the provided input.
Name: id
Type: string
Description: A unique identifier for the chat completion.

Name: object
Type: string
Description: The object type, which is always chat.completion.

Name: created
Type: integer
Description: A Unix timestamp of when the chat completion was created.

Name: model
Type: string
Description: The model used for the chat completion.

Name: choices
Type: array
Description: A list of chat completion choices. Can be more than one if n is greater than 1.

Name: choices[].index
Type: integer
Description: The index of the choice in the list of choices.

Name: choices[].message
Type: object
Description: A chat completion message generated by the model.

Name: choices[].message.role
Type: string
Description: The role of the author of this message.

Name: choices[].message.content
Type: string or null
Description: The contents of the message.

Name: choices[].finish_reason
Type: string
Description: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

Name: usage
Type: object
Description: Usage statistics for the completion request.

Name: usage.prompt_tokens
Type: integer
Description: Number of tokens in the prompt.

Name: usage.completion_tokens
Type: integer
Description: Number of tokens in the generated completion.

Name: usage.total_tokens
Type: integer
Description: Total number of tokens used in the request (prompt + completion).
The chat completion object
{
"id": "9f6280c9e7e340edb043a1b1d512ea4b",
"object": "chat.completion",
"created": 1692154125,
"model": "claude-2",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": " I'm doing well, thanks for asking!"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 9,
"total_tokens": 21
}
}

The chat completion chunk object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Name: id
Type: string
Description: A unique identifier for the chat completion.

Name: object
Type: string
Description: The object type, which is always chat.completion.chunk.

Name: created
Type: integer
Description: A Unix timestamp of when the chat completion was created.

Name: model
Type: string
Description: The model used for the chat completion.

Name: choices
Type: array
Description: A list of chat completion choices. Can be more than one if n is greater than 1.

Name: choices[].index
Type: integer
Description: The index of the choice in the list of choices.

Name: choices[].delta
Type: object
Description: A chat completion delta generated by streamed model responses.

Name: choices[].delta.role
Type: string
Description: The role of the author of this message.

Name: choices[].delta.content
Type: string or null
Description: The contents of the chunk message.

Name: choices[].finish_reason
Type: string or null
Description: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

Name: usage
Type: object
Description: Usage statistics for the completion request.

Name: usage.prompt_tokens
Type: integer
Description: Number of tokens in the prompt.

Name: usage.completion_tokens
Type: integer
Description: Number of tokens in the generated completion.

Name: usage.total_tokens
Type: integer
Description: Total number of tokens used in the request (prompt + completion).
The chat completion chunk object
{
"id": "9f6280c9e7e340edb043a1b1d512ea4b",
"object": "chat.completion.chunk",
"created": 1692154202,
"model": "claude-2",
"choices": [
{
"index": 0,
"delta": {
"role": "assistant",
"content": " I"
},
"finish_reason": null
}
]
}
...
{
"id": "9f6280c9e7e340edb043a1b1d512ea4b",
"object": "chat.completion.chunk",
"created": 1692154202,
"model": "claude-2",
"choices": [
{
"index": 0,
"delta": {},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 9,
"total_tokens": 21
}
}
[DONE]

Creates a model response for the given chat conversation.
POST https://api.Slier.ai/v1/chat/completions
Available parameters
model string Required
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

messages array Required
A list of messages comprising the conversation so far.

messages[].role string Required
The role of the message's author. One of system, user, or assistant.

messages[].content string Required
The contents of the message. content is required for all messages.

stream boolean or null Optional Defaults to false
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

model_params object Optional
Used to set model-specific sampling parameters; the available parameters differ by model.

model_params.temperature number or null Optional
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

model_params.top_p number or null Optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

Obtain the other configurable parameters from the model interface.
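With stream set to true, the response arrives as data-only server-sent events ending in data: [DONE]. The sketch below parses such a stream; the transport is deliberately left abstract, so pass any iterable of decoded lines (for example, `resp.iter_lines(decode_unicode=True)` from a requests response opened with `stream=True`). The function name is our own, for illustration.

```python
import json
from typing import Iterable, Iterator

def iter_stream_content(lines: Iterable[str]) -> Iterator[str]:
    """Yield delta.content fragments from an SSE stream, stopping at [DONE]."""
    for line in lines:
        if not line or not line.startswith("data:"):
            continue  # skip blank keep-alive separators and non-data lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel, not JSON
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                yield content
```

Concatenating the yielded fragments reproduces the full assistant message.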
Example Request
POST/v1/chat/completions
import requests
import json
url = "https://api.Slier.ai/v1/chat/completions"
# url = "https://api.Slier.ai/v2/chat/completions"
payload = json.dumps({
"model": "claude-2",
"messages": [
{
"role": "user",
"content": "How are you?"
}
],
"stream": False,
"model_params": {
"temperature": 1
}
})
headers = {
'Authorization': 'Bearer $API_KEY',
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.json())

The chat completion object
{
"id": "9f6280c9e7e340edb043a1b1d512ea4b",
"object": "chat.completion",
"created": 1692154125,
"model": "claude-2",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": " I'm doing well, thanks for asking!"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 9,
"total_tokens": 21
}
}

Contents Filtered
{
"error": {
"message": "This content may violate our content policy, please check https://Slier.ai/terms to learn more. If you believe this to be in error, please contact us at [email protected] to keep our systems safe and available.",
"param": "prompt",
"reason": "content_filter",
"status": 400
}
}