Slier.AI Documents

Chat

The chat completion object

Represents a chat completion response returned by the model, based on the provided input.

  • id (string)

    A unique identifier for the chat completion.

  • object (string)

    The object type, which is always chat.completion.

  • created (integer)

    The Unix timestamp (in seconds) of when the chat completion was created.

  • model (string)

    The model used for the chat completion.

  • choices (array)

    A list of chat completion choices. Can contain more than one element if n is greater than 1.

  • choices[].index (integer)

    The index of the choice in the list of choices.

  • choices[].message (object)

    A chat completion message generated by the model.

  • choices[].message.role (string)

    The role of the author of this message.

  • choices[].message.content (string or null)

    The contents of the message.

  • choices[].finish_reason (string)

    The reason the model stopped generating tokens: stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

  • usage (object)

    Usage statistics for the completion request.

  • usage.prompt_tokens (integer)

    Number of tokens in the prompt.

  • usage.completion_tokens (integer)

    Number of tokens in the generated completion.

  • usage.total_tokens (integer)

    Total number of tokens used in the request (prompt + completion).

The chat completion object

{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion",
  "created": 1692154125,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " I'm doing well, thanks for asking!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}

The chat completion chunk object

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

  • id (string)

    A unique identifier for the chat completion.

  • object (string)

    The object type, which is always chat.completion.chunk.

  • created (integer)

    The Unix timestamp (in seconds) of when the chat completion was created.

  • model (string)

    The model used for the chat completion.

  • choices (array)

    A list of chat completion choices. Can contain more than one element if n is greater than 1.

  • choices[].index (integer)

    The index of the choice in the list of choices.

  • choices[].delta (object)

    A chat completion delta generated by streamed model responses.

  • choices[].delta.role (string)

    The role of the author of this message.

  • choices[].delta.content (string or null)

    The contents of the chunk message.

  • choices[].finish_reason (string or null)

    The reason the model stopped generating tokens: stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached. null while generation is still in progress.

  • usage (object)

    Usage statistics for the completion request. Present on the final chunk, as in the example below.

  • usage.prompt_tokens (integer)

    Number of tokens in the prompt.

  • usage.completion_tokens (integer)

    Number of tokens in the generated completion.

  • usage.total_tokens (integer)

    Total number of tokens used in the request (prompt + completion).

The chat completion chunk object

{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion.chunk",
  "created": 1692154202,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": " I"
      },
      "finish_reason": null
    }
  ]
}

… (intermediate chunks omitted)

{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion.chunk",
  "created": 1692154202,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "delta": {},
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}

[DONE]
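With stream enabled, a client reassembles the reply by concatenating each chunk's delta.content until the data: [DONE] terminator arrives. A minimal sketch of that reassembly, using the two example chunks above; collect_stream is a hypothetical helper, and the exact "data: <json>" server-sent-event framing is an assumption:

```python
import json

def collect_stream(lines):
    # Concatenate delta.content across chat.completion.chunk events,
    # stopping at the "[DONE]" terminator.
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        # The final chunk carries an empty delta, so content may be absent.
        parts.append(delta.get("content") or "")
    return "".join(parts)

# The two chunks from the example above, plus the terminator:
events = [
    'data: {"id": "9f6280c9e7e340edb043a1b1d512ea4b", "object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " I"}, "finish_reason": null}]}',
    'data: {"id": "9f6280c9e7e340edb043a1b1d512ea4b", "object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]
print(collect_stream(events))  # -> " I"
```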

Create chat completion

Creates a model response for the given chat conversation.

POST https://api.Slier.ai/v1/chat/completions

Available parameters

  • model (string, required)

    ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

  • messages (array, required)

    A list of messages comprising the conversation so far.

    • role (string, required)

      The role of the message's author. One of system, user, or assistant.

    • content (string, required)

      The contents of the message. content is required for all messages.

  • model_params (object, optional)

    Model-specific generation parameters; the available fields vary by model.

    • temperature (number or null, optional)

      Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.

    • top_p (number or null, optional)

      An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.

    • Obtain the other configurable parameters from the model interface.


Example Request

POST /v1/chat/completions

import requests
import json

url = "https://api.Slier.ai/v1/chat/completions"
# url = "https://api.Slier.ai/v2/chat/completions"

payload = json.dumps({
  "model": "claude-2",
  "messages": [
    {
      "role": "user",
      "content": "How are you?"
    }
  ],
  "stream": false,
  "model_params": {
    "temperature": 1
  }
})
headers = {
  'Authorization': 'Bearer $API_KEY',
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.json())

The chat completion object

{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion",
  "created": 1692154125,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " I'm doing well, thanks for asking!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
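Reading the response object above: the generated text lives at choices[0].message.content, and token accounting sits under usage, where total_tokens is the sum of the other two fields. A quick sketch using the example response:

```python
# The example response object from above, as parsed by response.json():
response = {
    "id": "9f6280c9e7e340edb043a1b1d512ea4b",
    "object": "chat.completion",
    "created": 1692154125,
    "model": "claude-2",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": " I'm doing well, thanks for asking!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21},
}

# Extract the assistant's reply from the first choice.
reply = response["choices"][0]["message"]["content"]
print(reply.strip())  # -> I'm doing well, thanks for asking!

# Token usage is additive: prompt + completion = total.
usage = response["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```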

Content Filtered

{
  "error": {
    "message": "This content may violate our content policy, please check https://Slier.ai/terms to learn more. If you believe this to be in error, please contact us at safety@Slier.ai to keep our systems safe and available.",
    "param": "prompt",
    "reason": "content_filter",
    "status": 400
  }
}
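A client can branch on the error's reason field to distinguish content-policy rejections from other failures. A minimal sketch, assuming errors always arrive under an error key with the shape shown above; check_error is a hypothetical helper, not part of the API:

```python
def check_error(body):
    # Return None for a successful body, or a short diagnostic string
    # for an error body shaped like {"error": {"reason": ..., ...}}.
    err = body.get("error")
    if err is None:
        return None
    if err.get("reason") == "content_filter":
        # The prompt was rejected by the content filter; retrying the
        # same prompt will not help.
        return "blocked: " + err.get("message", "")
    return "error {}: {}".format(err.get("status"), err.get("message", ""))

# Sample error body matching the example above (message shortened):
body = {
    "error": {
        "message": "This content may violate our content policy.",
        "param": "prompt",
        "reason": "content_filter",
        "status": 400,
    }
}
print(check_error(body))  # -> blocked: This content may violate our content policy.
```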

Last updated 1 year ago

  • stream (boolean or null, optional, defaults to false)

    If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
