# Chat

## The chat completion object

Represents a chat completion response returned by the model, based on the provided input.

* `id` string

  A unique identifier for the chat completion.
* `object` string

  The object type, which is always `chat.completion`.
* `created` integer

  The Unix timestamp (in seconds) of when the chat completion was created.
* `model` string

  The model used for the chat completion.
* `choices` array

  A list of chat completion choices. Can contain more than one entry if `n` is greater than 1.
* `choices[].index` integer

  The index of the choice in the list of choices.
* `choices[].message` object

  A chat completion message generated by the model.
* `choices[].message.role` string

  The role of the author of this message.
* `choices[].message.content` string or null

  The contents of the message.
* `choices[].finish_reason` string

  The reason the model stopped generating tokens: `stop` if the model hit a natural stop point or a provided stop sequence, or `length` if the maximum number of tokens specified in the request was reached.
* `usage` object

  Usage statistics for the completion request.
* `usage.prompt_tokens` integer

  Number of tokens in the prompt.
* `usage.completion_tokens` integer

  Number of tokens in the generated completion.
* `usage.total_tokens` integer

  Total number of tokens used in the request (prompt + completion).

#### The chat completion object

```json
{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion",
  "created": 1692154125,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " I'm doing well, thanks for asking!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```
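A successful response can be unpacked with the standard `json` module. The sketch below uses the example body above verbatim; the field paths follow the object layout documented in this section:

```python
import json

# The example response body from above.
raw = '''{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion",
  "created": 1692154125,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": " I'm doing well, thanks for asking!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21}
}'''

completion = json.loads(raw)

# The reply text lives at choices[0].message.content.
reply = completion["choices"][0]["message"]["content"]
finish_reason = completion["choices"][0]["finish_reason"]
total_tokens = completion["usage"]["total_tokens"]

print(reply.strip())   # I'm doing well, thanks for asking!
print(finish_reason)   # stop
```

Checking `finish_reason` before using the reply is worthwhile: a value of `length` means the output was truncated at the request's token limit.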

***

### The chat completion chunk object <a href="#the-chat-completion-chunk-object" id="the-chat-completion-chunk-object"></a>

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

* `id` string

  A unique identifier for the chat completion. Every chunk of the same completion carries the same ID.
* `object` string

  The object type, which is always `chat.completion.chunk`.
* `created` integer

  The Unix timestamp (in seconds) of when the chat completion was created.
* `model` string

  The model used for the chat completion.
* `choices` array

  A list of chat completion choices. Can contain more than one entry if `n` is greater than 1.
* `choices[].index` integer

  The index of the choice in the list of choices.
* `choices[].delta` object

  A chat completion delta generated by streamed model responses.
* `choices[].delta.role` string

  The role of the author of this message.
* `choices[].delta.content` string or null

  The contents of the chunk message.
* `choices[].finish_reason` string or null

  The reason the model stopped generating tokens: `stop` if the model hit a natural stop point or a provided stop sequence, or `length` if the maximum number of tokens specified in the request was reached. `null` while the model is still generating.
* `usage` object

  Usage statistics for the completion request; in the example below they arrive with the final chunk.
* `usage.prompt_tokens` integer

  Number of tokens in the prompt.
* `usage.completion_tokens` integer

  Number of tokens in the generated completion.
* `usage.total_tokens` integer

  Total number of tokens used in the request (prompt + completion).

#### The chat completion chunk object

```json
{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion.chunk",
  "created": 1692154202,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": " I"
      },
      "finish_reason": null
    }
  ]
}

... (intermediate chunks omitted) ...

{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion.chunk",
  "created": 1692154202,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "delta": {},
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}

[DONE]
```
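The chunks above can be reassembled client-side. This sketch assumes data-only SSE lines terminated by a `data: [DONE]` sentinel, as described under the `stream` parameter below; the line contents are an abbreviated reconstruction of the example stream:

```python
import json

# Data-only SSE lines as they might arrive on the wire.
sse_lines = [
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant", "content": " I"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "\'m doing well"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]

parts = []
finish_reason = None
for line in sse_lines:
    payload = line.removeprefix("data: ")
    if payload == "[DONE]":  # sentinel that terminates the stream
        break
    choice = json.loads(payload)["choices"][0]
    # Each delta carries a fragment of content; the final chunk's delta is empty.
    parts.append(choice["delta"].get("content", ""))
    finish_reason = choice["finish_reason"] or finish_reason

message = "".join(parts)
print(message)        #  I'm doing well
print(finish_reason)  # stop
```

Note that `delta` may omit `content` entirely (as in the final chunk), so `.get("content", "")` is used rather than direct indexing.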

***

### [Create chat completion](https://docs.theb.ai/chat#create-chat-completion) <a href="#create-chat-completion" id="create-chat-completion"></a>

Creates a model response for the given chat conversation.

**POST <https://api.Slier.ai/v1/chat/completions>**

**Available parameters**

`model` string Required

ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

`messages` array Required

A list of messages comprising the conversation so far.

* `role` string Required
  * The role of the message author. One of `system`, `user`, or `assistant`.
* `content` string Required
  * The contents of the message. `content` is required for all messages.

`stream` boolean or null Optional, defaults to false

If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.

`model_params` object Optional, varies by model

Used to set various parameters of the model.

* `temperature` number or null Optional
  * Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.
* `top_p` number or null Optional
  * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

    We generally recommend altering this or `temperature`, but not both.
* Obtain the other configurable parameters from the model interface.
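As a sketch, the recommendation above can be enforced while building the request body. `build_payload` is a hypothetical helper for illustration, not part of any client library:

```python
import json

def build_payload(messages, model="claude-2", stream=False, **model_params):
    """Serialize a chat completion request body.

    The guard enforces the recommendation above: adjust temperature
    OR top_p, not both.
    """
    if "temperature" in model_params and "top_p" in model_params:
        raise ValueError("set temperature or top_p, not both")
    return json.dumps({
        "model": model,
        "messages": messages,
        "stream": stream,
        "model_params": model_params,
    })

body = build_payload(
    [{"role": "user", "content": "How are you?"}],
    temperature=0.2,  # focused, mostly deterministic output
)
```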

#### Example Request

POST/v1/chat/completions

```python
import requests
import json

url = "https://api.Slier.ai/v1/chat/completions"
# url = "https://api.Slier.ai/v2/chat/completions"

payload = json.dumps({
  "model": "claude-2",
  "messages": [
    {
      "role": "user",
      "content": "How are you?"
    }
  ],
  "stream": False,
  "model_params": {
    "temperature": 1
  }
})
headers = {
  'Authorization': 'Bearer $API_KEY',
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.json())
```

#### Example Response

```json
{
  "id": "9f6280c9e7e340edb043a1b1d512ea4b",
  "object": "chat.completion",
  "created": 1692154125,
  "model": "claude-2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " I'm doing well, thanks for asking!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```

#### Content Filtered

```json
{
  "error": {
    "message": "This content may violate our content policy, please check https://Slier.ai/terms to learn more. If you believe this to be in error, please contact us at safety@Slier.ai to keep our systems safe and available.",
    "param": "prompt",
    "reason": "content_filter",
    "status": 400
  }
}
```
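Client code can distinguish a filtered request from other failures by the machine-readable `reason` field rather than the message text. A minimal sketch, using the error body above (message abbreviated):

```python
import json

# The error body shown above, returned alongside an HTTP 400 status.
raw = '''{
  "error": {
    "message": "This content may violate our content policy, please check https://Slier.ai/terms to learn more.",
    "param": "prompt",
    "reason": "content_filter",
    "status": 400
  }
}'''

body = json.loads(raw)
err = body.get("error")

# A content_filter rejection should not be retried with the same prompt;
# err["param"] names the offending input.
is_filtered = err is not None and err.get("reason") == "content_filter"
```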
