Generate text completions using the specified model.
**Headers**

| Parameter | Type | Required | Description |
|---|---|---|---|
| Authorization | string | Yes | Bearer authentication header |

**Request Body**

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier to use for completion |
| prompt | string or array | Yes | Text prompt(s) to complete. Can be a single string or array of strings |
| suffix | string | No | Text that follows the completion; when set, the model generates an insertion between `prompt` and `suffix` |
| max_tokens | integer | No | Maximum tokens to generate. Default: 16 |
| temperature | float | No | Sampling temperature (0.0 to 2.0). Default: 1.0 |
| top_p | float | No | Nucleus sampling parameter. Default: 1.0 |
| n | integer | No | Number of completions to generate. Default: 1 |
| stream | boolean | No | Enable streaming response. Default: false |
| logprobs | integer | No | Include log probabilities for the `logprobs` most likely tokens at each position |
| echo | boolean | No | Echo back the prompt in addition to completion. Default: false |
| stop | string or array | No | Sequences where the API will stop generating |
| presence_penalty | float | No | Penalize new tokens based on presence (-2.0 to 2.0). Default: 0 |
| frequency_penalty | float | No | Penalize new tokens based on frequency (-2.0 to 2.0). Default: 0 |
| repetition_penalty | float | No | Penalize token repetition. Default: 1.0 |
| best_of | integer | No | Generate `best_of` completions server-side and return the `n` best; must be greater than or equal to `n` |
| logit_bias | object | No | Map of token IDs to bias values that adjust the likelihood of those tokens appearing |
| user | string | No | Unique identifier representing your end-user |
| seed | integer | No | Random seed for deterministic sampling |
| ignore_eos | boolean | No | Ignore the end-of-sequence token and keep generating until `max_tokens` is reached. Default: false |
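
A minimal sketch of building a request against this endpoint. The endpoint URL, API key, and model identifier below are placeholders, not values defined by this reference; only `model` and `prompt` are required, and the remaining fields override the defaults listed in the table above.

```python
import json

# Hypothetical endpoint path and credentials -- adjust for your deployment.
API_URL = "https://api.example.com/v1/completions"
API_KEY = "YOUR_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",  # required Bearer authentication header
    "Content-Type": "application/json",
}

# `model` and `prompt` are required; everything else is optional.
payload = {
    "model": "my-model",           # placeholder model identifier
    "prompt": "Once upon a time",
    "max_tokens": 64,              # default is 16
    "temperature": 0.7,            # sampling temperature, 0.0 to 2.0
    "stop": ["\n\n"],              # stop generating at a blank line
    "seed": 42,                    # fixed seed for deterministic sampling
}

body = json.dumps(payload)

# To actually send the request (requires network access):
# import urllib.request
# req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Setting `"stream": true` in the payload instead switches the server to a streaming response; the shape of that response is not specified in this table.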