POST /openai/completions
Creates a completion for the provided prompt and parameters
curl --request POST \
  --url https://api.usesecond.com/openai/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "prompt": "<string>",
  "best_of": 1,
  "echo": false,
  "frequency_penalty": 0,
  "logit_bias": {},
  "logprobs": 123,
  "max_tokens": 16,
  "n": 1,
  "presence_penalty": 0,
  "seed": 689760,
  "stop": [
    "<string>"
  ],
  "stream": false,
  "stream_options": {
    "include_usage": false
  },
  "temperature": 1,
  "top_p": 1,
  "user": "<string>"
}
'
{
  "id": "<string>",
  "object": "text_completion",
  "model": "<string>",
  "choices": [
    {
      "text": "<string>",
      "index": 123,
      "logprobs": {
        "tokens": [
          "<string>"
        ],
        "token_logprobs": [
          123
        ],
        "top_logprobs": [
          {}
        ],
        "text_offset": [
          123
        ]
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  },
  "created": 1722811396105,
  "system_fingerprint": "<string>"
}
Create a completion based on the provided prompt. This endpoint requires an API key generated from the settings; standard access tokens will not work.
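The curl request above can also be sketched in Python using only the standard library. This is a minimal sketch: the model identifier and API key are placeholders, and only model and prompt are required fields.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: an API key from the settings, not a standard access token
BASE_URL = "https://api.usesecond.com/openai/completions"

def build_completion_request(model, prompt, **options):
    """Assemble the POST request; model and prompt are required, the rest optional."""
    payload = {"model": model, "prompt": prompt}
    payload.update(options)  # e.g. max_tokens, temperature, stop
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_completion_request("<model-id>", "Say hello.", max_tokens=16, temperature=1)
# To send it: urllib.request.urlopen(req) -- requires a valid API key.
```

The request is built but not sent, so the sketch runs without credentials.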

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
model
string
required

The unique identifier of the model to be used for generating the completion

prompt
string
required

The input text that serves as the starting point for the AI to generate a completion

best_of
number
default:1

The number of candidate completions to generate server-side. The model generates best_of completions and returns the best one(s), i.e. those with the highest log probability per token

echo
boolean
default:false

When set to true, the API will include the original prompt in the completion response, effectively echoing it back

frequency_penalty
number
default:0

A value between -2.0 and 2.0 that penalizes new tokens based on their frequency in the text so far. Positive values decrease the model's likelihood of repeating the same lines verbatim

logit_bias
object

A dictionary that allows fine-tuning the likelihood of specified tokens appearing in the completion. Each key is a token ID, and the value is the bias (between -100 and 100)

logprobs
number

The number of most likely tokens to return with their log probabilities. If specified, the API will return a list of the most likely tokens for each position

max_tokens
number
default:16

The maximum number of tokens to generate in the completion. The total length of input tokens and generated tokens is limited by the model's context length

n
number
default:1

The number of completions to generate for each prompt. Note that this may conflict with best_of if both are specified

presence_penalty
number
default:0

A value between -2.0 and 2.0 that penalizes new tokens based on whether they appear in the text so far. Positive values increase the model's likelihood to talk about new topics

seed
number
default:689760

A seed for deterministic sampling. Using the same seed with the same parameters will generate the same completion

stop
string[]

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence

Maximum array length: 4
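As an illustration (model name and stop markers are hypothetical), a payload that halts generation at a blank line or at an assumed "END" marker could look like this:

```python
import json

# Hypothetical payload: generation halts at a blank line or at the marker "END".
# The API accepts at most 4 stop sequences; the matched sequence itself is not returned.
payload = {
    "model": "<model-id>",
    "prompt": "List three fruits:\n1.",
    "max_tokens": 64,
    "stop": ["\n\n", "END"],
}
assert len(payload["stop"]) <= 4  # the schema caps the array at 4 entries
body = json.dumps(payload)
```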
stream
boolean
default:false

If set to true, partial message deltas will be sent as data-only server-sent events. Tokens will be sent as they become available
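A consumer of the streamed response has to parse those data-only server-sent events line by line. The sketch below assumes the stream ends with a `[DONE]` sentinel (common for OpenAI-compatible APIs, but not confirmed by this reference) and feeds in simulated lines instead of a live HTTP response:

```python
import json

def iter_stream_chunks(lines):
    """Parse data-only server-sent-event lines into completion chunks.

    `lines` is any iterable of decoded SSE lines; in practice it would come
    from the HTTP response of a request sent with "stream": true.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # assumed end-of-stream sentinel
            break
        yield json.loads(data)

# Simulated stream for illustration:
sample = [
    'data: {"choices": [{"text": "Hel", "index": 0}]}',
    '',
    'data: {"choices": [{"text": "lo", "index": 0}]}',
    'data: [DONE]',
]
text = "".join(chunk["choices"][0]["text"] for chunk in iter_stream_chunks(sample))
```

Concatenating the `text` of each chunk reconstructs the full completion as tokens arrive.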

stream_options
object

Additional options to configure the behavior of streaming responses

temperature
number
default:1

A value between 0 and 2 that controls the randomness of the completion. Lower values make the output more focused and deterministic, while higher values make it more random

top_p
number
default:1

An alternative to temperature, called nucleus sampling: the model considers only the tokens comprising the top_p probability mass, so 0.1 means only the tokens in the top 10% probability mass are considered

user
string

A unique identifier representing your end-user, which can help the API to monitor and detect abuse

Response

Successful response

id
string
required
object
enum<string>
required
Available options:
text_completion
model
string
required
choices
object[]
required
usage
object
required
created
number
default:1722811396105
system_fingerprint
string
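Putting the response schema to use, a client would typically read the first choice's text, check its finish_reason, and track token usage. The values below are illustrative, shaped like the schema above:

```python
import json

# Example response body matching the schema above (all values illustrative).
raw = '''
{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "<model-id>",
  "choices": [
    {"text": " Hello there.", "index": 0, "logprobs": null, "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 4, "completion_tokens": 4, "total_tokens": 8},
  "created": 1722811396105,
  "system_fingerprint": "fp_abc"
}
'''
resp = json.loads(raw)
best = resp["choices"][0]
completion_text = best["text"]                      # the generated text
stopped_cleanly = best["finish_reason"] == "stop"   # as opposed to a length cutoff
total_tokens = resp["usage"]["total_tokens"]        # prompt + completion tokens
```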