Chat Completions
The latest service changes have not yet been reflected in this content. We will update the content as soon as possible. Please refer to the Korean version for information on the latest updates.
Available in Classic and VPC
Generate conversational sentences using the HyperCLOVA X model.
Request
The following describes the request format for the endpoint.
Method | URI |
---|---|
POST | /testapp/v1/chat-completions/{modelName} |
Request headers
The following describes the request headers.
Headers | Required | Description |
---|---|---|
X-NCP-CLOVASTUDIO-API-KEY | Required | API key issued when creating the test app or service app |
X-NCP-APIGW-API-KEY | Required | API Gateway key issued when creating the test app or service app |
X-NCP-CLOVASTUDIO-REQUEST-ID | Optional | Request ID for the request |
Content-Type | Required | Request data format (application/json) |
Accept | Conditional | Response data format (text/event-stream) |
- Response results are returned in JSON by default, but if you specify Accept as text/event-stream, the response results are returned as a stream.
- When using the response stream, use https://clovastudio.stream.ntruss.com/ as the API URL.
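For reference, the headers above could be assembled as in the following minimal Python sketch (the function name and the stream flag are illustrative assumptions; the values in braces are placeholders for the keys issued with your test app or service app):
def build_headers(stream: bool) -> dict:
    """Sketch: assemble the request headers described above."""
    return {
        "X-NCP-CLOVASTUDIO-API-KEY": "{CLOVA Studio API Key}",
        "X-NCP-APIGW-API-KEY": "{API Gateway API Key}",
        "X-NCP-CLOVASTUDIO-REQUEST-ID": "{Request ID}",  # optional
        "Content-Type": "application/json",
        # JSON is returned by default; "text/event-stream" requests a streamed
        # response, served from https://clovastudio.stream.ntruss.com/.
        "Accept": "text/event-stream" if stream else "application/json",
    }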
Request path parameters
The following describes the request path parameters.
Field | Type | Required | Description |
---|---|---|---|
modelName | String | Conditional | Model name |
taskId | String | Conditional | Training ID |
Request body
The following describes the request body.
Field | Type | Required | Description |
---|---|---|---|
messages | Array | Required | Conversation messages |
messages.role | Enum | Required | Role of conversation messages (system, user, assistant) |
messages.content | String | Required | Content of conversation messages |
temperature | Double | Optional | Degree of diversity for the generated tokens (higher values generate more diverse sentences) |
topK | Integer | Optional | Sample K high-probability candidates from the pool of generated token candidates; topK ≤ 128 (default: 0) |
topP | Double | Optional | Sample generated token candidates based on cumulative probability |
repeatPenalty | Double | Optional | Degree of penalty for generating the same token (the higher the setting, the less likely it is to generate the same result repeatedly); repeatPenalty ≤ 10.0 (default: 5.0) |
stopBefore | Array | Optional | Characters that abort token generation |
maxTokens | Integer | Optional | Maximum number of generated tokens; maxTokens ≤ 2048 (default: 100) |
includeAiFilters | Boolean | Optional | Whether to display the AI Filter results (degree to which the generated results fall into categories such as profanity, degradation/discrimination/hate, sexual harassment/obscenity, etc.) |
seed | Integer | Optional | Adjust the consistency level of results across repeated model calls; seed ≤ 4294967295 (the seed value of the results you want to generate consistently, or a seed value you specify) |
When entering some fields, check the following.
Request example
The following is a sample request.
curl --location --request POST 'https://clovastudio.stream.ntruss.com/testapp/v1/chat-completions/HCX-003' \
--header 'X-NCP-CLOVASTUDIO-API-KEY: {CLOVA Studio API Key}' \
--header 'X-NCP-APIGW-API-KEY: {API Gateway API Key}' \
--header 'X-NCP-CLOVASTUDIO-REQUEST-ID: {Request ID}' \
--header 'Content-Type: application/json' \
--header 'Accept: text/event-stream' \
--data '{
"topK" : 0,
"includeAiFilters" : true,
"maxTokens" : 256,
"temperature" : 0.5,
"messages" : [ {
"role" : "system",
"content" : "test"
}, {
"role" : "user",
"content" : "Let's test it."
}, {
"role" : "assistant",
"content" : "Understood. What would you like to test?"
} ],
"stopBefore" : [ ],
"repeatPenalty" : 5.0,
"topP" : 0.8
}'
Response
The following describes the response format.
Response headers
The following describes the response headers.
Headers | Required | Description |
---|---|---|
Content-Type | - | Response data format (application/json) |
Response body
The following describes the response body.
Field | Type | Required | Description |
---|---|---|---|
status | Object | - | Response status |
result | Object | - | Response result |
result.message | Object | - | Conversation messages |
result.message.role | Enum | - | Role of conversation messages |
result.message.content | String | - | Content of conversation messages |
result.stopReason | Enum | - | Reason for stopping results generation |
result.inputLength | Integer | - | Number of input tokens (billed tokens, including special tokens such as END OF TURN) |
result.outputLength | Integer | - | Number of response tokens |
result.seed | Integer | - | Input seed value (a random value is returned when 0 is entered or the field is omitted) |
result.aiFilter | Array | - | AI Filter result |
aiFilter
The following describes aiFilter.
Field | Type | Required | Description |
---|---|---|---|
groupName | String | - | AI Filter category |
name | String | - | AI Filter subcategory |
score | String | - | AI Filter score |
result | String | - | Whether AI Filter is operating properly |
AI Filter can analyze up to 500 characters. However, if the text being analyzed contains many unusual formats, emojis, or special characters, it may not be analyzed correctly.
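For reference, the following minimal Python sketch (the function name is an illustrative assumption, not part of an official SDK) shows how a client might collect the aiFilter entries described above:
def summarize_ai_filter(ai_filter):
    """Sketch: group aiFilter entries as {groupName: {name: score}}."""
    summary = {}
    for entry in ai_filter or []:
        # Each entry carries groupName, name, and score (all returned as strings).
        summary.setdefault(entry["groupName"], {})[entry["name"]] = entry["score"]
    return summary

# With the success response shown below, this would yield:
# {"curse": {"insult": "1", "discrimination": "0"}, "unsafeContents": {"sexualHarassment": "2"}}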
Response example
The following are sample responses.
Succeeded
The following is a sample response upon a successful call.
{
"status": {
"code": "20000",
"message": "OK"
},
"result": {
"message": {
"role": "assistant",
"content": "Phrase: Record what happened today, and prepare for tomorrow. A journal will make your life richer.\n"
},
"stopReason": "LENGTH",
"inputLength": 100,
"outputLength": 10,
"aiFilter": [
{
"groupName": "curse",
"name": "insult",
"score": "1"
},
{
"groupName": "curse",
"name": "discrimination",
"score": "0"
},
{
"groupName": "unsafeContents",
"name": "sexualHarassment",
"score": "2"
}
]
}
}
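For reference, the fields above could be read from a parsed response as in the following minimal Python sketch (the function name and error handling are illustrative assumptions, not part of an official SDK):
import json

def read_chat_completion(raw_response: str) -> str:
    """Sketch: extract the assistant message from a non-streaming response."""
    body = json.loads(raw_response)
    if body["status"]["code"] != "20000":  # "20000" / "OK" appears in the success sample above
        raise RuntimeError(body["status"]["message"])
    result = body["result"]
    # stopReason explains why generation ended (e.g., "LENGTH" in the sample above);
    # inputLength and outputLength report the billed token counts.
    print("stopReason:", result["stopReason"],
          "| tokens in/out:", result["inputLength"], "/", result["outputLength"])
    return result["message"]["content"]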
Failure
The following is a sample response upon a failed call.
Response stream
You can use token streaming to output the tokens as they are generated, one by one. The following describes the token streaming format.
Response headers
The following describes the response headers.
Headers | Required | Description |
---|---|---|
Accept | - | Response data format (text/event-stream) |
Response body
The following describes the response body.
StreamingChatCompletionsResultEvent
The following describes StreamingChatCompletionsResultEvent.
Field | Type | Required | Description |
---|---|---|---|
message | Object | - | Conversation messages |
message.role | Enum | - | Role of conversation messages |
message.content | String | - | Content of conversation messages |
stopReason | Enum | - | Reason for stopping results generation |
inputLength | Integer | - | Number of input tokens (billed tokens, including special tokens such as END OF TURN) |
outputLength | Integer | - | Number of response tokens |
aiFilter | Array | - | AI Filter result |
StreamingChatCompletionsTokenEvent
The following describes StreamingChatCompletionsTokenEvent.
Field | Type | Required | Description |
---|---|---|---|
id | String | - | Event ID that identifies the request |
message | Object | - | Conversation messages |
message.role | Enum | - | Role of conversation messages |
message.content | String | - | Content of conversation messages |
inputLength | Integer | - | Number of input tokens (billed tokens, including special tokens such as END OF TURN) |
stopReason | Enum | - | Reason for stopping results generation |
ErrorEvent
The following describes ErrorEvent.
Field | Type | Required | Description |
---|---|---|---|
status | Object | - | Response status |
SignalEvent
The following describes SignalEvent.
Field | Type | Required | Description |
---|---|---|---|
data | String | - | Signal data information to pass |
Response example
The following are sample responses.
Succeeded
The following is a sample response upon a successful call.
id: aabdfe-dfgwr-edf-hpqwd-f2asd-g
event: token
data: {"message": {"role": "assistant", "content": “H”}}
id: aabdfe-dfgwr-edf-hpqwd-f1asd-g
event: result
data: {"message": {"role": "assistant", "content": “i”}}
Failure
The following is a sample response upon a failed call.