Request media asset analysis


Available in VPC

Analyze a media asset to create an index, which is the result of the analysis.
Once media asset registration is complete, an analysis request starts the analysis process. For more information, see the workflow image below.

[Image: media asset analysis request workflow]

Request

This section describes the request format. The method and URI are as follows:

Method URI
POST /api/v1/workspaces/{workspace_name}/projects/{project_id}/assets/{asset_id}/analyze

Request headers

For information about the headers common to all Media Intelligence APIs, see Media Intelligence request headers.

Request path parameters

The following describes the request path parameters.

Field Type Required Description
workspace_name String Required Workspace name
project_id String Required Project ID
asset_id String Required Media asset ID

Request body

You can include the following data in the body of your request:

Field Type Required Description
sceneRange String Optional Length of automatically split scenes (video analysis only)
  • EXTRA_SHORT | SHORT | MEDIUM (default) | LONG
    • EXTRA_SHORT: Split into scenes of about 10-20 seconds each.
    • SHORT: Analyze detailed scenes in the video with second-by-second segmentation.
    • MEDIUM: Capture key changes while maintaining a proper flow between scenes.
    • LONG: Maintain a natural flow by enhancing connectivity between scenes.
analysisPersonCount Integer Required Number of people to detect during analysis
  • 0-10
  • If 0 is entered, people are not analyzed, which speeds up the analysis.
tagIdList Array<String> Optional Tag IDs of the people to detect during analysis
  • Array length: 1-30
sourceLanguage Enum Optional Language of the source media being analyzed
  • ENKO (default) | EN | KO | JA | ZH-CN | ZH-TW
    • ENKO: Korean + English
    • EN: English
    • KO: Korean
    • JA: Japanese
    • ZH-CN: Chinese (Simplified)
    • ZH-TW: Chinese (Traditional)
detectAudioEffects Boolean Optional Whether to detect audio effects (video analysis only)
  • true | false (default)
    • When enabled, detects "applause," "laughter," and "background music" items.

Request example

The request example is as follows:

curl --location --request POST 'https://mi.apigw.ntruss.com/api/v1/workspaces/my-workspace/projects/1234/assets/5678/analyze' \
--header 'x-ncp-apigw-timestamp: {Timestamp}' \
--header 'x-ncp-iam-access-key: {Access Key}' \
--header 'x-ncp-apigw-signature-v2: {API Gateway Signature}' \
--header 'Content-Type: application/json' \
--data '{
    "sceneRange": "LONG",
    "sourceLanguage": "ENKO",
    "detectAudioEffects": false,
    "analysisPersonCount": 3,
    "tagIdList": ["101", "203", "307"]
}'
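The x-ncp-apigw-signature-v2 header is an HMAC-SHA256 signature over the request method, URI, timestamp, and access key, Base64-encoded. The sketch below shows one way to compute it with openssl; the access key and secret key values are placeholders, and the exact string-to-sign format should be confirmed against the API Gateway signature documentation.

```shell
# Sketch: compute the x-ncp-apigw-signature-v2 value with openssl.
# YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders for illustration.
method="POST"
uri="/api/v1/workspaces/my-workspace/projects/1234/assets/5678/analyze"
timestamp="$(($(date +%s) * 1000))"   # Unix epoch time in milliseconds
access_key="YOUR_ACCESS_KEY"
secret_key="YOUR_SECRET_KEY"

# String to sign: "{method} {uri}\n{timestamp}\n{access_key}" (no trailing newline),
# HMAC-SHA256 with the secret key, then Base64-encoded.
signature="$(printf '%s %s\n%s\n%s' "$method" "$uri" "$timestamp" "$access_key" \
  | openssl dgst -sha256 -hmac "$secret_key" -binary | base64)"

echo "$signature"
```

The resulting value goes in the x-ncp-apigw-signature-v2 header, and the same timestamp must be sent in x-ncp-apigw-timestamp.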

Response

This section describes the response format.

Response body

The response body includes the following data:

Field Type Required Description
code String Required API processing result code
message String Required API processing result message
result Object Required Analysis result
result.id Integer Required Media asset ID
result.createdTime String Required Index creation date and time
  • ISO 8601 format
result.createdUserName String Required Name of the user who created the index

Response status codes

For information about the HTTP status codes common to all Media Intelligence APIs, see Media Intelligence response status codes.

Response example

The response example is as follows:

{
    "code": "0",
    "message": "success",
    "result": {
        "id": 1001,
        "createdTime": "2025-04-23T17:13:48",
        "createdUserName": "username"
    }
}
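In a shell workflow, the fields of this response can be extracted with jq (assumed to be installed). A minimal sketch using the example response above:

```shell
# Sketch: parse the analysis response with jq. In practice, $response would
# hold the output of the curl request shown earlier.
response='{"code":"0","message":"success","result":{"id":1001,"createdTime":"2025-04-23T17:13:48","createdUserName":"username"}}'

code="$(printf '%s' "$response" | jq -r '.code')"
asset_id="$(printf '%s' "$response" | jq -r '.result.id')"
created="$(printf '%s' "$response" | jq -r '.result.createdTime')"

echo "code=$code id=$asset_id createdTime=$created"
```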