updateJobExecuteConfig


Available in VPC

Edit job execution options.

Request

This section describes the request format. The method and URI are as follows:

Method URI
PUT /api/v1/jobs/{jobId}/executions

Request headers

For information about the headers common to all Data Flow APIs, see Data Flow request headers.

Request path parameters

You can use the following path parameters with your request:

Field Type Required Description
jobId String Required Job ID

Request body

You can include the following data in the body of your request:

Field Type Required Description
jobCondition Object Required Job execution option information
jobCondition.workerType String Optional Worker type
  • DEFAULT (default)
jobCondition.numWorker Integer Optional Number of workers
  • 2 (default)
jobCondition.timeout Integer Optional Execution timeout (minutes)
  • Maximum time to wait for the result of a single job execution
  • 0-1440 (default: 360)
jobCondition.nrn String Required NAVER Cloud Platform resource identification value for job
jobCondition.scriptPath String Required Job execution script storage path
jobCondition.logPath String Required Job execution history storage path
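The table above can be enforced client-side before the request is sent. The sketch below is illustrative, not part of the API: it fills in the documented defaults for the optional fields and rejects a `jobCondition` that is missing a required field or whose `timeout` falls outside the documented 0-1440 range. The function name and error format are assumptions.

```python
# Documented defaults for the optional jobCondition fields.
DEFAULTS = {"workerType": "DEFAULT", "numWorker": 2, "timeout": 360}

def normalize_job_condition(cond):
    """Fill in documented defaults and validate required fields/ranges.

    Raises ValueError listing every violation found.
    """
    merged = {**DEFAULTS, **cond}
    errors = []
    # nrn, scriptPath, and logPath are marked Required in the table.
    for field in ("nrn", "scriptPath", "logPath"):
        if not merged.get(field):
            errors.append(f"{field} is required")
    # timeout is documented as 0-1440 minutes.
    if not 0 <= merged["timeout"] <= 1440:
        errors.append("timeout must be between 0 and 1440 minutes")
    if errors:
        raise ValueError("; ".join(errors))
    return merged
```

A body that passes this check can then be wrapped as `{"jobCondition": merged}` and serialized to JSON.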

Request example

The request example is as follows:

curl --location --request PUT 'https://dataflow.apigw.ntruss.com/api/v1/jobs/gqigvH******/executions' \
--header 'x-ncp-apigw-timestamp: {Timestamp}' \
--header 'x-ncp-iam-access-key: {Access Key}' \
--header 'x-ncp-apigw-signature-v2: {API Gateway Signature}' \
--header 'Content-Type: application/json' \
--data '{
    "jobCondition": {
        "workerType": "DEFAULT",
        "numWorker": 2,
        "timeout": 360,
        "nrn": "nrn:PUB:IAM::*******:Role/********-0496-11f0-baf6-246e96591a38",
        "scriptPath": "dataflow-2706412-****/scripts/",
        "logPath": "dataflow-2706412-****/sparkHistoryLogs/"
    }
}'
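The curl command above can be reproduced programmatically. The sketch below builds the URL, authentication headers, and body; it assumes the standard NAVER Cloud Platform API Gateway signature v2 scheme (Base64-encoded HMAC-SHA256 over `"{method} {uri}\n{timestamp}\n{accessKey}"`) as described in the common request headers documentation. Key values are placeholders; send the result with any HTTP client.

```python
import base64
import hashlib
import hmac
import json
import time

def make_signature(method, uri, timestamp, access_key, secret_key):
    # API Gateway signature v2: HMAC-SHA256 over
    # "{method} {uri}\n{timestamp}\n{access_key}", Base64-encoded.
    message = f"{method} {uri}\n{timestamp}\n{access_key}"
    digest = hmac.new(secret_key.encode("utf-8"),
                      message.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

def build_request(job_id, job_condition, access_key, secret_key):
    """Return (url, headers, body) for the updateJobExecuteConfig call."""
    uri = f"/api/v1/jobs/{job_id}/executions"
    timestamp = str(int(time.time() * 1000))  # milliseconds since epoch
    headers = {
        "Content-Type": "application/json",
        "x-ncp-apigw-timestamp": timestamp,
        "x-ncp-iam-access-key": access_key,
        "x-ncp-apigw-signature-v2": make_signature(
            "PUT", uri, timestamp, access_key, secret_key),
    }
    body = json.dumps({"jobCondition": job_condition})
    return "https://dataflow.apigw.ntruss.com" + uri, headers, body
```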

Response

This section describes the response format.

Response body

The response body includes the following data:

Field Type Required Description
jobId String - Job ID
name String - Job name
description String - Job description
type String - Job type
  • DATAFLOW (default)
status String - Job status
  • RUNNABLE | RUNNING | DELETED | UNKNOWN | DRAFT | STOPPED | EDITING
    • RUNNABLE: job execution available
    • RUNNING: job execution in progress
    • DELETED: job being deleted or deletion completed
    • UNKNOWN: other
    • DRAFT: job edit incomplete
    • STOPPED: job being stopped
    • EDITING: job being edited (validation required)
nodes Array - Job node information
runCondition Object - Job execution option
runCondition.workerType String - Worker type
  • DEFAULT (default)
runCondition.numWorker Integer - Number of workers
  • 2 (default)
runCondition.timeout Integer - Execution timeout (minutes)
  • Maximum time to wait for the result of a single job execution
  • 0-1440 (default: 360)
runCondition.nrn String - NAVER Cloud Platform resource identification value for job
runCondition.scriptPath String - Job execution script storage path
runCondition.logPath String - Job execution history storage path
createdDate String - Job creation date and time
  • ISO 8601 format (including UTC+9)
updatedDate String - Job modification date and time
  • ISO 8601 format (including UTC+9)
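The `status` values above could be acted on client-side as follows. This is a minimal sketch: the groupings, labels, and function name are illustrative assumptions, not part of the API; only the status strings themselves come from the table.

```python
# Groupings restate the documented status values.
EXECUTABLE = {"RUNNABLE"}              # job execution available
IN_PROGRESS = {"RUNNING", "STOPPED"}   # execution or stop in flight
NEEDS_EDITING = {"DRAFT", "EDITING"}   # job not yet valid to run
GONE = {"DELETED"}                     # being deleted or already deleted

def describe_status(status):
    """Map a documented job status to a coarse client-side label."""
    if status in EXECUTABLE:
        return "ready"
    if status in IN_PROGRESS:
        return "busy"
    if status in NEEDS_EDITING:
        return "incomplete"
    if status in GONE:
        return "deleted"
    return "unknown"  # covers UNKNOWN and anything undocumented
```

A caller might, for example, only trigger a new execution when `describe_status(response["status"]) == "ready"`.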

Response status codes

For response status codes common to all Data Flow APIs, see Data Flow API response status codes.

Response example

The response example is as follows:

{
    "jobId": "gqigvH******",
    "name": "job001",
    "description": "",
    "type": "DATAFLOW",
    "status": "RUNNABLE",
    "nodes": [
        {
            "type": "SOURCE_OBS",
            "id": 169777*******,
            "name": "Object Storage",
            "parentNodeIds": [],
            "regionNo": "1",
            "bucketName": "aitems",
            "prefix": "dataflow1",
            "dataType": "CSV",
            "fieldList": [
                {
                    "name": "id",
                    "type": "string",
                    "properties": []
                },
                {
                    "name": "name",
                    "type": "string",
                    "properties": []
                },
                {
                    "name": "description",
                    "type": "string",
                    "properties": []
                }
            ]
        },
        {
            "type": "TRANSFORM_FILTER",
            "id": 169777*******,
            "name": "Filters",
            "parentNodeIds": [
                169777*******6
            ],
            "filterType": "AND",
            "filterConditionList": [
                {
                    "name": "name",
                    "operator": "EQ",
                    "value": "A"
                }
            ]
        },
        {
            "type": "TARGET_OBS",
            "id": 169777*******,
            "name": "Object Storage",
            "parentNodeIds": [
                169777*******
            ],
            "regionNo": "1",
            "bucketName": "aitems",
            "prefix": "dataflow1",
            "dataType": "CSV",
            "updateType": "OVERWRITE",
            "fieldList": [
                {
                    "name": "id",
                    "type": "string",
                    "properties": []
                },
                {
                    "name": "name",
                    "type": "string",
                    "properties": []
                },
                {
                    "name": "description",
                    "type": "string",
                    "properties": []
                }
            ]
        }
    ],
    "runCondition": {
        "workerType": "DEFAULT",
        "numWorker": 2,
        "timeout": 2880,
        "nrn": "",
        "scriptPath": "dataflow-****/scripts/",
        "logPath": "dataflow-****/sparkHistoryLogs/"
    },
    "createdDate": "2025-03-19T15:03:42+09:00",
    "updatedDate": "2025-03-20T13:07:34+09:00"
}