createWorkflow

Available in VPC

Create a workflow.

Request

This section describes the request format. The method and URI are as follows:

Method URI
POST /api/v1/workflows

Request headers

For information about the headers common to all Data Flow APIs, see Data Flow request headers.
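
The signature header is derived from the request method, URI, and timestamp. The following is a minimal Python sketch of building the common headers, assuming the standard API Gateway signature v2 scheme (Base64-encoded HMAC-SHA256 over "{method} {uri}\n{timestamp}\n{access key}"); refer to the common request header documentation for the authoritative rules.

import base64
import hashlib
import hmac
import time

# Sketch only: assumes the API Gateway signature v2 scheme described in the
# common Data Flow request header documentation.
def make_headers(method, uri, access_key, secret_key):
    timestamp = str(int(time.time() * 1000))  # epoch time in milliseconds
    message = f"{method} {uri}\n{timestamp}\n{access_key}"
    signature = base64.b64encode(
        hmac.new(secret_key.encode("utf-8"), message.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    return {
        "x-ncp-apigw-timestamp": timestamp,
        "x-ncp-iam-access-key": access_key,
        "x-ncp-apigw-signature-v2": signature,
        "Content-Type": "application/json",
    }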

Request body

You can include the following data in the body of your request:

Field Type Required Description
name String Required Workflow name
  • Enter 3-100 characters using English letters, numbers, and the special characters "_" and "-".
  • It must start with an English letter or "-" (see the validation sketch after this table).
nodes Array Required Node information
isTemp Boolean Required Temporary storage status
  • true | false
    • true: Save temporarily.
    • false: Complete immediately (do not save temporarily).
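
The name constraints above can be checked client-side before sending the request. The following is a minimal validation sketch, assuming the rule is exactly as stated (3-100 characters of English letters, digits, "_" and "-", starting with an English letter or "-"); is_valid_workflow_name is a hypothetical helper, not part of the API.

import re

# Pattern mirrors the name rule stated in the table above.
NAME_PATTERN = re.compile(r"^[A-Za-z-][A-Za-z0-9_-]{2,99}$")

def is_valid_workflow_name(name):
    return NAME_PATTERN.fullmatch(name) is not None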

nodes

The following describes nodes.

Field Type Required Description
id String Required Node ID
  • triggerId | jobId
name String Required Node name
  • Trigger name | job name
    • Trigger name: See getTriggers (if type is TRIGGER).
    • Job name: See getJobs (if type is JOB).
type String Required Node type
  • TRIGGER | JOB
parentNodeIds Array Required Parent node ID list
  • In a job node object, enter the ID of its parent trigger node.
  • Leave the list empty for trigger node objects.
triggerRule String Optional Trigger node execution option information
  • RECURRING | ONE_TIME | NONE
    • RECURRING: Run on a cycle.
    • ONE_TIME: Run on a specific date.
    • NONE: no execution option
jobCondition Object Required Job execution option information
jobCondition.workerType String Optional Worker type
  • DATAFLOW (default)
jobCondition.numWorker Integer Optional Number of workers
  • 2 (default)
jobCondition.timeout Integer Optional Execution timeout (minutes)
  • Maximum time to wait for the result of a single job run
  • 0-1440 (default: 360)
jobCondition.nrn String Required NAVER Cloud Platform resource identification value (NRN) for the job
jobCondition.scriptPath String Required Job execution script storage path
jobCondition.logPath String Required Job execution history storage path

Request example

The request example is as follows:

curl --location --request POST 'https://dataflow.apigw.ntruss.com/api/v1/workflows' \
--header 'x-ncp-apigw-timestamp: {Timestamp}' \
--header 'x-ncp-iam-access-key: {Access Key}' \
--header 'x-ncp-apigw-signature-v2: {API Gateway Signature}' \
--data '{
  "name": "workflow001",
  "nodes": [
    {
      "id": "gqigvH******",
      "name": "job001",
      "type": "JOB",
      "parentNodeIds": [
        "d125za******"
      ],
      "jobCondition": {
        "workerType": "DEFAULT",
        "numWorker": 3,
        "timeout": 360,
        "nrn": "nrn:PUB:IAM::25****4:Role/****-a9d6-****-a85d-****",
        "scriptPath": "dataflow-33-****/scripts/",
        "logPath": "dataflow-33-****/sparkHistoryLogs/"
      }
    },
    {
      "id": "d125za******",
      "name": "trigger001",
      "type": "TRIGGER",
      "parentNodeIds": [],
      "triggerRule": "RECURRING"
    }
  ],
  "isTemp": false
}'
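
The same request can also be sent from Python. The following is a minimal sketch using the requests library and the make_headers helper sketched in the Request headers section; the access key, secret key, and all IDs are placeholders taken from the curl example above.

import requests

BASE_URL = "https://dataflow.apigw.ntruss.com"
URI = "/api/v1/workflows"

body = {
    "name": "workflow001",
    "nodes": [
        {
            "id": "gqigvH******",               # jobId (placeholder)
            "name": "job001",
            "type": "JOB",
            "parentNodeIds": ["d125za******"],  # ID of the trigger node below
            "jobCondition": {
                "workerType": "DEFAULT",
                "numWorker": 3,
                "timeout": 360,
                "nrn": "nrn:PUB:IAM::25****4:Role/****-a9d6-****-a85d-****",
                "scriptPath": "dataflow-33-****/scripts/",
                "logPath": "dataflow-33-****/sparkHistoryLogs/"
            }
        },
        {
            "id": "d125za******",               # triggerId (placeholder)
            "name": "trigger001",
            "type": "TRIGGER",
            "parentNodeIds": [],
            "triggerRule": "RECURRING"
        }
    ],
    "isTemp": False
}

response = requests.post(
    BASE_URL + URI,
    headers=make_headers("POST", URI, "{Access Key}", "{Secret Key}"),
    json=body,
)
response.raise_for_status()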

Response

This section describes the response format.

Response body

The response body includes the following data:

Field Type Required Description
workflowId String - Workflow ID
name String - Workflow name
nodes Array - Node information
createdDate String - Workflow creation date and time
  • ISO 8601 format (including UTC+9)
updatedDate String - Workflow modification date and time
  • ISO 8601 format (including UTC+9)
isTemp Boolean - Temporary storage status
  • true | false
    • true: Save temporarily.
    • false: Complete immediately (do not save temporarily).

nodes

The following describes nodes.

Field Type Required Description
id String - Node ID
  • Display triggerId or jobId according to type.
name String - Node name
  • Display the trigger name or job name according to type.
type String - Node type
  • TRIGGER | JOB
parentNodeIds Array - Parent node ID list
  • Display the ID of the trigger node in the job node object.
  • Display an empty value for trigger node objects.
triggerRule String - Trigger node execution option information
  • RECURRING | ONE_TIME | NONE
    • RECURRING: Run on a cycle.
    • ONE_TIME: Run on a specific date.
    • NONE: no execution option
isValid Boolean - Node deletion status (node availability)
  • true | false
    • true: Not deleted (available)
    • false: Deleted (unavailable)
jobCondition Object - Job execution option information
jobCondition.workerType String - Worker type
  • DATAFLOW (default)
jobCondition.numWorker Integer - Number of workers
  • 2 (default)
jobCondition.timeout Integer - Execution timeout (minutes)
  • Maximum time to wait for the result of a single job run
  • 0-1440 (default: 360)
jobCondition.nrn String - NAVER Cloud Platform resource identification value (NRN) for the job
jobCondition.scriptPath String - Job execution script storage path
jobCondition.logPath String - Job execution history storage path

Response status codes

For response status codes common to all Data Flow APIs, see Data Flow API response status codes.

Response example

The response example is as follows:

{
  "workflowId" : "3tfNLP******",
  "name" : "workflow001",
  "nodes" : [ {
    "id" : "gqigvH******",
    "name" : "job001",
    "type" : "JOB",
    "parentNodeIds" : [ "8bjF7E******" ],
    "isValid" : true,
    "jobCondition" : {
      "workerType" : "DEFAULT",
      "numWorker" : 3,
      "timeout" : 360,
      "nrn" : "nrn:PUB:IAM::*******:Role/********-0496-11f0-baf6-246e96591a38",
      "scriptPath" : "dataflow-2706412-****/scripts/",
      "logPath" : "dataflow-2706412-****/sparkHistoryLogs/"
    }
  }, {
    "id" : "8bjF7E******",
    "name" : "trigger001",
    "type" : "TRIGGER",
    "parentNodeIds" : [ ],
    "triggerRule" : "RECURRING",
    "isValid" : true
  } ],
  "createdDate" : "2025-03-24T14:46:24+09:00",
  "updatedDate" : "2025-03-24T14:46:24+09:00",
  "isTemp" : false
}
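
The response body can be inspected programmatically, for example to record the new workflow ID and to flag nodes whose underlying trigger or job has been deleted. The following is a minimal sketch, assuming response is the requests response object from the sketch in the request example.

workflow = response.json()
print(workflow["workflowId"], workflow["createdDate"])

# Flag any node whose referenced trigger or job has been deleted (isValid is false).
invalid_nodes = [node["name"] for node in workflow["nodes"] if not node.get("isValid", True)]
if invalid_nodes:
    print("Nodes referencing deleted resources:", invalid_nodes)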