createCloudHadoopInstance
Available in VPC
Create a Cloud Hadoop instance.
Request
Request parameters
Parameter name | Required | Type | Restrictions | Description |
---|---|---|---|---|
regionCode | No | String | - | Region code.<br>Determines the region in which the Cloud Hadoop instance is created.<br>Obtainable through the getRegionList action.<br>Default: the first region in the getRegionList results |
vpcNo | Yes | String | - | VPC number.<br>Determines the VPC in which the Cloud Hadoop instance is created.<br>Obtainable through the getVpcList action |
cloudHadoopImageProductCode | No | String | - | Cloud Hadoop image product code.<br>Obtainable through productCode of the getCloudHadoopImageProductList action.<br>Created with the default value if omitted.<br>Default: latest version of Cloud Hadoop |
masterNodeProductCode | Conditional | String | - | Cloud Hadoop master server product code.<br>Determines the server specifications of the master nodes.<br>Obtainable by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode set to "MSTDT".<br>Default: the minimum specification, selected by 1. memory, then 2. CPU |
edgeNodeProductCode | Conditional | String | - | Cloud Hadoop edge server product code.<br>Determines the server specifications of the edge node.<br>Obtainable by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode set to "EDGND".<br>Default: the minimum specification, selected by 1. memory, then 2. CPU |
workerNodeProductCode | Conditional | String | - | Cloud Hadoop worker server product code.<br>Determines the server specifications of the worker nodes.<br>Obtainable by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode set to "MSTDT".<br>Default: the minimum specification, selected by 1. memory, then 2. CPU |
cloudHadoopClusterName | Yes | String | Only English letters, numbers, hyphens (-), and Korean letters are allowed.<br>The first and last character must be a lowercase English letter or number.<br>Min: 3<br>Max: 15 | Cloud Hadoop cluster name |
cloudHadoopClusterTypeCode | Yes | String | - | Cloud Hadoop cluster type code.<br>Obtainable through the getCloudHadoopClusterTypeList action |
cloudHadoopAddOnCodeList | No | List<String> | - | Cloud Hadoop add-on list.<br>Available only on Cloud Hadoop version 1.5 or later.<br>Obtainable through the getCloudHadoopAddOnList action.<br>Options: PRESTO \| HBASE \| IMPALA \| KUDU<br>e.g., cloudHadoopAddOnCodeList.1=PRESTO&cloudHadoopAddOnCodeList.2=HBASE |
cloudHadoopAdminUserName | Yes | String | Only lowercase English letters, numbers, and hyphens (-) are allowed.<br>The first and last character must be a lowercase English letter or number.<br>Min: 3<br>Max: 15 | Cluster admin username.<br>Admin account required to access the Ambari management console |
cloudHadoopAdminUserPassword | Yes | String | Must contain at least one uppercase English letter, one special character, and one number.<br>Spaces and the following special characters are not allowed: single quotes ('), double quotes ("), KRW symbols (₩), slashes (/), ampersands (&), and backticks (`).<br>Min: 8<br>Max: 20 | Cluster admin password.<br>Password of the admin account required to access the Ambari management console |
loginKeyName | Yes | String | - | Authentication key name.<br>Obtainable through the getCloudHadoopLoginKeyList action.<br>Sets the SSH authentication key required to connect directly to the nodes |
edgeNodeSubnetNo | Yes | String | - | Subnet number of the edge node.<br>Selects the subnet in which the edge node is located.<br>Edge nodes can be located in private or public subnets.<br>Obtainable through the getSubnetList action |
masterNodeSubnetNo | Yes | String | - | Subnet number of the master nodes.<br>Selects the subnet in which the master nodes are located.<br>Master nodes can be located in private or public subnets.<br>Obtainable through the getSubnetList action |
bucketName | Yes | String | - | Bucket name.<br>Object Storage bucket in which data is stored.<br>Obtainable through the getCloudHadoopBucketList action |
workerNodeSubnetNo | Yes | String | - | Subnet number of the worker nodes.<br>Selects the subnet in which the worker nodes are located.<br>Worker nodes can only be located in private subnets.<br>Obtainable through the getSubnetList action |
masterNodeDataStorageTypeCode | Yes | String | Options: SSD \| HDD | Master node's data storage type code.<br>The data storage type can't be changed after installation.<br>Default: SSD |
workerNodeDataStorageTypeCode | Yes | String | Options: SSD \| HDD | Worker node's data storage type code.<br>The data storage type can't be changed after installation.<br>Default: SSD |
masterNodeDataStorageSize | Yes | Integer | Min: 100<br>Max: 2000 | Master node's data storage size.<br>Can be entered in 10 GB increments from 100 GB to 2000 GB.<br>4000 GB and 6000 GB can also be used |
workerNodeDataStorageSize | Yes | Integer | Min: 100<br>Max: 2000 | Worker node's data storage size.<br>Can be entered in 10 GB increments from 100 GB to 2000 GB.<br>4000 GB and 6000 GB can also be used |
workerNodeCount | No | Integer | Min: 2<br>Max: 8 | Number of worker nodes.<br>Default: 2 |
useKdc | No | Boolean | - | Whether to enable Kerberos authentication.<br>Configures a secure Hadoop cluster using Kerberos.<br>Default: false |
kdcRealm | Conditional | String | Must conform to the domain rules of the realm format | KDC's realm information.<br>Enter only if useKdc is true; ignored if useKdc is false |
kdcPassword | Conditional | String | - | KDC admin account's password.<br>Enter only if useKdc is true; ignored if useKdc is false |
useBootstrapScript | No | Boolean | - | Whether to use Cloud Hadoop bootstrap scripts.<br>Default: false |
bootstrapScript | Conditional | String | Only English is supported.<br>Spaces and special characters are not allowed.<br>Max length: 1024 bytes | Cloud Hadoop bootstrap script.<br>Enter only if useBootstrapScript is true; ignored if useBootstrapScript is false.<br>Scripts can only run with buckets integrated with Cloud Hadoop.<br>Enter the folder and file name, excluding the bucket name |
useDataCatalog | No | Boolean | - | Whether to use Cloud Hadoop Data Catalog.<br>Cloud Hadoop Hive metastores are served using catalogs from the Data Catalog service.<br>Integration is possible only if the catalog status in the Data Catalog service is normal.<br>Supported only on Cloud Hadoop version 2.0 or later.<br>Default: false |
engineVersionCode | Conditional | String | - | Engine version code.<br>Check through the getCloudHadoopImageProductList action.<br>Enter only when creating a Rocky Linux-based cluster; ignored for CentOS-based clusters.<br>If omitted, the latest currently available version is used |
responseFormatType | No | String | Options: xml \| json | Response result's format type.<br>Default: xml |
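List parameters use a dot-indexed query-string convention (`cloudHadoopAddOnCodeList.1=PRESTO&cloudHadoopAddOnCodeList.2=HBASE`), and Boolean parameters are serialized as lowercase `true`/`false`. A minimal Python sketch of how a caller might flatten a parameter dictionary into this form; all parameter values below are illustrative placeholders, not real resource identifiers:

```python
from urllib.parse import urlencode

def build_query(params: dict) -> str:
    """Flatten request parameters into the API's query-string form.

    Lists use dot-indexed keys (key.1, key.2, ...); booleans become
    lowercase true/false; everything else is stringified as-is.
    """
    flat = {}
    for key, value in params.items():
        if isinstance(value, list):
            for i, item in enumerate(value, start=1):
                flat[f"{key}.{i}"] = item
        elif isinstance(value, bool):
            flat[key] = "true" if value else "false"
        else:
            flat[key] = value
    return urlencode(flat)

# Illustrative values only -- vpcNo, subnet numbers, and product codes
# must come from the corresponding get* actions described in the table.
query = build_query({
    "regionCode": "KR",
    "vpcNo": "1665",
    "cloudHadoopClusterName": "test-cluster",
    "cloudHadoopClusterTypeCode": "CORE_HADOOP_WITH_SPARK",
    "cloudHadoopAddOnCodeList": ["PRESTO", "HBASE"],
    "useKdc": False,
    "workerNodeCount": 2,
})
```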
Request headers
See Cloud Hadoop (VPC) overview.
Request body
{
"regionCode": "string",
"vpcNo": "string",
"cloudHadoopClusterName": "string",
"cloudHadoopImageProductCode": "string",
"cloudHadoopClusterTypeCode": "string",
"cloudHadoopAddOnCodeList": [
"string"
],
"cloudHadoopAdminUserName": "string",
"cloudHadoopAdminUserPassword": "string",
"loginKeyName": "string",
"bucketName": "string",
"edgeNodeProductCode": "string",
"edgeNodeSubnetNo": "string",
"masterNodeProductCode": "string",
"masterNodeSubnetNo": "string",
"masterNodeDataStorageTypeCode": "string",
"masterNodeDataStorageSize": 0,
"workerNodeProductCode": "string",
"workerNodeCount": 0,
"workerNodeSubnetNo": "string",
"workerNodeDataStorageTypeCode": "string",
"workerNodeDataStorageSize": 0,
"useKdc": "string",
"kdcRealm": "string",
"kdcPassword": "string",
"useBootstrapScript": "string",
"bootstrapScript": "string",
"useDataCatalog": true,
"responseFormatType": "string"
}
Response
Response body
Errors
See Cloud Hadoop (VPC) overview.
Examples
Request example
GET {API_URL}/createCloudHadoopInstance
?regionCode=KR
&vpcNo=**65
&cloudHadoopImageProductCode=SW.VCHDP.LNX64.CNTOS.0708.HDP.15.B050
&masterNodeProductCode=SVR.VCHDP.MSTDT.HIMEM.C004.M032.NET.HDD.B050.G002
&edgeNodeProductCode=SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002
&workerNodeProductCode=SVR.VCHDP.MSTDT.HICPU.C008.M016.NET.HDD.B050.G002
&cloudHadoopClusterName=test***
&cloudHadoopClusterTypeCode=CORE_HADOOP_WITH_SPARK
&cloudHadoopAddOnCodeList.1=PRESTO
&cloudHadoopAdminUserName=test-***
&cloudHadoopAdminUserPassword=*******
&loginKeyName=key****
&bucketName=buc***
&edgeNodeSubnetNo=11**
&masterNodeSubnetNo=11**
&workerNodeSubnetNo=12**
&masterNodeDataStorageTypeCode=SSD
&workerNodeDataStorageTypeCode=SSD
&masterNodeDataStorageSize=100
&workerNodeDataStorageSize=100
&workerNodeCount=2
&useKdc=true
&kdcRealm=EX**LE.COM
&kdcPassword=*********
&useBootstrapScript=true
&useDataCatalog=true
&bootstrapScript=init-script/example.sh
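The request headers required for this call are defined in the Cloud Hadoop (VPC) overview. As a hedged sketch, assuming the common NCP API Gateway signing scheme (HMAC-SHA256 over `"{method} {uri}\n{timestamp}\n{access_key}"`, Base64-encoded into `x-ncp-apigw-signature-v2`) — the header names, signing format, and the `/vhadoop/v2` path below are taken from NCP's general API conventions rather than this page, so confirm them against the overview:

```python
import base64
import hashlib
import hmac
import time

def make_signature(method: str, uri: str, timestamp: str,
                   access_key: str, secret_key: str) -> str:
    """Compute an API Gateway signature (assumed v2 scheme).

    The signed message is "{method} {uri}\n{timestamp}\n{access_key}",
    where uri includes the path and query string.
    """
    message = f"{method} {uri}\n{timestamp}\n{access_key}"
    digest = hmac.new(secret_key.encode("utf-8"),
                      message.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Placeholder credentials -- substitute your own API access key pair.
timestamp = str(int(time.time() * 1000))  # milliseconds since epoch
uri = "/vhadoop/v2/createCloudHadoopInstance?regionCode=KR"  # assumed path
headers = {
    "x-ncp-apigw-timestamp": timestamp,
    "x-ncp-iam-access-key": "ACCESS_KEY",
    "x-ncp-apigw-signature-v2": make_signature(
        "GET", uri, timestamp, "ACCESS_KEY", "SECRET_KEY"),
}
```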
Response example (JSON)
{
"createCloudHadoopInstanceResponse": {
"totalRows": 1,
"cloudHadoopInstanceList": [
{
"cloudHadoopInstanceNo": "2775778",
"cloudHadoopClusterName": "test123",
"cloudHadoopInstanceStatusName": "creating",
"cloudHadoopInstanceStatus": {
"code": "INIT",
"codeName": "CLOUD DATABASE(VPC) Init State"
},
"cloudHadoopInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Creat OP"
},
"cloudHadoopClusterType": {
"code": "CORE_HADOOP_WITH_SPARK",
"codeName": "Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.9), Ranger(2.0.0), HIVE(3.1.2), Hue(4.8.0), Zeppelin Notebook(0.10.1), Spark (2.4.8)"
},
"cloudHadoopVersion": {
"code": "HADOOP2.0",
"codeName": "Cloud Hadoop 2.0"
},
"cloudHadoopAddOnList": [],
"ambariServerHost": "e-001-test123-15iv-hd",
"clusterDirectAccessAccount": "sshuser",
"loginKey": "newkey",
"objectStorageBucket": "ffdd",
"cloudHadoopImageProductCode": "SW.VCHDP.LNX64.CNTOS.0708.HDP.20.B050",
"isHa": true,
"createDate": "2023-02-08T21:26:09+0900",
"accessControlGroupNoList": [],
"cloudHadoopServerInstanceList": [
{
"cloudHadoopServerName": "e-001-test123-15iv-hd",
"cloudHadoopServerRole": {
"code": "E",
"codeName": "Edge Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-001-test123-15it-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-002-test123-15iu-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-001-test123-15iw-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-002-test123-15ix-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
}
]
}
],
"requestId": "aa789745-34de-416c-a4c2-159482eaa9ed",
"returnCode": "0",
"returnMessage": "success"
}
}
Response example (XML)
<createCloudHadoopInstanceResponse>
<requestId>b8828eca-c3f8-4ddb-86dd-3355026b4b94</requestId>
<returnCode>0</returnCode>
<returnMessage>success</returnMessage>
<totalRows>1</totalRows>
<cloudHadoopInstanceList>
<cloudHadoopInstance>
<cloudHadoopInstanceNo>***4904</cloudHadoopInstanceNo>
<cloudHadoopClusterName>test***</cloudHadoopClusterName>
<cloudHadoopInstanceStatusName>creating</cloudHadoopInstanceStatusName>
<cloudHadoopInstanceStatus>
<code>INIT</code>
<codeName>CLOUD DATABASE(VPC) Init State</codeName>
</cloudHadoopInstanceStatus>
<cloudHadoopInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Creat OP</codeName>
</cloudHadoopInstanceOperation>
<cloudHadoopClusterType>
<code>CORE_HADOOP_WITH_SPARK</code>
<codeName>Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.6), Ranger(1.2.0), HIVE(3.1.0), Hue(4.3.0), Zepplin Notebook(0.8.0), Spark(2.4.8)</codeName>
</cloudHadoopClusterType>
<cloudHadoopVersion>
<code>HADOOP1.6</code>
<codeName>Cloud Hadoop 1.6</codeName>
</cloudHadoopVersion>
<ambariServerHost>e-001-dasfxc-mel-hd</ambariServerHost>
<clusterDirectAccessAccount>***user</clusterDirectAccessAccount>
<loginKey>****</loginKey>
<objectStorageBucket>****</objectStorageBucket>
<cloudHadoopImageProductCode>SW.VCHDP.LNX64.CNTOS.0708.HDP.16.B050</cloudHadoopImageProductCode>
<isHa>true</isHa>
<createDate>2021-11-09T18:55:29+0900</createDate>
<accessControlGroupNoList>
<accessControlGroupNo>11728</accessControlGroupNo>
</accessControlGroupNoList>
<cloudHadoopServerInstanceList>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>E</code>
<codeName>Edge Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
</cloudHadoopServerInstanceList>
</cloudHadoopInstance>
</cloudHadoopInstanceList>
</createCloudHadoopInstanceResponse>
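After a successful call, a caller typically records `cloudHadoopInstanceNo` and polls until the instance leaves the `INIT`/`PEND` states. A small sketch extracting the commonly needed fields from the JSON response shape shown above (field names are taken from the example; the inline sample below is abridged):

```python
def summarize_create_response(body: dict) -> dict:
    """Pull the fields a caller usually needs from the
    createCloudHadoopInstance JSON response."""
    resp = body["createCloudHadoopInstanceResponse"]
    instance = resp["cloudHadoopInstanceList"][0]
    return {
        "requestId": resp["requestId"],
        "instanceNo": instance["cloudHadoopInstanceNo"],
        "status": instance["cloudHadoopInstanceStatus"]["code"],
        "serverCount": len(instance["cloudHadoopServerInstanceList"]),
    }

# Abridged version of the JSON response example above.
sample = {
    "createCloudHadoopInstanceResponse": {
        "requestId": "aa789745-34de-416c-a4c2-159482eaa9ed",
        "returnCode": "0",
        "cloudHadoopInstanceList": [{
            "cloudHadoopInstanceNo": "2775778",
            "cloudHadoopInstanceStatus": {"code": "INIT"},
            # One edge node, two masters, two data nodes in the example.
            "cloudHadoopServerInstanceList": [{}, {}, {}, {}, {}],
        }],
    }
}
info = summarize_create_response(sample)
```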