createCloudHadoopInstance
Available in VPC
Creates a Cloud Hadoop instance.
Requests
Request Parameters
Parameter Name | Requirement Status | Type | Restrictions | Description |
---|---|---|---|---|
regionCode | No | String | - | Region code. Determines the Region where the Cloud Hadoop instance will be created. regionCode can be obtained via the getRegionList action. Default: the first Region in the getRegionList results |
vpcNo | Yes | String | - | VPC number. Determines the VPC where the Cloud Hadoop instance will be created. vpcNo can be obtained through the getVpcList action |
cloudHadoopImageProductCode | No | String | - | Cloud Hadoop image product code. Can be obtained as productCode from the getCloudHadoopImageProductList action. If omitted, the default value is used. Default: the latest Cloud Hadoop version |
masterNodeProductCode | Conditional | String | - | Cloud Hadoop master server product code. Determines the server specifications of the Cloud Hadoop instance to be created. Can be obtained by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode = 'MSTDT'. Default: the minimum specifications, selected by 1. memory, 2. CPU |
edgeNodeProductCode | Conditional | String | - | Cloud Hadoop edge server product code. Determines the server specifications of the Cloud Hadoop instance to be created. Can be obtained by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode = 'EDGND'. Default: the minimum specifications, selected by 1. memory, 2. CPU |
workerNodeProductCode | Conditional | String | - | Cloud Hadoop worker server product code. Determines the server specifications of the Cloud Hadoop instance to be created. Can be obtained by calling the getCloudHadoopProductList action with infraResourceDetailTypeCode = 'MSTDT'. Default: the minimum specifications, selected by 1. memory, 2. CPU |
cloudHadoopClusterName | Yes | String | Only English letters, Korean letters, numbers, and dashes (-) are allowed. Must start and end with a lowercase English letter or a number. Min: 3, Max: 15 | Cloud Hadoop cluster name |
cloudHadoopClusterTypeCode | Yes | String | - | Cloud Hadoop cluster type code. Can be obtained through the getCloudHadoopClusterTypeList action |
cloudHadoopAddOnCodeList | No | List&lt;String&gt; | - | Cloud Hadoop add-on list. Available only for Cloud Hadoop version 1.5 or higher. cloudHadoopAddOnCode can be obtained through the getCloudHadoopAddOnList action. Options: PRESTO \| HBASE \| IMPALA \| KUDU. Example: cloudHadoopAddOnCodeList.1=PRESTO&cloudHadoopAddOnCodeList.2=HBASE |
cloudHadoopAdminUserName | Yes | String | Only lowercase English letters, numbers, and dashes (-) are allowed. Must start and end with a lowercase English letter or a number. Min: 3, Max: 15 | Cluster administrator user name. Admin account required for accessing the Ambari management console |
cloudHadoopAdminUserPassword | Yes | String | Must include at least one capital English letter, one special character, and one number. Single quotation marks ('), double quotation marks ("), the KRW symbol (₩), slashes (/), ampersands (&), back quotes (`), and spaces cannot be included. Min: 8, Max: 20 | Cluster administrator password. Password for the admin account required to access the Ambari management console |
loginKeyName | Yes | String | - | Authentication key name. Can be obtained through the getCloudHadoopLoginKeyList action. Sets the SSH authentication key required for connecting directly to the nodes |
edgeNodeSubnetNo | Yes | String | - | Subnet number of the edge node. Selects the subnet in which the edge node will be located. Edge nodes can be located in private or public subnets. Can be obtained through the getSubnetList action |
masterNodeSubnetNo | Yes | String | - | Subnet number of the master node. Selects the subnet in which the master nodes will be located. Master nodes can be located in private or public subnets. Can be obtained through the getSubnetList action |
bucketName | Yes | String | - | Bucket name. A space for storing data in Object Storage. Can be obtained through the getCloudHadoopBucketList action |
workerNodeSubnetNo | Yes | String | - | Subnet number of the worker node. Selects the subnet in which the worker nodes will be located. Worker nodes can only be located in private subnets. Can be obtained through the getSubnetList action |
masterNodeDataStorageTypeCode | Yes | String | Options: SSD \| HDD | Data storage type code of the master nodes. The data storage type cannot be changed after installation. Default: SSD |
workerNodeDataStorageTypeCode | Yes | String | Options: SSD \| HDD | Data storage type code of the worker nodes. The data storage type cannot be changed after installation. Default: SSD |
masterNodeDataStorageSize | Yes | Integer | Min: 100, Max: 2000 | Data storage size of the master nodes. Can be entered in 10 GB increments between 100 GB and 2000 GB. 4000 GB and 6000 GB are also available |
workerNodeDataStorageSize | Yes | Integer | Min: 100, Max: 2000 | Data storage size of the worker nodes. Can be entered in 10 GB increments between 100 GB and 2000 GB. 4000 GB and 6000 GB are also available |
workerNodeCount | No | Integer | Min: 2, Max: 8 | Number of worker nodes. Between 2 and 8 worker nodes can be selected. Default: 2 |
useKdc | No | Boolean | - | Whether to use the Kerberos authentication configuration. Kerberos is used to configure secure Hadoop clusters. Default: false |
kdcRealm | Conditional | String | - | Realm information of the KDC. Can be entered only if useKdc is true; ignored if useKdc is false. Only realm-format domain rules are allowed |
kdcPassword | Conditional | String | - | KDC administrator account password. Can be entered only if useKdc is true; ignored if useKdc is false |
useBootstrapScript | No | Boolean | - | Whether to use a Cloud Hadoop bootstrap script. Default: false |
bootstrapScript | Conditional | String | Only English is supported. Spaces and special characters cannot be used. Up to 1024 bytes | Cloud Hadoop bootstrap script. Can be entered only if useBootstrapScript is true; ignored if useBootstrapScript is false. Scripts can only be run from the bucket connected to Cloud Hadoop. Enter the folder and file names, excluding the bucket name |
useDataCatalog | No | Boolean | - | Whether to use the Cloud Hadoop Data Catalog. Provides the Cloud Hadoop Hive metastore using a catalog from the Data Catalog service. Can be connected only if the catalog status in the Data Catalog service is normal, and only with Cloud Hadoop version 2.0 or higher. Default: false |
responseFormatType | No | String | Options: xml \| json | Format type of the response result. Default: xml |
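Before calling the action, the "Requirement Status: Yes" rows and the key value constraints above can be checked client-side. The following is a minimal, illustrative sketch (the validation helper is not part of the API, and it simplifies the cluster-name rule to ASCII, omitting the Korean-letter case):

```python
# Client-side sanity check for createCloudHadoopInstance parameters,
# based on the "Requirement Status: Yes" rows of the table above.
import re

REQUIRED = [
    "vpcNo", "cloudHadoopClusterName", "cloudHadoopClusterTypeCode",
    "cloudHadoopAdminUserName", "cloudHadoopAdminUserPassword",
    "loginKeyName", "edgeNodeSubnetNo", "masterNodeSubnetNo",
    "bucketName", "workerNodeSubnetNo",
    "masterNodeDataStorageTypeCode", "workerNodeDataStorageTypeCode",
    "masterNodeDataStorageSize", "workerNodeDataStorageSize",
]

def validate(params: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = [f"missing required parameter: {k}"
                for k in REQUIRED if k not in params]
    name = params.get("cloudHadoopClusterName", "")
    # 3-15 chars; must start and end with a lowercase letter or a digit.
    # (Simplified: the API also allows Korean letters.)
    if name and not re.fullmatch(r"[a-z0-9][a-zA-Z0-9-]{1,13}[a-z0-9]", name):
        problems.append("cloudHadoopClusterName violates naming rules")
    size = params.get("masterNodeDataStorageSize")
    # 100-2000 GB in 10 GB increments; 4000 GB and 6000 GB also allowed.
    if size is not None and size not in (4000, 6000) \
            and not (100 <= size <= 2000 and size % 10 == 0):
        problems.append("masterNodeDataStorageSize out of range")
    return problems
```

Running `validate` on a parameter dict before the HTTP call surfaces missing fields early instead of waiting for an API error response.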
Request headers
See the Cloud Hadoop (VPC) overview.
Request bodies
{
"regionCode": "string",
"vpcNo": "string",
"cloudHadoopClusterName": "string",
"cloudHadoopImageProductCode": "string",
"cloudHadoopClusterTypeCode": "string",
"cloudHadoopAddOnCodeList": [
"string"
],
"cloudHadoopAdminUserName": "string",
"cloudHadoopAdminUserPassword": "string",
"loginKeyName": "string",
"bucketName": "string",
"edgeNodeProductCode": "string",
"edgeNodeSubnetNo": "string",
"masterNodeProductCode": "string",
"masterNodeSubnetNo": "string",
"masterNodeDataStorageTypeCode": "string",
"masterNodeDataStorageSize": 0,
"workerNodeProductCode": "string",
"workerNodeCount": 0,
"workerNodeSubnetNo": "string",
"workerNodeDataStorageTypeCode": "string",
"workerNodeDataStorageSize": 0,
"useKdc": "string",
"kdcRealm": "string",
"kdcPassword": "string",
"useBootstrapScript": "string",
"bootstrapScript": "string",
"useDataCatalog": true,
"responseFormatType": "string"
}
Responses
Response bodies
Errors
See the Cloud Hadoop (VPC) overview.
Examples
Request Examples
GET {API_URL}/createCloudHadoopInstance
?regionCode=KR
&vpcNo=**65
&cloudHadoopImageProductCode=SW.VCHDP.LNX64.CNTOS.0708.HDP.15.B050
&masterNodeProductCode=SVR.VCHDP.MSTDT.HIMEM.C004.M032.NET.HDD.B050.G002
&edgeNodeProductCode=SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002
&workerNodeProductCode=SVR.VCHDP.MSTDT.HICPU.C008.M016.NET.HDD.B050.G002
&cloudHadoopClusterName=test***
&cloudHadoopClusterTypeCode=CORE_HADOOP_WITH_SPARK
&cloudHadoopAddOnCodeList.1=PRESTO
&cloudHadoopAdminUserName=test-***
&cloudHadoopAdminUserPassword=*******
&loginKeyName=key****
&bucketName=buc***
&edgeNodeSubnetNo=11**
&masterNodeSubnetNo=11**
&workerNodeSubnetNo=12**
&masterNodeDataStorageTypeCode=SSD
&workerNodeDataStorageTypeCode=SSD
&masterNodeDataStorageSize=100
&workerNodeDataStorageSize=100
&workerNodeCount=2
&useKdc=true
&kdcRealm=EX**LE.COM
&kdcPassword=*********
&useBootstrapScript=true
&useDataCatalog=true
&bootstrapScript=init-script/example.sh
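The request above is shown as a plain GET with query parameters; in practice it must also carry the authentication headers described in the Cloud Hadoop (VPC) overview. The sketch below follows the platform's common API Gateway signing convention (HMAC-SHA256 signature in `x-ncp-apigw-signature-v2`); the URI path and key values are placeholders for illustration, so verify the header names and endpoint against the overview before relying on them:

```python
# Illustrative request signing following the common NAVER Cloud
# API Gateway convention (verify against the service overview).
import base64
import hashlib
import hmac
import time

def make_signature(method: str, uri: str, timestamp: str,
                   access_key: str, secret_key: str) -> str:
    """HMAC-SHA256 over "{method} {uri}\n{timestamp}\n{access_key}",
    base64-encoded, for the x-ncp-apigw-signature-v2 header."""
    message = f"{method} {uri}\n{timestamp}\n{access_key}"
    digest = hmac.new(secret_key.encode("utf-8"),
                      message.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Placeholder path and credentials; the query string includes the
# parameters from the request example above.
timestamp = str(int(time.time() * 1000))  # milliseconds since epoch
uri = "/vhadoop/v2/createCloudHadoopInstance?regionCode=KR&vpcNo=1234"
headers = {
    "x-ncp-apigw-timestamp": timestamp,
    "x-ncp-iam-access-key": "YOUR_ACCESS_KEY",
    "x-ncp-apigw-signature-v2": make_signature(
        "GET", uri, timestamp, "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"),
}
```

The signed URI must match the request line byte-for-byte, including the query string, or the gateway will reject the signature.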
Response examples
The following shows a sample response in JSON format.
{
"createCloudHadoopInstanceResponse": {
"totalRows": 1,
"cloudHadoopInstanceList": [
{
"cloudHadoopInstanceNo": "2775778",
"cloudHadoopClusterName": "test123",
"cloudHadoopInstanceStatusName": "creating",
"cloudHadoopInstanceStatus": {
"code": "INIT",
"codeName": "CLOUD DATABASE(VPC) Init State"
},
"cloudHadoopInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Creat OP"
},
"cloudHadoopClusterType": {
"code": "CORE_HADOOP_WITH_SPARK",
"codeName": "Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.9), Ranger(2.0.0), HIVE(3.1.2), Hue(4.8.0), Zeppelin Notebook(0.10.1), Spark (2.4.8)"
},
"cloudHadoopVersion": {
"code": "HADOOP2.0",
"codeName": "Cloud Hadoop 2.0"
},
"cloudHadoopAddOnList": [],
"ambariServerHost": "e-001-test123-15iv-hd",
"clusterDirectAccessAccount": "sshuser",
"loginKey": "newkey",
"objectStorageBucket": "ffdd",
"cloudHadoopImageProductCode": "SW.VCHDP.LNX64.CNTOS.0708.HDP.20.B050",
"isHa": true,
"createDate": "2023-02-08T21:26:09+0900",
"accessControlGroupNoList": [],
"cloudHadoopServerInstanceList": [
{
"cloudHadoopServerName": "e-001-test123-15iv-hd",
"cloudHadoopServerRole": {
"code": "E",
"codeName": "Edge Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-001-test123-15it-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-002-test123-15iu-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-001-test123-15iw-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-002-test123-15ix-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"subnetNo": "5746",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
}
]
}
],
"requestId": "aa789745-34de-416c-a4c2-159482eaa9ed",
"returnCode": "0",
"returnMessage": "success"
}
}
The following shows a sample response in XML format.
<createCloudHadoopInstanceResponse>
<requestId>b8828eca-c3f8-4ddb-86dd-3355026b4b94</requestId>
<returnCode>0</returnCode>
<returnMessage>success</returnMessage>
<totalRows>1</totalRows>
<cloudHadoopInstanceList>
<cloudHadoopInstance>
<cloudHadoopInstanceNo>***4904</cloudHadoopInstanceNo>
<cloudHadoopClusterName>test***</cloudHadoopClusterName>
<cloudHadoopInstanceStatusName>creating</cloudHadoopInstanceStatusName>
<cloudHadoopInstanceStatus>
<code>INIT</code>
<codeName>CLOUD DATABASE(VPC) Init State</codeName>
</cloudHadoopInstanceStatus>
<cloudHadoopInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Creat OP</codeName>
</cloudHadoopInstanceOperation>
<cloudHadoopClusterType>
<code>CORE_HADOOP_WITH_SPARK</code>
<codeName>Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.6), Ranger(1.2.0), HIVE(3.1.0), Hue(4.3.0), Zepplin Notebook(0.8.0), Spark(2.4.8)</codeName>
</cloudHadoopClusterType>
<cloudHadoopVersion>
<code>HADOOP1.6</code>
<codeName>Cloud Hadoop 1.6</codeName>
</cloudHadoopVersion>
<ambariServerHost>e-001-dasfxc-mel-hd</ambariServerHost>
<clusterDirectAccessAccount>***user</clusterDirectAccessAccount>
<loginKey>****</loginKey>
<objectStorageBucket>****</objectStorageBucket>
<cloudHadoopImageProductCode>SW.VCHDP.LNX64.CNTOS.0708.HDP.16.B050</cloudHadoopImageProductCode>
<isHa>true</isHa>
<createDate>2021-11-09T18:55:29+0900</createDate>
<accessControlGroupNoList>
<accessControlGroupNo>11728</accessControlGroupNo>
</accessControlGroupNoList>
<cloudHadoopServerInstanceList>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>E</code>
<codeName>Edge Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<subnetNo>**23</subnetNo>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
</cloudHadoopServerInstanceList>
</cloudHadoopInstance>
</cloudHadoopInstanceList>
</createCloudHadoopInstanceResponse>
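A caller usually needs only a few fields from this payload, such as the instance number and status name, in order to poll until creation finishes. A small sketch that pulls them from a JSON response (field names taken from the examples above; the polling call itself, e.g. a getCloudHadoopInstanceList lookup, is not shown):

```python
# Extract the fields a caller typically polls on from a
# createCloudHadoopInstance JSON response.
import json

def extract_instance_summary(payload: str):
    """Return (instanceNo, statusName, serverRoleCodes)."""
    body = json.loads(payload)["createCloudHadoopInstanceResponse"]
    inst = body["cloudHadoopInstanceList"][0]
    roles = [server["cloudHadoopServerRole"]["code"]
             for server in inst["cloudHadoopServerInstanceList"]]
    return (inst["cloudHadoopInstanceNo"],
            inst["cloudHadoopInstanceStatusName"],
            roles)
```

For the JSON example above this yields instance number "2775778", status "creating", and role codes for one edge node (E), two master nodes (M), and two data nodes (D).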