Hi,
I'm using BDE 1.1 on vSphere 5.5 and have deployed BDE VCO Plugin version 0.5.0.70. When I try running the Create Basic Hadoop Cluster workflow from VCO, it fails with the error below. Can someone help?
Content as string: {"code":"BDD.BAD_REST_CALL","message":"Failed REST API call: Could not read JSON: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@7e542721; line: 1, column: 578] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"]); nested exception is org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@7e542721; line: 1, column: 578] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"])"}
Thanks and Regards,
Nagesh
Hi Nagesh,
The BDE VCO Plugin works with BDE 1.0 but not with BDE 1.1. Our solution engineer is working on making it compatible with BDE 1.1.
Jesse
Thanks Jesse, I suspected the same. Any idea when the updated VCO plug-in will be available? Is there a workaround until then?
Regards,
Nagesh
I will contact the plugin developer and let you know.
That would be a great help! Thanks.
Hi Jesse,
Can you please let me know the network argument names to pass with BDE 1.1? I mean the equivalent of "networkName" in BDE 1.0.
Hi dvnagesh,
Assume your rest msg for BDE 1.0 is: "networkName": "nw1"
The equivalent in BDE 1.1 should be: "networkConfig": { "MGT_NETWORK": ["nw1"] }
-bxd
Hi dvnagesh,
Our engineer working on the solution has made a fix, but it's not fully tested yet. Could you share your contact details or send a mail to bde-info@vmware.com? I will get in touch with you to provide the package.
Thanks
-bo
Hi bxd,
Thanks. This is working. Can you also please let me know the REST message arguments that specify a particular Hadoop distribution and the password for the nodes?
Regards,
Nagesh
Hi Nagesh,
bxd is on PTO today. Could you contact dongbobo, or provide your contact details / send an email to bde-info@vmware.com as mentioned? The engineer responsible for the vCAC solution will contact you about the updated package.
Thanks,
Gavin
1) The Distro arguments should be:
######################
"distro": "apache",
"distroVendor": "Apache",
"distroVersion": "1.2.1"
######################
You can check /opt/serengeti/www/distros/manifest on the BDE management server for the exact values of these three keys.
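If you want to pull those three values out programmatically rather than reading the manifest by eye, something like the sketch below works — assuming the manifest is a JSON list of distro objects with `name`, `vendor`, and `version` keys. The sample data and the `distro_args` helper are illustrative only, not taken from a real server or from any BDE tooling:

```python
import json

# Illustrative sample of what /opt/serengeti/www/distros/manifest may contain
# (assumption: a JSON list of distro objects with name/vendor/version keys).
manifest_text = '''
[
  {"name": "apache",    "vendor": "Apache", "version": "1.2.1"},
  {"name": "PivotalHD", "vendor": "PHD",    "version": "1.0.1"}
]
'''

def distro_args(manifest_json, distro_name):
    """Return the three cluster-spec fields for the named distro, or None."""
    for d in json.loads(manifest_json):
        if d["name"] == distro_name:
            return {"distro": d["name"],
                    "distroVendor": d["vendor"],
                    "distroVersion": d["version"]}
    return None

print(distro_args(manifest_text, "apache"))
```

On a real management server you would read the file instead of the embedded sample string.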
2) The password for the nodes is: "password": "yourpasswd"
Please feel free to let us know if you have any questions. Thanks!
-bxd
Thanks. This is working.
Hi,
I am getting the error message below. Where do I make this change?
The equivalent in BDE 1.1 should be: "networkConfig": { "MGT_NETWORK": ["nw1"] }
---------
Error: Operation failed, the status code is: 400. The Reason is: {"code":"BDD.BAD_REST_CALL","message":"Failed REST API call: Could not read JSON: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@6a8b7e01; line: 1, column: 619] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"]); nested exception is org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@6a8b7e01; line: 1, column: 619] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"])"} (Workflow:Execute Create Cluster Operation / Execute Operation (item2)#39)
--------------
Hi mapdeep,
In BDE 1.0.0, when creating a cluster, the JSON file you post to the web service probably includes this item:
"networkName": "nw1"
But in BDE 1.1.0 the web service API has changed, and the field "networkName" is no longer supported.
Instead, the entry should be:
"networkConfig": { "MGT_NETWORK": ["nw1"] }
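If you already hold the spec as a dict before serializing it, the rename can be applied mechanically. This is a minimal sketch; the `upgrade_network_field` helper is my own name, not part of any BDE tooling:

```python
def upgrade_network_field(spec):
    """Rewrite a BDE 1.0 "networkName" entry into the BDE 1.1 "networkConfig" form."""
    if "networkName" in spec:
        # MGT_NETWORK takes a list of network names in BDE 1.1.
        spec["networkConfig"] = {"MGT_NETWORK": [spec.pop("networkName")]}
    return spec

spec = {"name": "test1", "networkName": "nw1"}
print(upgrade_network_field(spec))
# {'name': 'test1', 'networkConfig': {'MGT_NETWORK': ['nw1']}}
```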
I am not sure I understand your problem exactly; please feel free to let us know if you have any questions. Thanks!
-bxd
Hi bian,
Thanks for your message.
I wanted to know the exact file in Serengeti where we can change the "networkName" config information.
--
Deepak.
Hi Deepak,
This needs to be changed in the JSON spec, not on the Serengeti server.
If you are invoking the cluster create operation from VCO running the BDE 1.0 plugin against a BDE 1.1 server, then this spec needs to be changed in the VCO workflow.
Regards,
Nagesh
Hi Deepak,
Here is an example:
##############
To create a cluster through the REST API, rather than the BDE CLI/GUI, you can run a command like:
curl -i -H "Content-type:application/json" -3 -b cookies.txt -X POST -d "@clusterCreate.json" https://<serengeti-server-ip>:8443/serengeti/api/clusters --insecure --digest
The file containing the data sent to the server (clusterCreate.json in the example above) would look like this:
{
"name":"test1",
"externalHDFS":null,
"distro":"apache",
"distroVendor":"Apache",
"networkConfig": {"MGT_NETWORK": ["nw1"]},
"topologyPolicy":"NONE",
"nodeGroups":[
{
"name":"master",
"roles":[
"hadoop_namenode",
"hadoop_jobtracker"
],
"cpuNum":2,
"memCapacityMB":7500,
"swapRatio":1.0,
"storage":{
"type":"LOCAL",
"shares":null,
"sizeGB":10,
"dsNames":null,
"splitPolicy":null,
"controllerType":null,
"allocType":null
},
"instanceNum":1
},
{
"name":"worker",
"roles":[
"hadoop_datanode",
"hadoop_tasktracker"
],
"cpuNum":1,
"memCapacityMB":3748,
"swapRatio":1.0,
"storage":{
"type":"LOCAL",
"shares":null,
"sizeGB":10,
"dsNames":null,
"splitPolicy":null,
"controllerType":null,
"allocType":null
},
"instanceNum":3
}
]
}
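Before POSTing a spec like the one above, it can help to sanity-check it for the BDE 1.1 field change, since the server rejects "networkName" with the 400 error seen earlier in this thread. A small sketch — the check list is my own, covering only the fields discussed here:

```python
import json

def check_spec(spec_text):
    """Flag BDE 1.0 leftovers in a cluster spec that a BDE 1.1 server rejects."""
    spec = json.loads(spec_text)
    problems = []
    if "networkName" in spec:
        problems.append('replace "networkName" with "networkConfig": {"MGT_NETWORK": [...]}')
    elif "networkConfig" not in spec:
        problems.append('spec has no network entry at all')
    return problems

good = '{"name": "test1", "networkConfig": {"MGT_NETWORK": ["nw1"]}}'
bad  = '{"name": "test1", "networkName": "nw1"}'
print(check_spec(good))  # []
print(check_spec(bad))
```

Run it against your clusterCreate.json (read the file and pass its text in) before firing the curl command.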
-bxd
Hi bxd/Nagesh
Thanks for the help.
Making the change in the VCO workflow did the trick. Now I am able to deploy the cluster from vCAC.
--
Deepak.
Hi bxd,
Can we specify the distribution (distro) type in the VCO workflow?
Say I want to use PivotalHD or Cloudera.
--
Regards,
Deepak M.
Hi,
I have added the following to the VCO workflow script to deploy a cluster with the PivotalHD distro:
"distro":"PivotalHD",
"distroVendor":"PHD",
but the workflow returns the error below:
Error: Operation failed, the status code is: 400. The Reason is: {"code":"CLUSTER_CONFIG.INVALID_SPECIFICATION","message":"Invalid cluster specification file: master.roles=hadoop_jobtracker., worker.roles=hadoop_tasktracker.."} (Workflow:Copy of Execute Create Cluster Operation / Execute Operation (item2)#39)
FYI, the output of distro list is as below:
-----------------------------
serengeti>distro list
NAME       VENDOR  VERSION  HVE    ROLES
---------  ------  -------  -----  -------------------------------------------
PivotalHD  PHD     1.0.1    false  [hadoop_client, hadoop_datanode, hadoop_journalnode, hadoop_namenode, hadoop_nodemanager, hadoop_resourcemanager, hbase_client, hbase_master, hbase_regionserver, hive, hive_server, pig, zookeeper]
apache     Apache  1.2.1    true   [hadoop_client, hadoop_datanode, hadoop_jobtracker, hadoop_namenode, hadoop_tasktracker, hbase_client, hbase_master, hbase_regionserver, hive, hive_server, pig, zookeeper]
----------------------------
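Looking at the distro list above, PivotalHD 1.0.1 advertises the YARN roles (hadoop_resourcemanager, hadoop_nodemanager) but no hadoop_jobtracker or hadoop_tasktracker, which is why the spec copied from the apache example is rejected as invalid. A sketch of node group roles matching what that distro list advertises — an untested guess based only on the listing, so adjust it to your actual layout:

```
"nodeGroups": [
  { "name": "master",
    "roles": ["hadoop_namenode", "hadoop_resourcemanager"] },
  { "name": "worker",
    "roles": ["hadoop_datanode", "hadoop_nodemanager"] }
]
```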
--
Regards,
Deepak