I'm trying to create a blueprint that adds an additional disk if requested. It works for a single-VM deployment, but with more than one VM I can't get the logic right to create and attach a new disk to each VM.
I've tried changing the attachedDisks property to '${map_to_object(resource.disk[count.index].id, "source")}', but then the build fails with "Failed to allocate resource machine: ReadTimeoutException". Leaving attachedDisks as '${map_to_object(resource.disk[*].id, "source")}' fails with "Provisioning operation failed. Error from vCenter: Unable to access file [DATASTORE_NAME] lstomcat1882/lstomcat1882_1.vmdk since it is locked"
Any ideas? Here is my blueprint:
```
formatVersion: 1
inputs:
  os:
    type: string
    title: Operating System
    description: Choose an operating system
    format: ''
    default: rhel7
    oneOf:
      - title: RHEL 7
        const: rhel7
      - title: RHEL 8
        const: rhel8
  size:
    type: string
    description: Choose the size for the new VM
    title: VM Size
    oneOf:
      - title: 'Small Memory (1cpu, 4GB)'
        const: smallmem
      - title: 'Small CPU (2cpu, 2GB)'
        const: smallcpu
      - title: 'Medium Memory (2cpu, 8GB)'
        const: mediummem
      - title: 'Medium CPU (4cpu, 4GB)'
        const: mediumcpu
      - title: 'Large Memory (4cpu, 16GB)'
        const: largemem
      - title: 'Large CPU (8cpu, 8GB)'
        const: largecpu
      - title: 'XL Memory (8cpu, 32GB)'
        const: xlmem
      - title: 'XL CPU (16cpu, 16GB)'
        const: xlcpu
      - title: 'XXL Memory (16cpu, 64GB)'
        const: xxlmem
      - title: 'XXL CPU (32cpu, 32GB)'
        const: xxlcpu
    default: smallmem
  workload:
    type: string
    description: 'Workload function, example - tomcat'
    title: Node Workload
    default: tomcat
    pattern: '^[a-z0-9]+$'
  environment:
    type: string
    description: Choose the environment for this deployment
    title: Environment
    oneOf:
      - title: Sandbox
        const: sbx
      - title: UAT
        const: uat
      - title: Production
        const: prd
    default: sbx
  nodecount:
    type: integer
    description: How many nodes do you need?
    title: Node Count
    default: 1
    minimum: 1
    maximum: 10
  disks:
    type: array
    title: Data Disks
    description: Data disk mount points and sizes.
    default:
      - size: 10
        mountpoint: /app
    minItems: 1
    maxItems: 1
    items:
      type: object
      properties:
        mountpoint:
          type: string
          title: Mountpoint
        size:
          type: integer
          title: Size (GB)
          maximum: 1024
          minimum: 5
  custom-files:
    type: array
    title: Custom files
    description: Custom files to write via cloud-init
    items:
      type: object
      properties:
        path:
          type: string
          title: Full file path
        content:
          type: string
          title: File contents
        encoding:
          type: string
          title: File encoding
          oneOf:
            - title: Base64 Encoding
              const: b64
          default: b64
        owner:
          type: string
          title: User/Group Owner
          oneOf:
            - title: 'root:root'
              const: root
          default: 'root:root'
        permissions:
          type: string
          title: File permissions
          oneOf:
            - title: Read Write (0644)
              const: '0644'
            - title: Read Only (0600)
              const: '0600'
            - title: Executable (0755)
              const: '0755'
          default: '0600'
  publickey:
    type: string
    description: SSH public key to add to the machine
    title: SSH Public Key
resources:
  network:
    type: Cloud.NSX.Network
    properties:
      networkType: existing
  machine:
    type: Cloud.vSphere.Machine
    properties:
      name: 'ls${input.workload}'
      image: '${input.os}'
      flavor: '${input.size}'
      networks:
        - network: '${resource.network.id}'
          assignment: static
      DNSZone: foo.bar
      Infoblox.IPAM.Network.dnsSuffix: '${env.projectName}.${input.environment}.${self.DNSZone}'
      count: '${input.nodecount}'
      attachedDisks: '${map_to_object(resource.disk[*].id, "source")}'
      cloudConfig: |
        #cloud-config
        preserve_hostname: false
        hostname: ${self.resourceName}
        fqdn: ${self.resourceName}.${env.projectName}.${input.environment}.foo.bar
        users:
          - default
          - name: vradmin
            gecos: vRealize Created Account
            ssh-authorized-keys:
              - ${input.publickey}
            sudo: ALL=(ALL) NOPASSWD:ALL
            shell: /bin/bash
        bootcmd:
          - mkdir -p ${input.disks[0].mountpoint}
        disk_setup:
          /dev/sdb:
            table_type: mbr
            layout: true
            overwrite: true
        fs_setup:
          - label: ${to_upper(replace(input.disks[0].mountpoint,"/",""))}
            filesystem: xfs
            device: /dev/sdb1
        mounts:
          - [ /dev/sdb1, ${input.disks[0].mountpoint} ]
        write_files:
          ${input.custom-files}
  disk:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: '${input.disks[0].size}'
      name: '${to_upper(replace(input.disks[0].mountpoint,"/",""))}'
      count: '${length(input.disks) >= 1 ? input.nodecount : 0}'
```
So, I had a moment yesterday where a lightbulb went off. I was trying to control the count of VMs via the blueprint without even thinking about the ability to control the count in the Service Broker catalog. I have removed the nodecount input and modified the "Max. instances per request" via "Configure Item" in the "Content" section of Service Broker > Content & Policies. I consider this to be an acceptable solution to my original issue, as these resources are not clustered, we just needed multiple of the same configuration.
I want to do the same, but I haven't found a blueprint yet that can vary both the count and size properties. I have confirmed that if you want the sizes to differ among multiple disks, it can be described as follows. However, since there is an upper limit, you must declare disk objects up to that limit, so maintainability is extremely poor. It's a workaround, but I haven't found any better way.
(Apologies if this is hard to follow; it was translated by Google.)
```
vm1:
  type: Cloud.vSphere.Machine
  properties:
    image: '${input.vm1_image}'
    cpuCount: '${input.vm1_cpu}'
    totalMemoryMB: '${ceil( input.vm1_memory * 1024 )}'
    networks:
      - network: '${resource.network.id}'
    attachedDisks: '${map_to_object(resource.vm1_disk01[*].id + resource.vm1_disk02[*].id + resource.vm1_disk03[*].id, "source")}'
    constraints:
      - tag: 'cluster:user'
    storage:
      constraints:
        - tag: 'type:data'
vm1_disk01:
  type: Cloud.vSphere.Disk
  properties:
    capacityGb: '${input.vm1_disk01_size}'
    count: '${input.vm1_disk_count >= 1 ? 1 : 0 }'
vm1_disk02:
  type: Cloud.vSphere.Disk
  properties:
    capacityGb: '${input.vm1_disk02_size}'
    count: '${input.vm1_disk_count >= 2 ? 1 : 0 }'
vm1_disk03:
  type: Cloud.vSphere.Disk
  properties:
    capacityGb: '${input.vm1_disk03_size}'
    count: '${input.vm1_disk_count >= 3 ? 1 : 0 }'
```
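The intent of the per-disk ternaries is that disk object N is instantiated only when vm1_disk_count reaches N. A minimal Python sketch of that gating (hypothetical helper, not vRA syntax):

```python
def disk_counts(requested, max_disks=3):
    # Disk object N gets count 1 only when the requested number of
    # disks reaches its ordinal N, mirroring the per-disk ternaries
    return [1 if requested >= n else 0 for n in range(1, max_disks + 1)]

print(disk_counts(2))  # [1, 1, 0] -> disk01 and disk02 exist, disk03 does not
```

Because every possible disk must be pre-declared as its own resource, the blueprint grows linearly with the allowed maximum, which is the maintainability cost described above.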
Slightly off-topic, but does this part work for you?
```
write_files:
  ${input.custom-files}
```
Generally I'm trying to build some complex input for cloud-init, but the definition above feeds cloud-init JSON input, which gets ignored. I could map each attribute one by one, but I want to allow an arbitrary number of rows in an array input.
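For context on why this can still work: cloud-init parses user-data as YAML, and JSON is itself valid YAML, so a flow-style (JSON) rendering of the array is parseable as long as it lands in the right structural position under the write_files key. A minimal Python sketch of the serialization only (hypothetical file entry; this does not model vRA's template rendering):

```python
import json

# Hypothetical entry shaped like the custom-files input schema
custom_files = [
    {"path": "/etc/motd", "content": "aGVsbG8=", "encoding": "b64",
     "owner": "root:root", "permissions": "0600"},
]

# JSON is a subset of YAML 1.2, so a JSON dump of the list is a
# valid YAML value for the write_files key
fragment = "write_files: " + json.dumps(custom_files)
print(fragment)
```

If cloud-init ignores the rendered value, it is worth checking whether the array was spliced in as an indented block scalar (plain text) rather than as a YAML value in flow position.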
Try this:
```
attachedDisks: ${map_to_object(slice(resource.disk[*].id, length(input.disks) * count.index, length(input.disks) * (count.index + 1)), "source")}
```
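A minimal Python sketch of that slicing arithmetic (hypothetical disk IDs; assumes resource.disk[*].id lists the disks grouped per machine in creation order):

```python
def disks_for_machine(disk_ids, disks_per_vm, vm_index):
    # Mirrors slice(resource.disk[*].id,
    #               disks_per_vm * vm_index,
    #               disks_per_vm * (vm_index + 1))
    return disk_ids[disks_per_vm * vm_index : disks_per_vm * (vm_index + 1)]

all_ids = ["disk-0", "disk-1", "disk-2", "disk-3"]  # 2 VMs x 2 data disks
print(disks_for_machine(all_ids, 2, 0))  # ['disk-0', 'disk-1']
print(disks_for_machine(all_ids, 2, 1))  # ['disk-2', 'disk-3']
```

Each machine instance then attaches only its own contiguous slice of the disk list, instead of every VM trying to attach (and lock) the same VMDKs.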