I am creating multiple VMs in Azure using cloud-init. They are created in parallel, and when any of them fails, I see this in the logs:
Error: error executing "/tmp/terraform_876543210.sh": Process exited with status 1
But I have no way to figure out which VM is failing; I have to SSH into each one and check. The script path seems to be the one Terraform defines for provisioning.
Is there a way to also override it for cloud-init, to something like /tmp/terraform_vmName_876543210.sh?
I am not using a provisioner but cloud-init. Any idea how I can force Terraform to override the name of that .sh file?
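For comparison: if this were a provisioner, I believe I could set the path via the connection block's script_path argument, which supports a %RAND% placeholder. A sketch of what that would look like (I have no such block anywhere in my config):

connection {
  type = "ssh"
  host = self.public_ip_address
  user = "adminuser"
  # script_path controls where Terraform uploads its temporary
  # script; %RAND% is replaced with a random number at run time.
  script_path = "/tmp/terraform_${self.name}_%RAND%.sh"
}

But since I only use custom_data, there is no connection block for me to override.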
Below is my Terraform resource:
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub")
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
custom_data = base64encode(templatefile(
"my-cloud-init.tmpl", {
var1 = "value1"
var2 = "value2"
})
)
}
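For context, the real configuration fans this resource out over several names, which is why the VMs are created in parallel. A simplified sketch of that, where var.vm_names and the vm_name template variable are hypothetical stand-ins for what I actually use:

variable "vm_names" {
  type    = set(string)
  default = ["vm-a", "vm-b", "vm-c"] # placeholder names
}

resource "azurerm_linux_virtual_machine" "vms" {
  for_each            = var.vm_names
  name                = each.key
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_F2"
  admin_username      = "adminuser"

  # In reality each VM gets its own NIC; a single NIC is shown
  # here only to keep the sketch short.
  network_interface_ids = [azurerm_network_interface.example.id]

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  custom_data = base64encode(templatefile("my-cloud-init.tmpl", {
    var1    = "value1"
    var2    = "value2"
    vm_name = each.key # hypothetical: pass the VM name into the template
  }))
}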
And my cloud-init script:
## template: jinja
#cloud-config
runcmd:
  - sudo /tmp/bootstrap.sh
write_files:
  - path: /tmp/bootstrap.sh
    permissions: '00700'
    content: |
      #!/bin/sh -e
      echo hello
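If the hypothetical vm_name variable above were passed in, the template could at least stamp the name into my own script path and output (templatefile interpolates ${vm_name} when rendering):

## template: jinja
#cloud-config
runcmd:
  - sudo /tmp/bootstrap_${vm_name}.sh
write_files:
  - path: /tmp/bootstrap_${vm_name}.sh
    permissions: '00700'
    content: |
      #!/bin/sh -e
      echo hello from ${vm_name}

But that only renames my own bootstrap script; it does not touch the /tmp/terraform_876543210.sh path from the error, which is the part I cannot find a way to control.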