Installing bare metal servers in OVH using PXE
Recently we needed an OpenShift 4 virtualization (KubeVirt) lab, which requires installing OpenShift on bare-metal servers. Although it is possible to deploy OpenShift nodes manually, I preferred to do it automatically using PXE boot. Another requirement was a dedicated private network between the servers.
Public clouds such as AWS, Google Cloud and others don’t offer an option to install a physical server from PXE.
We chose OVH as a good option. It offers more or less reasonable prices for dedicated servers, unmetered public traffic and free layer 2 private networking (the so-called vRack). It is worth mentioning that private networking is available only for certain server types and its bandwidth also depends on the server model.
We purchased three Infra-1 servers (Intel Xeon-E 2274G 4c/8t, 64GB DDR4 2666 MHz, 2x960GB NVMe SSD in software RAID, 1Gbps Internet and 2Gbps private network) for $129/month with a one-time setup fee of $89.
OVH has a large number of ready-to-go server templates; however, it was unclear how to install a server using my own template. OVH documentation is unfortunately not very helpful, so I had to resort to Google as usual.
I found that it is possible to install a server using an iPXE script. For that you need to use the OVH API. Strangely, there is no user-friendly web interface for this. One way to explore the OVH API is their Swagger-like console: https://api.us.ovhcloud.com/console. However, it is not convenient to configure each server manually through Swagger.
Luckily, Terraform has an OVH provider, so it is possible to configure all servers at once using Terraform.
The first thing you need to do is create OVH application keys. These are long-term authentication tokens for the OVH API, similar to AWS access keys. Instructions are here: https://github.com/ovh/python-ovh#1-create-an-application, but for me it worked only after:
pip3 install -e git+https://github.com/ovh/python-ovh.git#egg=ovh
pip3 install --no-deps ovhcli
ovh setup init
After that you may test your credentials using the command:
ovh me info
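Under the hood, every OVH API call is signed with your application secret and consumer key rather than sent with a bearer token. As a side note, here is a minimal sketch of the signing scheme that python-ovh implements (the keys and timestamp below are placeholders, not real credentials):

```python
# Sketch of OVH's request-signing scheme, as implemented by python-ovh.
# The signature is "$1$" + SHA1 of the "+"-joined request parameters.
import hashlib

def sign_request(app_secret, consumer_key, method, url, body, timestamp):
    """Return the X-Ovh-Signature header value for an OVH API call."""
    raw = "+".join([app_secret, consumer_key, method, url, body, str(timestamp)])
    return "$1$" + hashlib.sha1(raw.encode("utf-8")).hexdigest()

# Placeholder values; a real call uses your application keys and current time.
sig = sign_request("app-secret", "consumer-key", "GET",
                   "https://api.us.ovhcloud.com/1.0/me", "", 1589400000)
```

The library computes this header for you; you never need to do it by hand, but it explains why the setup step above asks for three separate keys.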
I installed VMware ESXi on the first server and created three VMs on it: one service VM, one bootstrap VM and one master VM. The service VM ran DHCP, DNS, NAT, Apache (serving ignition configs) and HAProxy (for the OpenShift API load balancers and Router). It is important to attach the server’s first NIC to the public network and the second NIC to the private network in ESXi.
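For the DHCP/DNS part of the service VM, one lightweight option is dnsmasq. Below is a minimal sketch, assuming the private (vRack) network is 10.0.0.0/24 with the service VM at 10.0.0.1 (the interface name and address range are my assumptions, not OVH defaults):

```conf
# /etc/dnsmasq.conf on the service VM (sketch; adjust to your subnet)
interface=ens192                       # NIC attached to the private network
dhcp-range=10.0.0.10,10.0.0.100,12h    # leases handed out to OpenShift nodes
dhcp-option=option:router,10.0.0.1     # service VM also performs NAT
dhcp-option=option:dns-server,10.0.0.1 # service VM serves DNS for the cluster
```

Any DHCP/DNS server works here; the important part is that nodes on the private network can resolve cluster names and reach the service VM.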
I used the following iPXE script to set up the OpenShift nodes:
#!ipxe
set base-url https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest
kernel ${base-url}/rhcos-4.4.3-x86_64-installer-kernel-x86_64 initrd=rhcos-4.4.3-x86_64-installer-initramfs.x86_64.img ip=dhcp rd.neednet=1 console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.install_dev=nvme0n1 coreos.inst.image_url=${base-url}/rhcos-4.4.3-x86_64-metal.x86_64.raw.gz coreos.inst.ignition_url=http://10.0.0.1:8080/baremetal/worker.ign
initrd ${base-url}/rhcos-4.4.3-x86_64-installer-initramfs.x86_64.img
boot
Here http://10.0.0.1:8080/baremetal/worker.ign is the location of the worker node’s ignition config on the service VM.
After that I used the OVH API to make my servers boot from the iPXE script above. The following Terraform manifest did the job for me:
provider "ovh" {
  endpoint = "ovh-us"
}

variable "server_names" {
  type = list
  default = [
    "nsXXXXX1.ip-147-135-97.us",
    "nsXXXXX2.ip-147-135-97.us"
  ]
}

data "ovh_dedicated_server_boots" "ipxe" {
  count        = length(var.server_names)
  service_name = var.server_names[count.index]
  boot_type    = "ipxeCustomerScript"
  depends_on = [
    ovh_me_ipxe_script.ocp4-worker
  ]
}

resource "ovh_me_ipxe_script" "ocp4-worker" {
  name   = "ocp4-worker1"
  script = file("${path.module}/ocp4-worker.ipxe")
}

resource "ovh_dedicated_server_update" "server" {
  count        = length(var.server_names)
  service_name = var.server_names[count.index]
  boot_id      = data.ovh_dedicated_server_boots.ipxe[count.index].result[length(data.ovh_dedicated_server_boots.ipxe[count.index].result) - 1]
  state        = "ok"
}
Here the variable “server_names” contains all the servers that you need to boot, and ocp4-worker.ipxe is the iPXE script provided above. The boot_id expression simply picks the last boot entry returned for the ipxeCustomerScript boot type, which corresponds to the script registered by ovh_me_ipxe_script.
After that, all you need to do is reboot your servers and they will boot from your iPXE script.
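The reboot itself can also be driven through the API instead of the web control panel. A sketch using the python-ovh client installed earlier (the helper names are mine; POST /dedicated/server/{serviceName}/reboot is the endpoint visible in the API console):

```python
# Reboot every server via the OVH API so each picks up the new iPXE boot script.
def reboot_paths(server_names):
    """Build the API path for each server's reboot call (pure helper)."""
    return ["/dedicated/server/{}/reboot".format(name) for name in server_names]

def reboot_all(server_names):
    # Requires the python-ovh client and the credentials from `ovh setup init`.
    import ovh
    client = ovh.Client()
    for path in reboot_paths(server_names):
        client.post(path)  # POST /dedicated/server/{serviceName}/reboot

# Example (uses the same placeholder names as the Terraform variable):
# reboot_all(["nsXXXXX1.ip-147-135-97.us", "nsXXXXX2.ip-147-135-97.us"])
```

This way the whole cycle (register iPXE script, set the boot type, reboot) stays scriptable end to end.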