the farm is our currently-two-node proxmox cluster for running virtual machines
these virtual machines can have public static ip addresses. we have lots of ips. currently those ip addresses have to be assigned by hand, but who knows what the future holds...
the farm console can be accessed at https://192.168.1.51:8006 (you have to be on the lan)
owners
these VMs have different owners. some of them are owned by bunk collectively, others are owned by individuals. here is the canonical list of VM owners
farm nodes
- bibb
  - 192.168.1.50
  - 10.10.10.2
- radish
  - 192.168.1.51
  - 10.10.10.1
- turnip
  - 192.168.1.52
  - 10.10.10.3
notes
- proxmox host does not get its ip address from dhcp. it's statically configured on the host itself. i'm going to set it to an ip address outside the dhcp range and reboot. wish me luck
  - worked flawlessly
- now changing the proxmox repos from the subscription repos to the no-subscription repos. hopefully it's fine....
  - success
- had to change the proxmox nodes to use DHCP with static ip allocation from the DHCP pool, because the at&t router will only route requests within the DHCP pool. each node also has its static ip assigned on the box itself, so that it's obvious both from the router UI and from the network config on the machine what is at that IP
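for reference, the on-box static assignment lives in /etc/network/interfaces and looks roughly like this (the address is bibb's; the gateway and physical nic name are assumptions, check the actual box):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```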
adding a new proxmox node
- install proxmox
- configure networking to be dhcp
- allocate desired static ip in router interface
- configure network to self-assign same static ip
- fix proxmox repositories (reference another node)
- install vim (thank god finally)
- configure additional private networking on vmbr0
- update hostnames
- add to cluster (use FQDN)
- test a VM migration
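a rough command sketch for the repo and cluster-join steps above. the repo paths are the proxmox 8 / debian bookworm defaults, and the FQDN and 10. address are placeholders for the new node and an existing cluster member:

```shell
# swap the subscription repo for the no-subscription one
# (filenames are the proxmox 8 defaults; compare against an existing node)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt install -y vim

# join the cluster: connect to an existing node by FQDN and
# bind corosync to this node's private 10. address
pvecm add <existing-node-fqdn> --link0 10.10.10.3
```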
vm templates
we wanna make the same vm over and over, so we're making a template right now, following this guide: https://pve.proxmox.com/wiki/Cloud-Init_Support
make a template vm:
#!/bin/sh -e
vmid=666
# create minimum viable vm
qm create $vmid --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single
# give it the debian base image
qm set $vmid --scsi0 local-lvm:0,import-from=/root/images/debian-12-genericcloud-amd64.qcow2
# give it a cdrom drive for cloudinit
qm set $vmid --ide2 local-lvm:cloudinit
# make it boot faster
qm set $vmid --boot order=scsi0
# give it a serial console, which cloud-init images expect for display
qm set $vmid --serial0 socket --vga serial0
# grow the disk a bit: the base image is 2G, so +8G gives 10G total
qm resize $vmid scsi0 +8G
# set max migrate speed to avoid flooding interface
qm set $vmid --migrate_speed 100
# set various options
qm set $vmid \
--scsi0 file=local-lvm:vm-${vmid}-disk-0,discard=on,iothread=1,ssd=1 \
--ostype l26 \
--onboot 1 \
--cores 1 \
--sockets 1 \
--ipconfig0 ip=dhcp \
--cpu cputype=x86-64-v2-AES \
--memory 512 \
--net0 virtio,bridge=vmbr0,rate=100
# make it a template
qm template $vmid
this will hang for a while midway through. after you make a template, make a vm from it!
make a VM
in a shell on the farm itself, you must first copy the ssh key you want to add to /root/.ssh/<name>.pub. then:
qm clone 666 $new_vm_id_number --name $new_vm_name
qm set $new_vm_id_number --sshkeys ~/.ssh/<name>.pub
after assigning the vm a static ip and rebooting it, the user can then log in with:
ssh debian@<ip-address>
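the static-ip step can also be done from the farm shell via cloud-init, something like this (the address and gateway are made-up placeholders from the documentation range; use one of our allocated ips and the real gateway):

```shell
# hypothetical address/gateway; substitute real values for this vm
qm set $new_vm_id_number --ipconfig0 ip=192.0.2.10/24,gw=192.0.2.1
qm reboot $new_vm_id_number
```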
making a cluster
wow this is a lot of steps and fussing
- had to assign 10. ips, in /etc/network/interfaces
- had to create cluster and tell it to use those 10. links
- struggling to add new node with "hostname verification failed"
- related to hostname fussing i did? just rebooted, we will see
- zomg, in order to connect new node, needed to connect with FQDN. DID NOT need to add to /etc/hosts as described here
- maybe did need to add to /etc/hosts?
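for the record, the cluster commands boil down to roughly this (the cluster name "farm" is an assumption; the 10. addresses are our private links):

```shell
# on the first node, after its 10. address is up in /etc/network/interfaces:
pvecm create farm --link0 10.10.10.1

# on each joining node: connect to an existing member by FQDN and
# pass the joining node's own private address as the corosync link
pvecm add <existing-node-fqdn> --link0 10.10.10.2
```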
hardware testing for farm nodes
- bibb
  - passed
- turnip
  - not done
- radish
  - not done
remote access from outside LAN
- complete! there is a runbook