Datastore Remove
Once you are done with a datastore it is best to remove it, as the Proxmox LVM `ssd_discard` handling is not very reliable.
Warning
Change window only: Mistakes in this process can result in service outages.
Confirmation
Confirm that no disk devices are still referenced by the datastore in Proxmox.
This can be done by selecting the datastore in the GUI and checking all of the tabs (not just the VM tab).
If there are dangling devices (Proxmox sometimes gets confused), please escalate for help.
Remove Datastore from Prox
You can remove the datastore from the Proxmox inventory via the Proxmox GUI:
- Select Datacentre
- Select Storage
- Select the datastore you wish to remove
- Click Remove
- Accept the prompt
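The same removal can also be done from the Proxmox CLI with `pvesm`; a minimal sketch, where the storage ID `ds-example` is a placeholder for the name shown under Datacentre -> Storage:

```shell
# Remove the storage definition from the cluster-wide storage config.
# "ds-example" is a placeholder storage ID -- use your datastore's name.
pvesm remove ds-example
```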
LVM cleanup
You will now need to clean up the underlying LVM datastore information.
Information
Run the following steps on ALL of the cluster nodes
You can execute all of the below commands on all nodes with Oxide Cluster Run (e.g.
`oxide -j1pi 'whoami'`, where `j1pi` is the jnb1 Proxmox internal/uat cluster; use `oxide --help` for other cluster options)
Zombie LV references
Proxmox tends to leave behind dangling references to non-existent LVM devices, so check the VG mapper for dangling devices.
You can check for zombie disk references by looking in the /dev/<vgname>/ path. The contents often differ between nodes, so each node should be inspected.
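One way to spot zombies is to compare the device nodes under /dev/<vgname>/ against what `lvs` actually reports; a sketch, where `vg_data` is a hypothetical VG name:

```shell
# Compare device nodes under /dev/<vg> against the LVs that `lvs` reports.
# "vg_data" is a hypothetical VG name -- substitute your own.
vg_name="vg_data"
known_lvs="$(lvs --noheadings -o lv_name "$vg_name" 2>/dev/null | tr -d ' ')"
for node in /dev/"$vg_name"/*; do
    [ -e "$node" ] || continue                  # glob matched nothing
    lv="$(basename "$node")"
    if ! printf '%s\n' "$known_lvs" | grep -qx "$lv"; then
        echo "zombie: $vg_name/$lv"             # mapper node with no backing LV
    fi
done
```

Any name this prints is a mapper node with no backing LV and is a candidate for cleanup in the next step.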
Clean zombie references
If there are any references to LVs in those directories, but the LVs do not appear in the `lvs` output, you will need to clean up those references.
Warning
Double check your vgname here.
```shell
vg_name="<vgname>"
# Remove the stale device-mapper node behind each dangling reference.
for disk in /dev/"$vg_name"/*; do
    echo " > $vg_name -> $(basename "$disk")"
    dmsetup remove "$(readlink -f "$disk")"
done
```
Information
Run the following steps on ONLY ONE of the cluster nodes
Remove Volume Group
You should now be able to remove the volume group:
- Offline the VG
- Remove the VG
- Remove the Physical Volume from LVM
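These steps map onto the standard LVM commands; a sketch, assuming a hypothetical VG named `vg_data` backed by a multipath device:

```shell
vg_name="vg_data"            # hypothetical VG name -- substitute your own
# 1. Offline (deactivate) every LV in the volume group.
vgchange -an "$vg_name"
# 2. Remove the volume group itself.
vgremove "$vg_name"
# 3. Wipe the LVM label from the backing physical volume.
#    The device path is a placeholder -- confirm it with `pvs` first.
pvremove /dev/mapper/mpatha
```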
Warning
Run the following steps on ALL of the cluster nodes. HOWEVER, the mapper devices differ between nodes, so you will need to do this one node at a time.
You will now need to release the specific multipath (mpath) device, which differs between hosts.
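A sketch of flushing the map with the multipath tools; the map name `mpatha` is a placeholder, and since the maps differ per host, identify the right one on each node first:

```shell
# List the current multipath maps to find the now-unused one,
# then flush it ("mpatha" is a placeholder map name).
multipath -ll
multipath -f mpatha
```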
Update LVM status
Information
Run the following steps on ALL of the cluster nodes
You can execute all of the below commands on all nodes with Oxide Cluster Run (e.g.
`oxide -j1pi 'whoami'`, where `j1pi` is the jnb1 Proxmox internal/uat cluster; use `oxide --help` for other cluster options)
You can now rescan the LVM physical volumes.
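A sketch of the rescan; after it, the removed datastore's PV should no longer appear in the listing:

```shell
# Rescan all block devices for LVM labels, then list the PVs that remain.
pvscan
pvs
```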
Storage Array Volume Unmap
You should now unmap the storage volume from the cluster on the storage array (the array vendor's guides may offer assistance here).
Cleanups
You can now run the final cleanups on all of the nodes
iSCSI Rescan
Inform the hosts to check the device paths
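With open-iscsi, a rescan of all active sessions can be sketched as:

```shell
# Rescan every active iSCSI session so the kernel updates its device paths.
iscsiadm -m session --rescan
```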
Multipath Rescan
Inform the multipath drivers to update, based on the previous rescan
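A sketch of the multipath reload, followed by a listing to verify the unmapped volume's paths are gone:

```shell
# Reload the multipath maps based on the rescanned paths, then verify.
multipath -r
multipath -ll
```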
Add volume to LVM
You can now trigger LVM on each host to scan and update the volume information.
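A sketch of the final refresh and verification on each host:

```shell
# Refresh LVM's device cache, then confirm the remaining VGs and LVs.
pvscan --cache
vgs
lvs
```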