Datastore Remove

Once you are done with a datastore, it is best to remove it, as the Proxmox-LVM-ssd_discard process is not very reliable.

Warning

Change window only: Mistakes in this process can result in service outages.

Confirmation

Confirm that no disk devices are still referenced by the datastore in Proxmox.

This can be done by selecting the datastore in the GUI and checking all of the tabs (not just the VM-related tabs).

If there are dangling devices (Proxmox sometimes gets confused), please escalate for help.
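As a cross-check, the same confirmation can be sketched from the CLI with pvesm list, which prints one row per volume the datastore still holds. The datastore name ssd-lvm and the sample output below are hypothetical; the snippet only illustrates the counting logic.

```shell
# Hypothetical datastore name; on a live node you would run:
#   pvesm list ssd-lvm
# Here we parse captured sample output (illustrative only) to count
# volumes still referenced by the datastore.
sample="Volid                  Format  Type    Size         VMID
ssd-lvm:vm-101-disk-0  raw     images  34359738368  101"
# Count data rows (skip the header line); a non-zero count means the
# datastore still holds disks and must not be removed yet.
remaining=$(printf '%s\n' "$sample" | tail -n +2 | grep -c .)
echo "remaining volumes: $remaining"
```

Anything greater than zero means the datastore is still in use: stop and escalate rather than proceeding.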

Remove Datastore from Proxmox

You can remove the datastore from the Proxmox inventory via the Proxmox GUI:

  1. Select Datacenter
  2. Select Storage
  3. Select the datastore you wish to remove
  4. Click Remove
  5. Accept the prompt
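The GUI removal has a CLI equivalent, pvesm remove, which deletes only the storage definition from the cluster configuration (it does not wipe any data). A cautious sketch, using a hypothetical storage ID and printing the command rather than executing it:

```shell
# "ssd-lvm" is a hypothetical storage ID -- substitute your own.
storage="ssd-lvm"
# 'pvesm remove' drops the storage definition from the cluster config.
cmd="pvesm remove $storage"
echo "$cmd"   # printed here rather than executed, as a dry run
```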

LVM cleanup

You will now need to clean up the underlying LVM datastore information.

Information

Run the following steps on ALL of the cluster nodes

You can execute all of the below commands on all nodes with Oxide Cluster Run (e.g. oxide -j1pi 'whoami', where j1pi is the jnb1 Proxmox internal/UAT cluster; use oxide --help for other cluster options)

Zombie LV references

Proxmox tends to leave behind dangling references to non-existent LVM devices, so check the VG mapper paths for any.

You can check for zombie disk references by looking in the /dev/<vgname>/ path. The leftovers often differ between nodes, so inspect each node.

ls /dev/<vgname>
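One way to spot zombies is to diff that directory listing against the names lvs reports. The helper below is a sketch over newline-separated lists so the comparison can be demonstrated with illustrative data; on a live node you would feed it the real ls and lvs output (the LV names shown are hypothetical).

```shell
# Print names present in the mapper listing but unknown to lvs.
find_zombies() {
  mapper_list="$1"   # e.g. output of: ls /dev/<vgname>
  lvs_list="$2"      # e.g. output of: lvs --noheadings -o lv_name <vgname>
  # Concatenate the mapper list once and the lvs list twice: a name
  # known to lvs appears at least twice, so only mapper-only names
  # survive 'uniq -u'.
  printf '%s\n%s\n%s\n' "$mapper_list" "$lvs_list" "$lvs_list" |
    sort | uniq -u
}

# Illustrative data: vm-102-disk-0 exists in the mapper but not in lvs.
find_zombies "vm-101-disk-0
vm-102-disk-0" "vm-101-disk-0"
```

Any name this prints is a candidate zombie reference for the cleanup step below.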

Clean zombie references

If there are any references to LVs in those folders, yet those LVs do not appear in the lvs output, you will need to clean those references.

Warning

Double check your vgname here.

vg_name="<vgname>"
for disk in /dev/"$vg_name"/*; do
  [ -e "$disk" ] || continue   # skip if the glob matched nothing
  echo " > $vg_name -> $(basename "$disk")"
  dmsetup remove "$(readlink -f "$disk")"
done

Information

Run the following steps on ONLY ONE of the cluster nodes

Remove Volume Group

You should now be able to remove the volume group.

Offline the VG:

vgchange -an <vgname>

Remove the VG:

vgremove <vgname>
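To verify the removal took, check that the VG name no longer appears in vgs output. A sketch with illustrative data (the VG name ssd-vg and the listed VGs are hypothetical; on a live node, replace the sample with the output of vgs --noheadings -o vg_name):

```shell
vg_name="ssd-vg"        # hypothetical VG name -- substitute your own
# Illustrative vgs output after a successful vgremove:
vgs_output="pve
local-vg"
# grep -qx matches the whole line, so substring VG names won't match.
if printf '%s\n' "$vgs_output" | grep -qx "$vg_name"; then
  echo "VG $vg_name still present"
else
  echo "VG $vg_name removed"
fi
```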

Remove the Physical volume from LVM

Warning

Run the following steps on ALL of the cluster nodes. HOWEVER, the mapper devices differ between nodes, so you will need to do this node-by-node.

You will now need to tell LVM to release the specific mpath device, which differs between hosts.

pvremove /dev/mapper/<mpath_device>
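If you are unsure which mapper device to pass to pvremove on a given node: after vgremove, the old PV shows up in pvs with an empty VG column. The sketch below parses illustrative pvs-style output (columns: PV VG); on a live node you would use pvs --noheadings -o pv_name,vg_name instead, and the device names shown are hypothetical.

```shell
# Illustrative pvs-style output: mpatha is an orphan PV (no VG),
# sda3 still belongs to the "pve" VG.
pvs_output="/dev/mapper/mpatha
/dev/sda3 pve"
# Lines with a single field are PVs with an empty VG column.
orphans=$(printf '%s\n' "$pvs_output" | awk 'NF == 1 {print $1}')
echo "$orphans"   # candidate device(s) for pvremove on this node
```

Double-check the candidate against multipath -ll before removing it, since other orphan PVs may exist on the node.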

Update LVM status

Information

Run the following steps on ALL of the cluster nodes

You can execute all of the below commands on all nodes with Oxide Cluster Run (e.g. oxide -j1pi 'whoami', where j1pi is the jnb1 Proxmox internal/UAT cluster; use oxide --help for other cluster options)

You can now rescan the LVM physical volumes.

pvscan

Storage Array Volume Unmap

You should now unmap the storage volume from the cluster (the storage array guides may offer assistance here)

Cleanups

You can now run the final cleanups on all of the nodes

iSCSI Rescan

Tell the hosts to rescan their iSCSI device paths

iscsiadm -m node --rescan

Multipath Rescan

Tell the multipath drivers to update their maps, based on the previous rescan

multipath -r

LVM Rescan

You can now trigger LVM on each host to rescan and update its physical volume information.

pvscan