
VAP Host Installation

Below are the steps to install and add a new host to VAP.


Reserve IPs on NetBox

Warning

Reserve IPs on NetBox

Create the VM on NetBox, add the appropriate interfaces, and reserve the IPs. Check other (already created) VAP VMs for a template of the required interfaces and IPs.

Clone VM on Proxmox

Do a full clone of the template jnb1srvdscocsprxvap.vlan612.tmpl on Proxmox and name the clone appropriately.

Set up Network and Routing

In the Proxmox shell, run the following command to set up the networking and routing:

./network_setup.sh {new_name} {internal_ip} {public_ip} {gateway}

Test the connection of the VM by pinging:

  • Google (google.com)
  • The gateway
  • The two Pure Storage arrays (10.13.1.108 and 10.14.1.108)
  • Another VAP node (10.20.1.4)
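The connectivity checks above can be scripted; a minimal sketch (the gateway is a placeholder here — add the address reserved in NetBox to the list):

```shell
# Connectivity sketch for the targets listed above.
# Add the gateway IP reserved in NetBox to this list before running.
targets=("google.com" "10.13.1.108" "10.14.1.108" "10.20.1.4")
results=()
for t in "${targets[@]}"; do
    # One probe per target with a short timeout; record the outcome.
    if ping -c 1 -W 2 "$t" >/dev/null 2>&1; then
        results+=("OK   $t")
    else
        results+=("FAIL $t")
    fi
done
printf '%s\n' "${results[@]}"
```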

Connect to Pure

Log into the Pure storage (Pure04) and follow the steps below.

1. Create a Host

  • Click on Storage > Hosts > Plus Icon

  • Give the Host a name (ex. ...vap10)

2. Create a Volume

  • Click on Volume > Plus icon

  • Uncheck the "Add to default protection group" box

  • Give the volume a name ending in _vz (ex. ...vap10_vz)

3. Add Volume to Protection Group

  • Click on Protection > Protection Group > ocs-teraco-infra > Add Member > Select the volume you just created

4. Setting up QoS and connecting the volume to the host

  • Click on Storage > Volumes > Search for Volume > QoS Settings > Change the settings to 15K IOPS (1.5K for T2 VAP nodes)

  • Hosts > Search for Host > Connect Volume

5. Setting up IQNs

  • SSH into the server from one of the infra nodes
  • Run the command
    cat /etc/iscsi/initiatorname.iscsi
    
  • Copy everything after the '='
  • Go to Host > Host Ports > Configure IQNs > paste the copied text
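The "copy everything after the '='" step can be done in one command; a small sketch (the helper name is ours, not part of the standard tooling):

```shell
# Hypothetical helper: print the IQN (the part after '=') from an
# initiatorname.iscsi-style file.
extract_iqn() {
    awk -F= '/^InitiatorName=/{print $2}' "$1"
}

# On the node:
#   extract_iqn /etc/iscsi/initiatorname.iscsi
```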

6. Connecting the VM to the Pure

All of the following commands are run on the new VAP node in an SSH session.

Depending on the Pure that you are trying to connect to, run the appropriate command listed below:

  1. Pure01

    ip_addresses=("10.13.1.104" "10.13.1.114" "10.14.1.105" "10.14.1.115")
    target_iqn="iqn.2010-06.com.purestorage:flasharray.6b8e2cb1e2a96bb2"
    
    for ip in "${ip_addresses[@]}"
    do
        echo "Connecting to target iSCSI IP Address: $ip"
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" -o new
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n node.startup -v automatic
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd -v Yes
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd_poll_inval -v 30
    done
    

  2. Pure02

    ip_addresses=("10.15.1.104" "10.15.1.114" "10.16.1.105" "10.16.1.115")
    target_iqn="iqn.2010-06.com.purestorage:flasharray.3b51fa4b419dd6cc"
    
    for ip in "${ip_addresses[@]}"
    do
        echo "Connecting to target iSCSI IP Address: $ip"
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" -o new
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n node.startup -v automatic
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd -v Yes
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd_poll_inval -v 30
    done
    

  3. Pure04

    ip_addresses=("10.13.1.108" "10.13.1.109" "10.13.1.118" "10.13.1.119" "10.14.1.108" "10.14.1.109" "10.14.1.118" "10.14.1.119")
    target_iqn="iqn.2010-06.com.purestorage:flasharray.4c3e35a66f7dd789"
    
    for ip in "${ip_addresses[@]}"
    do
        echo "Connecting to target iSCSI IP Address: $ip"
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" -o new
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n node.startup -v automatic
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd -v Yes
        iscsiadm --mode node --targetname "$target_iqn" -p "$ip" --op update -n discovery.sendtargets.use_discoveryd_poll_inval -v 30
    done
    

Log into the Pure from the host:

iscsiadm -m node -l

Create the LVM volumes and filesystem by running these commands in order:

vgcreate pure /dev/mapper/mpatha
lvcreate -l 100%FREE --name puredata pure
mkfs.ext4 /dev/pure/puredata

Add the following line to the end of /etc/fstab:

/dev/pure/puredata   /vz  ext4 _netdev   0 0

Run the following commands to mount the disk:

mount -a 
mkdir /vz/backups
systemctl enable remote-fs.target

Check if the disk has been mounted by running:

df -Th

Warning

Change the password to a generated password from Bitwarden (command: passwd)

Adding Host to VAP

1. VAP frontend

  • Log into the VAP frontend
  • Click on Hosts > Add > UI Wizard > Fill in the host details
  • Check the "Install VZ" checkbox
  • Click Add
  • Wait for it to finish

2. Enabling kdump

  • Log into the VM from an infra node via SSH
  • Run the following commands in order
mkdir /vz/crash
sed -i 's#path /var/crash#path /vz/crash#' /etc/kdump.conf
systemctl restart kdump
systemctl enable kdump
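The sed rewrite above can be tried against a copy of kdump.conf before touching the real file; a sketch using the same expression (the wrapper function name is ours):

```shell
# Hypothetical wrapper around the sed command above: point kdump's crash
# dump path at /vz/crash in the given kdump.conf-style file.
update_kdump_path() {
    sed -i 's#path /var/crash#path /vz/crash#' "$1"
}

# On the node: update_kdump_path /etc/kdump.conf
```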

ipset rules and iptables configuration

ipset rules

Depending on whether the VAP node is for customers or not, run the corresponding set of commands below.

  1. If this is not a customer VAP node, paste and run the following commands:
ipset create EXTERNAL_SSH_ACCESS hash:net family inet hashsize 1024 maxelem 65536
ipset add EXTERNAL_SSH_ACCESS 195.69.222.93
ipset add EXTERNAL_SSH_ACCESS 193.24.222.50
ipset add EXTERNAL_SSH_ACCESS 54.246.93.137
ipset add EXTERNAL_SSH_ACCESS 193.24.222.54
ipset add EXTERNAL_SSH_ACCESS 193.24.222.53
ipset add EXTERNAL_SSH_ACCESS 164.132.8.128
ipset add EXTERNAL_SSH_ACCESS 164.132.8.129
ipset add EXTERNAL_SSH_ACCESS 164.132.8.0
ipset add EXTERNAL_SSH_ACCESS 164.132.8.1
ipset add EXTERNAL_SSH_ACCESS 158.69.106.128
ipset add EXTERNAL_SSH_ACCESS 158.69.106.129
ipset add EXTERNAL_SSH_ACCESS 164.132.8.139
ipset add EXTERNAL_SSH_ACCESS 196.212.102.169
ipset add EXTERNAL_SSH_ACCESS 196.212.102.170
ipset add EXTERNAL_SSH_ACCESS 196.212.102.171
ipset add EXTERNAL_SSH_ACCESS 196.212.102.172
ipset add EXTERNAL_SSH_ACCESS 196.212.102.173
ipset add EXTERNAL_SSH_ACCESS 197.242.147.86
ipset add EXTERNAL_SSH_ACCESS 197.242.146.130
ipset create INTERNAL_SSH_ACCESS hash:net family inet hashsize 1024 maxelem 65536 
ipset add INTERNAL_SSH_ACCESS 10.20.1.0/24
ipset create JELASTIC_INFRASTRUCTURE hash:net family inet hashsize 1024 maxelem 65536
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.5
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.6
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.9
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.10
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.22
ipset create NFS_ACCESS hash:net family inet hashsize 1024 maxelem 65536
ipset add NFS_ACCESS 10.20.2.15
service ipset save
systemctl enable ipset
  2. If this is a customer VAP node, paste and run the following commands:
ipset create EXTERNAL_SSH_ACCESS hash:net family inet hashsize 1024 maxelem 65536
ipset add EXTERNAL_SSH_ACCESS 195.69.222.93
ipset add EXTERNAL_SSH_ACCESS 193.24.222.50
ipset add EXTERNAL_SSH_ACCESS 54.246.93.137
ipset add EXTERNAL_SSH_ACCESS 193.24.222.54
ipset add EXTERNAL_SSH_ACCESS 193.24.222.53
ipset add EXTERNAL_SSH_ACCESS 164.132.8.128
ipset add EXTERNAL_SSH_ACCESS 164.132.8.129
ipset add EXTERNAL_SSH_ACCESS 164.132.8.0
ipset add EXTERNAL_SSH_ACCESS 164.132.8.1
ipset add EXTERNAL_SSH_ACCESS 158.69.106.128
ipset add EXTERNAL_SSH_ACCESS 158.69.106.129
ipset add EXTERNAL_SSH_ACCESS 164.132.8.139
ipset add EXTERNAL_SSH_ACCESS 196.212.102.169
ipset add EXTERNAL_SSH_ACCESS 196.212.102.170
ipset add EXTERNAL_SSH_ACCESS 196.212.102.171
ipset add EXTERNAL_SSH_ACCESS 196.212.102.172
ipset add EXTERNAL_SSH_ACCESS 196.212.102.173
ipset add EXTERNAL_SSH_ACCESS 197.242.147.86
ipset add EXTERNAL_SSH_ACCESS 197.242.146.130
ipset create INTERNAL_SSH_ACCESS hash:net family inet hashsize 1024 maxelem 65536 
ipset add INTERNAL_SSH_ACCESS 10.20.1.0/24
ipset add INTERNAL_SSH_ACCESS 10.60.12.0/24
ipset create JELASTIC_INFRASTRUCTURE hash:net family inet hashsize 1024 maxelem 65536
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.5
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.6
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.9
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.10
ipset add JELASTIC_INFRASTRUCTURE 10.20.2.22
ipset create NFS_ACCESS hash:net family inet hashsize 1024 maxelem 65536
ipset add NFS_ACCESS 10.20.2.15
service ipset save
systemctl enable ipset

iptables configuration

Depending on whether the VAP node is for customers or not, run the corresponding set of commands below.

  1. If this is not a customer VAP node, paste and run the following commands:
iptables -t nat -F
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P INPUT ACCEPT
iptables -t nat -P OUTPUT ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -N INTERNAL
iptables -t nat -A POSTROUTING -s 10.20.0.0/16 -j INTERNAL
iptables -t nat -A POSTROUTING -o venet0 -j ACCEPT
iptables -t nat -A INTERNAL -d 10.20.0.0/16 -j ACCEPT
iptables -t nat -A INTERNAL -o br0 -j MASQUERADE
service iptables save
  2. If this is a customer VAP node, paste and run the following commands:
iptables -t nat -F
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P INPUT ACCEPT
iptables -t nat -P OUTPUT ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -N INTERNAL
iptables -t nat -A POSTROUTING -s 10.20.0.0/16 -j INTERNAL
iptables -t nat -A POSTROUTING -o venet0 -j ACCEPT
iptables -t nat -A INTERNAL -d 10.20.0.0/16 -j ACCEPT
iptables -t nat -A INTERNAL -o br0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.60.12.0/24 -j INTERNAL
iptables -t nat -A INTERNAL -d 10.60.12.0/24 -j ACCEPT
service iptables save

These iptables rules should be run regardless of whether it is a customer VAP node. Substitute the {public_ip} and {internal_ip} placeholders with the node's addresses.

iptables -P INPUT ACCEPT
iptables -t filter -F
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -N LAN_SERVICES
iptables -N WAN_SERVICES
iptables -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A INPUT -p icmp -m conntrack --ctstate RELATED -j ACCEPT
iptables -A INPUT -p icmp -m icmp --icmp-type 3/4 -j ACCEPT
iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -d {public_ip}/32 -i br0 -j WAN_SERVICES
iptables -A INPUT -d {internal_ip}/32 -i br1 -j LAN_SERVICES
iptables -A INPUT -d {internal_ip}/32 -i venet0 -j LAN_SERVICES
iptables -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
iptables -A LAN_SERVICES -p tcp -m multiport --dports 22,5555,64000 -m set --match-set INTERNAL_SSH_ACCESS src -j ACCEPT
iptables -A LAN_SERVICES -p tcp -m multiport --dports 22,4433,4646,10050,8080,5555 -m set --match-set JELASTIC_INFRASTRUCTURE src -j ACCEPT
iptables -A LAN_SERVICES -p tcp -m multiport --dports 111,892,20048,2049 -m set --match-set NFS_ACCESS src -j ACCEPT
iptables -A LAN_SERVICES -s 10.20.2.19/32 -p tcp -m multiport --dports 10050 -j ACCEPT
iptables -A WAN_SERVICES -p tcp -m tcp --dport 22 -m set --match-set EXTERNAL_SSH_ACCESS src -j ACCEPT
iptables -P INPUT DROP
service iptables save

These custom forward rules need to be modified for each customer (the 10.60.22.0/24 subnet below is an example):

iptables -N CUSTOM_FORWARD
iptables -A FORWARD -o venet0 -j CUSTOM_FORWARD
iptables -A FORWARD -i venet0 -j CUSTOM_FORWARD
iptables -A CUSTOM_FORWARD -s 10.60.22.0/24 -j ACCEPT
iptables -A CUSTOM_FORWARD -d 10.60.22.0/24 -j ACCEPT
service iptables save

Installing Qemu Agent

  1. Log into the new VAP node
  2. Run the following commands
yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

Setting up Prometheus

Installing node_exporter

Run the following commands on the new VAP node

yum install prometheus-node-exporter
systemctl enable --now prometheus-node-exporter

Configuring Prometheus

  1. Log into the host monitor VM and add a scrape job for the new node to the Prometheus config file
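The exact job format depends on the existing prometheus.yml on the monitor VM; a hedged sketch of appending a static node_exporter scrape job (the function name, config path, job name, and IP below are hypothetical — match the format of the jobs already in the real config):

```shell
# Hypothetical sketch: append a static node_exporter scrape job to a
# prometheus.yml-style file. All names and addresses are placeholders.
add_scrape_job() {  # usage: add_scrape_job <config-file> <job-name> <host:port>
    cat >> "$1" <<EOF
  - job_name: '$2'
    static_configs:
      - targets: ['$3']
EOF
}

# Example with hypothetical values (9100 is node_exporter's default port):
#   add_scrape_job /etc/prometheus/prometheus.yml vap10 10.20.1.10:9100
# then reload/restart the Prometheus service so it picks up the new job.
```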

Direct Disk writes

Run this command to enable direct disk writes

filename=/etc/sysctl.d/fs.odirect_enable.conf && echo 'fs.odirect_enable = 1' > $filename && sysctl -p $filename

Setting up management server access

  1. Log into the management server
  2. Run the following command and log into the new VAP server using the Bitwarden password:
ssh-copy-id root@{server ip}
  3. Edit the file /.ocs/config/vap.conf on the management server and add the IP address and name of the new VAP server to the config file.

Taking node out of maintenance

Change the state of the node in the VAP admin panel from Maintenance to Active.