Monday, November 26, 2018

SSH Tunnel (Local, Remote)

ssh-host is the host where the "ssh" command is executed to create the ssh-tunnel
ssh-peer is the host to which ssh-host connects via SSH to form the ssh-tunnel
destination-host is the host we want to access over the ssh-tunnel

For both Local and Remote SSH tunnels:
  1. connection from ssh-host to ssh-peer must be allowed
  2. only traffic between ssh-host and ssh-peer is encrypted (this traffic is inside the ssh-tunnel); traffic after the ssh-tunnel (the connection to destination-host itself) is not encrypted, so security there depends on the application protocol being used (HTTP, FTP remain unencrypted / HTTPS, SSH remain encrypted)

Local

Generally: 
  1. created on ssh-host
  2. accessed from ssh-host 
  3. port is listened on ssh-host

Local - the tunnel is created and accessed on the <ssh-host>; <destination-host>:<destination-port> must be accessible from the <ssh-peer> (see the explanation below):
[root@ssh-host  ~] ssh -L <port-to-listen-on-ssh-host>:<destination-host>:<destination-port> <ssh-peer>

<ssh-host> connects to <ssh-peer> with the SSH protocol to form an ssh-tunnel and starts listening on <port-to-listen-on-ssh-host>. When something on <ssh-host> connects to <localhost>:<port-to-listen-on-ssh-host>, the connection is forwarded over the ssh-tunnel to <ssh-peer>, which connects to <destination-host>:<destination-port> and relays the traffic back through the tunnel
Or in other words:
<destination-host>:<destination-port> is accessed as <localhost>:<port-to-listen-on-ssh-host> from <ssh-host>

PS: if <destination-host>:<destination-port> is, for example, localhost:80, then ssh-peer will connect to itself on port 80
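
A minimal concrete example (the host names gateway.example.com and intranet.example.com are placeholders for illustration): access an internal web server that is reachable only from the gateway:
[root@ssh-host  ~] ssh -L 8080:intranet.example.com:80 admin@gateway.example.com
[root@ssh-host  ~] curl http://localhost:8080/   # the request leaves gateway.example.com towards intranet.example.com:80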

Remote

Generally: 
  1. created on ssh-host
  2. accessed from ssh-peer
  3. port is listened on ssh-peer

Remote - the tunnel is created on the <ssh-host> and accessed on the <ssh-peer>; <destination-host>:<destination-port> must be accessible from the <ssh-host> (see the explanation below):
[root@ssh-host  ~] ssh -R <port-to-listen-on-ssh-peer>:<destination-host>:<destination-port> <ssh-peer>

<ssh-host> connects to <ssh-peer> with the SSH protocol to form an ssh-tunnel, and <ssh-peer> starts listening on <port-to-listen-on-ssh-peer>. When something on <ssh-peer> connects to <localhost>:<port-to-listen-on-ssh-peer>, the connection is forwarded over the ssh-tunnel back to <ssh-host>, which connects to <destination-host>:<destination-port> and relays the traffic through the tunnel
Or in other words:
<destination-host>:<destination-port> is accessed as <localhost>:<port-to-listen-on-ssh-peer> from <ssh-peer>

PS: if <destination-host>:<destination-port> is, for example, localhost:80, then ssh-host will connect to itself on port 80
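
A minimal concrete example (remote.example.com is a placeholder for illustration): expose a web server running on ssh-host to users logged in on remote.example.com:
[root@ssh-host  ~] ssh -R 9090:localhost:80 admin@remote.example.com
Users on remote.example.com can then reach the ssh-host web server as http://localhost:9090/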

Check


To view existing SSH tunnels (IPv4 - option -i4, IPv6 - option -i6, or both IPv4 and IPv6 - option -i; don't resolve IP addresses - option -n; show numerical ports - option -P):
[admin@localhost ~]$ lsof -i4 -n -P | grep ssh
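
An alternative check, assuming the iproute2 "ss" utility is installed (lists listening TCP sockets belonging to ssh, with numeric addresses and ports):
[admin@localhost ~]$ ss -tlnp | grep ssh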

As background process


If you want to create tunnels in the background and don't want to send any commands through the SSH tunnel, use the options "-f" (go to the background after authentication and redirect stdin to /dev/null, but still ask for passwords) and "-N" (do not execute a remote command), with the syntax below:
[root@ssh-host  ~] ssh -f -N -L  <port-to-listen-on-ssh-host>:<destination-host>:<destination-port> <ssh-peer>
[root@ssh-host  ~] ssh -f -N -R <port-to-listen-on-ssh-peer>:<destination-host>:<destination-port> <ssh-peer>
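
To find and stop such a backgrounded tunnel later, look it up by its command line and kill it by PID (pgrep is part of procps and is normally available):
[root@ssh-host  ~] pgrep -a -f "ssh -f -N"
[root@ssh-host  ~] kill <PID-from-previous-output>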

Address binding

If you want, you can use address binding to more easily identify SSH tunnels:
  1. Tunnel to 10.10.10.100:
    1. ssh -L 127.0.0.100:2100:localhost:22 root@10.10.10.100
  2. Tunnel to 10.11.11.200:
    1. ssh -L 127.0.0.200:2200:localhost:22 root@10.11.11.200
  3. Now you have two "links" to access these SSH tunnels:
    1. ssh root@127.0.0.100 -p 2100 for 10.10.10.100
    2. ssh root@127.0.0.200 -p 2200 for 10.11.11.200
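
To confirm that each tunnel listens on its own loopback address (the lsof command from the "Check" section above works as well):
ss -tln | grep '127.0.0.'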

Accessing one host over another

If you have 2 servers you want to interconnect (the servers can't access each other directly), have a third server that can access both the 1st and the 2nd server, and want all traffic along the route to be encrypted:
  1. Setup
    1. 1st server IP 10.10.10.1
    2. 2nd server IP 11.11.11.2
    3. 3rd server IP 12.12.12.3
    4. you want to access 10.10.10.1 from 11.11.11.2
  2. Configure
    1. [admin@12.12.12.3 ~] ssh -L 2001:localhost:22 root@10.10.10.1
    2. [admin@12.12.12.3 ~] ssh -R 2003:localhost:2001 root@11.11.11.2
  3. Access 10.10.10.1 from 11.11.11.2
    1. Access with SSH
      1. ssh root@localhost -p 2003
    2. rsync
      1. rsync -av -e "ssh -p2003" /some-dir/some-file root@localhost:/sync-dest-dir/
    3. rsync that also deletes the synchronized files (directories are not deleted) from the source server:
      1.  rsync -av -e "ssh -p2003"  --remove-source-files /some-dir/some-file root@localhost:/sync-dest-dir/

Cisco ASA IPSec VPN (IKEv1 / IKEv2) with pre-shared key

Setup - we'll interconnect two branches:
  1. Peers use VLAN 56:
    1. one branch has an interface with IP address 10.10.10.1/24 (we'll call this side "their-side")
    2. the other branch has an interface with IP address 10.10.10.2/24 (we'll call this side "our-side")
  2. Encryption domains (network which we want to interconnect via VPN):
    1. their-side has LAN net  192.168.1.0/24
    2. our-side has LAN net 192.168.2.0/24
  3. IKEv1 or IKEv2 can be used
  4. Also assume that both branches use a dedicated interface for the VPN connection and that this is not the interface facing the Internet (this is done for simplicity; you can use the same setup on already functioning interfaces)

Aggressive or main mode


Normally main mode is used, so check that aggressive mode is disabled globally: 
sh run | grep crypto ikev1 am-disable
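
If it turns out that aggressive mode is not disabled and you only want main mode, it can be disabled globally (note: this affects all IKEv1 connections on the device):
crypto ikev1 am-disable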

Phase1


Check whether the needed IKE Phase1 policy is already created (choose the IKE version you need, 1 or 2):
  1. IKEv1 Phase1 policy:
    1. sh run crypto ikev1 | grep crypto ikev1 policy|pre-share|aes-256| sha|group 5|86400
  2. IKEv2 Phase1 policy (for IKEv2, integrity = hash, and prf (Pseudo-Random Function) must equal integrity):
    1. sh run crypto ikev2 | grep crypto ikev2 policy|aes-256| sha|group 5|sha|86400

If the needed policy is not found, create a Phase1 policy (choose the IKE version you need, 1 or 2):
  1. for IKEv1:
    1. crypto ikev1 policy 160
      1.  authentication pre-share
      2.  encryption aes-256
      3.  hash sha
      4.  group 5
      5.  lifetime 86400
  2. for IKEv2:
    1. crypto ikev2 policy 40
      1.  encryption aes-256
      2.  integrity sha
      3.  group 5 2
      4.  prf sha
      5.  lifetime seconds 86400

Phase2


Check whether the needed IKE Phase2 policy is already created (choose the IKE version you need, 1 or 2):
  1. IKEv1 Phase2 policy:
    1. sh run crypto ipsec | grep ikev1.+esp-aes-256.+sha
  2. IKEv2 Phase2 policy:
    1. sh run crypto ipsec | grep ikev2|aes-256|sha-1
If the needed policy is not found, create a Phase2 policy (choose the IKE version you need, 1 or 2):
  1. for IKEv1:
    1. crypto ipsec ikev1 transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
  2. for IKEv2:
    1. crypto ipsec ikev2 ipsec-proposal AES256-SHA1
      1.  protocol esp encryption aes-256
      2.  protocol esp integrity sha-1

Interface & route


Set up the interface which will be used for IPSec VPN initiation (this interface is one peer of the VPN tunnel and the other side is the other peer); a VLAN subinterface is assumed:
interface GigabitEthernet1/10.56
vlan 56
nameif TEST 
security-level 1 
ip address 10.10.10.2 255.255.255.0 # the other peer is 10.10.10.1/24

Enable reverse-path verification (anti-spoofing check):
ip verify reverse-path interface TEST

Set fragment-chain length:
fragment chain 1 TEST

If you don't use proxy-ARP, disable it:
sysopt noproxyarp TEST

If you use an ASA cluster and don't want this interface link to be monitored: 
no monitor-interface TEST

Create route to the other side (other side encryption domain):
route TEST 192.168.1.0 255.255.255.0 10.10.10.1 1
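
To verify that the interface and the route are in place (generic show commands; adjust the names to your setup):
show interface GigabitEthernet1/10.56
show route | grep 192.168.1.0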

Group-Policy


VPN Group-Policy (peer IP address is used in naming):
group-policy GP_10.10.10.1 internal 
group-policy GP_10.10.10.1 attributes     
vpn-tunnel-protocol ikev1 
OR 
vpn-tunnel-protocol ikev2
OR
vpn-tunnel-protocol ikev1 ikev2

Tunnel-Group


VPN Tunnel-Group  (peer IP address is used in naming):
tunnel-group 10.10.10.1 type ipsec-l2l 
tunnel-group 10.10.10.1 general-attributes   
default-group-policy GP_10.10.10.1 
tunnel-group 10.10.10.1 ipsec-attributes

Then:
  1. for IKEv1:
    1. ikev1 pre-shared-key PSK-KEY-GOES-HERE
  2. for IKEv2:
    1. ikev2 local-authentication pre-shared-key PSK-KEY-GOES-HERE  
    2. ikev2 remote-authentication pre-shared-key PSK-KEY-GOES-HERE  
If keepalive is needed (normally this doesn't create a problem even if the peer doesn't use this option):
isakmp keepalive threshold 10 retry 2 

Objects


Object local encryption domain (our LAN network - our network which will be seen from the other side of the VPN):
object network TEST_VPN_our_ED  
subnet 192.168.2.0 255.255.255.0 

Object for the remote encryption domain (their LAN network - their network which will be seen by our side):
object network TEST_VPN_their_ED  
subnet 192.168.1.0 255.255.255.0

If you have another LAN network and want this network to access the VPN too (but don't want, or aren't allowed, to add this network to the VPN setup as another encryption domain), you can achieve this using NAT. For simplicity use the VLAN ID as the NAT network identifier (it will help you identify NAT-ted traffic in log files more easily):  
object network TEST_VPN_our_NET57_NAT 
host 192.168.2.57
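
To list the objects created above (the grep pattern simply matches the naming convention used here):
sh run object | grep TEST_VPN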

VPN ACL & enable protocol on an interface (note here we first write "our IP" and then "their IP")


VPN ACL:
access-list TEST-VPN line 1 extended permit ip object TEST_VPN_our_ED object TEST_VPN_their_ED 

Enable the IKE protocol on the interface (only once; choose the version you use):
crypto ikev1 enable TEST
OR 
crypto ikev2 enable TEST

Crypto-Map & add map to the interface


TEST_map crypto-map creation:
crypto map TEST_map 1 match address TEST-VPN
crypto map TEST_map 1 set peer 10.10.10.1

Then:
  1. for IKEv1:
    1. crypto map TEST_map 1 set ikev1 transform-set ESP-AES-256-SHA
  2. for IKEv2:
    1. crypto map TEST_map 1 set ikev2 ipsec-proposal AES256-SHA1
crypto map TEST_map 1 set security-association lifetime seconds 28800
crypto map TEST_map 1 set security-association lifetime kilobytes unlimited

If PFS is needed:
crypto map TEST_map 1 set pfs group5

Add map to the interface (only once - when creating TEST_map):
crypto map TEST_map interface TEST
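
Once both peers are configured and interesting traffic has been generated, the tunnel state can be checked (generic show commands, given here as a hint):
show crypto ikev1 sa
OR
show crypto ikev2 sa
show crypto ipsec sa peer 10.10.10.1
show vpn-sessiondb l2l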

Interface ACL & access-group


Interface ACL:
access-list TEST_access_in extended permit ip object TEST_VPN_their_ED object TEST_VPN_our_ED 
access-list TEST_access_in extended permit icmp host 10.10.10.1 host 10.10.10.2
access-list TEST_access_in extended permit esp any4 interface TEST 
access-list TEST_access_in extended permit udp any4 interface TEST eq isakmp 
access-list TEST_access_in extended permit icmp any4 interface TEST 
access-list TEST_access_in extended deny ip any any 

access-group TEST_access_in in interface TEST

NAT & no-NAT (NAT exemption) examples/templates


Host 192.168.3.2 VPN-traffic NAT exemption (no-NAT):
nat (LAN57,TEST) source static lan57.srv.3.2 TEST_VPN_our_NET57_NAT destination static TEST_VPN_their_ED TEST_VPN_their_ED no-proxy-arp

Allowing host 192.168.3.2 on the interface ACL:
access-list TEST_access_in extended permit ip object TEST_VPN_their_ED object  lan57.srv.3.2

Group-Policy ACL (note here we first write "their IP" and then "our IP")

You can set up the VPN with simple rules like TEST-VPN above and then add restrictions for ports, source IPs, etc.:

Create group-policy ACL (we'll permit access from their net IP 192.168.1.10 to our net IP 192.168.2.10 port 443 and deny access for all others):
access-list TEST-VPN_GP_FILTER extended permit tcp host 192.168.1.10 host 192.168.2.10 eq 443
access-list TEST-VPN_GP_FILTER extended deny ip any any

group-policy GP_10.10.10.1 attributes
 vpn-filter value TEST-VPN_GP_FILTER
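
To verify that the filter is attached to the group-policy:
sh run group-policy GP_10.10.10.1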

Monday, November 19, 2018

Cluster 26. Renaming pcs resource.

The procedure below was found in:
https://bugzilla.redhat.com/show_bug.cgi?id=1126835 and was tested by changing the name of a resource of type "ocf:heartbeat:VirtualDomain":

First:

  1. make resource unmanaged: pcs resource unmanage resource-old-name
  2. If this is VirtualDomain resource:
    1. change old-name to the new-name in XML definition files of your VM.

Backup existing config:
pcs cluster cib /tmp/cib.xml

Globally (not only the first occurrence) change old-name to new-name: 
sed 's/resource-old-name/resource-new-name/g' -i /tmp/cib.xml
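
Optionally, count the remaining occurrences of the old name before pushing the config (should print 0):
grep -c resource-old-name /tmp/cib.xml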

Verify changes:
vi /tmp/cib.xml 

Push changed config to the cluster:
pcs cluster cib-push /tmp/cib.xml

Verify name change:
pcs status

Verify name change in the config dump:
pcs config | grep resource-new-name

Make resource managed again:
pcs resource manage resource-new-name

Thursday, November 8, 2018

Cluster 25. Restoring failed node.

In case of hardware failure, when one of the nodes needs to be restored (e.g. agrp-c01n02):
  1. go through all steps in Cluster 1 - Cluster 11 blog-posts (do only stuff related to the failed node)
  2. Cluster 12 blog-post - go through steps till "Login to any of the cluster node and authenticate hacluster user." part  (do only stuff related to the failed node), then:
    1. passwd hacluster
    2. from an active node:
      1. pcs node maintenance agrp-c01n02
      2. pcs cluster auth agrp-c01n02
    3. from agrp-c01n02:
      1. pcs cluster auth
      2. pcs cluster start
      3. pcs cluster status # node must be in maintenance mode with many errors due to absence of drbd / virsh and other packages
    4. then go through Cluster 12, starting at "Check cluster is functioning properly (on both nodes)" till "Quorum:" part
  3. go through all steps in Cluster 14 blog-post (do only stuff related to the failed node)
  4. Cluster 16 blog-post - go through steps till "Setup common DRBD options" part  (do only stuff related to the failed node), then:
    1. from agrp-c01n01:
      1. rsync -av /etc/drbd.d root@agrp-c01n02:/etc/
    2. from agrp-c01n02:
      1. drbdadm create-md r{0,1}
      2. drbdadm up r0; drbdadm secondary r0
      3. drbd-overview
      4. drbdadm up r1; drbdadm secondary r1
      5. drbd-overview
      6. wait till full synchronisation
      7. reboot failed node
  5. Cluster 17 blog-post - go through steps till "Setup DLM and CLVM" (do only stuff related to the failed node), then:
    1. drbdadm up all
    2. cat /proc/drbd
  6. Cluster 19 blog-post - only do check of the SNMP from the failed node:
    1. snmpwalk -v 2c -c agrp-c01-community 10.10.53.12
    2. fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action list
    3. fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel2 --action list
  7. Cluster 20 blog-post - go through steps till "Provision Planning" (do only stuff related to the failed node), then:
    1. rsync -av /etc/libvirt/qemu/networks/ovs-network.xml  root@agrp-c01n02:/root
    2. systemctl start libvirtd 
    3. virsh net-define /root/ovs-network.xml 
    4. virsh net-list --all 
    5. virsh net-start ovs-network 
    6. virsh net-autostart ovs-network 
    7. virsh net-list 
    8. systemctl stop libvirtd
    9. rm  /root/ovs-network.xml
  8. For each VM add a constraint to ban VM start on the failed node (I assume n02 is the failed node). The command below adds a -INFINITY location constraint for the specified resource and node:
    1. pcs resource ban vm01-rntp agrp-c01n02
    2. pcs resource ban vm02-rftp agrp-c01n02
  9. Take the failed node out of maintenance mode (from the surviving node) and start the cluster on the failed node:
    1. pcs node unmaintenance agrp-c01n02
    2. pcs cluster start
    3. pcs status
    4. wait till r0 & r1 DRBD resources are masters on both nodes and all resources (besides all VMs) are started on both nodes
  10. Cluster 18 blog-post, do only:
    1. yum install gfs2-utils -y
    2. tunegfs2 -l /dev/agrp-c01n01_vg0/shared # to view shared LV
    3. dlm_tool ls # names clvmd & shared / members 1 2 
    4. pvs # should only show drbd and sdb devices
    5. lvscan # List all logical volumes in all volume groups (3 OS LV, shared & 1 LV per VM)
  11. Cluster 21 blog-post:
    1. do "Firewall setup to support KVM Live Migration" (do only stuff related to the failed node)
    2. crm_simulate -sL | grep " vm[0-9]"
    3. SELinux related:
      1. ls -laZ /shared # must show "virt_etc_t" in all lines except related to ".."
      2. if above line is not true, do stuff in "SELinux related issues" (do only stuff related to the failed node)
  12. One by one (for each VM):
    1. remove ban constraint for the first VM:
      1. pcs resource clear vm01-rntp
    2. verify that constraints are removed:
      1. pcs constraint  location
    3. if this VM must be started on the restored node - wait till live migration is performed
  13. Congratulations, your cluster is restored to normal operation