Cluster 19. Fencing level 2 with SNMP.
Please, reread these blog-posts => Cluster 7 / Cluster 11 / Cluster 12
We have level-1 fencing (iLO/IPMI fencing) set up. But what is the problem with this? The problem is when a node vanishes and fencing fails (if a node loses power, the IPMI fencing for that node will fail too, because its IPMI/iLO interface is also without power). Then, not knowing what the other node might be doing, the only safe option is to block; otherwise you risk a split-brain. To verify this:
- pcs cluster start --all
- then pull power cable out of one of the nodes
- dlm_tool ls # you'll see that:
- new change member 1 joined 0 remove 1 failed 1 seq 2,2
- new status wait fencing
This means that DLM will wait until the powered-off node's fencing status is cleared. You'll see the same messages for both the "shared" and "clvmd" lock-spaces. As a result, the GFS2 file system and CLVMD will hang (pcs status will still show these resources as "Started"). This is because GFS2 file systems freeze to ensure data integrity in the event of a failed fence.
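If you want to dig into why DLM is blocked, these commands may help (a sketch; options and output vary by version):
dlm_tool status # per-node view; shows which node still needs fencing
stonith_admin --history agrp-c01n02 # Pacemaker's record of fence attempts for that node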
This is why multi-level fencing is so important.
Overview of SNMP fencing
The logic behind this mechanism is very simple: once a node has been marked as dead, the agent uses the SNMP SET method to tell the managed switch to shut the node's ports down. For SNMP fencing we'll use fence_ifmib (see: pcs stonith list fence_ifmib and pcs stonith describe fence_ifmib). Only two OIDs are needed by the agent: ifDescr and ifAdminStatus. The first is used to match the interface name on the Cisco device (fence_ifmib can be used with any vendor, as long as the device supports SNMP and the IF-MIB) with the one provided in the cluster configuration; the latter is used to get/set the port status.
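To illustrate what the agent does under the hood, here are the equivalent manual Net-SNMP calls (a sketch only; ifIndex 5002 is the Port-channel2 index we will look up below, and ifAdminStatus uses 1 for up, 2 for down):
snmpget -v 2c -c agrp-c01-community 10.10.53.12 IF-MIB::ifDescr.5002 # should return "Port-channel2"
snmpget -v 2c -c agrp-c01-community 10.10.53.12 IF-MIB::ifAdminStatus.5002 # current admin status of the port
snmpset -v 2c -c agrp-c01-community 10.10.53.12 IF-MIB::ifAdminStatus.5002 i 2 # shuts the port down - do not run this against a live node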
Setup Cisco stack switches to support SNMP
As we set up in Cluster 3, the switch IP for the 1st cluster is 10.10.53.12 and the 1st cluster uses ports gi1/0/1-4,17 & gi2/0/1-4,17. So:
- agrp-c01n01 uses ports
- 1/0/1, 1/0/3, 2/0/2, 2/0/4
- (config)# int ra gi1/0/1, gi1/0/3, gi2/0/2, gi2/0/4
- (config-if-range) # description agrp-c01n01
- these are all members of the channel-group 2
- (config)#int Port-channel 2
- (config-if)#description agrp-c01n01
- agrp-c01n02 uses ports
- 2/0/1, 2/0/3, 1/0/2, 1/0/4
- (config)# int ra gi2/0/1, gi2/0/3, gi1/0/2, gi1/0/4
- (config-if-range) # description agrp-c01n02
- these are all members of the channel-group 3
- (config)#int Port-channel 3
- (config-if)#description agrp-c01n02
- save and verify with show interface status | incl agrp-c01
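For example (a sketch, from privileged EXEC mode on the stack):
copy running-config startup-config
show interface status | incl agrp-c01 # the ports configured above should show the agrp-c01n01/agrp-c01n02 descriptions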
Setup ACL for the SNMP community (it will be attached to the community below):
ip access-list standard agrp-c01-acl
permit 10.10.53.1
permit 10.10.53.2
deny any
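A quick check of the ACL from the switch:
show ip access-lists agrp-c01-acl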
Setup SNMP view and community for that view:
Setup community (this enables SNMP agent):
snmp-server community agrp-c01-community RW agrp-c01-acl
Test with:
snmpwalk -v 2c -c agrp-c01-community 10.10.53.12 # huge list of OID will be shown
Find the ifIndex of the needed interfaces (we'll use them in the SNMP view to restrict the community to only the needed OIDs):
You can search only for Port-channel interfaces if you use LACP:
agrp-c01n01: show snmp mib ifmib ifindex | incl Port-channel2
agrp-c01n02: show snmp mib ifmib ifindex | incl Port-channel3
Or search for all connected interfaces, if you don't use LACP:
agrp-c01n01: show snmp mib ifmib ifindex | incl net(1/0/[13]|2/0/[24]):
agrp-c01n02: show snmp mib ifmib ifindex | incl net(2/0/[13]|1/0/[24]):
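Alternatively, you can map interface names to ifIndex values from any cluster node (this works at this point because the community is not yet restricted by a view). A sketch:
snmpwalk -v 2c -c agrp-c01-community 10.10.53.12 IF-MIB::ifDescr | grep Port-channel # or grep for the GigabitEthernet ports if you don't use LACP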
Setup SNMP view:
If you use LACP:
#agrp-c01n01
#For Port-channel2: Ifindex = 5002
snmp-server view agrp-c01-view ifDescr.5002 included
snmp-server view agrp-c01-view ifAdminStatus.5002 included
#agrp-c01n02
#For Port-channel3: Ifindex = 5003
snmp-server view agrp-c01-view ifDescr.5003 included
snmp-server view agrp-c01-view ifAdminStatus.5003 included
If you don't use LACP:
#agrp-c01n01
#For GigabitEthernet2/0/4: Ifindex = 10604
snmp-server view agrp-c01-view ifDescr.10604 included
snmp-server view agrp-c01-view ifAdminStatus.10604 included
#For GigabitEthernet2/0/2: Ifindex = 10602
snmp-server view agrp-c01-view ifDescr.10602 included
snmp-server view agrp-c01-view ifAdminStatus.10602 included
#For GigabitEthernet1/0/3: Ifindex = 10103
snmp-server view agrp-c01-view ifDescr.10103 included
snmp-server view agrp-c01-view ifAdminStatus.10103 included
#For GigabitEthernet1/0/1: Ifindex = 10101
snmp-server view agrp-c01-view ifDescr.10101 included
snmp-server view agrp-c01-view ifAdminStatus.10101 included
#agrp-c01n02
#For GigabitEthernet2/0/3: Ifindex = 10603
snmp-server view agrp-c01-view ifDescr.10603 included
snmp-server view agrp-c01-view ifAdminStatus.10603 included
#For GigabitEthernet2/0/1: Ifindex = 10601
snmp-server view agrp-c01-view ifDescr.10601 included
snmp-server view agrp-c01-view ifAdminStatus.10601 included
#For GigabitEthernet1/0/4: Ifindex = 10104
snmp-server view agrp-c01-view ifDescr.10104 included
snmp-server view agrp-c01-view ifAdminStatus.10104 included
#For GigabitEthernet1/0/2: Ifindex = 10102
snmp-server view agrp-c01-view ifDescr.10102 included
snmp-server view agrp-c01-view ifAdminStatus.10102 included
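To double-check the view on the switch:
show snmp view | incl agrp-c01-view # every included OID instance should be listed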
Modify the community to attach the view we just set up:
snmp-server community agrp-c01-community view agrp-c01-view RW agrp-c01-acl
Verify:
sh run | incl snmp
Test with (from any of the cluster nodes):
snmpwalk -v 2c -c agrp-c01-community 10.10.53.12 # only configured OIDs will be shown
Also test fence_ifmib itself:
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action list # only the ports included in the SNMP view should be listed
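You can also query the current state of each plug (these are read-only checks; be careful with the off action, it will cut the node off the network):
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel2 --action status
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action status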
Setup Pacemaker Stonith SNMP
Here I'll be using the LACP Port-channels. If you don't use LACP, just create one fence_ifmib fence device per port, each with a different name, e.g. fence_ifmib_n01_1-0-1 for GigabitEthernet1/0/1, fence_ifmib_n01_1-0-3 for GigabitEthernet1/0/3, and so on, and add all of a node's devices to the same stonith level so that every one of its ports gets shut down.
Create fence device fence_ifmib_n01 for agrp-c01n01:
pcs stonith create fence_ifmib_n01 fence_ifmib pcmk_host_list="agrp-c01n01" ipaddr="agrp-stack01" snmp_version="2c" community="agrp-c01-community" inet4_only="1" port="Port-channel2" power_wait=4 delay=15 op monitor interval=60s
Create fence device fence_ifmib_n02 for agrp-c01n02:
pcs stonith create fence_ifmib_n02 fence_ifmib pcmk_host_list="agrp-c01n02" ipaddr="agrp-stack01" snmp_version="2c" community="agrp-c01-community" inet4_only="1" port="Port-channel3" power_wait=4 op monitor interval=60s
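Note that only fence_ifmib_n01 has delay=15: a static delay on one of the two fence devices is a common way to keep both nodes from fencing each other at the same moment if the cluster splits. A quick check of the new devices (the exact subcommand depends on your pcs version):
pcs stonith show --full # on newer pcs versions: pcs stonith config
pcs status # both fence_ifmib_* devices should be Started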
Setup constraints (fence_ifmib_n01 will start on agrp-c01n02 & fence_ifmib_n02 will start on agrp-c01n01):
pcs constraint location add lc_fence_ifmib_n01 fence_ifmib_n01 agrp-c01n01 -INFINITY
pcs constraint location add lc_fence_ifmib_n02 fence_ifmib_n02 agrp-c01n02 -INFINITY
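The -INFINITY score bans each fence device from the node it is meant to fence, so each device runs on the peer node. Verify with:
pcs constraint # the two lc_fence_ifmib_* location constraints should be listed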
Adding stonith level 2:
pcs stonith level add 2 agrp-c01n01 fence_ifmib_n01
pcs stonith level add 2 agrp-c01n02 fence_ifmib_n02
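To review the resulting fencing topology (a quick check):
pcs stonith level # should list level 1 (the IPMI devices) and level 2 (the fence_ifmib devices) for each node
pcs stonith level verify # checks that all referenced nodes and stonith devices exist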
Review Cluster 13 to understand options.
Verify that GFS2 and CLVMD now keep working even if one node loses power.
To verify that:
- pcs cluster start --all
- then pull power cable out of one of the nodes
- dlm_tool ls # you'll see that:
- new change member 1 joined 0 remove 1 failed 1 seq 2,2
- new status wait fencing
This time the fence succeeds: once the SNMP fence shuts the ports, the "wait fencing" state clears and the failed node is removed from the lock-space (it is down). You'll see the same behaviour for both the "shared" and "clvmd" lock-spaces. This means the GFS2 file system and CLVMD keep working.
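You can also confirm from the switch side that the fence really shut the bond down (a sketch; this assumes agrp-c01n02 is the node that lost power):
show interfaces port-channel 3 | incl line protocol # should report "administratively down"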
How to rejoin the node to the cluster after the above test (I powered off agrp-c01n02):
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action status
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action on
fence_ifmib --ip agrp-stack01 --community agrp-c01-community --plug Port-channel3 --action status
ping agrp-c01n02
After successful ping:
pcs cluster start agrp-c01n02
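Once the node has booted and rejoined, a quick check:
pcs status # both nodes should be Online and all resources Started
dlm_tool ls # each lock-space should list both members again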
These tutorials were used to understand and set up clustering:
AN!Cluster
unixarena
redhat.com
Pierky's Blog