Cluster, part 2: Post-install steps, initial networking setup.
First and most important:
YOU MUST HAVE PHYSICAL ACCESS TO BOTH NODES IN ORDER TO SET UP THE CLUSTER
At this point we have two nodes (physical hardware servers which are, or will be, members of the cluster) with the OS installed and iLO access configured. In this article we will perform the steps needed right after OS installation.
- First of all, connect both servers to the network and give them Internet access (don't worry about the IP addressing scheme yet; for now we just need Internet access)
- On both nodes:
- yum update -y
- NetworkManager makes many decisions on its own, which is not appropriate for a cluster:
- yum remove NetworkManager -y
- verify that firewalld is enabled and started:
- systemctl status firewalld
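If the status command shows firewalld disabled or inactive, it can be enabled and started with standard systemctl commands (a minimal sketch; adjust to your own firewall policy):

```shell
# Enable firewalld at boot and start it now (harmless if already running)
systemctl enable firewalld
systemctl start firewalld
# Should print "active"
systemctl is-active firewalld
```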
Backup existing network configs (on both nodes):
mkdir -p /root/backups/
yum install rsync -y
# -v - be verbose
# -a - archive mode (recursive; copy symlinks; preserve permissions, timestamps, owners, groups, and device files)
rsync -av /etc/sysconfig/network-scripts /root/backups/
Enabling all interfaces (on both nodes):
cd /etc/sysconfig/network-scripts
for int in ifcfg-eno*;
do
sed -i 's/ONBOOT=.*/ONBOOT="yes"/' "$int";
sed -i 's/BOOTPROTO=.*/BOOTPROTO="none"/' "$int";
done
To check the changes:
# -U0 will show only the changed lines of the diff
# verify all files:
for int in ifcfg-eno*;
do
diff -U0 /root/backups/network-scripts/$int $int;
done
systemctl enable network.service
systemctl start network.service
systemctl status network.service
# to verify that all interfaces are enabled:
ip link
We will be using four interfaces, bonded into three Active/Passive pairs (mode=1; other bonding modes are not recommended for a reliable clustering environment), each pair consisting of one physical NIC and one VLAN sub-interface (SubNIC).
It's our first cluster, so clusterSerialNumber = 1. The second octet of the IP addresses will be 1 * 10 = 10.
Four different networks will be used in our cluster:
- BCN (Back-Channel Network - 10.clusterSerialNumber*10.53.nodeIP/24) - for cluster management
- IPMIN (IPMI/iLO Network - 10.clusterSerialNumber*10.53.nodeIP+10+1/24) - for out-of-band (IPMI/iLO) management
- SN (Storage Network - 10.clusterSerialNumber*10.52.nodeIP/24) - for storage replication between nodes
- IFN (Internet-Facing Network - 172.16.51.ServerIP/24) - for access to the nodes and for servers (virtual servers hosted on a node)
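As a worked example of this addressing scheme (a sketch; the variable names are ours), node 1 in cluster 1 would get:

```shell
cluster=1  # clusterSerialNumber
node=1     # nodeIP of this node

echo "BCN:   10.$((cluster * 10)).53.${node}"            # 10.10.53.1
echo "SN:    10.$((cluster * 10)).52.${node}"            # 10.10.52.1
echo "IFN:   172.16.51.${node}"                          # 172.16.51.1
echo "IPMIN: 10.$((cluster * 10)).53.$((node + 10 + 1))" # 10.10.53.12, per the nodeIP+10+1 rule above
```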
Disable IPv6:
vi /etc/sysctl.conf
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 1
Disable zeroconf (link-local IP addresses starting with 169.254):
vi /etc/sysconfig/network
NOZEROCONF=true
Reboot both servers so the changes take effect.
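After the reboot, a quick sanity check that both changes took effect (a sketch using standard tools):

```shell
# Both keys should print "= 1" (IPv6 disabled)
sysctl net.ipv6.conf.all.disable_ipv6
sysctl net.ipv6.conf.default.disable_ipv6
# Should print nothing: no zeroconf (169.254.0.0/16) addresses left
ip -4 addr show | grep 169.254
```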
Two options for networking setup
We have two options for the networking setup: one is Linux bonding and bridging, provided by the kernel and the bridge-utils package; the other is Open vSwitch (OvS).
A Linux bridge doesn't "understand" VLANs; it just connects the VM server's virtual ports to the outside world. To support more than one VLAN with Linux bridges, we need to set up as many bridges as the number of VLANs we want.
OvS is a broader approach: in the future you can easily add more VLANs to your cluster (you can serve VMs in more than one VLAN). OvS also supports most of the features a normal hardware switch does.
Linux bonding and bridging
yum install bridge-utils -y
eno1 => bcn_link1
eno2 => sn_link1
eno3 => ifn_link1
eno4 => back_link.100 / back_link.200 / back_link.51
To find the physical port corresponding to a CentOS link, you can use the ethtool -p command, e.g.:
ethtool -p eno4 # physical port corresponding to this interface will blink until you Ctrl+C
Subnet | VID | NIC | Link 1 | NIC | Link 2 | Bond | Net IP |
---|---|---|---|---|---|---|---|
BCN | 100 | eno1 | bcn_link1 | eno4 | back_link.100 | bcn_bond | 10.10.53.0/24 |
SN | 200 | eno2 | sn_link1 | eno4 | back_link.200 | sn_bond | 10.10.52.0/24 |
IFN | 51 | eno3 | ifn_link1 | eno4 | back_link.51 | ifn_bond | 172.16.51.0/24 |
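A minimal sketch of what the BCN row of this table could look like as ifcfg files on node 1 (the file contents below are our assumption, not taken from a running system; renaming eno1 to bcn_link1 additionally requires a HWADDR entry or a udev rule, and the BONDING_OPTS values should be adjusted to your hardware):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bcn_bond -- the Active/Passive bond itself
DEVICE="bcn_bond"
TYPE="Bond"
BONDING_OPTS="mode=1 miimon=100"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="10.10.53.1"
NETMASK="255.255.255.0"

# /etc/sysconfig/network-scripts/ifcfg-bcn_link1 -- first slave (eno1)
DEVICE="bcn_link1"
MASTER="bcn_bond"
SLAVE="yes"
ONBOOT="yes"
BOOTPROTO="none"

# /etc/sysconfig/network-scripts/ifcfg-back_link.100 -- second slave: VLAN 100 on eno4
DEVICE="back_link.100"
VLAN="yes"
MASTER="bcn_bond"
SLAVE="yes"
ONBOOT="yes"
BOOTPROTO="none"
```

After bringing the bond up, `cat /proc/net/bonding/bcn_bond` shows which slave is currently active.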
Open vSwitch
Proceed to this link to install OvS.
eno1 => ovs_bond1
eno2 => ovs_bond2
eno3 => ovs_bond1
eno4 => ovs_bond2
To find physical port corresponding to the CentOS links, you can use ethtool -p command, i.e.:
ethtool -p eno4 # physical port corresponding to this interface will blink until you Ctrl+C
This tutorial was used to understand and set up clustering: AN!Cluster
We will bond all four interfaces (eno1 through eno4) into the OvS bonds, and then create OvS internal ports and assign them IP addresses:
Subnet | VID | OvS internal port | Net IP |
---|---|---|---|
BCN | 100 | bcn-bond1 | 10.10.53.0/24 |
SN | 200 | sn-bond1 | 10.10.52.0/24 |
IFN | 51 | ifn-bond1 | 172.16.51.0/24 |
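A sketch of OvS commands that could implement the table above (the article doesn't give the exact commands; the bridge name ovsbr0 is our choice, and this assumes OvS is installed and running — if both bonds uplink to the same switch fabric, enable STP or split them into separate bridges to avoid loops):

```shell
# Create a bridge and the two active/backup bonds from the mapping above
ovs-vsctl add-br ovsbr0
ovs-vsctl add-bond ovsbr0 ovs_bond1 eno1 eno3 bond_mode=active-backup
ovs-vsctl add-bond ovsbr0 ovs_bond2 eno2 eno4 bond_mode=active-backup

# Internal ports, one per subnet, tagged with the corresponding VLAN
ovs-vsctl add-port ovsbr0 bcn-bond1 tag=100 -- set interface bcn-bond1 type=internal
ovs-vsctl add-port ovsbr0 sn-bond1 tag=200 -- set interface sn-bond1 type=internal
ovs-vsctl add-port ovsbr0 ifn-bond1 tag=51 -- set interface ifn-bond1 type=internal

# Node-1 addresses from the table (persist via ifcfg files as usual)
ip addr add 10.10.53.1/24 dev bcn-bond1
ip addr add 10.10.52.1/24 dev sn-bond1
ip addr add 172.16.51.1/24 dev ifn-bond1
```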
Comment: Is it possible to set up the cluster so that Node-1 has IP address 10.10.2.2 and Node-2 has IP address 10.10.3.3? There is a virtual IP 10.10.2.4 for an application. When Node-1 goes down, can the virtual IP migrate and run without issues on Node-2?
Reply: The main idea is that each node of the cluster must see the other over the network, so at least one common subnet must be used. This subnet is also used to detect when a node goes down, so that the virtual IP can be handed over to the other node. So you can use 10.10.2.2 and 10.10.3.3 for the BCN if both addresses share the same subnet (use a big enough subnet mask), or use another subnet as the BCN and use 10.10.2.2 and 10.10.3.3 as IFN addresses.
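For context on the reply above: with Pacemaker, such a floating virtual IP is typically defined as an IPaddr2 resource (a sketch; the resource name app_vip is ours, and this assumes a pcs-managed cluster):

```shell
# Define the virtual IP as a cluster resource; Pacemaker starts it on one node
# and moves it to the surviving node when that node fails
pcs resource create app_vip ocf:heartbeat:IPaddr2 \
    ip=10.10.2.4 cidr_netmask=24 \
    op monitor interval=30s
```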