In this tutorial we will see how to configure network bonding in CentOS 7 and RHEL 7. First of all, we should know: what is network bonding?
What is Network bonding?
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical “bonded” interface. If one physical interface goes down, traffic continues over the remaining interface. Bonding can also provide load balancing of network traffic.
Let’s configure network bonding on a CentOS 7 server
First of all we need to check whether the bonding module is loaded on our Linux server. If it is not, run the command below to load it.
[root@urclouds ~]# modprobe bonding
[root@urclouds ~]#
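To confirm that the module is actually loaded, and to make it load automatically after a reboot, something like the following should work (the modules-load.d file name is our own choice):

# Print a matching line if the bonding module is loaded:
lsmod | grep bonding

# Load the module at every boot via systemd's modules-load.d mechanism:
echo "bonding" > /etc/modules-load.d/bonding.conf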
We can use the command below to display information about the bonding module.
[root@urclouds ~]# modinfo bonding
filename:       /lib/modules/3.10.0-693.11.1.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
rhelversion:    7.4
srcversion:     CABF3D00ACCCD34FF1BE540
depends:
intree:         Y
vermagic:       3.10.0-693.11.1.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        61:B8:E8:7B:84:11:84:F6:2F:80:D6:07:79:AB:69:2A:49:D8:3B:AF
sig_hashalgo:   sha256
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:802.3ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
[root@urclouds ~]#
The output above shows the module details along with all the parameters the bonding driver accepts.
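If you are only interested in the module parameters, modinfo can print just those; for example, to look up the miimon parameter used later in this tutorial:

# List only the parameters the bonding module accepts and filter for miimon:
modinfo -p bonding | grep miimon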
Creating the Bonding Interface File
Now we need to create the bond interface file. Go to the network configuration directory /etc/sysconfig/network-scripts/ and create the file ifcfg-bond0, like below:
[root@urclouds ~]# cd /etc/sysconfig/network-scripts/
[root@urclouds network-scripts]#
[root@urclouds network-scripts]# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.43.95
NETMASK=255.255.255.0
GATEWAY=192.168.43.1
BONDING_OPTS="mode=5 miimon=100"
[root@urclouds network-scripts]#
We need to define the IP address, netmask, gateway and bonding mode. I am going to use mode 5, which provides fault tolerance and load balancing. The available bonding modes are listed below; you can choose one as per your requirement.
Types of Network Bonding
These are the bonding modes that can be used as per your requirement; a configuration example follows the list.
mode=0 (balance-rr)
Round-robin policy: Round-robin is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
mode=1 (active-backup)
Active-backup policy: In active-backup mode, only one interface in the bond is active. Another interface becomes active only when the active interface fails. Active-backup also provides fault tolerance.
mode=2 (balance-xor)
XOR policy: This mode selects the same slave interface for a given destination MAC address, and as a result it provides load balancing and fault tolerance.
mode=3 (broadcast)
Broadcast policy: It transmits everything on all slave interfaces. This mode also provides fault tolerance.
mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation: Mode 4 creates aggregation groups that share the same speed and duplex settings. It utilizes all slaves in the active aggregator according to the 802.3ad specification.
mode=5 (balance-tlb)
Adaptive transmit load balancing: Mode 5 distributes outgoing traffic according to the current load on each slave interface.
mode=6 (balance-alb)
Adaptive load balancing: In an alb bond, the currently active slave is the slave whose MAC address is used as the hardware address of the bond. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic.
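For example, if you wanted active-backup (mode 1) instead of mode 5, only the BONDING_OPTS line in ifcfg-bond0 would need to change. A minimal sketch; the optional primary= setting, which names the preferred slave, is our own addition:

# Alternative BONDING_OPTS line in ifcfg-bond0 for active-backup:
BONDING_OPTS="mode=1 miimon=100 primary=enp0s3"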
Edit the NIC interface files
Go to the network configuration directory /etc/sysconfig/network-scripts/ and create the file ifcfg-enp0s3, like below:
[root@urclouds ~]# cd /etc/sysconfig/network-scripts/
[root@urclouds network-scripts]#
[root@urclouds network-scripts]# cat ifcfg-enp0s3
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s3
ONBOOT=yes
HWADDR="08:00:27:af:05:97"
MASTER=bond0
SLAVE=yes
[root@urclouds network-scripts]#
In the same directory /etc/sysconfig/network-scripts/, create the file ifcfg-enp0s8, like below:
[root@urclouds ~]# cd /etc/sysconfig/network-scripts/
[root@urclouds network-scripts]#
[root@urclouds network-scripts]# cat ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s8
ONBOOT=yes
HWADDR="08:00:27:ac:bf:5a"
MASTER=bond0
SLAVE=yes
[root@urclouds network-scripts]#
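As an alternative to writing the ifcfg files by hand, the same bond can be created with nmcli on servers where NetworkManager manages the interfaces. A rough sketch assuming the same names and addressing as above; the connection names are our own, and option spellings can vary slightly between NetworkManager versions:

# Create the bond and set its IPv4 configuration:
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-tlb,miimon=100"
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.43.95/24 ipv4.gateway 192.168.43.1

# Enslave the two NICs to bond0:
nmcli connection add type bond-slave con-name bond0-port1 ifname enp0s3 master bond0
nmcli connection add type bond-slave con-name bond0-port2 ifname enp0s8 master bond0

# Activate the bond:
nmcli connection up bond0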
Restart the Network Service
Now we need to restart the network service. Once it has restarted successfully, the bond0 interface will come up.
[root@urclouds ~]# systemctl restart network
[root@urclouds ~]#
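If ifconfig is not installed (it comes from the optional net-tools package on CentOS 7), the same check can be done with the ip command:

# Show the bond interface and its addresses:
ip addr show bond0

# Show the link state of both slaves:
ip link show enp0s3
ip link show enp0s8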
We can check whether our bond configuration is up properly using the ifconfig -a command. In the output below you can see that bond0 is up and running with its two slave interfaces, enp0s3 and enp0s8. If one interface goes down, the other takes over and the server remains accessible.
[root@urclouds ~]# ifconfig -a
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.43.95  netmask 255.255.255.0  broadcast 192.168.43.255
        inet6 fe80::a00:27ff:feaf:597  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:af:05:97  txqueuelen 1000  (Ethernet)
        RX packets 32  bytes 2906 (2.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 117  bytes 10475 (10.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 08:00:27:af:05:97  txqueuelen 1000  (Ethernet)
        RX packets 1413  bytes 239680 (234.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1242  bytes 222903 (217.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 19  base 0xd020

enp0s8: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 08:00:27:ac:bf:5a  txqueuelen 1000  (Ethernet)
        RX packets 122  bytes 10001 (9.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 187  bytes 27916 (27.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16  base 0xd240

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 510  bytes 36085 (35.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 510  bytes 36085 (35.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:d3:32:d2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:d3:32:d2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@urclouds ~]#
We can use the command below to display our bond interface settings.
[root@urclouds ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: enp0s3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s3
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 08:00:27:af:05:97
Slave queue ID: 0

Slave Interface: enp0s8
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 08:00:27:ac:bf:5a
Slave queue ID: 0
[root@urclouds ~]#
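The same status information is also exposed through sysfs, which is handy for scripting; for example:

# Current bonding mode and the currently active slave:
cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/active_slave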
You can see in the output above that the currently active interface is enp0s3. If this interface goes down, enp0s8 will become the active interface. So let's test the fault tolerance of our network.
Fault tolerance testing
For the fault tolerance test, I am going to bring down the currently active interface, enp0s3, and check whether the network remains reachable over enp0s8. If the server is still accessible through enp0s8, our network bonding is working properly. So let's bring the interface down and check.
[root@urclouds ~]# ifdown enp0s3
Device 'enp0s3' successfully disconnected.
[root@urclouds ~]#
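To verify the server really stays reachable during the failover, you can run a continuous ping against the bond IP (192.168.43.95 in this setup) from another machine on the same network; the replies should keep arriving while enp0s3 is down:

# Run from a second host on the 192.168.43.0/24 network:
ping 192.168.43.95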
Now you can see in the output below that enp0s8 has become the active slave.
[root@urclouds network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: enp0s8
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s3
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 08:00:27:af:05:97
Slave queue ID: 0

Slave Interface: enp0s8
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 08:00:27:ac:bf:5a
Slave queue ID: 0
[root@urclouds network-scripts]#
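Once the test is done, bring the interface back into the bond. During such tests you can also watch the active slave change in real time with watch:

# Bring enp0s3 back up as a slave of bond0:
ifup enp0s3

# Optionally refresh the bond status every second while testing:
watch -n 1 cat /proc/net/bonding/bond0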