Notes on Deploying a DRBD + NFS + Keepalived High-Availability Environment on CentOS

2018-01-22 16:13:38

Consider an NFS server setup (for an image-serving business, say) with one primary machine and one standby. Synchronization from primary to standby is usually done with rsync (optionally combined with inotify for real-time sync). Since the NFS service is a single point of failure, a "DRBD + NFS + Keepalived" architecture can be deployed to guarantee service uptime and data safety. DRBD itself, with a detailed configuration walkthrough, was covered in an earlier post; without further ado, and based on the machine configuration from that post, the deployment steps are recorded below:

Plan:
1) Install keepalived on both machines; the VIP is 192.168.1.200
2) Use the DRBD mount point /data as the NFS export directory; remote clients mount NFS via the VIP address
3) When the Primary host goes down or its NFS service fails, the Secondary host is promoted to DRBD primary and the VIP resource moves over with it.
   When the Primary host recovers, it becomes the DRBD primary node again and reclaims the VIP, completing the failover cycle
-----------------------------------------------------------------------------------------------------------
For the DRBD setup on the Primary and Secondary hosts, see http://www.cnblogs.com/kevingrace/p/5740940.html
 
The Primary host (192.168.1.151) acts as the DRBD primary node by default; the DRBD mount point is /data
The Secondary host (192.168.1.152) is the DRBD backup node
 
Check the DRBD status on the Primary host. The output below shows that Primary is the DRBD primary node
[root@Primary ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    ext4
 
As shown below, DRBD is mounted, with /data as the mount point
[root@Primary ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      156G   36G  112G  25% /
tmpfs                 2.9G     0  2.9G   0% /dev/shm
/dev/vda1             190M   98M   83M  55% /boot
/dev/drbd0            9.8G   23M  9.2G   1% /data
 
The data on the DRBD device looks like this
[root@Primary ~]# cd /data
[root@Primary data]# ll
total 16
-rw-r--r--. 1 root root 9 May 25 09:33 test3
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo1
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo2
 
-----------------------------------------------------------------------------------------------------------
Install NFS on both the Primary and Secondary hosts (reference: http://www.cnblogs.com/kevingrace/p/6084604.html)
[root@Primary ~]# yum install rpcbind nfs-utils
[root@Primary ~]# vim /etc/exports
/data 192.168.1.0/24(rw,sync,no_root_squash)
 
[root@Primary ~]# /etc/init.d/rpcbind start
[root@Primary ~]# /etc/init.d/nfs start
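The export options in /etc/exports above are worth spelling out. Here is the same line again as a commented config fragment (same semantics, comments only):

```shell
# /etc/exports: one export, restricted to the client subnet
#   rw              allow both reads and writes
#   sync            reply only after writes hit the disk (safer on top of DRBD)
#   no_root_squash  do not map client root to nobody (needed when clients write as root)
/data 192.168.1.0/24(rw,sync,no_root_squash)
```

After editing the file, `exportfs -ra` reloads the export table without restarting the NFS service.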
---------------------------------------------------------------------------------------------------------
Turn off the iptables firewall on both hosts
It is best to disable the firewall; otherwise clients may fail to mount the NFS share!
If the firewall must stay on, the NFS-related ports and the VRRP multicast address need to be opened in iptables
[root@Primary ~]# /etc/init.d/iptables stop
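If disabling the firewall is not acceptable, a minimal rule set might look like the following sketch. It assumes NFSv3 with rpcbind on port 111 and nfsd on 2049; note that mountd/lockd ports are dynamic on stock CentOS unless pinned in /etc/sysconfig/nfs, so those would need extra rules:

```shell
# Allow NFS traffic from the client subnet (rpcbind 111, nfsd 2049)
iptables -A INPUT -s 192.168.1.0/24 -p tcp -m multiport --dports 111,2049 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p udp -m multiport --dports 111,2049 -j ACCEPT
# Allow VRRP advertisements (IP protocol 112) to the VRRP multicast group,
# so the two keepalived nodes can see each other's heartbeats
iptables -A INPUT -p 112 -d 224.0.0.18 -j ACCEPT
```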

SELinux MUST be disabled on both machines!
Otherwise notify_master.sh and the other scripts configured in keepalived.conf below will fail to execute. This is a pitfall I have run into before!
[root@Primary ~]# setenforce 0     //temporarily disables it; to disable permanently, also set SELINUX=disabled in /etc/sysconfig/selinux
[root@Primary ~]# getenforce 
Permissive
-----------------------------------------------------------------------------------------------------------
Install Keepalived on both hosts to implement automatic fail-over
 
Install Keepalived
[root@Primary ~]# yum install -y openssl-devel popt-devel
[root@Primary ~]# cd /usr/local/src/
[root@Primary src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@Primary src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@Primary src]# cd keepalived-1.3.5
[root@Primary keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@Primary keepalived-1.3.5]# make && make install
       
[root@Primary keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@Primary keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@Primary keepalived-1.3.5]# mkdir /etc/keepalived/
[root@Primary keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@Primary keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@Primary keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
        
[root@Primary keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      #make the init script executable
[root@Primary keepalived-1.3.5]# chkconfig keepalived on                   #enable at boot
[root@Primary keepalived-1.3.5]# service keepalived start                  #start
[root@Primary keepalived-1.3.5]# service keepalived stop                   #stop
[root@Primary keepalived-1.3.5]# service keepalived restart                #restart
 
 
-----------keepalived.conf on the Primary host
[root@Primary ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Primary ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id DRBD_HA_MASTER
}

vrrp_script chk_nfs {
    script "/etc/keepalived/check_nfs.sh"
    interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nfs
    }
    notify_stop /etc/keepalived/notify_stop.sh           //run this script when keepalived stops
    notify_master /etc/keepalived/notify_master.sh       //run this script when this machine becomes the keepalived master
    virtual_ipaddress {
        192.168.1.200
    }
}
 
Start the keepalived service
[root@Primary data]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Primary data]# ps -ef|grep keepalived
root     30937     1  0 11:49 ?        00:00:00 keepalived -D
root     30939 30937  0 11:49 ?        00:00:00 keepalived -D
root     30940 30937  0 11:49 ?        00:00:00 keepalived -D
root     31123 10364  0 11:50 pts/1    00:00:00 grep --color keepalived
 
Check the VIP
[root@Primary data]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:35:d1:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.151/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.200/32 scope global eth0
    inet6 fe80::f816:3eff:fe35:d1d6/64 scope link
       valid_lft forever preferred_lft forever
 
-----------keepalived.conf on the Secondary host
[root@Secondary ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Secondary ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id DRBD_HA_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    notify_master /etc/keepalived/notify_master.sh       //run this script when this machine becomes the keepalived master
    notify_backup /etc/keepalived/notify_backup.sh       //run this script when this machine becomes the keepalived backup
    virtual_ipaddress {
        192.168.1.200
    }
}
 
Start the keepalived service
[root@Secondary ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Secondary ~]# ps -ef|grep keepalived
root     17128     1  0 11:50 ?        00:00:00 keepalived -D
root     17129 17128  0 11:50 ?        00:00:00 keepalived -D
root     17131 17128  0 11:50 ?        00:00:00 keepalived -D
root     17219 29939  0 11:50 pts/1    00:00:00 grep --color keepalived
 
-------------The four scripts---------------
1) This script is configured only on the Primary machine
[root@Primary ~]# vim /etc/keepalived/check_nfs.sh
#!/bin/bash
 
### Check NFS availability: is the service process healthy?
/sbin/service nfs status &>/dev/null
if [ $? -ne 0 ];then
    ### If the service is unhealthy, try restarting it first
    /sbin/service nfs restart
    /sbin/service nfs status &>/dev/null
    if [ $? -ne 0 ];then
        ### If NFS is still unhealthy after the restart:
        ### unmount the DRBD device
        umount /dev/drbd0
        ### demote the DRBD primary to secondary
        drbdadm secondary r0
        ### stop keepalived so the VIP moves to the other node
        /sbin/service keepalived stop
    fi
fi
 
[root@Primary ~]# chmod 755 /etc/keepalived/check_nfs.sh
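The check/restart/demote flow in check_nfs.sh can be factored into a generic helper. This is a sketch of my own (the function name and parameterization are not from the original script), with the three actions passed in as commands so the control flow can be exercised without touching real services:

```shell
#!/bin/sh
# check_and_failover HEALTH_CMD RESTART_CMD FAILOVER_CMD
# Runs HEALTH_CMD; on failure, runs RESTART_CMD once and re-checks;
# if the service is still unhealthy, runs FAILOVER_CMD and returns 1.
check_and_failover() {
    health=$1; restart=$2; failover=$3
    if ! $health; then
        $restart
        if ! $health; then
            $failover
            return 1
        fi
    fi
    return 0
}

# In check_nfs.sh the three slots correspond to:
#   health   -> /sbin/service nfs status
#   restart  -> /sbin/service nfs restart
#   failover -> umount /dev/drbd0; drbdadm secondary r0; service keepalived stop
```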
 
 
2) This script is configured only on the Primary machine
[root@Primary ~]# mkdir /etc/keepalived/logs
[root@Primary ~]# vim /etc/keepalived/notify_stop.sh
#!/bin/bash
 
time=`date "+%F %H:%M:%S"`
echo -e "$time  ------notify_stop------\n" >> /etc/keepalived/logs/notify_stop.log
/sbin/service nfs stop &>> /etc/keepalived/logs/notify_stop.log
/bin/umount /data &>> /etc/keepalived/logs/notify_stop.log
/sbin/drbdadm secondary r0 &>> /etc/keepalived/logs/notify_stop.log
echo -e "\n" >> /etc/keepalived/logs/notify_stop.log
 
[root@Primary ~]# chmod 755 /etc/keepalived/notify_stop.sh
 
3) This script is configured on BOTH machines
[root@Primary ~]# vim /etc/keepalived/notify_master.sh
#!/bin/bash
 
time=`date "+%F %H:%M:%S"`
echo -e "$time    ------notify_master------\n" >> /etc/keepalived/logs/notify_master.log
/sbin/drbdadm primary r0 &>> /etc/keepalived/logs/notify_master.log
/bin/mount /dev/drbd0 /data &>> /etc/keepalived/logs/notify_master.log
/sbin/service nfs restart &>> /etc/keepalived/logs/notify_master.log
echo -e "\n" >> /etc/keepalived/logs/notify_master.log
 
[root@Primary ~]# chmod 755 /etc/keepalived/notify_master.sh
 
4) This script is configured only on the Secondary machine
[root@Secondary ~]# mkdir /etc/keepalived/logs
[root@Secondary ~]# vim /etc/keepalived/notify_backup.sh
#!/bin/bash
 
time=`date "+%F %H:%M:%S"`
echo -e "$time    ------notify_backup------\n" >> /etc/keepalived/logs/notify_backup.log
/sbin/service nfs stop &>> /etc/keepalived/logs/notify_backup.log
/bin/umount /dev/drbd0 &>> /etc/keepalived/logs/notify_backup.log
/sbin/drbdadm secondary r0 &>> /etc/keepalived/logs/notify_backup.log
echo -e "\n" >> /etc/keepalived/logs/notify_backup.log
 
[root@Secondary ~]# chmod 755 /etc/keepalived/notify_backup.sh
-----------------------------------------------------------------------------------------------------------
Mount NFS on a remote client machine
The client needs the rpcbind and nfs-utils packages installed, with the rpcbind service running
[root@huanqiu ~]# yum install rpcbind nfs-utils
[root@huanqiu ~]# /etc/init.d/rpcbind start
 
Mount NFS
[root@huanqiu ~]# mount -t nfs 192.168.1.200:/data /web
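Mount options matter for how the client experiences the failover. A hedged config-fragment example (the option values are illustrative, not from the original post): a hard mount blocks and retries I/O until the VIP comes back on the new node, which is what makes the switchover look like a short pause rather than an I/O error.

```shell
# /etc/fstab entry for the HA mount (values are illustrative):
#   hard     block and retry I/O during the failover window (the NFS default;
#            'soft' would surface I/O errors to applications instead)
#   intr     allow blocked I/O to be interrupted by signals
#   timeo    retransmit timeout, in tenths of a second
#   retrans  retries before a major timeout is reported
192.168.1.200:/data  /web  nfs  hard,intr,timeo=30,retrans=3  0 0
```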
 
As shown below, the NFS share is now mounted
[root@huanqiu ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      107G   15G   87G  14% /
tmpfs                 2.9G     0  2.9G   0% /dev/shm
/dev/vda1             190M   67M  113M  38% /boot
192.168.1.200:/data   9.8G   23M  9.2G   1% /web
 
[root@huanqiu ~]# cd /web/
[root@huanqiu web]# ll
total 16
-rw-r--r--. 1 root root 9 May 25 09:33 test3
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo1
-rw-r--r--. 1 root root 5 May 25 09:34 wangshibo2
-----------------------------------------------------------------------------------------------------------
Next, test the automatic fail-over:
 
1)
First stop the keepalived service on the Primary host. The VIP resource then moves to the Secondary host.
At the same time, NFS on the Primary host is stopped, and the Secondary is promoted to the DRBD primary node
[root@Primary ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@Primary ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:35:d1:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.151/24 brd 192.168.1.255 scope global eth0
    inet6 fe80::f816:3eff:fe35:d1d6/64 scope link
       valid_lft forever preferred_lft forever
 
The system log also records the VIP resource transfer
[root@Primary ~]# tail -1000 /var/log/messages
........
May 25 11:50:03 localhost Keepalived_vrrp[30940]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:50:03 localhost Keepalived_vrrp[30940]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:50:03 localhost Keepalived_vrrp[30940]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:50:03 localhost Keepalived_vrrp[30940]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:51 localhost Keepalived[30937]: Stopping
May 25 11:58:51 localhost Keepalived_vrrp[30940]: VRRP_Instance(VI_1) sent 0 priority
May 25 11:58:51 localhost Keepalived_vrrp[30940]: VRRP_Instance(VI_1) removing protocol VIPs.
 
[root@Primary ~]# ps -ef|grep nfs
root       588 10364  0 12:13 pts/1    00:00:00 grep --color nfs
[root@Primary ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      156G   36G  112G  25% /
tmpfs                 2.9G     0  2.9G   0% /dev/shm
/dev/vda1             190M   98M   83M  55% /boot
[root@Primary ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res  cs         ro                   ds                 p  mounted  fstype
0:r0   Connected  Secondary/Secondary  UpToDate/UpToDate  C
 
Log in to the Secondary backup machine: the VIP resource has moved over
[root@Secondary ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:4c:7e:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.152/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.200/32 scope global eth0
    inet6 fe80::f816:3eff:fe4c:7e88/64 scope link
       valid_lft forever preferred_lft forever
 
[root@Secondary ~]# tail -1000 /var/log/messages
........
May 25 11:58:53 localhost Keepalived_vrrp[17131]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:53 localhost Keepalived_vrrp[17131]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:53 localhost Keepalived_vrrp[17131]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:53 localhost Keepalived_vrrp[17131]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:58 localhost Keepalived_vrrp[17131]: Sending gratuitous ARP on eth0 for 192.168.1.200
May 25 11:58:58 localhost Keepalived_vrrp[17131]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.1.200
 
[root@Secondary ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      156G   13G  135G   9% /
tmpfs                 2.9G     0  2.9G   0% /dev/shm
/dev/vda1             190M   89M   92M  50% /boot
/dev/drbd0            9.8G   23M  9.2G   1% /data
 
When the keepalived service on the Primary machine comes back up, it forcibly reclaims the VIP resource (watch /var/log/messages),
and the Primary becomes the DRBD primary node again
 
2)
Stop the NFS service on the Primary host. The monitoring script first tries to restart NFS; only if that restart fails does it demote the node from DRBD primary to backup and stop keepalived,
triggering the same failover flow as above
 
Conclusion:
During the master/backup switchover above, the NFS mount on the client stays usable; there is only a brief delay.
This also validates DRBD's data consistency guarantees (including open-file and modification state): from the client's point of view, the whole switchover looks like "one NFS restart" (NFS stops on the master and starts on the backup).
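One way to actually measure that brief delay from a client is to write a timestamped probe into the NFS mount once per second during the failover and count how many writes succeed. A minimal sketch (the /web path and probe.log file name are assumptions):

```shell
#!/bin/sh
# probe DIR COUNT [INTERVAL]
# Attempt COUNT timestamped appends into DIR/probe.log, sleeping INTERVAL
# seconds between attempts (default 1); print how many attempts succeeded.
probe() {
    dir=$1; count=$2; interval=${3:-1}
    ok=0
    i=0
    while [ "$i" -lt "$count" ]; do
        if date "+%F %H:%M:%S" >> "$dir/probe.log" 2>/dev/null; then
            ok=$((ok+1))
        fi
        sleep "$interval"
        i=$((i+1))
    done
    echo "$ok"
}

# During the failover test, run on the client:  probe /web 60
# Gaps between consecutive timestamps in /web/probe.log show the pause length.
```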
