Keepalived + LVS + Nginx + DRBD + Heartbeat + Zabbix Cluster Architecture


This article was written jointly by 阿呆 and zhdy!

1. Preparation

1.1 Six test servers:

Make sure firewalld and SELinux are turned off on every machine.

# systemctl stop firewalld         # stop the firewall immediately

# systemctl disable firewalld      # keep it from starting at boot

# iptables -F                      # flush any leftover rules

# setenforce 0                     # put SELinux into permissive mode for this session
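Note that setenforce 0 only lasts until the next reboot. A common extra step (a small sketch, not part of the original article) is to disable SELinux permanently in its config file:

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config      # takes effect after the next reboot
# getenforce                                                               # should now report Permissive (Disabled after a reboot)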

2. Both real servers need the following script:

vim /usr/local/sbin/lvs_rs.sh

#!/bin/bash
vip=192.168.96.200
# Bind the VIP to lo so the real server can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# Tune the ARP kernel parameters so the real server does not answer ARP requests for the VIP
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Run the script on each real server:

# sh /usr/local/sbin/lvs_rs.sh
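Keep in mind that the lo:0 alias, the host route and the echo settings above do not survive a reboot. One minimal way to persist at least the ARP parameters is a sysctl drop-in (the file name below is my own choice, not from the article):

# vim /etc/sysctl.d/99-lvs-dr.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2

# sysctl --system      # reload all sysctl configuration files

The lo:0 VIP itself can simply be restored at boot by re-running /usr/local/sbin/lvs_rs.sh (for example from rc.local).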

Check the routing table on both real servers with route -n:

# route -n

Check that the VIP is bound to the lo interface:

# ip addr

3. Install Keepalived

zhdy01:

[root@zhdy-01 ~]# yum install -y keepalived

[root@zhdy-01 ~]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    # use BACKUP on the backup server
    state MASTER
    # NIC the VIP is bound to; change ens33 if your interface name differs
    interface ens33
    virtual_router_id 51
    # use 90 on the backup server
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zhangduanya
    }
    virtual_ipaddress {
        192.168.96.200
    }
}
virtual_server 192.168.96.200 80 {
    # check real server health every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm (weighted least-connection)
    lb_algo wlc
    # DR (direct routing) mode
    lb_kind DR
    # 0 disables persistence; a non-zero value pins a client IP to one real server for that many seconds
    persistence_timeout 0
    # check real server health over TCP
    protocol TCP
    real_server 192.168.96.131 80 {
        # weight
        weight 100
        TCP_CHECK {
            # give up after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.96.132 80 {
        weight 90
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Restart the keepalived service:

systemctl restart keepalived
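Before moving on, a quick sanity check can save time (a sketch; adjust the interface name if yours is not ens33, and install the ipvsadm package if the command is missing):

# ip addr show ens33 | grep 192.168.96.200    # the MASTER node should now hold the VIP
# ipvsadm -ln                                 # the virtual server and both real servers should be listed
# systemctl status keepalived                 # look for configuration errors in the log output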

zhdy02:

[root@zhdy-02 ~]# yum install -y keepalived

[root@zhdy-02 ~]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    # this is the backup server, so state is BACKUP
    state BACKUP
    # NIC the VIP is bound to; change ens33 if your interface name differs
    interface ens33
    virtual_router_id 51
    # lower priority than the master (100)
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zhangduanya
    }
    virtual_ipaddress {
        192.168.96.200
    }
}
virtual_server 192.168.96.200 80 {
    # check real server health every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm (weighted least-connection)
    lb_algo wlc
    # DR (direct routing) mode
    lb_kind DR
    # 0 disables persistence (no pinning of client IPs to one real server)
    persistence_timeout 0
    # check real server health over TCP
    protocol TCP
    real_server 192.168.96.131 80 {
        # weight
        weight 100
        TCP_CHECK {
            # give up after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.96.132 80 {
        weight 90
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

4. Configure the two Nginx servers

zhdy03 (Nginx server 1)

# yum install -y nginx      # nginx is installed from yum for simplicity in this lab; in production, prefer building from source

# systemctl start nginx

# ps aux | grep nginx

# netstat -lntp

# vim /usr/share/nginx/html/index.html 
this is master nginx!
Configure the ARP kernel parameters on the Nginx server so that it does not answer ARP requests for the VIP:

[root@zhdy03 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@zhdy03 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@zhdy03 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@zhdy03 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Bind the VIP to the loopback interface:

[root@zhdy03 ~]# ifconfig lo:0 192.168.96.200 broadcast 192.168.96.200 netmask 255.255.255.255 up
[root@zhdy03 ~]# route add -host 192.168.96.200 dev lo:0

zhdy04 (Nginx server 2): same steps

# yum install -y nginx      # nginx is installed from yum for simplicity in this lab; in production, prefer building from source

# systemctl start nginx

# ps aux | grep nginx

# netstat -lntp

# vim /usr/share/nginx/html/index.html      # use different content here so you can tell the two backends apart
this is backup nginx!
Configure the ARP kernel parameters on this Nginx server as well:

[root@zhdy04 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@zhdy04 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@zhdy04 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@zhdy04 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Bind the VIP to the loopback interface:

[root@zhdy04 ~]# ifconfig lo:0 192.168.96.200 broadcast 192.168.96.200 netmask 255.255.255.255 up
[root@zhdy04 ~]# route add -host 192.168.96.200 dev lo:0
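Before testing through the VIP, it can be worth confirming that each backend answers directly (a quick sketch using the addresses above):

# curl -s http://192.168.96.131/      # should print the zhdy03 page
# curl -s http://192.168.96.132/      # should print the zhdy04 page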

5. Check and test:

zhdy01:

[root@zhdy-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 rr
  -> 192.168.96.131:80            Route   1      0          0         
  -> 192.168.96.132:80            Route   1      1          0      

zhdy02:

[root@zhdy-02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 rr
  -> 192.168.96.131:80            Route   100    0          0         
  -> 192.168.96.132:80            Route   90     1          0 

To verify, open 192.168.96.200 in a browser (the site is shown in the screenshot below).
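If you prefer the command line, a small loop (a sketch) shows which real server answers each request:

# for i in $(seq 1 6); do curl -s http://192.168.96.200/; done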

That completes the LVS + Keepalived + Nginx configuration.

Now test failover: stop keepalived on one of the LVS directors.

Test again; the result is the same as in the animation above.

Push it a bit further and stop nginx on one of the real servers as well.

No matter how many times you refresh, only one page is ever returned (see the output below as proof).

[root@zhdy-02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 wlc
  -> 192.168.96.131:80            Route   100    1          1    

6. MySQL master-slave replication

  • Refer to a master-master setup for the detailed steps.
  • A master-slave architecture is used here (a minimal my.cnf sketch follows below).

To take load off the real servers, the application connects to the database remotely: ↓
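The replication setup itself is not shown in this article. As a reminder, a minimal sketch of the key my.cnf settings for MySQL 5.6 master-slave replication (the server IDs and log names are illustrative, not from the article):

# master, /etc/my.cnf
[mysqld]
server-id=1
log-bin=mysql-bin

# slave, /etc/my.cnf
[mysqld]
server-id=2
relay-log=relay-bin

After restarting both instances, create a replication account on the master, then run CHANGE MASTER TO ... and START SLAVE on the slave.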

7. DRBD installation and configuration

  1. MySQL master-slave replication keeps copies of the data on two or more machines; when one goes down you switch to another, which keeps MySQL available and gives roughly a 90.000% SLA.
  2. Replicating MySQL's storage with DRBD can push this to a 99.999% SLA.

With that comparison in mind, let's try DRBD today!

7.1 Add a dedicated data disk (in a VM, just attach a new virtual disk)

Then, on both machines:

# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.96.133 zhdy05
192.168.96.134 zhdy06

# ntpdate -u time.nist.gov      # sync the clock from an NTP server

7.2 Install MySQL

cd /usr/local/src

wget http://mirrors.sohu.com/mysql/MySQL-5.6/mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz 

tar zxvf mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz

mv mysql-5.6.35-linux-glibc2.5-x86_64 /usr/local/mysql

cd /usr/local/mysql

useradd mysql

mkdir -p /data/mysql

chown -R mysql:mysql /data/mysql

./scripts/mysql_install_db --user=mysql --datadir=/data/mysql

cp support-files/my-default.cnf  /etc/my.cnf

cp support-files/mysql.server /etc/init.d/mysqld

vi /etc/init.d/mysqld

# edit the following two lines (basedir and datadir) in the init script:
basedir=/usr/local/mysql
datadir=/data/mysql

/etc/init.d/mysqld start

7.3 Install DRBD

All of the following is done on both machines:

# rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum -y install drbd84-utils kmod-drbd84

7.4 Partition the disk for DRBD (each node provides a partition of the same size):

[root@zhdy05 mysql]# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk 
├─sda1   8:1    0  400M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 57.6G  0 part /
sdb      8:16   0   10G  0 disk 
sr0     11:0    1  4.1G  0 rom  

[root@zhdy05 mysql]# fdisk /dev/sdb 

n → p → 3 → Enter → Enter → w

[root@zhdy05 mysql]# cat /proc/partitions
major minor  #blocks  name

   8        0   62914560 sda
   8        1     409600 sda1
   8        2    2097152 sda2
   8        3   60406784 sda3
   8       16   10485760 sdb
   8       19   10484736 sdb3
  11        0    4277248 sr0

7.5 Edit the DRBD global configuration file

[root@zhdy05 ~]# vim /etc/drbd.d/global_common.conf 

global {
    usage-count no;
}
common {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
        wfc-timeout 30;
        degr-wfc-timeout 30;
    }
    options {
    }
    disk {
        on-io-error detach;
        fencing resource-only;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "mydrbd";
    }

    syncer {
        rate 100M;
    }
}

7.6 Add the resource file:

[root@zhdy05 ~]# vim /etc/drbd.d/drbd.res
resource r0 {
    device /dev/drbd0;
    disk /dev/sdb3;
    meta-disk internal;
    on zhdy05 {
        address 192.168.96.133:7789;
    }
    on zhdy06 {
        address 192.168.96.134:7789;
    }
}
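Before creating any metadata it can be worth validating the configuration on both nodes (a quick sketch):

# drbdadm dump r0      # parses the config files and prints the resource if they are valid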

7.7 Copy the configuration files to zhdy06

[root@zhdy05 ~]# scp /etc/drbd.d/{global_common.conf,drbd.res} zhdy06:/etc/drbd.d/
The authenticity of host 'zhdy06 (192.168.96.134)' can't be established.
ECDSA key fingerprint is 2f:14:f6:09:bd:e2:79:98:d1:62:15:0c:90:90:1d:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhdy06,192.168.96.134' (ECDSA) to the list of known hosts.
global_common.conf                                                                                                                                                                                         100% 2354     2.3KB/s   00:00    
drbd.res     

7.8 Initialize the resource and start the service

iptables -F
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 7789 -j ACCEPT      # resource r0 listens on 7789 (see drbd.res above)
service iptables save

################ Initialize the resource on node 1 (zhdy05) and start the service
[root@zhdy05 ~]# drbdadm create-md r0
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

################ Start the service
[root@zhdy05 ~]# systemctl start drbd

[root@zhdy05 ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Diskless C r-----
    ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10484380

###### Check the listening address and port
[root@zhdy05 ~]# netstat -anput|grep 7789
tcp        0      0 192.168.96.133:47387    192.168.96.134:7789     ESTABLISHED -                   
tcp        0      0 192.168.96.133:49493    192.168.96.134:7789     ESTABLISHED -   

Promote one node to Primary by running the following command on it; here we do this on zhdy05.

########## Promote zhdy05 to Primary
[root@zhdy05 ~]# drbdadm -- --overwrite-data-of-peer primary r0

[root@zhdy05 ~]# cat /proc/drbd     # synchronization starts
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:1131520 nr:0 dw:0 dr:1132432 al:8 bm:0 lo:0 pe:2 ua:0 ap:0 ep:1 wo:f oos:9354908
	[=>..................] sync'ed: 10.9% (9132/10236)M
	finish: 0:03:56 speed: 39,472 (38,944) K/sec

[root@zhdy05 ~]# cat /proc/drbd      # synchronization finished; the pair now shows Primary/Secondary
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10484380 nr:0 dw:0 dr:10485292 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@zhdy06 ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:10484380 dw:10484380 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

7.9 Create a filesystem and mount it:

[root@zhdy05 ~]# mkfs.ext4 /dev/drbd0           # format the DRBD block device
[root@zhdy05 ~]# mkdir /mydata                  # create the mount point
[root@zhdy05 ~]# mount /dev/drbd0 /mydata/      # mount on the primary only (the secondary does not and cannot mount it)

[root@zhdy05 ~]# df -h      # /dev/drbd0 appears as the last entry
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        58G  3.1G   55G    6% /
devtmpfs        479M     0  479M    0% /dev
tmpfs           489M     0  489M    0% /dev/shm
tmpfs           489M  6.7M  482M    2% /run
tmpfs           489M     0  489M    0% /sys/fs/cgroup
/dev/sda1       397M  119M  279M   30% /boot
tmpfs            98M     0   98M    0% /run/user/0
/dev/drbd0      9.8G   37M  9.2G    1% /mydata
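The point of this exercise is to put the MySQL data on DRBD, a step the article does not show explicitly. A hedged sketch, reusing the paths from the install in 7.2 (primary node only; stop MySQL first):

# /etc/init.d/mysqld stop
# mkdir /mydata/mysql && chown -R mysql:mysql /mydata/mysql
# rsync -a /data/mysql/ /mydata/mysql/                                  # copy the existing datadir onto the DRBD mount
# sed -i 's#^datadir=.*#datadir=/mydata/mysql#' /etc/init.d/mysqld      # point the init script at the new location
# /etc/init.d/mysqld start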

7.10 Test

[root@zhdy05 mydata]# ls
lost+found

[root@zhdy05 mydata]# touch tst.txt

[root@zhdy05 mydata]# cp /etc/issue /mydata/

[root@zhdy05 mydata]# ls
issue  lost+found  tst.txt

[root@zhdy05 mydata]# !cat
cat /proc/drbd  
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10783940 nr:0 dw:299560 dr:10486289 al:81 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Synchronization is complete!


[root@zhdy06 ~]# mount /dev/drbd0 /mnt      # the DRBD device cannot be mounted on the secondary
mount: you must specify the filesystem type
[root@zhdy06 ~]# mount /dev/sdb3 /mnt       # the underlying disk cannot be mounted either, because DRBD is using it
mount: /dev/sdb3 already mounted or /mnt busy
[root@zhdy06 ~]# drbdadm down r0            # stop the resource (it is named r0 here)
[root@zhdy06 ~]# mount /dev/sdb3 /mnt/      # now the physical partition can be mounted
[root@zhdy06 ~]# df -h                      # sdb3 shows exactly the same usage as on zhdy05, so the manual switch worked

[root@zhdy06 ~]# umount /mnt/               # unmount the physical partition
[root@zhdy06 ~]# drbdadm up r0              # bring the resource back up
[root@zhdy06 ~]# cat /proc/drbd             # check the sync state; back to Primary/Secondary
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@, 2014-07-08 20:52:23
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

All done; the DRBD storage works. But this manual switchover is not efficient, so later I plan to add Heartbeat so that the primary/secondary switch happens fully automatically when a failure occurs.
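As a preview of that Heartbeat integration, a classic v1-style haresources entry for DRBD-backed MySQL looks roughly like the line below. This is only a sketch under assumptions: the service VIP 192.168.96.201 and the mysqld init script name are illustrative and not from this article.

# /etc/ha.d/haresources, identical on both nodes
zhdy05 IPaddr::192.168.96.201/24/ens33 drbddisk::r0 Filesystem::/dev/drbd0::/mydata::ext4 mysqld

Heartbeat then promotes DRBD, mounts /mydata and starts MySQL on whichever node is active.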


I ran into a lot of problems with DRBD, so be careful with these steps!

DRBD UpToDate/DUnknown fault recovery

1. Check the node states

(1) Primary node state

[root@zhdy05 ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:604

(2) Secondary node state

[root@zhdy06 ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:548

2. Resync, treating the primary node's data as the authoritative copy

(1) Stop the DRBD service:

[root@zhdy05 ~]# systemctl stop drbd

Stopping all DRBD resources: .

(2) Re-create the metadata on the secondary (zhdy06)

[root@zhdy06 ~]# drbdadm create-md r0      # the argument after create-md is the DRBD resource name

You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sdb1 at byte offset 5364318208
Do you really want to overwrite the existing v08 meta-data?
[need to type 'yes' to confirm] yes

Writing meta data...
md_offset 5364318208
al_offset 5364285440
bm_offset 5364121600

Found ext3 filesystem
5238400 kB data area apparently used
5238400 kB left usable by current configuration

Even though it looks like this would place the new meta data into unused space, you still need to confirm, as this is only a guess.

Do you want to proceed? [need to type 'yes' to confirm] yes

initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

(3) Start the DRBD service

[root@zhdy06 ~]# systemctl start drbd

Starting DRBD resources: [
create res: data
prepare disk: data
adjust disk: data
adjust net: data
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
  reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
  expire after 0 seconds. [wfc-timeout]
(These values are for resource 'data'; 0 sec -> wait forever)
To abort waiting enter 'yes' [  15]:yes
.

[root@zhdy06 ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:5238400 dw:5238400 dr:0 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

3. Back on the primary node (zhdy05)

(1) The primary node is still StandAlone at this point:

[root@zhdy05 ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:604

(2) After reloading DRBD on the primary, the data resyncs to the secondary

[root@zhdy05 ~]# systemctl reload drbd
Reloading DRBD configuration: .

[root@zhdy05 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:176816 nr:0 dw:0 dr:180896 al:0 bm:10 lo:4 pe:2 ua:8 ap:0 ep:1 wo:d oos:5063296
[>....................] sync'ed:  3.4% (4944/5112)M
finish: 0:00:57 speed: 87,552 (87,552) K/sec
[root@zhdy05 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:5238400 nr:0 dw:0 dr:5239072 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Check the node states again and they are back in sync. That resolves the split-brain situation, and the data is all still there.
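For reference, the more common way to recover from a DRBD split-brain is to discard the changes on one side instead of recreating its metadata. A sketch (DRBD 8.4 commands; run the first block on the node whose changes you are willing to lose):

# on the victim node (here zhdy06)
drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# on the surviving node (here zhdy05), if it shows StandAlone
drbdadm connect r0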

Extension: configuring a highly available database cluster

Reference article: http://blog.csdn.net/kjsayn/article/details/52871835

The final target model for the architecture above is shown below:

8. Zabbix configuration

My machine does not have the resources for a seventh VM, so I set Zabbix up on one of the existing servers. See:

Zabbix setup: https://my.oschina.net/u/3497124/blog/1531500

References

DRBD internals explained: http://502245466.blog.51cto.com/7559397/1298945

