Study Notes 0606 -- Linux Cluster Architecture (Part 2)

2020-11-24 10:23:50

Linux Cluster Architecture (Part 2)

  • Preview
    • 1. LVS DR mode setup
      • 1.1 Preparation
      • 1.2 Set up a DR script on the director
      • 1.3 Set up scripts on each RS
      • 1.4 Run the script on dir
      • 1.5 Run the scripts on each RS
      • 1.6 Test load balancing
      • 1.7 Problem
    • 2. Load balancing with keepalived
      • 2.1 Edit the keepalived configuration file
      • 2.2 Start keepalived and check the NICs
      • 2.3 RS-side setup
      • 2.4 Test one
      • 2.5 Test two
  • Course summary

Preview

18.11 LVS DR mode setup
18.12 keepalived + LVS
Extended reading:
  • heartbeat vs. keepalived: http://blog.csdn.net/yunhua_lee/article/details/9788433
  • How DRBD works and how to configure it: http://502245466.blog.51cto.com/7559397/1298945
  • MySQL with keepalived: http://lizhenliang.blog.51cto.com/7876557/1362313
  • The three LVS modes in detail: http://www.it165.net/admin/html/201401/2248.html
  • LVS scheduling algorithms: http://www.aminglinux.com/bbs/thread-7407-1-1.html
  • About arp_ignore and arp_announce: http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
  • arp_ignore in the LVS DR model: https://www.imooc.com/article/79661
  • LVS fundamentals: http://blog.csdn.net/pi9nc/article/details/23380589
  • haproxy with keepalived: http://blog.csdn.net/xrt95050/article/details/40926255
  • nginx vs. LVS vs. haproxy: http://www.csdn.net/article/2014-07-24/2820837
  • Custom vrrp_script in keepalived: http://my.oschina.net/hncscwc/blog/158746
  • LVS DR mode with a single public IP: http://storysky.blog.51cto.com/628458/338726

1. LVS DR mode setup

1.1 Preparation

  • Three machines
  • Director, also called the scheduler (dir for short): 192.168.141.128
  • rs1: 192.168.141.129
  • rs2: 192.168.141.130
  • VIP: 192.168.141.200
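Before writing the scripts below, the machines need their software in place. A minimal sketch of the plan (the package names and the nginx detail are assumptions based on the tests later in these notes, for a CentOS-style system):

```shell
# Lab addressing from the notes (the 141.x shorthand expanded):
dir=192.168.141.128   # director / scheduler
rs1=192.168.141.129   # real server 1
rs2=192.168.141.130   # real server 2
vip=192.168.141.200   # virtual IP that clients will request
echo "dir=$dir rs1=$rs1 rs2=$rs2 vip=$vip"
# On dir:      yum install -y ipvsadm     (assumed; provides the ipvsadm tool)
# On each RS:  a web service such as nginx, serving a page that identifies
#              the host, so the load-balancing test can tell rs1 from rs2.
```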

1.2 Set up a DR script on the director

[root@linux-001 ~]# vim /usr/local/sbin/lvs_dr.sh

#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=192.168.141.200
rs1=192.168.141.129
rs2=192.168.141.130
# note the NIC name here; change ens33 if your interface is named differently
# bounce the NIC to clear any VIP left over from a previous run
ifdown ens33
ifup ens33
# bind the VIP to the alias ens33:2 and route it through that alias
ifconfig ens33:2 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens33:2
# flush old rules, then define the virtual service with weighted round-robin
$ipv -C
$ipv -A -t $vip:80 -s wrr
# add both real servers in DR mode (-g) with equal weight
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
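The `-s wrr` flag selects weighted round-robin scheduling. A toy simulation of how the equal weights above (1 and 1) translate into an alternating schedule -- this is only an illustration of how weights become shares, not LVS's internal algorithm:

```shell
# rs addresses and weights as configured in the script above
rs_list=(192.168.141.129 192.168.141.130)
weights=(1 1)
schedule=()
# expand each server into the schedule as many times as its weight
for round in 1 2; do
  for i in 0 1; do
    for ((w = 0; w < weights[i]; w++)); do
      schedule+=("${rs_list[i]}")
    done
  done
done
echo "${schedule[@]}"
# -> 192.168.141.129 192.168.141.130 192.168.141.129 192.168.141.130
```

With `-w 2` for rs1, rs1 would appear twice per round and receive two-thirds of the connections.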

1.3 Set up scripts on each RS

[root@linux-02 ~]# vim /usr/local/sbin/lvs_rs.sh

#!/bin/bash
vip=192.168.141.200
# bind the VIP on lo so the RS can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
# tune the ARP kernel parameters so the RS neither answers nor advertises
# ARP for the VIP -- only the director should respond to ARP for it
# reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
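The echo lines above do not survive a reboot. If the ARP settings should persist, the same four parameters could instead go into /etc/sysctl.conf (a sketch; these are the standard sysctl key names for the values above):

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

and be loaded with `sysctl -p`.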

1.4 Run the script on dir

[root@linux-001 ~]# sh  /usr/local/sbin/lvs_dr.sh
Device 'ens33' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@linux-001 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.141.128  netmask 255.255.255.0  broadcast 192.168.141.255
        inet6 fe80::8db4:d867:92de:d2d1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:6d:81:cc  txqueuelen 1000  (Ethernet)
        RX packets 15952  bytes 976264 (953.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 441  bytes 68862 (67.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.141.122  netmask 255.255.255.0  broadcast 192.168.141.255
        ether 00:0c:29:6d:81:cc  txqueuelen 1000  (Ethernet)

ens33:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.141.200  netmask 255.255.255.255  broadcast 192.168.141.200
        ether 00:0c:29:6d:81:cc  txqueuelen 1000  (Ethernet)

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.23.88  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::1bd9:6a99:3db1:3ce6  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:6d:81:d6  txqueuelen 1000  (Ethernet)
        RX packets 17549  bytes 1060123 (1.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67  bytes 4322 (4.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@linux-001 ~]# 

1.5 Run the scripts on each RS

[root@linux-02 ~]# sh  /usr/local/sbin/lvs_rs.sh
[root@linux-02 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.141.129  netmask 255.255.255.0  broadcast 192.168.141.255
        inet6 fe80::86ff:d912:c144:4503  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:3a:cd:af  txqueuelen 1000  (Ethernet)
        RX packets 16204  bytes 1003090 (979.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 585  bytes 113199 (110.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.100  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::e3af:26e5:ac7b:b1f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:83:29:48  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1382 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.141.200  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)

[root@linux-02 ~]# 
The same on rs2 (this machine shows the older-style ifconfig output):
[root@localhost ~]# sh /usr/local/sbin/lvs_rs.sh
[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:7E:B0:87  
          inet addr:192.168.141.130  Bcast:192.168.141.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe7e:b087/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16051 errors:0 dropped:0 overruns:0 frame:0
          TX packets:456 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:986535 (963.4 KiB)  TX bytes:76567 (74.7 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo:0      Link encap:Local Loopback  
          inet addr:192.168.141.200  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

[root@localhost ~]# 

1.6 Test load balancing
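From a client machine, request the VIP repeatedly; with equal weights the responses should split evenly between the two real servers. A sketch, assuming each RS's nginx serves a page naming its host -- the curl loop is what would run in the lab, while the counting step is shown against a captured sample so it can run anywhere:

```shell
# In the lab, on a client machine outside the cluster:
#   for i in $(seq 1 4); do curl -s http://192.168.141.200/; done
# Suppose the four responses captured were:
responses='rs1
rs2
rs1
rs2'
# Count hits per real server
counts=$(echo "$responses" | sort | uniq -c | awk '{printf "%s%s=%s", sep, $2, $1; sep=","}')
echo "$counts"
# -> rs1=2,rs2=2
```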

1.7 Problem

When the nginx service on one of the RS machines goes down, LVS has no way of knowing that the backend server is down, so the following situation occurs: requests are still scheduled to the dead server, and those requests fail.
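Sketched from the client's point of view: with wrr still alternating, roughly every other request targets the dead rs2 and fails (a simulation for illustration, not a real capture):

```shell
# Simulated wrr schedule with rs2's nginx down (illustration only)
ok=0; failed=0
for target in rs1 rs2 rs1 rs2; do
  if [ "$target" = "rs2" ]; then
    failed=$((failed + 1))   # request forwarded to the dead RS: connection fails
  else
    ok=$((ok + 1))           # rs1 still answers normally
  fi
done
echo "succeeded=$ok failed=$failed"
# -> succeeded=2 failed=2
# Until something removes the dead RS, it stays in the rules; a manual fix on
# dir would be:  ipvsadm -d -t 192.168.141.200:80 -r 192.168.141.130:80
```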

2. Load balancing with keepalived

2.1 Edit the keepalived configuration file

[root@linux-001 keepalived]# cd /etc/keepalived/
[root@linux-001 keepalived]# vim keepalived.conf

vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    # the NIC the VIP is bound to is ens33
    interface ens33
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux
    }
    virtual_ipaddress {
        192.168.141.200
    }
}
virtual_server 192.168.141.200 80 {
    # query real server state every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wlc
    # DR mode
    lb_kind DR
    # seconds for which connections from the same client IP are sent to the
    # same real server; 0 disables persistence
    persistence_timeout 0
    # use TCP to check real server state
    protocol TCP

    real_server 192.168.141.129 80 {
        # weight
        weight 100
        TCP_CHECK {
            # 10-second no-response timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.141.130 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
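One practical note before starting keepalived on dir: keepalived generates the ipvsadm rules and binds the VIP itself, so anything left over from the earlier lvs_dr.sh run should be cleared first. A sketch, guarded so it is a no-op without root or ipvsadm:

```shell
cleanup_lvs() {
  # Guard: flushing rules and downing the alias needs root and the ipvsadm tool
  if [ "$(id -u)" -eq 0 ] && command -v ipvsadm >/dev/null 2>&1; then
    ipvsadm -C 2>/dev/null || true            # flush manually created LVS rules
    ifconfig ens33:2 down 2>/dev/null || true # drop the manually bound VIP alias
  fi
}
cleanup_lvs
```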

2.2 Start keepalived and check the NICs

[root@linux-001 keepalived]# systemctl  restart keepalived

[root@linux-001 keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6d:81:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.141.128/24 brd 192.168.141.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.141.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.141.122/24 brd 192.168.141.255 scope global secondary noprefixroute ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::8db4:d867:92de:d2d1/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6d:81:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.23.88/24 brd 192.168.23.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::1bd9:6a99:3db1:3ce6/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

2.3 RS-side setup

The RS-side script is the same as in 1.3:

[root@linux-02 ~]# vim /usr/local/sbin/lvs_rs.sh

#!/bin/bash
vip=192.168.141.200
# bind the VIP on lo so the RS can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
# tune the ARP kernel parameters so the RS neither answers nor advertises
# ARP for the VIP -- only the director should respond to ARP for it
# reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce


[root@linux-02 ~]# sh /usr/local/sbin/lvs_rs.sh

2.4 Test one

Since nginx on rs2 is down, only the result below appears: requests reach rs1 only and never reach rs2.

2.5 Test two

Start nginx on rs2 again; accessing the VIP now reaches rs2 as well. Then check the ipvsadm rules again.

Summary: keepalived gives us both load balancing and high availability. When a backend machine is down, it ensures no requests are sent to that machine; once the service on it comes back up, the machine is added back into the ipvsadm rules.

Course summary

DRBD

https://blog.51cto.com/502245466/1298945

DRBD (Distributed Replicated Block Device) consists of a kernel module and supporting scripts and is used to build high-availability clusters. It works by mirroring an entire block device over the network, so it can be thought of as a kind of network RAID: it maintains a real-time mirror of a local block device on a remote machine, much like a RAID 1 array.
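As a concrete picture, a DRBD mirror is described by a resource file; a hypothetical minimal example (the host names, disks, and addresses are invented for illustration, not taken from these notes):

```
resource r0 {
  protocol C;                # fully synchronous replication
  on node1 {
    device    /dev/drbd0;    # the replicated block device presented to users
    disk      /dev/sdb1;     # local backing disk
    address   192.168.141.128:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.141.129:7789;
    meta-disk internal;
  }
}
```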

Building an MHA cluster step by step (MHA: a high-availability cluster solution for MySQL): http://blog.51cto.com/xiaoshuaigege/2060768
