Codis Installation and Deployment: Full Architecture

2018-11-14 16:27:35

Introduction to Codis

Codis is a distributed Redis solution. To upstream applications, connecting to a Codis Proxy is no different from connecting to a native Redis server (apart from a list of unsupported commands), so an application can use it just like a standalone Redis. Under the hood, Codis handles request forwarding, zero-downtime data migration, and so on; everything behind the proxy is transparent to the client, which can simply treat the backend as a single Redis service with unlimited memory.

Codis consists of four components:

Codis Proxy (codis-proxy)

Codis Manager (codis-config)

Codis Redis (codis-server)

ZooKeeper

codis-proxy is the Redis proxy service that clients connect to. It implements the Redis protocol itself and behaves just like a native Redis server (much like Twemproxy). Multiple codis-proxy instances can be deployed for one application; codis-proxy itself is stateless.

codis-config is the management tool for Codis. It supports operations such as adding/removing Redis nodes, adding/removing proxy nodes, and initiating data migration. codis-config also has a built-in HTTP server that serves a dashboard, so the state of the Codis cluster can be observed directly in a browser.

codis-server is a Redis fork maintained by the Codis project, based on Redis 2.8.13, with slot support and atomic data-migration commands added. The codis-proxy and codis-config layers can only work correctly with this version of Redis.

Codis relies on ZooKeeper to store the data routing table and the metadata of codis-proxy nodes; commands issued by codis-config are propagated through ZooKeeper to every live codis-proxy.

Codis supports separating different products by namespace: products with different product names never conflict in any of their configuration.

Codis Architecture Diagram

Preliminary Planning

Machine and application list:

1. Operating system: CentOS 6.5

192.168.88.106    codis-server1

192.168.88.107    codis-server2

192.168.88.108    codis-server3

192.168.88.111    codis-ha1

192.168.88.112    codis-ha2

192.168.88.113  zookeeper-1(codis-proxy-1)

192.168.88.114  zookeeper-2(codis-proxy-2)

192.168.88.115  zookeeper-3(codis-proxy-3)

2. Hardware configuration:

ha            mem:8G   cpu:4   disk:100G

zookeeper     mem:16G  cpu:8   disk:300G

codis-server  mem:16G  cpu:8   disk:200G

3. Architecture roles

1) HA (192.168.88.111, 192.168.88.112, VIP 192.168.88.159)

hostname:codisha-1   apps:keepalived master,haproxy              ports:19000

hostname:codisha-2   apps:keepalived slave,haproxy,codis-config  ports:19000,18087

2) zookeeper / codis-proxy

hostname:zookeeper-1     apps: zookeeper1, codis_proxy_1         ports:2181,19000

hostname:zookeeper-2     apps: zookeeper2, codis_proxy_2         ports:2181,19000

hostname:zookeeper-3     apps: zookeeper3, codis_proxy_3         ports:2181,19000

3) codis-server

codis-server (192.168.88.106, 192.168.88.107, 192.168.88.108)

hostname: codis-server1    apps: codis_server_master,slave   ports:6379,6380,6389,6390

hostname: codis-server2    apps: codis_server_master,slave   ports:6379,6380,6389,6390

hostname: codis-server3    apps: codis_server_master,slave   ports:6379,6380,6389,6390

Detailed Deployment Steps

I. Installing ZooKeeper

1. Configure the hosts file (on all machines)

vim /etc/hosts

192.168.88.106    codis-server1

192.168.88.107    codis-server2

192.168.88.108    codis-server3

192.168.88.111    codis-ha1

192.168.88.112    codis-ha2

192.168.88.113  zookeeper-1(codis-proxy-1)

192.168.88.114  zookeeper-2(codis-proxy-2)

192.168.88.115  zookeeper-3(codis-proxy-3)

2. Install the Java environment

ZooKeeper requires a Java runtime, version 6 or later. It can be downloaded from the SUN website and the JAVA environment variables set; here OpenJDK is installed from the yum repositories instead:

yum -y install java-1.7.0-openjdk-devel

java -version

java version "1.7.0_75"

OpenJDK Runtime Environment (rhel-2.5.4.0.el6_6-x86_64 u75-b13)

OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)

3. Install ZooKeeper

wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

tar zxvf zookeeper-3.4.6.tar.gz

mv zookeeper-3.4.6 /usr/local/zookeeper

mkdir -p /data/zookeeper/{data,logs}

Configure zoo.cfg

vim /usr/local/zookeeper/conf/zoo.cfg 

tickTime=2000

initLimit=5

syncLimit=2

dataDir=/data/zookeeper/data

#dataLogDir=/data/zookeeper/logs

clientPort=2181

server.1=zookeeper-1:2888:3888

server.2=zookeeper-2:2888:3888

server.3=zookeeper-3:2888:3888

For an explanation of the settings above, see:

http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

4. Set myid

In the directory specified by dataDir, create a file named myid whose content is a single number identifying the current host. The number must match the X of the corresponding server.X entry in conf/zoo.cfg.

[root@zookeeper-1 ~]# echo 1 > /data/zookeeper/data/myid

[root@zookeeper-2 ~]# echo 2 > /data/zookeeper/data/myid

[root@zookeeper-3 ~]# echo 3 > /data/zookeeper/data/myid
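The rule that myid must equal the N in server.N can be sketched with a small loop. This is a local illustration only: the temp directory stands in for each host's /data/zookeeper/data, while in reality each echo runs on its own machine.

```shell
#!/bin/sh
# Sketch: each ZooKeeper node's myid must equal the N in its server.N
# entry in zoo.cfg. A temp dir stands in for the three hosts here.
BASE=$(mktemp -d)
for i in 1 2 3; do
    mkdir -p "$BASE/zookeeper-$i/data"
    echo "$i" > "$BASE/zookeeper-$i/data/myid"
done
cat "$BASE"/zookeeper-*/data/myid   # prints 1, 2, 3 on separate lines
```

If a node's myid disagrees with its server.N entry, the ensemble will fail to form a quorum, so this correspondence is worth double-checking before starting the servers.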

5. Start ZooKeeper

Start order: zookeeper-1 > zookeeper-2 > zookeeper-3

[root@zookeeper-1 zookeeper]# zkServer.sh start  

JMX enabled by default

Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@zookeeper-1 zookeeper]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg

Mode: leader

Here the node started first became the leader; the other two are followers.

Enable start on boot

vim /etc/rc.local

/usr/local/zookeeper/bin/zkServer.sh start

Set environment variables

vim /etc/profile

export ZOOKEEPERPATH=/usr/local/zookeeper
export GOROOT=/usr/local/go
export CODISPATH=/usr/local/codis
export PATH=$PATH:$GOROOT/bin:$ZOOKEEPERPATH/bin:$CODISPATH/bin

source /etc/profile

II. Installing the Codis Cluster

1. Install Go

Set environment variables

vim /etc/profile

export GOPATH=/usr/local/codis   # go get below needs GOPATH; Go 1.3 has no default
export GOROOT=/usr/local/go
export CODISPATH=/usr/local/codis
export PATH=$PATH:$GOROOT/bin:$CODISPATH/bin

source /etc/profile

Download and install Go

cd /usr/local/

wget http://golangtc.com/static/go/go1.3.3.linux-amd64.tar.gz

tar -zxvf go1.3.3.linux-amd64.tar.gz

go version

go version go1.3.3 linux/amd64

2. Install build dependencies

yum groupinstall "Development Tools"

3. Install Codis

yum install -y git

go get github.com/wandoulabs/codis  # takes a few minutes; downloads about 30 MB

package github.com/wandoulabs/codis
	imports github.com/wandoulabs/codis
	imports github.com/wandoulabs/codis: no buildable Go source files in /usr/local/codis/src/github.com/wandoulabs/codis

This error is expected: the repository root contains no buildable Go files, and the actual build is done with bootstrap.sh below.

cd $GOPATH/src/github.com/wandoulabs/codis

[root@localhost codis]# pwd

/usr/local/codis/src/github.com/wandoulabs/codis

# Run the build/test script, which compiles the Go components and Redis.

./bootstrap.sh  # takes ten-odd minutes; downloads about 50 MB

make gotest

# After building, copy the bin directory and some scripts into /usr/local/codis:

mkdir -p /usr/local/codis/{logs,conf,scripts}

mkdir -p /data/codis_server/{logs,conf,data}

cp -rf bin /usr/local/codis/

cp sample/config.ini /usr/local/codis/conf/

cp sample/redis_conf/6381.conf /data/codis_server/conf/

cp -rf /usr/local/codis/src/github.com/wandoulabs/codis/sample/*.sh /usr/local/codis/scripts/

cp -rf /usr/local/codis/src/github.com/wandoulabs/codis/sample/usage.md /usr/local/codis/scripts/

cp /usr/local/codis/src/github.com/wandoulabs/codis/extern/redis-2.8.13/src/redis-cli /usr/local/codis/bin/redis-cli-2.8.13

cp /usr/local/codis/src/github.com/wandoulabs/codis/extern/redis-2.8.21/src/redis-cli /usr/local/codis/bin/redis-cli-2.8.21

ln -s /usr/local/codis/bin/redis-cli-2.8.21 /usr/local/codis/bin/redis-cli

4. Configure codis_proxy (on zookeeper-1, zookeeper-2, and zookeeper-3)

Configure codis_proxy_1 (on zookeeper-1)

cd /usr/local/codis/conf

vim config.ini 

zk=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181

product=jerrymin-codis

proxy_id=codis_proxy_1

net_timeout=50

dashboard_addr=192.168.88.112:18087

coordinator=zookeeper

Configure codis_proxy_2 (on zookeeper-2)

cd /usr/local/codis/conf

vim config.ini 

zk=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181

product=jerrymin-codis

proxy_id=codis_proxy_2

net_timeout=50

dashboard_addr=192.168.88.112:18087

coordinator=zookeeper

Configure codis_proxy_3 (on zookeeper-3)

cd /usr/local/codis/conf

vim config.ini 

zk=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181

product=jerrymin-codis

proxy_id=codis_proxy_3

net_timeout=50

dashboard_addr=192.168.88.112:18087

coordinator=zookeeper
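The three config.ini files differ only in proxy_id, so they can be generated with one loop. This is a sketch: the staging directory is illustrative, and in practice each generated file would be placed at /usr/local/codis/conf/config.ini on its own host.

```shell
#!/bin/sh
# Generate the per-proxy config.ini files; only proxy_id varies.
# OUT is an illustrative staging directory, one subdir per proxy.
OUT=$(mktemp -d)
for i in 1 2 3; do
    mkdir -p "$OUT/codis_proxy_$i"
    cat > "$OUT/codis_proxy_$i/config.ini" <<EOF
zk=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
product=jerrymin-codis
proxy_id=codis_proxy_$i
net_timeout=50
dashboard_addr=192.168.88.112:18087
coordinator=zookeeper
EOF
done
grep -h '^proxy_id' "$OUT"/codis_proxy_*/config.ini
```

Generating the files from one template makes it harder to leave a copy-pasted proxy_id behind, which would cause two proxies to collide when registering in ZooKeeper.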

5. Edit the configuration file and start the codis-server service (on codis-server1, codis-server2, and codis-server3)

cd /data/codis_server/conf/

grep -Ev "^#|^$" 6381.conf >6379.conf

vim 6379.conf

Change the following parameters (tune them appropriately for production):

daemonize yes

timeout 300

pidfile /var/run/redis_6379.pid

port 6379

logfile "/data/codis_server/logs/codis_6379.log"

save 900 1

save 300 10

save 60 10000

dbfilename 6379.rdb

dir /data/codis_server/data

appendfilename "6379_appendonly.aof"

appendfsync everysec

The full configuration file is as follows:

daemonize yes

pidfile /var/run/redis_6379.pid

port 6379

tcp-backlog 511

timeout 300

tcp-keepalive 0

loglevel notice

logfile "/data/codis_server/logs/redis_6379.log"

databases 16

stop-writes-on-bgsave-error yes

rdbcompression yes

rdbchecksum yes

dbfilename 6379.rdb

dir /data/codis_server/data

slave-serve-stale-data yes

repl-disable-tcp-nodelay no

slave-priority 100

maxclients 10000

maxmemory 3gb

maxmemory-policy allkeys-lru

appendonly yes

appendfilename "6379_appendonly.aof"

appendfsync everysec

no-appendfsync-on-rewrite no

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

latency-monitor-threshold 0

notify-keyspace-events ""

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

hll-sparse-max-bytes 3000

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

aof-rewrite-incremental-fsync yes

Create the 6380, 6389, and 6390 configuration files:

cp 6379.conf 6380.conf

cp 6379.conf 6389.conf

cp 6379.conf 6390.conf

sed -i 's/6379/6380/g' 6380.conf

sed -i 's/6379/6389/g' 6389.conf

sed -i 's/6379/6390/g' 6390.conf
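The same derivation can be written as one loop, which avoids accidentally running every sed command against the same file. This is a local sketch: a temp dir and a two-line template stand in for /data/codis_server/conf and the real 6379.conf.

```shell
#!/bin/sh
# Derive the 6380/6389/6390 configs from 6379.conf in one loop.
DIR=$(mktemp -d); cd "$DIR"
printf 'port 6379\npidfile /var/run/redis_6379.pid\n' > 6379.conf  # minimal stand-in
for port in 6380 6389 6390; do
    sed "s/6379/$port/g" 6379.conf > "$port.conf"
done
grep '^port' *.conf
```

Because every occurrence of 6379 in the template (port, pidfile, logfile, dbfilename, appendfilename) is rewritten in one pass, each instance ends up with its own non-conflicting file paths.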

Set the kernel parameter

echo "vm.overcommit_memory = 1" >>  /etc/sysctl.conf

sysctl -p

The kernel parameter is explained as follows: vm.overcommit_memory sets the kernel's memory-allocation policy and can be 0, 1, or 2.

0: the kernel checks whether enough free memory is available; if so, the allocation succeeds, otherwise it fails and an error is returned to the process.

1: the kernel allows allocation of all physical memory, regardless of the current memory state.

2: the kernel allows allocations that exceed the sum of physical memory and swap space.
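To confirm the setting took effect, the value can be read back. A small sketch: on the codis-server hosts it should report 1 after sysctl -p; on non-Linux machines the /proc file simply does not exist.

```shell
#!/bin/sh
# Read back the current overcommit policy (Linux exposes it via /proc).
if [ -r /proc/sys/vm/overcommit_memory ]; then
    v=$(cat /proc/sys/vm/overcommit_memory)
else
    v=unknown   # not a Linux host
fi
echo "vm.overcommit_memory = $v"
```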

Start the codis-server service

[root@codis-server1 ~]# /usr/local/codis/bin/codis-server /data/codis_server/conf/6379.conf

[root@codis-server1 ~]# /usr/local/codis/bin/codis-server /data/codis_server/conf/6380.conf

[root@codis-server1 ~]# /usr/local/codis/bin/codis-server /data/codis_server/conf/6389.conf

[root@codis-server1 ~]# /usr/local/codis/bin/codis-server /data/codis_server/conf/6390.conf

[root@codis-server1 ~]# ps aux |grep codis

root      7473  0.0  0.0 137388  9540 ?   Ssl  09:48   0:00 /usr/local/codis/bin/codis-server *:6379

root      7478  0.0  0.0 137388  9524 ?   Ssl  09:48   0:00 /usr/local/codis/bin/codis-server *:6380

root      7482  0.0  0.0 137388  9516 ?   Ssl  09:48   0:00 /usr/local/codis/bin/codis-server *:6389

root      7486  0.0  0.0 137388  9524 ?   Ssl  09:48   0:00 /usr/local/codis/bin/codis-server *:6390

root      7490  0.0  0.0 103252   856 pts/0   S    09:49   0:00 grep --color=auto codis

[root@codis-server1 ~]# netstat -tulpn|grep codis

tcp        0      0 0.0.0.0:6379    0.0.0.0:*   LISTEN      7473/codis-server *

tcp        0      0 0.0.0.0:6380    0.0.0.0:*   LISTEN      7478/codis-server *

tcp        0      0 0.0.0.0:6389    0.0.0.0:*   LISTEN      7482/codis-server *

tcp        0      0 0.0.0.0:6390    0.0.0.0:*   LISTEN      7486/codis-server *

tcp        0      0 :::6379         :::*        LISTEN      7473/codis-server *

tcp        0      0 :::6380         :::*        LISTEN      7478/codis-server *

tcp        0      0 :::6389         :::*        LISTEN      7482/codis-server *

tcp        0      0 :::6390         :::*        LISTEN      7486/codis-server *

6. Review the startup flow:

cat /usr/local/codis/scripts/usage.md

0. start zookeeper                                // start the ZooKeeper service

1. change config items in config.ini              // edit the Codis configuration file

2. ./start_dashboard.sh                           // start the dashboard

3. ./start_redis.sh                               // start the Redis instances

4. ./add_group.sh                                 // add Redis groups; a group can have only one master

5. ./initslot.sh                                  // initialize the slots

6. ./start_proxy.sh                               // start the proxy

7. ./set_proxy_online.sh                          // bring the proxy online

8. open browser to http://localhost:18087/admin   // open the web UI

This is only a reference; the order of some steps is not mandatory, but ZooKeeper must be running before the dashboard is started. Many later operations, such as adding/removing groups or Redis instances, can be done from the web page.

7. Edit the script and start the dashboard. (It only needs to run on one machine; here it is started on codis-ha2. Most subsequent operations can be done from the dashboard.)

cat /usr/local/codis/scripts/start_dashboard.sh

#!/bin/sh

CODIS_HOME=/usr/local/codis

nohup $CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini -L $CODIS_HOME/logs/dashboard.log dashboard --addr=:18087 --http-log=$CODIS_HOME/logs/requests.log &>/dev/null &

Start the dashboard

[root@codis-ha2 scripts]# ls -lh start_dashboard.sh 

-rwxr-xr-x 1 root root 218 Jun 24 22:04 start_dashboard.sh

[root@codis-ha2 scripts]# ./start_dashboard.sh 

[root@codis-ha2 scripts]# ps aux |grep codis-config

root      2435  0.0  0.1 216444 11044 pts/1   Sl   10:06   0:00 /usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini -L /usr/local/codis/logs/dashboard.log dashboard --addr=:18087 --http-log=/usr/local/codis/logs/requests.log

root      2441  0.0  0.0 103252   840 pts/1   S    10:06   0:00 grep --color=auto codis-config

[root@codis-ha2 scripts]# netstat -tulpn |grep codis

tcp        0      0 :::10086    :::*    LISTEN      2435/codis-config

tcp        0      0 :::18087    :::*    LISTEN      2435/codis-config

Open the dashboard

http://192.168.88.112:18087/admin/

8. Add Redis groups

Add group IDs through the admin page and add master/slave instances to each group; a group can contain only one redis-master:

http://192.168.88.112:18087/admin/ (Firefox or Chrome is recommended)

Log in to http://192.168.88.112:18087/admin/ and add six groups, each with two instances, one master and one slave. By default the first instance added to a group becomes the master.

group_1

192.168.88.106:6379   master

192.168.88.107:6380   slave

group_2

192.168.88.106:6389   master

192.168.88.108:6390   slave

group_3

192.168.88.107:6379   master

192.168.88.106:6380   slave

group_4

192.168.88.107:6389   master

192.168.88.108:6380   slave

group_5

192.168.88.108:6379   master

192.168.88.106:6390   slave

group_6

192.168.88.108:6389   master

192.168.88.107:6390   slave

9. Edit the script and initialize the slots (on codis-ha2; the slots are initialized after the groups are set up)

[root@codis-ha2 scripts]# cat initslot.sh 

#!/bin/sh

CODIS_HOME=/usr/local/codis

echo "slots initializing..."

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot init -f

echo "done"

echo "set slot ranges to server groups..."

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 0 170 1 online

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 171 341 2 online

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 342 512 3 online

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 513 683 4 online

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 684 853 5 online

$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini slot range-set 854 1023 6 online

echo "done"
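Codis hashes keys into 1024 slots, so the six range-set calls above must cover slots 0-1023 exactly once. A quick arithmetic check of the ranges:

```shell
#!/bin/sh
# Sum the widths of the six slot ranges; together they must equal 1024.
total=0
for range in "0 170" "171 341" "342 512" "513 683" "684 853" "854 1023"; do
    set -- $range              # $1 = first slot, $2 = last slot
    total=$((total + $2 - $1 + 1))
done
echo "slots covered: $total"   # 1024
```

The first four groups get 171 slots each and the last two get 170, which is as even a split of 1024 over 6 groups as possible.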

10. Test whether redis-master and redis-slave are replicating data correctly:

Use redis-cli for this test:

[root@codis-server1 ~]# redis-cli -h 192.168.88.106 -p 6379

192.168.88.106:6379> ping

PONG

192.168.88.106:6379> set name jerrymin

OK

192.168.88.106:6379> get name

"jerrymin"

192.168.88.106:6379> quit

[root@codis-server1 ~]# redis-cli -h 192.168.88.107 -p 6380

192.168.88.107:6380> get name

"jerrymin"

192.168.88.107:6380> set name foo

(error) READONLY You can't write against a read only slave.

192.168.88.107:6380> quit

11. Edit start_proxy.sh and start the codis-proxy service (on zookeeper-1, zookeeper-2, and zookeeper-3)

On zookeeper-1 (on the other two hosts it is codis_proxy_2 and codis_proxy_3)

View the start_proxy script

[root@zookeeper-1 scripts]# cat start_proxy.sh 

#!/bin/sh
CODIS_HOME=/usr/local/codis
echo "shut down codis_proxy_1..."
$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini proxy offline codis_proxy_1
echo "done"
echo "start new codis_proxy_1..."
nohup $CODIS_HOME/bin/codis-proxy --log-level info -c $CODIS_HOME/conf/config.ini -L $CODIS_HOME/logs/codis_proxy_1.log --cpu=4 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000 &
echo "done"
echo "sleep 3s"
sleep 3
tail -n 30 $CODIS_HOME/logs/codis_proxy_1.log.0

View the set_proxy_online script

[root@zookeeper-1 scripts]# cat set_proxy_online.sh 

#!/bin/sh
CODIS_HOME=/usr/local/codis
echo "set codis_proxy_1 online"
$CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini proxy online codis_proxy_1
echo "done"

Start the proxy

[root@zookeeper-1 scripts]# ./start_proxy.sh 

shut down codis_proxy_1...

{
  "msg": "OK",
  "ret": 0
}

done

start new codis_proxy_1...

done

sleep 3s

nohup: appending output to `nohup.out'

2015/07/24 11:06:13 [INFO] set log level to %!s(log.LogLevel=7)

2015/07/24 11:06:13 [INFO] running on 0.0.0.0:19000

2015/07/24 11:06:13 [INFO] start proxy with config: &{proxyId:codis_proxy_1 productName:jerrymin-codis zkAddr:zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181 fact:<nil> netTimeout:50 proto:tcp provider:zookeeper}

2015/07/24 11:06:13 [INFO] proxy info = {Id:codis_proxy_1 Addr:zookeeper-1:19000 LastEvent: LastEventTs:0 State:offline Description: DebugVarAddr:zookeeper-1:11000 Pid:8097 StartAt:2015-07-24 11:06:13.4791833 0800 CST}

2015/07/24 11:06:13 [WARN] wait to be online: codis_proxy_1

Bring the proxy online

[root@zookeeper-1 scripts]# ./set_proxy_online.sh 

set codis_proxy_1 online

{
  "msg": "OK",
  "ret": 0
}

done

III. Configuring HA

1. On codis-ha1 and codis-ha2

Install keepalived and ipvsadm

Set the VIP to 192.168.88.159

codis-ha1 is the LVS master

codis-ha2 is the LVS backup

Below is the backup's configuration; the master's is similar

[root@codis-ha2 haproxy]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

   notification_email {

     local@localhost

   }

   notification_email_from localhost@localhost

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id LVS_DEVEL_112

}

vrrp_script chk_haproxy_port {

   script "/etc/keepalived/check_haproxy.sh"

   interval 2

   weight 2

}

vrrp_instance VI_1 {

    state BACKUP

    interface eth0

    virtual_router_id 159

    priority 50

    advert_int 3

    authentication {

        auth_type PASS

        auth_pass jerrymin

    }

    virtual_ipaddress {

        192.168.88.159

    }

    track_script {

      chk_haproxy_port

    }

}

A check script ensures that haproxy is running when keepalived switches to master

[root@codis-ha2 keepalived]# cat check_haproxy.sh 

#!/bin/bash
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
    /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
    sleep 3
    if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
        /etc/init.d/keepalived stop
    fi
fi

2. Install and configure haproxy

[root@codis-ha1 ~]#tar zxvf  haproxy-1.5.11.tar.gz 

[root@codis-ha1 ~]#cd  haproxy-1.5.11

[root@codis-ha1 haproxy-1.5.11]#make TARGET=linux26 PREFIX=/usr/local/haproxy 

[root@codis-ha1 haproxy-1.5.11]#make install PREFIX=/usr/local/haproxy

3. Configure haproxy.cfg

global 

        maxconn 40000 

        daemon

        user root

        group root 

        nbproc 4 

        log 127.0.0.1 local3 

        spread-checks 2 

defaults 

         timeout server  3s 

         timeout connect 3s 

         timeout client  60s 

         timeout http-request 3s 

         timeout queue   3s

frontend codis-proxy

        bind :19000 

        default_backend codis-proxy-19000

frontend web_haproxy_status 

        bind :8080 

        default_backend web_status

backend codis-proxy-19000

        mode    tcp 

        option  tcpka 

        balance roundrobin 

        server  msvr1 192.168.88.113:19000 check   inter 1s rise 5 fall 1 

        server  msvr2 192.168.88.114:19000 check   inter 1s rise 5 fall 1 

        server  msvr3 192.168.88.115:19000 check   inter 1s rise 5 fall 1 

        timeout server  9s 

backend  web_status 

         mode http 

         stats enable 

         stats refresh 5s 

         stats uri /status 

         stats realm Haproxy statistics 

         stats auth jerrymin:jerrymin@2015

4. Start the service:

[root@codis-ha1 haproxy]# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

[root@codis-ha1 haproxy]# ps aux |grep haproxy

root      3147  0.0  0.0  18668  3020 ?   Ss   14:42   0:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

root      3148  0.0  0.0  18668  2888 ?   Ss   14:42   0:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

root      3149  0.0  0.0  18668  2888 ?   Ss   14:42   0:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

root      3150  0.0  0.0  18668  3020 ?   Ss   14:42   0:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

[root@codis-ha1 haproxy]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.88.159:19000 wrr

  -> 192.168.88.111:19000         Local   1      0          0
