0. Preface
- In the previous post, we deployed a multi-node kubernetes cluster and load-balanced the Master nodes with haproxy and keepalived
- There, haproxy and keepalived ran in tcp mode, providing forward proxying and load balancing
- haproxy can also work in http mode, where the option redispatch setting re-dispatches a request once a backend server has gone down
- However, we also want to retry a request when the backend returns specific http status codes
- For that reason, this post uses nginx as the load-balancing component; its proxy_next_upstream directive (see section 2.7) covers exactly this case
1. Experimental Environment
- The environment consists of 4 virtual machines, with IP addresses 192.168.1.66, 192.168.1.67, 192.168.1.68 and 192.168.1.69
1.1 Node Allocation
- The node allocation is the same as in the previous post, but since only one load-balancer node is needed this time, the lb1 node has been dropped
- LB node:
  - lb (nginx): 192.168.1.66
- Master nodes:
  - master1: 192.168.1.67
  - master2: 192.168.1.68
  - master3: 192.168.1.69
- Node (worker) nodes:
  - node1: 192.168.1.67
  - node2: 192.168.1.68
  - node3: 192.168.1.69
- Etcd nodes:
  - etcd01: 192.168.1.67
  - etcd02: 192.168.1.68
  - etcd03: 192.168.1.69
- To save compute resources, the Master, worker and Etcd roles of the kubernetes cluster are co-located on the same machines
2. Deployment Process
- For this post, we have added new scripts to the github repository
- In the repository, nginx can be started either over http or over https; the corresponding scripts live in the http_scripts and https_scripts directories
- Sections 2.1 (building from source) and 2.2 (installing docker) are the same as in the previous post; readers familiar with them can skip ahead
2.1 Building from Source
- Install the golang environment
- kubernetes v1.18 requires golang 1.13
$ wget https://dl.google.com/go/go1.13.8.linux-amd64.tar.gz
$ tar -zxvf go1.13.8.linux-amd64.tar.gz -C /usr/local/
- Add the following environment variables to ~/.bashrc or ~/.zshrc
export GOROOT=/usr/local/go
# GOPATH
export GOPATH=$HOME/go
# GOROOT bin
export PATH=$PATH:$GOROOT/bin
# GOPATH bin
export PATH=$PATH:$GOPATH/bin
$ source ~/.bashrc
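- A quick check confirms the toolchain is on the PATH (output shown for go 1.13.8; your patch version may differ):
$ go version
go version go1.13.8 linux/amd64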
- Download the latest kubernetes source from github
$ git clone https://github.com/kubernetes/kubernetes.git
$ make KUBE_BUILD_PLATFORMS=linux/amd64
[0215 22:16:44] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/deepcopy-gen
[0215 22:16:52] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/defaulter-gen
[0215 22:17:00] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/conversion-gen
[0215 22:17:12] Building go targets for linux/amd64:
./vendor/k8s.io/kube-openapi/cmd/openapi-gen
[0215 22:17:25] Building go targets for linux/amd64:
./vendor/github.com/go-bindata/go-bindata/go-bindata
[0215 22:17:27] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/kube-scheduler
vendor/k8s.io/apiextensions-apiserver
cluster/gce/gci/mounter
cmd/kubectl
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
cluster/images/conformance/go-runner
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
- KUBE_BUILD_PLATFORMS selects the target platform of the generated binaries, e.g. darwin/amd64, linux/amd64 or windows/amd64
- Running make cross builds binaries for every platform
- We build locally and then upload the result to the servers (see the scp sketch after the binary list below)
- The build output lands in the _output directory; the core binaries are under _output/local/bin/linux/amd64
$ pwd
/root/Coding/kubernetes/_output/local/bin/linux/amd64
$ ls
apiextensions-apiserver genman go-runner kube-scheduler kubemark
e2e.test genswaggertypedocs kube-apiserver kubeadm linkcheck
gendocs genyaml kube-controller-manager kubectl mounter
genkubedocs ginkgo kube-proxy kubelet
- Of these, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kube-proxy and kubelet are the binaries needed for the installation
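- As a sketch of the upload step mentioned above (the target directory /opt/kubernetes/bin mirrors the layout used by the later scripts; adjust hosts and paths to your environment):
$ cd _output/local/bin/linux/amd64
$ for host in 192.168.1.67 192.168.1.68 192.168.1.69; do
    ssh root@${host} "mkdir -p /opt/kubernetes/bin"
    scp kube-apiserver kube-controller-manager kube-scheduler kubectl kube-proxy kubelet root@${host}:/opt/kubernetes/bin/
  done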
2.2 Installing docker
- Install docker on the three kubernetes VMs: 192.168.1.67, 192.168.1.68 and 192.168.1.69
- See the official documentation for the installation details
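- As a rough sketch (assuming an apt-based distribution; the convenience script below is only one of the install methods covered by the official docs):
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo systemctl enable --now docker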
2.3 Downloading the Installation Scripts
- All scripts used in the rest of the deployment have been uploaded to the github repository; interested readers can download them
- On master1, master2 and master3, create the working directory k8s and the script directories (k8s/scripts, k8s/http_scripts and k8s/https_scripts), then copy the corresponding scripts into those directories
$ git clone https://github.com/wangao1236/k8s_cluster_deploy.git
$ cd k8s_cluster_deploy/scripts
$ chmod +x *.sh
$ cd ~
$ mkdir -p k8s/scripts
$ cp k8s_cluster_deploy/scripts/* k8s/scripts
$ cd k8s_cluster_deploy/http_scripts
$ chmod +x *.sh
$ cd ~
$ mkdir -p k8s/http_scripts
$ cp k8s_cluster_deploy/http_scripts/* k8s/http_scripts
$ cd k8s_cluster_deploy/https_scripts
$ chmod +x *.sh
$ cd ~
$ mkdir -p k8s/https_scripts
$ cp k8s_cluster_deploy/https_scripts/* k8s/https_scripts
2.4 Installing cfssl
- This section is the same as in the previous post; readers familiar with it can skip it
- Install cfssl on master1, master2 and master3
- Install cfssl on all kubernetes nodes by running the k8s/scripts/cfssl.sh script, or by executing the following commands:
$ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
$ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
$ curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
$ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
- The k8s/scripts/cfssl.sh script contains:
$ cat k8s_cluster_deploy/scripts/cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
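- A quick sanity check after installation (the reported version depends on the release you downloaded):
$ cfssl version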
2.5 Installing etcd
- This section is the same as in the previous post; readers familiar with it can skip it
- On one of the machines (e.g. etcd01), create the target directories
$ mkdir -p /opt/etcd/{cfg,bin,ssl}
$ wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
$ tar -zxvf etcd-v3.3.18-linux-amd64.tar.gz
$ cp etcd-v3.3.18-linux-amd64/etcdctl etcd-v3.3.18-linux-amd64/etcd /opt/etcd/bin
- Create the directory k8s/etcd-cert; k8s is the root directory holding all deployment-related files and scripts, and etcd-cert temporarily stores the certificates for etcd's https endpoints
$ mkdir -p k8s/etcd-cert
- Copy the etcd-cert.sh script into the etcd-cert directory
$ cp k8s/scripts/etcd-cert.sh k8s/etcd-cert
- The etcd-cert.sh script contains:
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.1.65",
"192.168.1.66",
"192.168.1.67",
"192.168.1.68",
"192.168.1.69"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
- Note: change the hosts list in the server-csr.json section to 127.0.0.1 plus all IP addresses of the VM cluster
- Run the script
$ ./etcd-cert.sh
2020/02/20 17:18:09 [INFO] generating a new CA key and certificate from CSR
2020/02/20 17:18:09 [INFO] generate received request
2020/02/20 17:18:09 [INFO] received CSR
2020/02/20 17:18:09 [INFO] generating key: rsa-2048
2020/02/20 17:18:09 [INFO] encoded CSR
2020/02/20 17:18:09 [INFO] signed certificate with serial number 712703952401219579947544408367305212876133158662
2020/02/20 17:18:09 [INFO] generate received request
2020/02/20 17:18:09 [INFO] received CSR
2020/02/20 17:18:09 [INFO] generating key: rsa-2048
2020/02/20 17:18:09 [INFO] encoded CSR
2020/02/20 17:18:09 [INFO] signed certificate with serial number 59975233056205858127163767550140095337822886214
2020/02/20 17:18:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
$ cp *.pem /opt/etcd/ssl
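- Optionally, inspect the generated server certificate to confirm that the hosts (SAN) list contains every node IP; cfssl-certinfo was installed in section 2.4 (a sketch, run from the k8s/etcd-cert directory):
$ cfssl-certinfo -cert server.pem | grep -A 8 sans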
- Run the k8s/scripts/etcd.sh script; the first argument is the etcd node name, the second is the IP address of the node being started, and the third lists all members of the etcd cluster
$ ./k8s/scripts/etcd.sh etcd01 192.168.1.67 etcd01=https://192.168.1.67:2380,etcd02=https://192.168.1.68:2380,etcd03=https://192.168.1.69:2380
- The k8s/scripts/etcd.sh script contains:
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd01=https://192.168.1.10:2380,etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
systemctl stop etcd
systemctl disable etcd
WORK_DIR=/opt/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=${WORK_DIR}/ssl/server.pem \\
--key-file=${WORK_DIR}/ssl/server-key.pem \\
--peer-cert-file=${WORK_DIR}/ssl/server.pem \\
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
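- Note that on the first node, systemctl restart etcd may appear to hang until at least one other member has joined (the unit is Type=notify and waits for the cluster to come up); the service state can be watched with, for example:
$ systemctl status etcd
$ journalctl -u etcd -f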
- Next, copy etcd's working directory and the etcd.service file to etcd02 and etcd03
$ scp -r /opt/etcd/ root@192.168.1.68:/opt/
$ scp -r /opt/etcd/ root@192.168.1.69:/opt/
$ scp /usr/lib/systemd/system/etcd.service root@192.168.1.68:/usr/lib/systemd/system/
$ scp /usr/lib/systemd/system/etcd.service root@192.168.1.69:/usr/lib/systemd/system/
- Modify the configuration file /opt/etcd/cfg/etcd on etcd02 and etcd03 respectively:
[root@192.168.1.68] $ vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.68:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.67:2380,etcd02=https://192.168.1.68:2380,etcd03=https://192.168.1.69
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@192.168.1.69] $ vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.69:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.69:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.69:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.69:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.67:2380,etcd02=https://192.168.1.68:2380,etcd03=https://192.168.1.69
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
- Start the etcd service on etcd02 and etcd03
$ sudo systemctl enable etcd.service
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
$ sudo systemctl start etcd.service
- Check the health of the cluster:
$ sudo etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.67:2379,https://192.168.1.68:2379,https://192.168.1.69:2379" cluster-health
member 3143a1397990e241 is healthy: got healthy result from https://192.168.1.68:2379
member 469e7b2757c25086 is healthy: got healthy result from https://192.168.1.67:2379
member 5b1e32d0ab5e3e1b is healthy: got healthy result from https://192.168.1.69:2379
cluster is healthy
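- The member list can be checked the same way (a sketch using the same TLS flags against any reachable endpoint):
$ sudo etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.67:2379" member list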
2.6 Deploying flannel
- This section is the same as in the previous post; readers familiar with it can skip it
- Deploy flannel on node1, node2 and node3
- Write the allocated subnet range into etcd for flannel to use:
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
$ wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
$ tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
$ mkdir -p /opt/kubernetes/{cfg,bin,ssl}
$ mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
- Run the k8s/scripts/flannel.sh script; its first argument is the list of etcd endpoints
$ ./k8s/scripts/flannel.sh https://192.168.1.67:2379,https://192.168.1.68:2379,https://192.168.1.69:2379
$ cat ./k8s/scripts/flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
systemctl stop flanneld
systemctl disable flanneld
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker -f /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.89.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
$ cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.17.89.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_OPTS=" --bip=172.17.89.1/24 --ip-masq=false --mtu=1450"
- Run vim /usr/lib/systemd/system/docker.service and adjust the docker configuration
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H unix:///var/run/docker.sock
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
......
$ systemctl daemon-reload
$ systemctl restart docker
- Check the flannel network; docker0 sits inside the subnet that flannel allocated
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.89.1 netmask 255.255.255.0 broadcast 172.17.89.255
ether 02:42:fb:16:3b:12 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:feaf:b59f prefixlen 64 scopeid 0x20<link>
ether 08:00:27:af:b5:9f txqueuelen 1000 (Ethernet)
RX packets 517 bytes 247169 (247.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 361 bytes 44217 (44.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.67 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::a00:27ff:fe9f:cb5c prefixlen 64 scopeid 0x20<link>
inet6 2409:8a10:2e24:d130:a00:27ff:fe9f:cb5c prefixlen 64 scopeid 0x0<global>
ether 08:00:27:9f:cb:5c txqueuelen 1000 (Ethernet)
RX packets 9244 bytes 2349434 (2.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7420 bytes 1047863 (1.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.89.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::60c3:ecff:fe34:9d6c prefixlen 64 scopeid 0x20<link>
ether 62:c3:ec:34:9d:6c txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 6 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 3722 bytes 904859 (904.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3722 bytes 904859 (904.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
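- To confirm that containers actually land in the flannel subnet, start a throwaway test container on the node (a sketch; the centos:7 image is an assumption here, chosen because yum is used inside the container below):
$ docker run -it centos:7 /bin/bash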
[root@adf9fc37d171 /]# yum install -y net-tools
[root@adf9fc37d171 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.89.2 netmask 255.255.255.0 broadcast 172.17.89.255
ether 02:42:ac:11:59:02 txqueuelen 0 (Ethernet)
RX packets 1538 bytes 14149689 (13.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1383 bytes 81403 (79.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@adf9fc37d171 /]# ping 172.17.89.1
PING 172.17.89.1 (172.17.89.1) 56(84) bytes of data.
64 bytes from 172.17.89.1: icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from 172.17.89.1: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 172.17.89.1: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 172.17.89.1: icmp_seq=4 ttl=64 time=0.052 ms
64 bytes from 172.17.89.1: icmp_seq=5 ttl=64 time=0.049 ms
- Being able to ping the docker0 interface from inside the container shows that flannel is handling the routing
2.7 Installing nginx
- Install nginx on the lb node:
$ sudo apt-get -y install nginx
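- A quick check that the package is installed and the stock configuration parses:
$ nginx -v
$ sudo nginx -t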
- Replace /etc/nginx/nginx.conf with k8s_cluster_deploy/nginx/nginx.conf
- k8s_cluster_deploy/nginx/nginx.conf contains:
$ cat k8s_cluster_deploy/nginx/nginx.conf
user www-data;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_processes 2;
worker_rlimit_nofile 65536;
events {
worker_connections 32768;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
log_format default '$remote_addr:$remote_port->$upstream_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
- The main thing this config adds is the access-log format named default
- Copy k8s_cluster_deploy/nginx/conf.d/k8s.conf into /etc/nginx/conf.d
- k8s_cluster_deploy/nginx/conf.d/k8s.conf contains:
$ cat k8s_cluster_deploy/nginx/conf.d/k8s.conf
upstream kubernetes-api-cluster-tls {
server 192.168.1.67:6443 max_fails=0 fail_timeout=3s weight=1;
server 192.168.1.68:6443 max_fails=0 fail_timeout=3s weight=1;
# server 192.168.1.69:6443 weight=1 max_fails=0 fail_timeout=3s;
}
upstream kubernetes-api-cluster {
server 192.168.1.67:8080 weight=100 max_fails=0 fail_timeout=3s;
server 192.168.1.68:8080 weight=100 max_fails=0 fail_timeout=3s;
# server 192.168.1.69:8080 weight=100 max_fails=0 fail_timeout=3s;
}
server {
listen 8443 ssl;
ssl_certificate /etc/nginx/ssl/master/kube-apiserver.pem; # kube-apiserver cert
ssl_certificate_key /etc/nginx/ssl/master/kube-apiserver-key.pem; # kube-apiserver key
ssl_trusted_certificate /etc/nginx/ssl/ca.pem; # ca.pem
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
location / {
proxy_ssl_certificate /etc/nginx/ssl/test-user.pem; # kubectl cert
proxy_ssl_certificate_key /etc/nginx/ssl/test-user-key.pem; # kubectl key
proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.pem; # ca.pem
proxy_pass https://kubernetes-api-cluster-tls;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 http_404 http_429 non_idempotent;
proxy_next_upstream_timeout 1s;
proxy_next_upstream_tries 3;
proxy_set_header Host $host;
proxy_set_header X-Real-Ip $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-NginX-Proxy true;
proxy_read_timeout 600s;
}
access_log /var/log/nginx/access.log default;
}
server {
listen 8081;
location / {
proxy_pass http://kubernetes-api-cluster;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_403 http_429 non_idempotent;
proxy_next_upstream_timeout 3s;
proxy_next_upstream_tries 5;
proxy_ignore_client_abort on;
proxy_set_header Host $host;
proxy_set_header X-Real-Ip $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-NginX-Proxy true;
proxy_connect_timeout 300s;
}
#access_log /var/log/nginx/access.log default;
}
- The config listens on two ports: 8443 is the https port and 8081 the http port
- For 8443, ssl_certificate and ssl_certificate_key point to kube-apiserver's server certificate and key respectively
- proxy_ssl_certificate and proxy_ssl_certificate_key point to the client certificate and key of a user with cluster-admin privileges
- Since these certificates and keys have not been generated yet, there is no need to restart the service at this point
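- For reference, once the certificates exist (section 2.8 generates them), wiring them in and smoke-testing the proxy could look roughly like this (the lb address 192.168.1.66 and the source location of the pem files are assumptions; the target paths under /etc/nginx/ssl must match k8s.conf above):
$ sudo mkdir -p /etc/nginx/ssl/master
$ sudo cp ca.pem test-user.pem test-user-key.pem /etc/nginx/ssl/
$ sudo cp kube-apiserver.pem kube-apiserver-key.pem /etc/nginx/ssl/master/
$ sudo nginx -t && sudo systemctl reload nginx
$ curl http://192.168.1.66:8081/version
$ curl -k https://192.168.1.66:8443/version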
2.8 One-Click Installation Scripts
- In the previous post, we walked through certificate generation, configuration, service restarts and verification for every component
- To simplify deployment this time, we provide one-click installation scripts for both the http and the https setup
- Enter the corresponding directory and run the installer
$ cd k8s/http_scripts
$ ./install.sh
$ cd k8s/https_scripts
$ ./install.sh
- The k8s/https_scripts/install.sh script contains:
$ cat k8s/https_scripts/install.sh
#!/bin/bash
sudo mkdir -p /opt/kubernetes/{bin,cfg,log,ssl}
sudo rm -rf /opt/kubernetes/cfg/*
sudo rm -rf /opt/kubernetes/log/*
sudo rm -rf /opt/kubernetes/ssl/*
ssh root@master2 "mkdir -p /opt/kubernetes/{bin,cfg,log} &&
rm -rf /opt/kubernetes/cfg/* &&
rm -rf /opt/kubernetes/log/* &&
rm -rf /opt/kubernetes/ssl/*"
ssh root@master3 "mkdir -p /opt/kubernetes/{bin,cfg,log} &&
rm -rf /opt/kubernetes/cfg/* &&
rm -rf /opt/kubernetes/log/* &&
rm -rf /opt/kubernetes/ssl/*"
mkdir -p ../k8s-cert
sudo rm -rf ../k8s-cert/*
sudo rm -rf /opt/kubernetes/ssl/*
ssh root@master2 "rm -rf /opt/kubernetes/ssl/*"
ssh root@master3 "rm -rf /opt/kubernetes/ssl/*"
cp k8s-cert.sh ../k8s-cert
cd ../k8s-cert
./k8s-cert.sh
echo -e "