K8S Series: Deploying the Master Node Components of a K8S Cluster

2020-09-03 17:39:01

In the previous article, we deployed the Harbor private image registry, the cluster's self-hosted DNS service, and the ETCD cluster. These services are not themselves part of the K8S cluster; they are infrastructure services that the K8S cluster depends on.

In this article we formally begin deploying the K8S cluster itself, starting with the components that make up the Master node. For the lab cluster's architecture diagram, see the previous post, "Building a K8S Cluster: Deploying the Harbor Registry, DNS, and ETCD". The Master node runs three components: the API Server, the Scheduler, and the Controller Manager. The deployment process for each is detailed below.

1. Deploying the API Server Component

According to the architecture diagram, we will deploy the Master components on the two servers 10.4.7.21 and 10.4.7.22. The steps are identical on both servers apart from a few configuration files, so we use 10.4.7.21 for the detailed walkthrough.

Download the k8s package and extract it:

[root@k8s7-21 src]# pwd
/opt/src
[root@k8s7-21 src]# wget https://dl.k8s.io/v1.15.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s7-21 src]# tar -zxf kubernetes-server-linux-amd64.tar.gz -C /opt/
[root@k8s7-21 src]# cd ..
[root@k8s7-21 opt]# mv kubernetes/ kubernetes-v1.15.2
[root@k8s7-21 opt]# ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes
[root@k8s7-21 opt]# cd /opt/kubernetes
[root@k8s7-21 kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@k8s7-21 kubernetes]# rm -rf kubernetes-src.tar.gz 
[root@k8s7-21 kubernetes]# cd server/bin/
[root@k8s7-21 bin]# rm -rf *_tag     # remove unneeded files
[root@k8s7-21 bin]# rm -rf *.tar
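
A quick sanity check: the binaries should report the version we just downloaded.

[root@k8s7-21 bin]# ./kube-apiserver --version
Kubernetes v1.15.2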

The API Server reads and writes its data in the ETCD cluster, and that communication happens over HTTPS. The API Server is therefore a client of the ETCD cluster, so we need to issue a client certificate for it. As before, certificates are issued on the 10.4.7.200 server.

Create the certificate signing request file:

[root@k8s7-200 certs]# cat client-csr.json 
{
    "CN": "k8s-master",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Issue the certificate:

[root@k8s7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
2019/12/16 17:27:37 [INFO] generate received request
2019/12/16 17:27:37 [INFO] received CSR
2019/12/16 17:27:37 [INFO] generating key: rsa-2048
2019/12/16 17:27:38 [INFO] encoded CSR
2019/12/16 17:27:38 [INFO] signed certificate with serial number 142736127829672824953112201776416840875820009716
2019/12/16 17:27:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s7-200 certs]# ll -h client*
-rw-r--r-- 1 root root  997 Jul 16 17:27 client.csr
-rw-r--r-- 1 root root  282 Jul 16 16:50 client-csr.json
-rw------- 1 root root 1.7K Jul 16 17:27 client-key.pem
-rw-r--r-- 1 root root 1.4K Jul 16 17:27 client.pem
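
Before wiring this certificate into the API Server, you can sanity-check it directly against an ETCD member (a minimal sketch, assuming the certs live in /opt/certs on 10.4.7.200 and ETCD serves client traffic on port 2379, as deployed in the previous article):

[root@k8s7-200 certs]# curl --cacert ca.pem --cert client.pem --key client-key.pem https://10.4.7.21:2379/health
{"health": "true"}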

Besides acting as a client of the ETCD cluster, the API Server also serves requests of its own, where it plays the server role. Those requests also use HTTPS, so we need to issue a server certificate for the API Server as well.

Create the certificate signing request file:

[root@k8s7-200 certs]# cat apiserver-csr.json 
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

The hosts section of this file must list every address a client might use to reach an API Server: 192.168.0.1 is the first address of the Service CIDR (the cluster IP of the built-in kubernetes service), 10.4.7.10 is the keepalived VIP we set up later, and 10.4.7.21 through 10.4.7.23 are all the hosts that might run an API Server. If any of these addresses changes later, the certificate must be reissued.

Issue the certificate:

[root@k8s7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
2019/12/16 17:49:28 [INFO] generate received request
2019/12/16 17:49:28 [INFO] received CSR
2019/12/16 17:49:28 [INFO] generating key: rsa-2048
2019/12/16 17:49:28 [INFO] encoded CSR
2019/12/16 17:49:28 [INFO] signed certificate with serial number 250498429005300717086497403250222533374308559881
2019/12/16 17:49:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s7-200 certs]# ll -h apiserver*
-rw-r--r-- 1 root root 1.3K Jul 16 17:49 apiserver.csr
-rw-r--r-- 1 root root  566 Jul 16 17:47 apiserver-csr.json
-rw------- 1 root root 1.7K Jul 16 17:49 apiserver-key.pem
-rw-r--r-- 1 root root 1.6K Jul 16 17:49 apiserver.pem
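
To confirm the hosts list actually made it into the certificate, inspect its Subject Alternative Names (assumes openssl is installed on the host); every DNS name and IP address from the hosts section should appear:

[root@k8s7-200 certs]# openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"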

On 10.4.7.21, create a certificate directory and copy in the certificates generated above:

[root@k8s7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@k8s7-21 bin]# mkdir cert
[root@k8s7-21 bin]# cd cert/
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/ca.pem ./                                                   
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/ca-key.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/client.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/client-key.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/apiserver.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/apiserver-key.pem ./
[root@k8s7-21 cert]# ll -h 
-rw------- 1 root root 1.7K Jul 16 17:57 apiserver-key.pem
-rw-r--r-- 1 root root 1.6K Jul 16 17:57 apiserver.pem
-rw------- 1 root root 1.7K Jul 16 17:56 ca-key.pem
-rw-r--r-- 1 root root 1.4K Jul 16 17:56 ca.pem
-rw------- 1 root root 1.7K Jul 16 17:57 client-key.pem
-rw-r--r-- 1 root root 1.4K Jul 16 17:57 client.pem
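
Optionally, verify that each private key matches its certificate; the two digests must be identical (a standard openssl check, shown here for the apiserver pair):

[root@k8s7-21 cert]# openssl x509 -noout -modulus -in apiserver.pem | md5sum
[root@k8s7-21 cert]# openssl rsa -noout -modulus -in apiserver-key.pem | md5sum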

Create the conf directory and the audit-log policy file:

[root@k8s7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@k8s7-21 bin]# mkdir conf
[root@k8s7-21 bin]# cd conf/
[root@k8s7-21 conf]# ls
[root@k8s7-21 conf]# vim audit.yaml
[root@k8s7-21 conf]# cat audit.yaml 
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Create the apiserver startup script and the log directories it needs:

[root@k8s7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@k8s7-21 bin]# vim kube-apiserver.sh 
[root@k8s7-21 bin]# cat kube-apiserver.sh 
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
[root@k8s7-21 bin]# chmod 755 kube-apiserver.sh 
[root@k8s7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/audit-log

Create the supervisor configuration file that manages the API Server. Note that directory= must point at /opt/kubernetes/server/bin, because the startup script references its certificates and config through relative ./cert and ./conf paths:

[root@k8s7-21 bin]# cat /etc/supervisord.d/kube-apiserver.ini 
[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; restart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

Start the supervisor-managed API Server service:

[root@k8s7-21 bin]# supervisorctl update
kube-apiserver-7-21: added process group
[root@k8s7-21 bin]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 2:28:12
kube-apiserver-7-21              RUNNING   pid 42543, uptime 0:00:44

At this point the API Server component on 10.4.7.21 is up and running. Deploy 10.4.7.22 with exactly the same steps, taking care to adjust the program name in its supervisor configuration file:

[root@k8s7-22 bin]# cat /etc/supervisord.d/kube-apiserver.ini 
[program:kube-apiserver-7-22]      # changed to 7-22
...

Once the API Server is running on both servers, the API Server deployment is complete. By default the API Server listens on two ports: 8080 and 6443. Port 8080 serves plain HTTP, binds to 127.0.0.1 by default, and requires no authentication; port 6443 serves HTTPS, binds to 0.0.0.0 by default, and requires a certificate to access.
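
You can confirm the listeners with ss -lnt (expect 127.0.0.1:8080 and a wildcard 6443), and as a sanity check hit the insecure port directly; it answers without any credentials:

[root@k8s7-21 bin]# curl http://127.0.0.1:8080/healthz
ok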

2. Deploying the nginx Proxy and keepalived

We have deployed two API Servers in the cluster. To give clients a single entry point, we run nginx on 10.4.7.11 and 10.4.7.12 as a layer-4 (TCP) proxy in front of the API Servers' port 6443, and then run keepalived on those two servers to provide a single VIP. Together this gives the API Server both load balancing and high availability.

On both 10.4.7.11 and 10.4.7.12, install, configure, and start nginx:

[root@k8s7-11 ~]# yum -y install nginx
[root@k8s7-11 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    ...
}
stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
[root@k8s7-11 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s7-11 ~]# systemctl start nginx
[root@k8s7-11 ~]# systemctl enable nginx

Tips: nginx is doing TCP forwarding here, not HTTP forwarding, so this configuration must live outside the http block, in a stream block.
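
Before adding keepalived, you can verify the layer-4 proxy end to end (a sketch; it assumes the API Servers from section 1 are running, and that /version is anonymously readable, which is the v1.15 default; output abbreviated):

[root@k8s7-11 ~]# curl -sk https://127.0.0.1:7443/version
{
  "major": "1",
  "minor": "15",
  ...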

Install keepalived:

[root@k8s7-11 ~]# yum -y install keepalived

Create the port-check script that keepalived will use to judge the health of the proxy port:

[root@k8s7-11 ~]# cat /etc/keepalived/check_port.sh 
#!/bin/bash
# keepalived port-monitoring script
# Usage: reference it from the keepalived configuration file:
# vrrp_script check_port {                        # define a vrrp_script check
#     script "/etc/keepalived/check_port.sh 6379" # port to monitor
#     interval 2                                  # check interval in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Can't Be Empty!"
fi
[root@k8s7-11 ~]# chmod +x /etc/keepalived/check_port.sh
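
A quick manual test of the script: it exits 0 while something is listening on 7443 and 1 otherwise, which is exactly the signal keepalived's vrrp_script consumes (port 9999 here is just an arbitrary unused example):

[root@k8s7-11 ~]# /etc/keepalived/check_port.sh 7443; echo $?
0
[root@k8s7-11 ~]# /etc/keepalived/check_port.sh 9999; echo $?
Port 9999 Is Not Used,End.
1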

Configure 10.4.7.11 as the keepalived master node and start keepalived. With weight -20, a failed port check lowers this node's priority by 20 (from 100 to 80, below the backup's 90), which moves the VIP over to the backup:

[root@k8s7-11 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id 10.4.7.11

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33		# change this to your own NIC name
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10       # the VIP is 10.4.7.10
    }
}
[root@k8s7-11 ~]# systemctl start keepalived
[root@k8s7-11 ~]# systemctl enable keepalived

Configure 10.4.7.12 as the keepalived backup node and start keepalived:

[root@k8s7-12 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
	router_id 10.4.7.12
}
vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}
vrrp_instance VI_1 {
	state BACKUP                    # this node's role is BACKUP
	interface ens33			        # change this to your own NIC name
	virtual_router_id 251
	mcast_src_ip 10.4.7.12
	priority 90
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		10.4.7.10
	}
}
[root@k8s7-12 ~]# systemctl start keepalived
[root@k8s7-12 ~]# systemctl enable keepalived
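
With both nodes up, the VIP should be bound on the master node only, and the API Server should be reachable through it (a sketch; the interface name and output shown are illustrative):

[root@k8s7-11 ~]# ip addr show ens33 | grep 10.4.7.10
    inet 10.4.7.10/32 scope global ens33
[root@k8s7-11 ~]# curl -sk https://10.4.7.10:7443/healthz
ok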

At this point, the API Server part of the deployment is complete.

3. Deploying the Controller Manager

On 10.4.7.21, create the controller-manager startup script and its log directory:

[root@k8s7-21 ~]# cd /opt/kubernetes/server/bin/
[root@k8s7-21 bin]# vim kube-controller-manager.sh
[root@k8s7-21 bin]# cat kube-controller-manager.sh 
#!/bin/sh
# --cluster-cidr: the Pod network CIDR
# --master: the API Server address (plain HTTP on the local loopback)
# --service-cluster-ip-range: the Service network CIDR
./kube-controller-manager \
  --cluster-cidr 172.16.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
[root@k8s7-21 bin]# chmod +x kube-controller-manager.sh
[root@k8s7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

The controller-manager is likewise handed to supervisor so that it is restarted automatically if the process exits. Prepare its supervisor configuration file:

[root@k8s7-21 bin]# cat /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; restart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)

Start the supervisor-managed controller-manager service:

[root@k8s7-21 bin]# supervisorctl update
kube-controller-manager-7-21: added process group
[root@k8s7-21 bin]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 1 day, 1:16:03
kube-apiserver-7-21              RUNNING   pid 42543, uptime 22:48:35
kube-controller-manager-7-21     RUNNING   pid 54937, uptime 0:01:53

The controller-manager component on 10.4.7.21 is now deployed. Repeat the same steps on 10.4.7.22; only the program name in the supervisor configuration file needs to change:

[root@k8s7-22 ~]# cat /etc/supervisord.d/kube-controller-manager.ini 
[program:kube-controller-manager-7-22]      # change the program name to 7-22

At this point, the controller-manager service is deployed.

4. Deploying the Scheduler Component

Create the kube-scheduler startup script and its log directory:

[root@k8s7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@k8s7-21 bin]# cat kube-scheduler.sh 
#!/bin/sh
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
[root@k8s7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
[root@k8s7-21 bin]# chmod +x kube-scheduler.sh

The Scheduler service is also managed by supervisor; create its configuration file:

[root@k8s7-21 bin]# cat /etc/supervisord.d/kube-scheduler.ini 
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; restart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)

Start the supervisor-managed scheduler service:

[root@k8s7-21 bin]# supervisorctl update
kube-scheduler-7-21: added process group
[root@k8s7-21 bin]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 1 day, 1:29:06
kube-apiserver-7-21              RUNNING   pid 42543, uptime 23:01:38
kube-controller-manager-7-21     RUNNING   pid 54937, uptime 0:14:56
kube-scheduler-7-21              RUNNING   pid 55104, uptime 0:02:01

The Scheduler component on 10.4.7.21 is now deployed. Repeat the same steps on 10.4.7.22; again, only the program name in the supervisor configuration file needs to change:

[root@k8s7-22 bin]# cat /etc/supervisord.d/kube-scheduler.ini 
[program:kube-scheduler-7-22]      # change the program name to 7-22

With that, the Scheduler component is also deployed. Finally, let's check the health of the cluster:

[root@k8s7-21 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@k8s7-21 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}
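
Because both controller-managers and both schedulers run with --leader-elect, only one instance of each is active at any time. If you are curious which node holds the lock, you can inspect the leader-election annotation (a sketch, assuming the default endpoints-based lock of v1.15; output abbreviated):

[root@k8s7-21 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s7-21_...","leaseDurationSeconds":15,...}'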

As you can see, kubectl reports the cluster as healthy, which means all of the components on our K8S Master nodes are fully deployed. In the next article, we will deploy the Node services.
