!TIP Binary deployment
Deploy calico on the k8s-node node.
Please credit the source when reposting: https://janrs.com/5rce. If you have any questions, feel free to leave a comment at the bottom of the page.
Deploy calico
!NOTE Perform this deployment on the node (worker) machines.
1. Configure the network
Before deploying calico, the network needs to be configured. See the official documentation for details.
URL: https://projectcalico.docs.tigera.io/maintenance/troubleshoot/troubleshooting#configure-networkmanager
```shell
cat > /etc/NetworkManager/conf.d/calico.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
EOF
```
Restart the network:
```shell
systemctl restart NetworkManager
```
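To confirm that NetworkManager actually picked up the drop-in (and, once calico is running, that it leaves the calico interfaces alone), a quick check looks roughly like this; the grep patterns are only illustrative:
```shell
# Dump NetworkManager's merged configuration and look for the [keyfile]
# section added above.
NetworkManager --print-config | grep -A 2 '\[keyfile\]'

# After calico is up, its interfaces should be reported as "unmanaged".
nmcli device status | grep -E 'cali|tunl|vxlan' || echo "no calico interfaces yet"
```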
2. Deploy calico
!NOTE The official docs do not recommend a manual binary deployment; they recommend deploying with the
tigera-operator. Since mine is a self-managed cluster, I deploy it the officially recommended way.
URL: https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
The yaml has already been copied to my blog; just download it from there.
2-1. Deploy the operator
!NOTE The image addresses have already been changed to Alibaba Cloud mirrors; otherwise the pull just hangs.
If you want to deploy to the master nodes, label the masters (see the sketch below) and set the nodeSelector property in the yaml that follows.
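A minimal sketch of that labeling step; the node name k8s-master01 is assumed, and the label key controller-plane=true is the one used by the sample yaml further below:
```shell
# Label the control-plane node so the nodeSelector / controlPlaneNodeSelector
# entries in the yaml below can match it. Replace k8s-master01 with your node name.
kubectl label node k8s-master01 controller-plane=true
```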
Download the yaml and deploy it:
```shell
cd /etc/kubernetes/init_k8s_config/ &&
wget https://janrs.com/calico-tigera-operator.yaml &&
kubectl create -f /etc/kubernetes/init_k8s_config/calico-tigera-operator.yaml
```
Check:
```shell
kubectl get pods -A
```
Output:
```text
NAMESPACE         NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator   tigera-operator-6dcd98c8ff-f2rw4   1/1     Running   0          104m
```
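Instead of re-running `kubectl get pods`, you can also block until the operator Deployment finishes rolling out; a small convenience sketch (the timeout value is arbitrary):
```shell
# Wait until the tigera-operator Deployment reports a successful rollout.
kubectl -n tigera-operator rollout status deployment/tigera-operator --timeout=120s
```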
2-2. Deploy custom-resources
!NOTE
custom-resources can be customized, but the official defaults are fine.
Download the yaml:
```shell
cd /etc/kubernetes/init_k8s_config/ &&
wget https://janrs.com/calico-custom-resources.yaml
```
Change the subnet: set the ippool CIDR to 10.100.0.0/16. The manifest ships with 192.168.0.0/16 by default; a custom range is used here.
```yaml
...
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.100.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
...
```
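If you prefer to script the edit instead of opening the file by hand, something like the following works; it is only a sketch and assumes the downloaded manifest still contains the stock 192.168.0.0/16 value:
```shell
# Swap the default pod CIDR for the custom one in the downloaded manifest.
sed -i 's#192.168.0.0/16#10.100.0.0/16#' /etc/kubernetes/init_k8s_config/calico-custom-resources.yaml

# Double-check the result before applying.
grep -n 'cidr:' /etc/kubernetes/init_k8s_config/calico-custom-resources.yaml
```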
!NOTE If you have chosen to deploy to the master nodes, refer to the following yaml configuration.
```yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # NodeMetricsPort specifies which port calico/node serves prometheus metrics on. By default, metrics are not enabled. If specified, this overrides any FelixConfiguration resources which may exist. If omitted, then prometheus metrics may still be configured through FelixConfiguration.
  nodeMetricsPort: 9127
  # TyphaMetricsPort specifies which port calico/typha serves prometheus metrics on. By default, metrics are not enabled.
  typhaMetricsPort: 9128
  # CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence.
  calicoKubeControllersDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists
  # ControlPlaneNodeSelector is used to select control plane nodes on which to run Calico components. This is globally applied to all resources created by the operator excluding daemonsets.
  controlPlaneNodeSelector:
    controller-plane: 'true'
  # ControlPlaneTolerations specify tolerations which are then globally applied to all resources created by the operator.
  controlPlaneTolerations:
  - effect: NoSchedule
    operator: Exists
  #typhaDeployment:
  #  spec:
  #    template:
  #      spec:
  #        nodeSelector:
  #          controller-plane: 'true'
  #        tolerations:
  #        - effect: NoSchedule
  #          operator: Exists
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.100.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec:
  apiServerDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists
```
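Before applying it, you can dry-run the manifest against the API server to catch indentation or schema mistakes; this is optional and requires the Installation/APIServer CRDs from the operator step to already exist:
```shell
# Validate the manifest server-side without persisting anything.
kubectl create --dry-run=server -f /etc/kubernetes/init_k8s_config/calico-custom-resources.yaml
```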
Deploy:
```shell
kubectl create -f /etc/kubernetes/init_k8s_config/calico-custom-resources.yaml
```
Check:
```shell
kubectl get pods -A
```
Output:
```text
NAMESPACE          NAME                                       READY   STATUS    RESTARTS       AGE
calico-apiserver   calico-apiserver-69c54b8687-4khms          1/1     Running   1 (102m ago)   104m
calico-apiserver   calico-apiserver-69c54b8687-7mnln          1/1     Running   1 (102m ago)   104m
calico-system      calico-kube-controllers-688968c9b6-kchvq   1/1     Running   0              104m
calico-system      calico-node-m7zxb                          1/1     Running   0              103m
calico-system      calico-typha-7bd99d8c79-vj4lw              1/1     Running   0              104m
calico-system      csi-node-driver-v4g95                      2/2     Running   0              103m
tigera-operator    tigera-operator-6dcd98c8ff-f2rw4           1/1     Running   0              104m
```
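With an operator-managed install you can also ask the operator itself whether every component has converged; the TigeraStatus resources it publishes should all report AVAILABLE as True (a quick sketch, column layout may vary between Calico versions):
```shell
# Per-component health as reported by the tigera operator.
kubectl get tigerastatus
```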
2-3. Check node status
Once the deployment succeeds, all pods run normally and the node status changes from NotReady to Ready.
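If you are scripting these checks, you can block until the node reports Ready instead of polling; the node name below is the one from this cluster, adjust it to yours:
```shell
# Block until the node's Ready condition becomes True (5 minute timeout).
kubectl wait --for=condition=Ready node/k8s-node01 --timeout=300s
```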
```shell
kubectl get nodes
```
Output:
```text
NAME         STATUS   ROLES    AGE    VERSION
k8s-node01   Ready    <none>   108m   v1.23.9
```
At this point, the calico network plugin has been deployed successfully.


