Canary Releases with Jenkins and Istio

2020-07-31 15:17:36

Introduction to Canary Releases

Before getting into canary releases, here is our company's current code release process:

  1. Developers push code to the repository
  2. The test environment is built through Jenkins:
    1. Pull the code
    2. Package and build
    3. Run code checks
    4. Build the image
    5. Apply the YAML to create the pods
  3. Testers run their tests
  4. Once testing passes, pick a suitable time to release to production
  5. If there are problems, roll back to the previous version

That is roughly the current upgrade flow: if nothing breaks it ships, otherwise it rolls back. It is also how most companies deploy, but the approach has some problems:

  1. Testing is incomplete, so issues only surface after release
  2. The test environment lacks enough traffic to expose problems

If either of these happens, a rollback is required, and by then every user has already been affected, which is infuriating. So is there a way to let only a subset of production users try the new feature? Those users act as guinea pigs: if something breaks, only that small group is affected; if nothing breaks, gradually increase the user count. This progressive deployment approach is the canary release, also called a gray release.

Implementation Approach

In practice, canary releases come with plenty of obstacles. Big companies handle them easily, but for most teams they are genuinely hard, for example:

  • Targeting users
    • By geography
    • By access pattern
    • By user attributes
  • Rollback strategy
  • Technical hurdles

The more targeted and precise a release is, the more useful the feedback data it yields. Kubernetes made this easier: combined with an Ingress you can set simple traffic weights, but routing on user attributes is still awkward. Istio solves exactly that problem, so let's look at how Jenkins works with Istio to do canary releases.
Environment
  • A Jenkins instance
  • A Harbor private registry
  • A GitLab repository
  • A Kubernetes cluster
  • Istio (installed via Helm)
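Before wiring anything up, it helps to confirm the CLI tools the rest of this post relies on are available. This is just a sketch; the binary names below are the common ones and should be adjusted to your installation:

```shell
# Sketch: verify required CLI tools are on PATH before building the pipeline.
check_bins() {
  missing=0
  for bin in "$@"; do
    if ! command -v "$bin" >/dev/null 2>&1; then
      echo "MISSING: $bin"
      missing=1
    fi
  done
  return "$missing"
}

# Real usage:
#   check_bins git docker kubectl helm ansible-playbook
# Offline demo with a binary guaranteed to exist:
check_bins sh && echo "all required tools present"
```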

The overall flow, briefly: push code → Jenkins builds and pushes the image → Ansible distributes the v1/v2 deployments and the Istio routing config → step through traffic policies → once all traffic reaches v2, rotate the recorded tags.

Installation guides for everything above can be found on my blog, so I won't repeat them here. Let's see how it's implemented.

Writing the Pipeline

pipeline {
    agent any
    environment {
        registry = "harbor.yscloud.com"
        ops_git_addr = "ssh://git@git.yscloud.com:24/web-server/ops-scripts.git"
        git_address = "ssh://git@git.yscloud.com:24/web-server/istio-test.git"
        git_auth = "54cd2820-fdcd-4c77-a5e8-a227e1546955"
        // Pod name
        app_names = "web-server"
        domain_name = "nginx.yscloud.com"
        replica_numbers = "1"
        port_num = "80"
        kuber_yml_dirs = "/root/k8s-yaml/cicd-istio"
    }
    stages {
        stage('Checkout Code') {
            steps {
                script {
                    deleteDir()
                    checkout([$class: 'GitSCM', branches: [[name: '${Branch}']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
                    tag_file = "/data/jenkins/workspace/TagFile/tag.txt"
                    date_time = sh(script: "date +%Y%m%d%H%M", returnStdout: true).trim()
                    git_cm_id = sh(script: "git rev-parse --short HEAD", returnStdout: true).trim()
                    cur_version = "1.0.0"
                    img_group = "onair"
                    img_name = "nginx-istio-test"
                    tag_v2 = sh(script: "grep ${JOB_NAME}-v2 ${tag_file} | awk -F '=' '{print \$2}'", returnStdout: true).trim()
                    tag_current = "${date_time}-${cur_version}-${git_cm_id}"
                    whole_img_name_new = "${registry}/${img_group}/${img_name}:${tag_current}"
                    whole_img_name_old = "${registry}/${img_group}/${img_name}:${tag_v2}"
                }
            }
        }
        stage('deploy') {
            steps {
                script {
                if (env.Action == 'Deploy') {
                    sh "rm -rf jenkins-script && mkdir jenkins-script && cd jenkins-script && git clone ${ops_git_addr} && cp -r ${WORKSPACE}/jenkins-script/ops-scripts/deploy-jenkins/* ${WORKSPACE}/"
                    sh "docker login -u admin -p Harbor12345 ${registry}"
sh """
cat > Dockerfile << EOF
FROM harbor.yscloud.com/baseimg/nginx-base:v1.0.0
COPY index.html /usr/share/nginx/html/
EXPOSE 80
EOF"""
                    sh """
                    docker build -t ${whole_img_name_new} .
                    docker push ${whole_img_name_new}
                    """
                    sh "bash k8s.sh --test=test --app_name=${app_names} --replica_number=${replica_numbers} --image_addr_new=${whole_img_name_new} --image_addr_old=${whole_img_name_old} --port_num=${port_num} --kuber_yml_dir=${kuber_yml_dirs} --group_name=web-server --domain_name=${domain_name}"
                    sh "bash -x TAG/tagrecord.sh --type=istio --stat=false --tag_name_tmp=${tag_current}"
                }                    
                }

            }
        }
        stage('policy choice') {
            steps {
                script {
                    tag_tmp_to_v2 = sh(script: "grep ${JOB_NAME}-tmp ${tag_file} | awk -F '=' '{print \$2}'", returnStdout: true).trim()
                    tag_v2_to_v1 = sh(script: "grep ${JOB_NAME}-v2 ${tag_file} | awk -F '=' '{print \$2}'", returnStdout: true).trim()
                    tag_v1_to_tmp = sh(script: "grep ${JOB_NAME}-v1 ${tag_file} | awk -F '=' '{print \$2}'", returnStdout: true).trim()
                    tag_stat = sh(script: "grep ${JOB_NAME}-stat ${tag_file} | awk -F '=' '{print \$2}'", returnStdout: true).trim()
                    if (env.Action == "Policy") {
                    sh "rm -rf jenkins-script && mkdir jenkins-script && cd jenkins-script && git clone ${ops_git_addr} && cp -r ${WORKSPACE}/jenkins-script/ops-scripts/deploy-jenkins/* ${WORKSPACE}/"
                    dir("${WORKSPACE}") {
                    sh """
                    ansible-playbook -i host setup-policy.yaml -e "group_name=web-server k8s_dst_dir=${kuber_yml_dirs} deploy_policy=${PolicyChoice}"
                    """
// Only swap the tags once all traffic points to the new version
sh """
if [ "${PolicyChoice}" == "v2" ];then
    if [ "${tag_stat}" == "false" ];then   # guard: avoid swapping the tags twice and scrambling them
        bash -x TAG/tagrecord.sh --type=istio --stat=true --tag_name_v2=${tag_tmp_to_v2} --tag_name_v1=${tag_v2_to_v1} --tag_name_tmp=${tag_v1_to_tmp}
    else
        echo "tags already swapped"
    fi
fi
"""
                    }
                }
                }

            }
        }
    }
}

The pipeline runs two scripts here: one distributes the YAML configuration via ansible-playbook, the other handles tag replacement. It also defines several variables to receive the incoming build parameters. Below are the snippets used by the Jenkins parameter plugins to feed those parameters.

Defining the branch parameter:

def gettags = ("git ls-remote -h ssh://git@git.yscloud.com:24/web-server/istio-test.git").execute()
a=gettags.text.readLines().collect { it.split()[1].replaceAll('refs/heads/', '') }.unique()
return a

Defining the action type (a simple Deploy/Policy choice)

Defining the policy type:

String list=["bash","-c","cat /data/version-list/version"].execute().text
def arr=list.tokenize(',')

if (Action.equals("Policy")){
  return arr
} else {
  return ["Shown after selecting Policy"]
}

Note that the contents of the version file are comma-separated:

cat /data/version-list/version 
v1,10,20,30,50,80,v2,
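
The Groovy snippet above splits this list with tokenize(','), which drops empty entries, so the trailing comma is harmless. An equivalent shell sketch of the same parsing:

```shell
# Sketch: split the comma-separated policy list the way the Groovy
# tokenize(',') call does; empty entries from the trailing comma are dropped.
version_file=$(mktemp)
printf 'v1,10,20,30,50,80,v2,\n' > "$version_file"

tr ',' '\n' < "$version_file" | grep -v '^$'
rm -f "$version_file"
```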

That covers all of the parameters to be passed in.

Distributing and applying the YAML with Ansible

setup-k8s.yml, the entry playbook (the $-prefixed tokens are literal placeholders that k8s.sh later rewrites with sed):

- hosts: $GroupName
  roles:
    - role: deploy-server-istio
      vars:
        pod_name: "$AppPodName"
        replica_num: "$ReplicaNum"
        image_addr_old: "$HarborImageAddressOld"
        image_addr_new: "$HarborImageAddressNew"
        port_num: "$PortNum"
        k8s_dst_dir: "$KuberDstDir"
        domain_name: "$DomainName"

Inventory file reference:

cat host/hosts
[web-server]
192.168.1.118

[all:vars]
ansible_ssh_user=root
ansible_ssh_port=22

Main directory layout:

# tree roles/deploy-server-istio/
roles/deploy-server-istio/
├── tasks
│   └── main.yml
└── templates
    ├── app-deploy-v1.yaml.j2
    ├── app-deploy-v2.yaml.j2
    ├── app-destination-rule.yaml.j2
    ├── app-ingressgateway.yaml.j2
    ├── app-svc.yaml.j2
    ├── app-virtualservice-10.yaml.j2
    ├── app-virtualservice-20.yaml.j2
    ├── app-virtualservice-30.yaml.j2
    ├── app-virtualservice-50.yaml.j2
    ├── app-virtualservice-80.yaml.j2
    ├── app-virtualservice-v1.yaml.j2
    └── app-virtualservice-v2.yaml.j2

Below are the YAML files used to start the pods. The v1 deployment:

cat app-deploy-v1.yaml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ pod_name }}-v1
spec:
  replicas: {{ replica_num }}
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: {{ pod_name }}
      version: v1
  template:
    metadata:
      labels:
        app: {{ pod_name }}
        version: v1
    spec:
      containers:
        - name: {{ pod_name }}
          image: {{ image_addr_old }}
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 2
              memory: 4096Mi
            requests:
              cpu: 100m
              memory: 100Mi
      imagePullSecrets:
        - name: mima

The v2 deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ pod_name }}-v2
spec:
  replicas: {{ replica_num }}
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: {{ pod_name }}
      version: v2
  template:
    metadata:
      labels:
        app: {{ pod_name }}
        version: v2
    spec:
      containers:
        - name: {{ pod_name }}
          image: {{ image_addr_new }}
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 2
              memory: 4096Mi
            requests:
              cpu: 100m
              memory: 100Mi
      imagePullSecrets:
        - name: mima

The Service:

apiVersion: v1
kind: Service
metadata:
  name: {{ pod_name }}
  labels:
    app: {{ pod_name }}
spec:
  ports:
  - name: http
    port: {{ port_num }}
    targetPort: {{ port_num }}
  selector:
    app: {{ pod_name }}
  sessionAffinity: None

The DestinationRule (the v1/v2 subsets keyed on the version label are what the virtual services route against):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: {{ pod_name }}
spec:
  host: {{ pod_name }}
  subsets: 
  - name: v1
    labels:
      version: v1
  - name: v2
    labels: 
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN

The ingress Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: {{ pod_name }}-gateway
spec:
  selector:
    istio: ingressgateway 
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "{{ domain_name }}"

Below are the various VirtualService policies. This first one is purely weight-based, 90/10 in favor of v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ pod_name }}
spec:
  hosts:
  - "{{ domain_name }}"
  gateways:
  - {{ pod_name }}-gateway
  http:
  - route:
    - destination:
        host: {{ pod_name }}
        subset: v1
        port:
          number: {{ port_num }}
      weight: 90
    - destination:
        host: {{ pod_name }}
        subset: v2
        port:
          number: {{ port_num }}
      weight: 10
    timeout: 60s
    retries:
      attempts: 3
      perTryTimeout: 2s

This one routes by client source IP (matching the X-Real-IP header to v2) and falls back to an 80/20 weight split for everyone else:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ pod_name }}
spec:
  hosts:
  - "{{ domain_name }}"
  gateways:
  - {{ pod_name }}-gateway
  http:
  - match:
    - headers:
        X-Real-IP:
          regex: ".*192.168.3.148.*"
    route:
    - destination:
        host: {{ pod_name }}
        subset: v2
        port:
          number: 80
  - route:
    - destination:
        host: {{ pod_name }}
        subset: v1
        port:
          number: 80
      weight: 80
    - destination:
        host: {{ pod_name }}
        subset: v2
        port:
          number: 80
      weight: 20
    timeout: 60s
    retries:
      attempts: 3
      perTryTimeout: 2s

This one is purely weight-based, sending all traffic to v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ pod_name }}
spec:
  hosts:
  - "{{ domain_name }}"
  gateways:
  - {{ pod_name }}-gateway
  http:
  - route:
    - destination:
        host: {{ pod_name }}
        subset: v1
        port:
          number: 80
      weight: 100
    - destination:
        host: {{ pod_name }}
        subset: v2
        port:
          number: 80
      weight: 0
    timeout: 60s
    retries:
      attempts: 3
      perTryTimeout: 2s

The above routes all traffic to v1; the v2 variant is simply the reverse. The remaining policies are omitted here since they only differ in the weights. Now let's look at the main task file that actually performs these operations:

- name: Check dst dir is already exists.
  stat:
    path: "{{ k8s_dst_dir }}"
  register: dst_dir

- name: Check yaml file is already exists.
  stat:
    path: "{{ k8s_dst_dir }}/app-service.yaml"
  register: yml_file

- name: Create k8s dst dir
  file:
    path: "{{ k8s_dst_dir }}"
    state: directory
    owner: root
    group: root
    mode: 0755
  when: dst_dir.stat.exists == False

- name: Copy local yaml file to k8s server
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: 0644
  with_items:
    - src: app-deploy-v1.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-deploy-v1.yaml"
    - src: app-deploy-v2.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-deploy-v2.yaml"
    - src: app-svc.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-svc.yaml"
    - src: app-ingressgateway.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-ingressgateway.yaml"
    - src: app-destination-rule.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-destination-rule.yaml"
    - src: app-virtualservice-10.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-10.yaml"
    - src: app-virtualservice-20.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-20.yaml"
    - src: app-virtualservice-30.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-30.yaml"
    - src: app-virtualservice-50.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-50.yaml"
    - src: app-virtualservice-80.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-80.yaml"
    - src: app-virtualservice-v1.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-v1.yaml"
    - src: app-virtualservice-v2.yaml.j2
      dest: "{{ k8s_dst_dir }}/app-virtualservice-v2.yaml"

- name: Start "{{ pod_name }}" version v1
  shell: "kubectl apply -f app-deploy-v1.yaml || kubectl delete -f app-deploy-v1.yaml ; sleep 3 ; kubectl apply -f app-deploy-v1.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

- name: Start "{{ pod_name }}" version v2
  shell: "kubectl apply -f app-deploy-v2.yaml || kubectl delete -f app-deploy-v2.yaml ; sleep 3 ; kubectl apply -f app-deploy-v2.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

- name: Start "{{ pod_name }}" svc
  shell: "kubectl apply -f app-svc.yaml || kubectl delete -f app-svc.yaml ; sleep 3 ; kubectl apply -f app-svc.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

- name: Start "{{ pod_name }}" destination rule
  shell: "kubectl apply -f app-destination-rule.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

- name: Start "{{ pod_name }}" ingressgateway
  shell: "kubectl apply -f app-ingressgateway.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

- name: Start "{{ pod_name }}" virtualservice policy, default v2 is 10%
  shell: "kubectl apply -f app-virtualservice-10.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"

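With the 10% virtual service applied, a quick sanity check is to sample the gateway and tally which version answered. This is a sketch: the INGRESS_IP address and the assumption that each version's index.html identifies itself (e.g. prints "v1" or "v2") are mine, not part of the original setup.

```shell
# Sketch: tally responses to estimate the actual v1/v2 traffic split.
tally() {
  sort | uniq -c | awk '{print $2 "=" $1}'
}

# Against a live cluster (INGRESS_IP = the istio-ingressgateway address):
#   for i in $(seq 1 100); do
#     curl -s -H "Host: nginx.yscloud.com" "http://$INGRESS_IP/"
#   done | tally
#
# Offline demo of the helper:
printf 'v1\nv1\nv2\nv1\n' | tally
```
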
So 10% of traffic already flows to v2 as soon as the config is rolled out. I then wrote a script (k8s.sh) to drive this playbook:

#!/bin/bash

export PATH=$PATH
dst_yaml="setup-k8s.yml"

ARGS=`getopt -a -o '' -l test::,app_name::,replica_number::,image_addr_old::,image_addr_new::,port_num::,kuber_yml_dir::,group_name::,domain_name:: -- "$@"`

if [ $? != 0 ];then
        echo "Terminating..."
        exit 1
fi

eval set -- "${ARGS}"

while :
do
    case $1 in
      --test)
          test=$2
          shift
          ;;
      --app_name)
          app_name=$2
          shift
          ;;
      --replica_number)
          replica_number=$2
          shift
          ;;
      --image_addr_old)
          image_addr_old=$2
          shift
          ;;
      --image_addr_new)
          image_addr_new=$2
          shift
          ;;
      --kuber_yml_dir)
          kuber_yml_dir=$2
          shift
          ;;
      --group_name)
          group_name=$2
          shift
          ;;
      --port_num)
          port_num=$2
          shift
          ;;
      --domain_name)
          domain_name=$2
          shift
          ;;
      --)
          shift
          break
          ;;
      *)
          echo "Internal error!"
          exit 1
          ;;
    esac
shift
done

SedFile() {
  # The \$ keeps the dollar sign literal for sed so it matches the
  # placeholder token in the playbook; without it the shell would expand
  # the (unset) variable to an empty string first.
  sed -i "s#\$AppPodName#$app_name#g" ${dst_yaml}
  sed -i "s#\$ReplicaNum#$replica_number#g" ${dst_yaml}
  sed -i "s#\$HarborImageAddressOld#$image_addr_old#g" ${dst_yaml}
  sed -i "s#\$HarborImageAddressNew#$image_addr_new#g" ${dst_yaml}
  sed -i "s#\$PortNum#$port_num#g" ${dst_yaml}
  sed -i "s#\$KuberDstDir#$kuber_yml_dir#g" ${dst_yaml}
  sed -i "s#\$GroupName#$group_name#g" ${dst_yaml}
  sed -i "s#\$DomainName#$domain_name#g" ${dst_yaml}
}

ExecuteAnsible() {
  ansible-playbook -i host setup-k8s.yml
}

SedFile
ExecuteAnsible

Running this script means passing in a pile of parameters, exactly what the pipeline above supplies. It's cumbersome; something to optimize later:

bash k8s.sh --test=test --app_name=${app_names} --replica_number=${replica_numbers} --image_addr_new=${whole_img_name_new} --image_addr_old=${whole_img_name_old} --port_num=${port_num} --kuber_yml_dir=${kuber_yml_dirs} --group_name=web-server --domain_name=${domain_name}

That is the ansible-playbook wrapper script: based on the parameters defined in the pipeline, it substitutes the template placeholders, then applies the resulting YAML files in a fixed order.
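
The substitution step is easy to get wrong: the playbook's placeholders look like shell variables, so the sed pattern must be escaped (or single-quoted) to stop the shell from expanding them to empty strings. A minimal offline demo (the file path and values are illustrative):

```shell
# Demo: rewrite a literal $AppPodName placeholder the way k8s.sh does.
demo_file=$(mktemp)
printf 'pod_name: "$AppPodName"\n' > "$demo_file"

app_name="web-server"
# \$ keeps the $ literal for sed instead of letting the shell expand
# the (unset) AppPodName variable to an empty string.
sed "s#\$AppPodName#$app_name#g" "$demo_file"
rm -f "$demo_file"
```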

One problem remains: how do we make Ansible apply whatever YAML matches the policy chosen in the Jenkins UI, so traffic is split accordingly? The playbook above could be made to do it, e.g. keep a single virtualservice YAML with the weights as variables, plus a watcher that runs kubectl apply whenever the file changes. But then every policy choice would re-run the whole playbook from start to finish, which is inefficient and undesirable. So a dedicated policy playbook is well worth having.

The policy playbook

This playbook is very simple: it runs a single command that applies the YAML matching the parameter passed in. For example, choosing policy 20 makes Ansible apply app-virtualservice-20.yaml.

setup-policy.yaml reference:

- hosts: "{{ group_name }}"
  roles:
    - role: deploy-istio-policy

The task that applies the chosen YAML:

---
- name: Policy is "{{ deploy_policy }}"
  shell: "kubectl apply -f app-virtualservice-{{ deploy_policy }}.yaml"
  args:
    chdir: "{{ k8s_dst_dir }}"
The tag-replacement script

Now for this script, which is also a bit clunky; I'll optimize it when I get time:

#!/bin/bash
export PATH=$PATH
ARGS=`getopt -a -o '' -l type::,stat::,tag_name_v2::,tag_name_v1::,tag_name_tmp:: -- "$@"`

if [ $? != 0 ];then
        echo "Terminating..."
        exit 1
fi

eval set -- "${ARGS}"

while :
do
    case $1 in
      --type)
          type=$2
          shift
          ;;
      --stat)
          stat=$2
          shift
          ;;
      --tag_name_v2)
          tag_name_v2=$2
          shift
          ;;
      --tag_name_v1)
          tag_name_v1=$2
          shift
          ;;
      --tag_name_tmp)
          tag_name_tmp=$2
          shift
          ;;
      --)
          shift
          break
          ;;
      *)
          echo "Internal error!"
          exit 1
          ;;
    esac
shift
done

TagFile="/data/jenkins/workspace/TagFile/tag.txt"
job_name=${JOB_NAME}
#job_name="test-istio"

if test ! -d /data/jenkins/workspace/TagFile;then
  mkdir /data/jenkins/workspace/TagFile
  touch ${TagFile}
fi

tag_stat() {
  job_stat="${job_name}-stat"
  check_tag_stat=$(grep -c ${job_stat} ${TagFile})
  if [ ${check_tag_stat} -eq 0 ];then
    echo "${job_stat}=${stat}" >> ${TagFile}
  else
    sed -i "s#$job_stat=.*#$job_stat=$stat#g" ${TagFile}
  fi
}

tag_v2() {
  git_cm_id_v2=$(echo ${tag_name_v2} | awk -F '-' '{print $NF}')
  job_v2="${job_name}-v2"
  job_name_v2_stat=$(grep -c ${job_v2} ${TagFile})
  if [ ${job_name_v2_stat} -eq 0 ];then
     echo "${job_v2}=${tag_name_v2}" >> ${TagFile}
  else
     check_tag_v2_exist=$(cat ${TagFile} | grep ${job_v2} | grep -c ${git_cm_id_v2})
     if [ ${check_tag_v2_exist} -eq 0 ];then
        sed -i "s#$job_v2=.*#$job_v2=$tag_name_v2#g" ${TagFile}
     else
        echo "the code has not changed"
     fi
  fi
}

tag_v1() {
  git_cm_id_v1=$(echo ${tag_name_v1} | awk -F '-' '{print $NF}')
  job_v1="${job_name}-v1"
  job_name_v1_stat=$(grep -c ${job_v1} ${TagFile})
  if [ ${job_name_v1_stat} -eq 0 ];then
     echo "${job_v1}=${tag_name_v1}" >> ${TagFile}
  else
     check_tag_v1_exist=$(cat ${TagFile} | grep ${job_v1} | grep -c ${git_cm_id_v1})
     if [ ${check_tag_v1_exist} -eq 0 ];then
        sed -i "s#$job_v1=.*#$job_v1=$tag_name_v1#g" ${TagFile}
     else
        echo "the code has not changed"
     fi
  fi
}

tag_tmp() {
  git_cm_id_tmp=$(echo ${tag_name_tmp} | awk -F '-' '{print $NF}')
  job_tmp="${job_name}-tmp"
  job_name_tmp_stat=$(grep -c ${job_tmp} ${TagFile})
  if [ ${job_name_tmp_stat} -eq 0 ];then
     echo "${job_tmp}=${tag_name_tmp}" >> ${TagFile}
  else
     check_tag_tmp_exist=$(cat ${TagFile} | grep ${job_tmp} | grep -c ${git_cm_id_tmp})
     if [ ${check_tag_tmp_exist} -eq 0 ];then
        sed -i "s#$job_tmp=.*#$job_tmp=$tag_name_tmp#g" ${TagFile}
     else
        echo "the code has not changed"
     fi
  fi
  tag_stat
}
tag_name_list=(
tag_name_v2 
tag_name_v1 
tag_name_tmp 
)
for name in ${tag_name_list[@]}
do
  if [ "${name}" = "tag_name_v2" ];then
    if [ -n "${tag_name_v2}" ];then
      tag_v2
    else
      echo "${name} is null"
    fi
  elif [ "${name}" = "tag_name_v1" ];then
    if [ -n "${tag_name_v1}" ];then
      tag_v1
    else
      echo "${name} is null"
    fi
  elif [ "${name}" = "tag_name_tmp" ];then
    if [ -n "${tag_name_tmp}" ];then
      tag_tmp
    else
      echo "${name} is null"
    fi
  fi
done

This script really only does one thing: it records tags in a file and swaps them according to the parameters passed in. A few things deserve attention. For each job I added two extra entries: tmp, which records the tag of the most recent build, and stat, which records whether the tag held in tmp has been promoted. That may be hard to follow in the abstract, so let's walk through it alongside the pipeline. After the pipeline finishes running k8s.sh, i.e. after the YAML has been distributed, it runs a script that records the image tag built by the current job into the tmp entry:

sh "bash -x TAG/tagrecord.sh --type=istio --stat=false --tag_name_tmp=${tag_current}"

The stat and tag_name_tmp arguments passed here store the current tag into tmp and set stat to false. Nothing has touched the v2 or v1 tags at this point; in tag.txt only tmp has changed, now holding the newest tag.

At this point we have only built; no policy has been chosen. This handles a real scenario: a new version goes live, bugs appear, and an urgent fix has to be rebuilt, producing a new tag. Because only tmp receives the latest tag and v1/v2 stay untouched, no matter how many times we rebuild, the v2 entry still holds the tag of the version currently running in production.

Next comes policy selection, gradually shifting more traffic to the new version. The tag in tmp is not promoted until v2 receives 100% of the traffic: only once all traffic flows to v2 is the upgrade considered complete, and only then are the tags swapped:

sh """
if [ "${PolicyChoice}" == "v2" ];then
    if [ "${tag_stat}" == "false" ];then
        bash -x TAG/tagrecord.sh --type=istio --stat=true --tag_name_v2=${tag_tmp_to_v2} --tag_name_v1=${tag_v2_to_v1} --tag_name_tmp=${tag_v1_to_tmp}
    else
        echo "tags already swapped"
    fi
fi
"""
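
The rotation this cutover triggers can be sketched offline with a throwaway tag file (the job name and tag values below are illustrative, not from the real tag.txt):

```shell
# Sketch of the tag rotation on full cutover: tmp -> v2, old v2 -> v1,
# old v1 -> tmp, and stat flips to true so the swap cannot run twice.
TagFile=$(mktemp)
cat > "$TagFile" << 'EOF'
test-istio-v1=20200701-1.0.0-aaa1111
test-istio-v2=20200715-1.0.0-bbb2222
test-istio-tmp=20200731-1.0.0-ccc3333
test-istio-stat=false
EOF

sed -i -e 's#^test-istio-v2=.*#test-istio-v2=20200731-1.0.0-ccc3333#' \
       -e 's#^test-istio-v1=.*#test-istio-v1=20200715-1.0.0-bbb2222#' \
       -e 's#^test-istio-tmp=.*#test-istio-tmp=20200701-1.0.0-aaa1111#' \
       -e 's#^test-istio-stat=.*#test-istio-stat=true#' "$TagFile"

cat "$TagFile"
rm -f "$TagFile"
```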

Finally, the purpose of the stat flag: it prevents the tag swap from running a second time, e.g. if the v2 policy is selected, applied, and then selected again, the swap would otherwise scramble the tags.

Build and Test

To wrap up: Istio takes over all incoming request traffic while Kubernetes runs multiple versions of the pods, and Istio splits traffic among them according to the policies we configured. For canary releases with Jenkins and Istio, the tag-swap logic is crucial: different pod versions are ultimately distinguished by their image tags, so rotating the tags correctly matters a great deal.
