Because gcr.io cannot be reached directly from inside China, the Knative images are hard to pull and deployment becomes difficult. We use the mirrored images from
https://github.com/anjia0532/gcr.io_mirror
but the image names need to be converted:
# original image
# gcr.io/knative-releases/knative.dev/eventing/cmd/controller:latest
# mirrored image
# anjia0532/knative-releases.knative.dev.eventing.cmd.controller:latest
Because the image references in serving-core.yaml are pinned by digest rather than by tag, replace every
@sha256:xxxx
with
:latest
imagePullPolicy: IfNotPresent
#xxxx
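This substitution can also be scripted. Below is a minimal sketch, assuming GNU sed (BSD/macOS sed does not expand \n in the replacement text) and the same manifest path as in the grep further down; it only rewrites container image: lines, so digests referenced elsewhere in the manifest still need to be handled by hand.
# Rewrite "image: gcr.io/...@sha256:<digest>" to ":latest", add imagePullPolicy: IfNotPresent
# at the same indentation, and keep the original digest as a comment. A .bak copy is written.
sed -E -i.bak \
  's|^([[:space:]]*)image: (gcr\.io[^@]+)@sha256:([0-9a-f]+)|\1image: \2:latest\n\1imagePullPolicy: IfNotPresent\n\1# sha256:\3|' \
  serverless/knative/setup/serving-core.yaml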
Filter out the images we need, then pull them:
images=`grep 'image:' serverless/knative/setup/serving-core.yaml |grep 'gcr.io'`
eval $(echo ${images}|
  sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
  uniq |
  awk '{print "docker pull "$1";"}'
)
Once the pulls finish, retag the images back to their original names:
for img in $(docker images --format "{{.Repository}}:{{.Tag}}"| grep "anjia0532"); do
  # registry plus first path segment, e.g. gcr.io/knative-releases
  n=$(echo ${img}| awk -F'[/.:]' '{printf "gcr.io/%s",$2}')
  # remaining path, e.g. /knative.dev/eventing/cmd/controller
  # (assumes the knative.dev/<pkg>/cmd/<binary> layout used by knative-releases)
  image=$(echo ${img}| awk -F'[/.:]' '{printf "/knative.%s/%s/%s/%s",$4,$5,$6,$7}')
  tag=$(echo ${img}| awk -F'[:]' '{printf ":%s",$2}')
  echo "${n}${image}${tag}"
  docker tag $img "${n}${image}${tag}"
  [[ ${n} == "gcr.io/google-containers" ]] && docker tag $img "k8s.gcr.io${image}${tag}"
  docker rmi $img
done
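As a quick check that the retag worked, list the local images under their gcr.io names again:
docker images --format "{{.Repository}}:{{.Tag}}" | grep '^gcr.io/knative-releases'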
Deploy:
kubectl apply -f serving-core.yaml
Because it might conflict with the existing Istio installation, we install Kourier instead:
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.3.0/kourier.yaml
It depends on two images:
image: gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier@sha256:84af1fba93bcc1d504ee6fc110a49be80440f08d461ccb0702621b7b62d0f7b6
image: docker.io/envoyproxy/envoy:v1.18-latest
The gcr.io one again has to be pulled through the GitHub mirror:
images=`echo "gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier:latest"`
eval $(echo ${images}|
  sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
  uniq |
  awk '{print "docker pull "$1";"}'
)
docker tag docker.io/anjia0532/knative-releases.knative.dev.net-kourier.cmd.kourier:latest gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier:latest
docker rmi docker.io/anjia0532/knative-releases.knative.dev.net-kourier.cmd.kourier:latest
Then patch the network config to use Kourier as the ingress class:
% kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
configmap/config-network patched
Check:
% kubectl --namespace kourier-system get service kourier
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kourier LoadBalancer 10.108.41.244 localhost 80:31859/TCP,443:30753/TCP 34s
% kubectl -n kourier-system get pods
NAME READY STATUS RESTARTS AGE
3scale-kourier-gateway-5f96966d45-n5tgg 1/1 Running 0 115s
At this point Knative Serving is up:
% kubectl get pods -n knative-serving
NAME READY STATUS RESTARTS AGE
activator-7cf4bd8548-gg5cm 1/1 Running 0 29m
autoscaler-577d766bdd-5xmkp 1/1 Running 0 29m
controller-5b74bfcc9f-z6kpf 1/1 Running 0 29m
domain-mapping-5b4f5f66b5-g8cmt 1/1 Running 0 29m
domainmapping-webhook-5d7fb6566d-59blp 1/1 Running 0 29m
net-kourier-controller-766c565d78-5gqpz 1/1 Running 0 50s
webhook-699fc555bf-4t9nk 1/1 Running 0 29m
If you also want to deploy default-domain, handle its image the same way:
images=`echo "gcr.io/knative-releases/knative.dev/serving/cmd/default-domain:latest"`
eval $(echo ${images}|
  sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
  uniq |
  awk '{print "docker pull "$1";"}'
)
# docker tag docker.io/anjia0532/knative-releases.knative.dev.serving.cmd.default-domain:latest gcr.io/knative-releases/knative.dev/serving/cmd/default-domain:latest
# docker rmi docker.io/anjia0532/knative-releases.knative.dev.serving.cmd.default-domain:latest
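After retagging, the manifest itself can be applied. The URL below assumes Serving v1.3.0, matching the Kourier release used above:
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.3.0/serving-default-domain.yaml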
Now deploy our hello world example:
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func handler(w http.ResponseWriter, r *http.Request) {
    log.Print("helloworld: received a request")
    target := os.Getenv("TARGET")
    if target == "" {
        target = "World"
    }
    fmt.Fprintf(w, "Hello %s!\n", target)
}

func main() {
    log.Print("helloworld: starting server...")
    http.HandleFunc("/", handler)
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    log.Printf("helloworld: listening on port %s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
The deployment manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: helloworld
  labels:
    serving.knative.dev/service: hello
    serving.knative.dev/visibility: cluster-local
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: helloworld
  labels:
    serving.knative.dev/service: hello
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    metadata:
      labels:
        app: hello
      annotations:
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: docker.io/xiazemin/helloworld-go
          env:
            - name: TARGET
              value: "World!"
Build the image:
go mod init helloworld
go mod tidy
docker build -t xiazemin/helloworld-go .
=> => naming to docker.io/xiazemin/helloworld-go
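The post does not show the Dockerfile it built from; a minimal multi-stage build along the lines of the Knative Go sample should work (an assumption, not the exact file used here):
cat > Dockerfile <<'EOF'
# Build stage: compile a static binary
FROM golang:1.17 AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . ./
RUN CGO_ENABLED=0 go build -o /server .

# Runtime stage: minimal image with just the binary
FROM alpine:3
COPY --from=builder /server /server
CMD ["/server"]
EOF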
Then deploy the application:
% kubectl apply -f helloworld.yaml
namespace/helloworld unchanged
service.serving.knative.dev/hello created
Check:
% kubectl get route hello -n helloworld
NAME URL READY REASON
hello http://hello.helloworld.example.com False RevisionMissing
We find that an image pull is failing:
% kubectl -n helloworld describe pod hello-00005-deployment-77c7d84b98-gjwlr
Name: hello-00005-deployment-77c7d84b98-gjwlr
Namespace: helloworld
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Sat, 26 Mar 2022 18:46:22 +0800
Labels: app=hello
pod-template-hash=77c7d84b98
serving.knative.dev/configuration=hello
serving.knative.dev/configurationGeneration=5
serving.knative.dev/configurationUID=ee31bffa-a1ac-4001-83b6-4ead5457d4e5
serving.knative.dev/revision=hello-00005
serving.knative.dev/revisionUID=fd0d447e-94cd-4da6-b02a-50b4d1fd8032
serving.knative.dev/service=hello
serving.knative.dev/serviceUID=b96afd85-667a-42f3-95d6-f279ba418235
Annotations: serving.knative.dev/creator: docker-for-desktop
Status: Pending
IP: 10.1.4.234
IPs:
IP: 10.1.4.234
Controlled By: ReplicaSet/hello-00005-deployment-77c7d84b98
Containers:
user-container:
Container ID: docker://bb422eb2d6bf0d6bdec082597814fc86391855f678e684d2b473b05bd5f68888
Image: index.docker.io/xiazemin/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405
Image ID: docker-pullable://xiazemin/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 26 Mar 2022 18:46:24 +0800
Ready: True
Restart Count: 0
Environment:
TARGET: World!
PORT: 8080
K_REVISION: hello-00005
K_CONFIGURATION: hello
K_SERVICE: hello
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h77nr (ro)
queue-proxy:
Container ID:
Image: gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
Image ID:
Ports: 8022/TCP, 9090/TCP, 9091/TCP, 8012/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 25m
Readiness: http-get http://:8012/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVING_NAMESPACE: helloworld
SERVING_SERVICE: hello
SERVING_CONFIGURATION: hello
SERVING_REVISION: hello-00005
QUEUE_SERVING_PORT: 8012
CONTAINER_CONCURRENCY: 0
REVISION_TIMEOUT_SECONDS: 300
SERVING_POD: hello-00005-deployment-77c7d84b98-gjwlr (v1:metadata.name)
SERVING_POD_IP: (v1:status.podIP)
SERVING_LOGGING_CONFIG:
SERVING_LOGGING_LEVEL:
SERVING_REQUEST_LOG_TEMPLATE: {"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}
SERVING_ENABLE_REQUEST_LOG: false
SERVING_REQUEST_METRICS_BACKEND: prometheus
TRACING_CONFIG_BACKEND: none
TRACING_CONFIG_ZIPKIN_ENDPOINT:
TRACING_CONFIG_DEBUG: false
TRACING_CONFIG_SAMPLE_RATE: 0.1
USER_PORT: 8080
SYSTEM_NAMESPACE: knative-serving
METRICS_DOMAIN: knative.dev/internal/serving
SERVING_READINESS_PROBE: {"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}
ENABLE_PROFILING: false
SERVING_ENABLE_PROBE_REQUEST_LOG: false
METRICS_COLLECTOR_ADDRESS:
CONCURRENCY_STATE_ENDPOINT:
CONCURRENCY_STATE_TOKEN_PATH: /var/run/secrets/tokens/state-token
HOST_IP: (v1:status.hostIP)
ENABLE_HTTP2_AUTO_DETECTION: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h77nr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-h77nr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned helloworld/hello-00005-deployment-77c7d84b98-gjwlr to docker-desktop
Normal Pulled 11m kubelet Container image "index.docker.io/xiazemin/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405" already present on machine
Normal Created 11m kubelet Created container user-container
Normal Started 11m kubelet Started container user-container
Warning Failed 9m59s (x2 over 11m) kubelet Failed to pull image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 9m22s (x5 over 11m) kubelet Error: ImagePullBackOff
Normal Pulling 9m7s (x4 over 11m) kubelet Pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest"
Warning Failed 8m52s (x4 over 11m) kubelet Error: ErrImagePull
Warning Failed 8m52s (x2 over 10m) kubelet Failed to pull image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": context deadline exceeded
Normal BackOff 85s (x35 over 11m) kubelet Back-off pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest"
But this image already exists locally; it turns out the deployment's image pull policy is Always:
image: gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
imagePullPolicy: Always
Change it to IfNotPresent:
% kubectl -n helloworld edit deploy hello-00001-deployment
deployment.apps/hello-00001-deployment edited
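The same change can be made non-interactively. A sketch using a strategic merge patch, assuming the sidecar container is named queue-proxy as in the describe output above:
kubectl -n helloworld patch deploy hello-00001-deployment --type strategic -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"queue-proxy","imagePullPolicy":"IfNotPresent"}]}}}}'
Strategic merge patches match containers by name, so only queue-proxy is touched.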
That solves the problem:
% kubectl -n helloworld get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
hello-00001-deployment 1/1 1 1 3m19s
But then another problem shows up:
% kubectl get ksvc -n helloworld
NAME URL LATESTCREATED LATESTREADY READY REASON
hello http://hello.helloworld.example.com hello-00001 hello-00001 Unknown IngressNotConfigured
The Kourier service type is LoadBalancer; change it to NodePort:
% kubectl get svc -n kourier-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kourier LoadBalancer 10.108.41.244 localhost 80:31859/TCP,443:30753/TCP 143m
kourier-internal ClusterIP 10.96.62.112 <none> 80/TCP 143m
% kubectl --namespace kourier-system edit service kourier
service/kourier edited
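Equivalently, as a non-interactive sketch (the already-allocated node ports are kept):
kubectl -n kourier-system patch service kourier --type merge -p '{"spec":{"type":"NodePort"}}'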
Check:
% kubectl --namespace kourier-system get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kourier NodePort 10.108.41.244 <none> 80:31859/TCP,443:30753/TCP 3h13m
kourier-internal ClusterIP 10.96.62.112 <none> 80/TCP 3h13m
The problem is still there; how it gets solved will have to wait for the next installment.