1 Overview
I would guess that most people who first touch K8S come from an ops background, and ops folks generally write Shell far more fluently than Java programmers do. But when big data meets K8S, it is a real waste of time for data engineers to keep typing kubectl commands one by one. While learning, I create a lot of temporary Pods; once a test finishes, those Pods are useless, and Pods whose Status is Error or Completed are no longer anything I need to study, so I want to delete them and keep the output of kubectl get pods short. A simple way to do that is to wrap the display of the various statuses in a few aliases.
2 Examples
Below are two aliases I built with grep and awk; feel free to use them as a reference.
alias getComplete="kubectl get pods | grep Completed | awk -F ' ' '{print \$1}'"
alias getError="kubectl get pods | grep Error | awk -F ' ' '{print \$1}'"
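One caveat: grep matches anywhere in the line, so a Pod whose name happens to contain "Error" would also be caught. A slightly stricter variant (a sketch, assuming the default kubectl get pods layout where STATUS is the third column):

alias getComplete="kubectl get pods --no-headers | awk '\$3 == \"Completed\" {print \$1}'"
alias getError="kubectl get pods --no-headers | awk '\$3 == \"Error\" {print \$1}'"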
If you are not familiar with grep and awk, please resist the urge to Baidu or Google them every time: that breeds dependence, where you search each time you need them and forget again a few days later. My suggestion is to read the command's manual directly. As an example, here is -F from awk.
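Opening the manual is itself a one-liner; the excerpt below comes straight from it:

man awk
man grep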
awk
NAME
awk - pattern-directed scanning and processing language
Note how awk is invoked:
SYNOPSIS
awk [ -F fs ] [ -v var=value ] [ 'prog' | -f progfile ] [ file ... ]
Read the manual closely: what does -F do here? It defines the field separator, and it even supports regular expressions.
DESCRIPTION
Awk scans each input file for lines that match any of a set of patterns specified literally in prog or in one or more files specified as -f progfile. With each pattern there can be an associated action that will be performed when a line of a file matches the pattern. Each line is matched against the pattern portion of every pattern-action statement; the associated action is performed for each matched pattern. The file name - means the standard input. Any file of the form var=value is treated as an assignment, not a filename, and is executed at the time it would have been opened if it were a filename. The option -v followed by var=value is an assignment to be done before prog is executed; any number of -v options may be present. The -F fs option defines the input field separator to be the regular expression fs.
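To make that -F description concrete, here are two throwaway one-liners (the input strings are made up):

echo "a:b:c" | awk -F ':' '{print $2}'                 # prints "b"
echo "one11two222three" | awk -F '[0-9]+' '{print $3}' # fs as a regex; prints "three"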
Once we have these two aliases, we can add them to .bash_profile, and from then on all it takes is the alias name.
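A minimal sketch of that step, assuming your shell actually reads ~/.bash_profile at startup:

cat >> ~/.bash_profile <<'EOF'
alias getComplete="kubectl get pods | grep Completed | awk -F ' ' '{print \$1}'"
alias getError="kubectl get pods | grep Error | awk -F ' ' '{print \$1}'"
EOF
source ~/.bash_profile   # reload so the aliases take effect in the current session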
➜ ~ getError
spark-pi-37d1f76b946d7c0f-driver
➜ ~ getComplete
group-by-test-1560763907118-driver
hdfs-test-driver
spark-driver-2.3
spark-hdfs-1561689711995-driver
spark-hdfs-1561689794687-driver
spark-hdfs-1561689834591-driver
spark-hdfs-1561689875798-driver
spark-hdfs-1561690011058-driver
spark-hdfs-1561690211210-driver
spark-hdfs-1561691706756-driver
spark-hdfs-1561700636764-driver
spark-pi-064dbc6e21463c7cb72a82f8b9d0c1ab-driver
spark-pi-1e4bae6b95fe78d9-driver
spark-pi-driver
Then, say you want to delete the Pods in some state that you no longer need to study:
➜ ~ getError | xargs kubectl delete pods
pod "spark-pi-37d1f76b946d7c0f-driver" deleted
➜ ~ getComplete | xargs kubectl delete pods
pod "group-by-test-1560763907118-driver" deleted
pod "hdfs-test-driver" deleted
pod "spark-driver-2.3" deleted
pod "spark-hdfs-1561689711995-driver" deleted
pod "spark-hdfs-1561689794687-driver" deleted
pod "spark-hdfs-1561689834591-driver" deleted
pod "spark-hdfs-1561689875798-driver" deleted
pod "spark-hdfs-1561690011058-driver" deleted
pod "spark-hdfs-1561690211210-driver" deleted
pod "spark-hdfs-1561691706756-driver" deleted
pod "spark-hdfs-1561700636764-driver" deleted
pod "spark-pi-064dbc6e21463c7cb72a82f8b9d0c1ab-driver" deleted
pod "spark-pi-1e4bae6b95fe78d9-driver" deleted
pod "spark-pi-driver" deleted
3 Summary
After deleting a pile of useless Pods, everything is much cleaner at a glance. You could also delete them through the dashboard, but that means clicking them one by one, which is inefficient. Writing a few general-purpose aliases, or going a step further and writing a shell script that cleans up periodically, is even better.
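As a sketch of that last idea (the script name, the assumption that STATUS is the third column of kubectl get pods, and the cron path are all illustrative):

#!/usr/bin/env bash
# cleanup-pods.sh -- delete Pods that ended up in Error or Completed.
set -euo pipefail

for status in Error Completed; do
  kubectl get pods --no-headers \
    | awk -v s="$status" '$3 == s {print $1}' \
    | xargs -r kubectl delete pods   # GNU xargs: -r skips the call when nothing matched; BSD xargs skips empty input by default
done

Scheduled hourly via crontab -e, for example:

0 * * * * /path/to/cleanup-pods.sh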