Configuration for creating a Flink job with the Flink Kubernetes Operator and shipping the Flink logs to Elasticsearch through a sidecar container:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
spec:
  image: xiaozhch5/flink-sql-submit:hudi-0.12-juicefs
  flinkVersion: v1_15
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      serviceAccount: flink
      containers:
        # Do not change the main container name
        - name: flink-main-container
          volumeMounts:
            - mountPath: /opt/flink/log
              name: flink-logs
            - mountPath: /opt/hadoop/etc/hadoop/
              name: core-site
        # Sample sidecar container
        - name: fluentbit
          image: fluent/fluent-bit:1.8.12-debug
          command: [ 'sh', '-c', '/fluent-bit/bin/fluent-bit -i tail -p path=/flink-logs/*.log -p multiline.parser=java -o es -p Host=10.104.54.7 -p Port=9200 -p Index=k8s-flink-sql-test -p tls=on -p tls.verify=off -p Suppress_Type_Name=On -p HTTP_User=elastic -p HTTP_Passwd=10oE32VF480h32kKd9aRSVJX ' ]
          volumeMounts:
            - mountPath: /flink-logs
              name: flink-logs
      volumes:
        - name: flink-logs
          emptyDir: { }
        - name: core-site
          configMap:
            name: core-site
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/lib/flink-sql-submit-1.0.jar
    args: ["-f", "s3://flink-tasks/k8s-flink-sql-test.sql", "-m", "streaming", "-e", "http://192.168.1.2:9000", "-a", "PSBZMLL1NXZYCX55QMBI", "-s", "CNACTHv4 fPHvYT7gwaKCyWR7K96zHXNU f9yccJ"]
    parallelism: 2
    upgradeMode: stateless
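A minimal way to try the manifest above, assuming the Flink Kubernetes Operator is already installed in the cluster and the manifest is saved as basic-example.yaml (the file name and the pod name below are illustrative placeholders, not values from the original article):

# Submit the FlinkDeployment
kubectl apply -f basic-example.yaml

# Confirm that the JobManager/TaskManager pods start with both containers (flink-main-container and fluentbit)
kubectl get pods | grep basic-example

# Tail the fluent-bit sidecar to verify that log records are being shipped to Elasticsearch
# (replace the pod name with an actual one from the previous command)
kubectl logs basic-example-taskmanager-1-1 -c fluentbit -f

Because the Flink main container and the fluent-bit sidecar share the flink-logs emptyDir volume, whatever Flink writes under /opt/flink/log is visible to fluent-bit at /flink-logs and forwarded to the k8s-flink-sql-test index.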
This article is an original post by 「xiaozhch5」, blogger at 从大数据到人工智能 (From Big Data to AI), licensed under CC 4.0 BY-SA. When reposting, please include the original source link and this notice.
Original link: https://cloud.tencent.com/developer/article/2143676