Compiling and Installing Flink 1.8 from Source

2019-06-17 15:59:36

Broadly speaking, there are two ways to install Flink:

(1) Download a prebuilt binary package for a specific version from the official Flink website

Pros: the binaries are built by the project itself and are ready to use after download

Cons: you are limited to the version combinations that are published

(2) Download the source code and compile it yourself

Pros: you can build against any Hadoop version you need

Cons: compilation takes a long time and may fail

This article walks through the source-compilation route. Download the latest release-1.8 source directly from GitHub, or pull it with git clone:

git clone -b release-1.8.0 https://github.com/apache/flink.git
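
If you prefer not to clone the whole repository, the source tarball should also be available from the Apache archive (the URL below assumes the standard Apache release layout):

wget https://archive.apache.org/dist/flink/flink-1.8.0/flink-1.8.0-src.tgz
tar -xzf flink-1.8.0-src.tgz
cd flink-1.8.0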

After downloading, extract it if necessary, enter the source directory, and run:

# (1) build everything, skipping tests (add -Dhadoop.version to target a specific Hadoop release)
mvn clean install -DskipTests
mvn clean install -DskipTests -Dhadoop.version=2.6.1
# (2) switch to the distribution module
cd flink-dist
# (3) build the distribution
mvn clean install
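
As an aside, recent Flink checkouts also define a fast Maven profile that skips QA plugins such as Checkstyle and RAT, which can shave a lot off the build time; verify that your release-1.8 checkout actually supports it before relying on it:

mvn clean install -DskipTests -Dfast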

The JDK, Scala, and Maven versions used here:

Scala version:

☁  ~  scala -version
Scala code runner version 2.12.8 -- Copyright 2002-2018, LAMP/EPFL and Lightbend, Inc.

Java version:

☁  ~  java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)

Maven version:

☁  ~  mvn -v
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_112, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "mac os x", version: "10.14.2", arch: "x86_64", family: "mac"

The build took about an hour and a half; the final result is shown below. If the build fails, which is usually caused by dependency downloads timing out, try again over a VPN or proxy:

[INFO] Reactor Summary:
[INFO]
[INFO] force-shading ...................................... SUCCESS [ 52.300 s]
[INFO] flink .............................................. SUCCESS [02:57 min]
[INFO] flink-annotations .................................. SUCCESS [ 19.559 s]
[INFO] flink-shaded-hadoop ................................ SUCCESS [ 0.113 s]
[INFO] flink-shaded-hadoop2 ............................... SUCCESS [ 48.438 s]
[INFO] flink-shaded-hadoop2-uber .......................... SUCCESS [ 15.172 s]
[INFO] flink-shaded-yarn-tests ............................ SUCCESS [ 48.022 s]
[INFO] flink-shaded-curator ............................... SUCCESS [ 32.725 s]
[INFO] flink-metrics ...................................... SUCCESS [ 0.088 s]
[INFO] flink-metrics-core ................................. SUCCESS [ 18.786 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [ 0.094 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [ 0.627 s]
[INFO] flink-core ......................................... SUCCESS [01:34 min]
[INFO] flink-java ......................................... SUCCESS [ 17.522 s]
[INFO] flink-queryable-state .............................. SUCCESS [ 0.080 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [01:14 min]
[INFO] flink-filesystems .................................. SUCCESS [ 0.105 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [ 0.952 s]
[INFO] flink-runtime ...................................... SUCCESS [08:37 min]
[INFO] flink-scala ........................................ SUCCESS [ 45.345 s]
[INFO] flink-mapr-fs ...................................... SUCCESS [07:01 min]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [01:00 min]
[INFO] flink-s3-fs-base ................................... SUCCESS [ 26.478 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 9.955 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [01:00 min]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [01:16 min]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [ 41.278 s]
[INFO] flink-optimizer .................................... SUCCESS [ 15.992 s]
[INFO] flink-clients ...................................... SUCCESS [ 1.637 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 23.951 s]
[INFO] flink-test-utils ................................... SUCCESS [01:21 min]
[INFO] flink-runtime-web .................................. SUCCESS [ 1.382 s]
[INFO] flink-examples ..................................... SUCCESS [ 0.111 s]
[INFO] flink-examples-batch ............................... SUCCESS [ 24.238 s]
[INFO] flink-connectors ................................... SUCCESS [ 0.066 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 6.666 s]
[INFO] flink-state-backends ............................... SUCCESS [ 0.070 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [08:37 min]
[INFO] flink-tests ........................................ SUCCESS [ 37.243 s]
[INFO] flink-streaming-scala .............................. SUCCESS [ 39.008 s]
[INFO] flink-table ........................................ SUCCESS [ 0.092 s]
[INFO] flink-table-common ................................. SUCCESS [ 0.561 s]
[INFO] flink-table-api-java ............................... SUCCESS [ 0.240 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [ 0.311 s]
[INFO] flink-libraries .................................... SUCCESS [ 0.058 s]
[INFO] flink-cep .......................................... SUCCESS [ 3.094 s]
[INFO] flink-table-planner ................................ SUCCESS [01:55 min]
[INFO] flink-orc .......................................... SUCCESS [ 20.472 s]
[INFO] flink-jdbc ......................................... SUCCESS [ 40.327 s]
[INFO] flink-hbase ........................................ SUCCESS [02:00 min]
[INFO] flink-hcatalog ..................................... SUCCESS [ 28.816 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [ 0.353 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 15.782 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [ 17.500 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [ 2.743 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [ 25.108 s]
[INFO] flink-formats ...................................... SUCCESS [ 0.062 s]
[INFO] flink-json ......................................... SUCCESS [ 0.410 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [03:00 min]
[INFO] flink-connector-elasticsearch ...................... SUCCESS [ 9.112 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [02:19 min]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 41.034 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [ 32.790 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [ 5.997 s]
[INFO] flink-connector-twitter ............................ SUCCESS [ 3.643 s]
[INFO] flink-connector-nifi ............................... SUCCESS [ 16.823 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [ 34.173 s]
[INFO] flink-avro ......................................... SUCCESS [ 17.478 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [ 1.061 s]
[INFO] flink-connector-kafka .............................. SUCCESS [ 27.018 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [ 5.738 s]
[INFO] flink-sql-connector-kafka-0.9 ...................... SUCCESS [ 0.336 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [ 0.414 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [ 0.521 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [ 0.644 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [ 40.725 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [ 20.910 s]
[INFO] flink-parquet ...................................... SUCCESS [ 19.917 s]
[INFO] flink-sequence-file ................................ SUCCESS [ 0.339 s]
[INFO] flink-csv .......................................... SUCCESS [ 0.422 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 22.610 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 0.186 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 0.310 s]
[INFO] flink-examples-table ............................... SUCCESS [ 8.932 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [ 0.140 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [ 0.623 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [ 0.411 s]
[INFO] flink-container .................................... SUCCESS [ 0.421 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [ 0.795 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [ 0.059 s]
[INFO] flink-cli-test ..................................... SUCCESS [ 0.224 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [ 0.214 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [ 0.193 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [ 0.255 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [ 1.553 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [ 0.257 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [ 0.356 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [ 0.203 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 6.631 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [ 0.825 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [ 1.535 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [ 0.381 s]
[INFO] flink-elasticsearch1-test .......................... SUCCESS [ 2.830 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [ 3.798 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [ 4.375 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [ 2.903 s]
[INFO] flink-quickstart ................................... SUCCESS [ 0.926 s]
[INFO] flink-quickstart-java .............................. SUCCESS [ 17.374 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [ 0.177 s]
[INFO] flink-quickstart-test .............................. SUCCESS [ 0.358 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [ 1.655 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [ 3.434 s]
[INFO] flink-sql-client-test .............................. SUCCESS [ 8.459 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [ 0.199 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [ 0.844 s]
[INFO] flink-e2e-test-utils ............................... SUCCESS [ 19.787 s]
[INFO] flink-streaming-python ............................. SUCCESS [12:56 min]
[INFO] flink-mesos ........................................ SUCCESS [02:00 min]
[INFO] flink-yarn ......................................... SUCCESS [ 18.645 s]
[INFO] flink-gelly ........................................ SUCCESS [ 3.626 s]
[INFO] flink-gelly-scala .................................. SUCCESS [ 13.349 s]
[INFO] flink-gelly-examples ............................... SUCCESS [ 11.611 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [ 3.773 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [ 1.206 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [ 32.311 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [ 9.836 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [ 0.255 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [ 0.313 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [ 0.237 s]
[INFO] flink-python ....................................... SUCCESS [ 0.678 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 9.680 s]
[INFO] flink-ml ........................................... SUCCESS [10:02 min]
[INFO] flink-ml-uber ...................................... SUCCESS [ 3.454 s]
[INFO] flink-table-uber ................................... SUCCESS [ 1.659 s]
[INFO] flink-sql-client ................................... SUCCESS [ 9.779 s]
[INFO] flink-scala-shell .................................. SUCCESS [ 10.039 s]
[INFO] flink-dist ......................................... SUCCESS [01:46 min]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [ 0.514 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [ 0.256 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [ 0.196 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 6.269 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [ 0.332 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 5.518 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 5.507 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 5.374 s]
[INFO] flink-contrib ...................................... SUCCESS [ 0.062 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [ 14.400 s]
[INFO] flink-yarn-tests ................................... SUCCESS [ 34.073 s]
[INFO] flink-fs-tests ..................................... SUCCESS [ 0.451 s]
[INFO] flink-docs ......................................... SUCCESS [ 41.917 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:33 h
[INFO] Finished at: 2019-06-04T23:00:08+08:00
[INFO] Final Memory: 435M/2373M
[INFO] ------------------------------------------------------------------------

After the build finishes, go back to the root of the source tree and look at the directory layout:

LICENSE
NOTICE
NOTICE-binary
README.md
build-target -> /Users/search/Downloads/flink-release-1.8.0/flink-dist/target/flink-1.8.0-bin/flink-1.8.0
docs
flink-annotations
flink-clients
flink-connectors
flink-container
flink-contrib
flink-core
flink-dist
flink-docs
flink-end-to-end-tests
flink-examples
flink-filesystems
flink-formats
flink-fs-tests
flink-java
flink-jepsen
flink-libraries
flink-mesos
flink-metrics
flink-optimizer
flink-queryable-state
flink-quickstart
flink-runtime
flink-runtime-web
flink-scala
flink-scala-shell
flink-shaded-curator
flink-shaded-hadoop
flink-state-backends
flink-streaming-java
flink-streaming-scala
flink-table
flink-test-utils-parent
flink-tests
flink-yarn
flink-yarn-tests
licenses
licenses-binary
pom.xml
target
tools

Note that it contains a symlink named build-target.
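
build-target points at the assembled distribution under flink-dist/target. Its contents should be the familiar Flink binary layout, roughly like this (listed from memory, so treat as approximate):

ls build-target
# bin  conf  examples  lib  licenses  log  opt  README.txt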

Everything under that directory is exactly what you need for deployment: archive it and you can deploy it standalone or onto a Hadoop/YARN cluster.
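
A minimal packaging sketch, assuming the paths from the listing above and an illustrative target host:

# -h makes tar follow the build-target symlink and archive the real files
tar -chzf flink-1.8.0-bin.tar.gz build-target
scp flink-1.8.0-bin.tar.gz user@cluster-node:/opt/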

To bring Flink up on YARN in session mode, run:

bin/yarn-session.sh
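
yarn-session.sh also accepts sizing flags; the exact set varies between Flink versions, so check bin/yarn-session.sh -h first. A typical invocation for Flink 1.8 (values illustrative) might be:

# -n TaskManager containers, -jm/-tm JobManager/TaskManager memory in MB, -s slots per TaskManager, -d detached
bin/yarn-session.sh -n 2 -jm 1024 -tm 2048 -s 2 -d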

If it fails with an exception like:

java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/exceptions/YarnException

or:

java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory

then most likely HADOOP_CLASSPATH is not configured. You can check it with the command below; if you still get a missing-class error after setting it, the variable probably does not cover enough jar paths:

echo $HADOOP_CLASSPATH

If it is empty, add it to your shell profile:

vi ~/.bash_profile
# after editing, reload the file so the changes take effect
source ~/.bash_profile
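
Alternatively, if the hadoop command itself is on your PATH, you can let Hadoop compute the classpath instead of maintaining the jar list by hand, which is the form the Flink documentation suggests:

export HADOOP_CLASSPATH=`hadoop classpath`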

A general-purpose template looks like this:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
export PATH=.:$PATH
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"

# jdk
export JAVA_HOME=/home/search/jdk1.8.0_112
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH

# maven
export MAVEN_HOME=/home/search/apache-maven-3.3.9
export CLASSPATH=$CLASSPATH:$MAVEN_HOME/lib
export PATH=$PATH:$MAVEN_HOME/bin

# ant
export ANT_HOME=/home/search/apache-ant-1.9.7
export CLASSPATH=$CLASSPATH:$ANT_HOME/lib
export PATH=$PATH:$ANT_HOME/bin

# hadoop
export HADOOP_HOME=/home/search/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export CLASSPATH=.:$CLASSPATH:$HADOOP_COMMON_HOME:$HADOOP_COMMON_HOME/lib:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_HDFS_HOME
export HADOOP_CLASSPATH=$HADOOP_COMMON_HOME/lib:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/tools/*:$HADOOP_HOME/share/hadoop/httpfs/*:$HADOOP_HOME/share/hadoop/kms/*:$HADOOP_HOME/share/hadoop/common/lib/*

# Hbase
export HBASE_HOME=/home/search/hbase
export CLASSPATH=$CLASSPATH:$HBASE_HOME/lib
export PATH=$HBASE_HOME/bin:$PATH

# Pig
export PIG_HOME=/home/search/pig
export PIG_CLASSPATH=$PIG_HOME/lib:$HADOOP_HOME/etc/hadoop
export PATH=/ROOT/server/bigdata/pig/bin:$PATH

# Zookeeper
export ZOOKEEPER_HOME=/home/search/zookeeper
export CLASSPATH=.:$ZOOKEEPER_HOME/lib
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Hive
export HIVE_HOME=/home/search/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib
export PATH=$PATH:$HIVE_HOME/bin:$HIVE_HOME/conf
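
After saving the template, a quick sanity check (a minimal sketch) confirms the variables actually resolve:

source ~/.bash_profile
# each classpath entry should point at an existing directory or jar glob
echo "$HADOOP_CLASSPATH" | tr ':' '\n' | head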

Once the session starts successfully, you can check its status through the web UI whose address is printed in the startup log, for example:

http://ip:39369/#/running-jobs

Next, test a WordCount example running on YARN. Note that the output directory must not already exist; delete it first if it does. Before submitting, put the input file on HDFS; a preparation sketch follows (paths match the submit commands below):
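
# LICENSE-2.0.txt here is just any local text file to count
hadoop fs -mkdir -p /tmp/input
hadoop fs -put LICENSE-2.0.txt /tmp/input/
# remove any previous output so the job does not fail on an existing path
hadoop fs -rm -r /tmp/output/result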

(1) Run inside a shared YARN session; well suited to development and testing:

# start a yarn-session
bin/yarn-session.sh
# or attach to an already-running session by application id
bin/yarn-session.sh -id application_1559664144394_0002
# submit the job
bin/flink run ./examples/batch/WordCount.jar -input hdfs://h1:8020/tmp/input/LICENSE-2.0.txt -output hdfs://h1:8020/tmp/output/result
# inspect the result
hadoop fs -cat /tmp/output/result

(2) Run as a dedicated per-job YARN cluster; better suited to production. Again, delete the output directory first if it already exists:

# -m yarn-cluster starts a dedicated cluster for this job;
# -yn = number of TaskManager containers, -yjm/-ytm = JobManager/TaskManager memory in MB
bin/flink run -m yarn-cluster -yn 2 -yjm 1024 -ytm 1024 ./examples/batch/WordCount.jar -input hdfs://h1:8020/tmp/input/LICENSE-2.0.txt -output hdfs://h1:8020/tmp/output/result
