I. The Most Popular Big Data Framework: Spark
- YARN environment setup
- Spark History Server and the YARN MapReduce History Server
- Submitting jobs to YARN with spark-submit
II. Deploying Hadoop YARN with Docker
Deployment result:
- One namenode node, running:
namenode, resourcemanager, JobHistoryServer (MapReduce), HistoryServer (Spark)
- Two datanode nodes, running:
datanode, nodemanager
- The host Mac, running:
the Docker host (VirtualBox), IntelliJ IDEA, the Spark client, and the HDFS client
III. Network Layout
- mac 192.168.99.1
- namenode 172.18.0.11
- datanode1,datanode2 172.18.0.13 172.18.0.14
- virtualbox bridge 192.168.99.100
Add a route from 192.168.99.1 to the 172.18.0.0 subnet:
sudo route -n add 172.18.0.0/24 192.168.99.100
Create a 172.18.0.0/16 network in Docker named hadoopnet; Docker requires the network to exist before a container can be assigned a fixed IP:
docker network create --subnet=172.18.0.0/16 hadoopnet
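To sanity-check the route and the network, a few verification commands can help (these are my addition, not part of the original write-up):

# confirm the user-defined network and its subnet exist
docker network inspect hadoopnet

# confirm the route was added on the Mac
netstat -rn | grep 172.18

# once the containers are running (next section), they should answer pings from the Mac
ping -c 1 172.18.0.11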
IV. Starting the Docker Containers
Local folders: the prepared workspace
hadoop folder
Note: each folder contains a startup script and a data directory that is mounted as the shared HDFS storage volume.
etc/hadoop
Note: the local hadoop directory is mounted into the containers as the hadoop/etc/hadoop configuration directory. An approximate layout is sketched below.
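The original does not show the exact tree, but judging from the volume mounts in the docker run commands below, the workspace looks roughly like this (names not visible in those commands, such as the start.sh script, are assumptions):

/Users/wangsen/hadoop/
├── namenode/                       # working directory for the namenode container
│   ├── start.sh                    # hypothetical startup script wrapping docker run
│   ├── data/                       # mounted as /opt/tmp (HDFS storage)
│   └── spark-2.1.1-bin-hadoop2.7/  # mounted as /opt/spark (namenode only)
├── datanode1/                      # start.sh and data/ for datanode1
├── datanode2/                      # start.sh and data/ for datanode2
└── datanode/hadoop/                # shared etc/hadoop configuration mounted into every container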
1. NameNode
docker run --name namenode \
  --hostname namenode \
  --network hadoopnet \
  --ip 172.18.0.11 \
  -d \
  -v $PWD/data:/opt/tmp \
  -v /Users/wangsen/hadoop/datanode/hadoop:/opt/hadoop-2.7.3/etc/hadoop \
  -v $PWD/spark-2.1.1-bin-hadoop2.7:/opt/spark \
  --rm dbp/hadoop
dbp/hadoop is the name of the Docker image. Three shared volumes (folders) are mounted:
- /opt/tmp: the HDFS storage path
- etc/hadoop: the Hadoop configuration path
- Spark is mounted on the master node only
Spark was not installed when the image was created: Hadoop was baked into the image when dbp/hadoop was built from its Dockerfile, while Spark is supplied through a volume mount. Alternatively, you can commit the container, or rebuild from the Dockerfile, to produce an image that already contains Spark.
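If you would rather bake Spark into the image, as the paragraph above suggests, two possibilities are sketched here (these are my own hedged examples, not the author's commands):

# rebuild approach: add a line like this to the Dockerfile in the appendix,
# with the Spark tarball placed in the build context
# ADD spark-2.1.1-bin-hadoop2.7.tgz /opt/

# commit approach: copy Spark into a running container (one that does not
# already mount /opt/spark) and save it as a new image
docker cp spark-2.1.1-bin-hadoop2.7 datanode1:/opt/spark
docker commit datanode1 dbp/hadoop-spark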
2. DataNode (datanode1, datanode2)
docker run --name datanode1 --hostname datanode1 --network hadoopnet --ip 172.18.0.13 -d \
  -v $PWD/data:/opt/tmp \
  -v /Users/wangsen/hadoop/datanode/hadoop:/opt/hadoop-2.7.3/etc/hadoop \
  --rm dbp/hadoop

docker run --name datanode2 --hostname datanode2 --network hadoopnet --ip 172.18.0.14 -d \
  -v $PWD/data:/opt/tmp \
  -v /Users/wangsen/hadoop/datanode/hadoop:/opt/hadoop-2.7.3/etc/hadoop \
  --rm dbp/hadoop
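At this point all three containers should be up. A quick check (my addition, assuming the names and network above):

# all three containers should show as running
docker ps --format '{{.Names}}\t{{.Status}}'

# containers on the user-defined network can resolve each other by name,
# which the Hadoop configuration below relies on
docker exec namenode ping -c 1 datanode1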
V. Starting HDFS and YARN
- etc/hadoop/core-site.xml
## Configure the HDFS path
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:9000</value>
</property>
- etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/tmp</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/tmp</value>
</property>
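One step that is easy to miss and is not spelled out in the original text (so treat the exact invocation as an assumption): before the very first start, the NameNode metadata directory configured above has to be formatted, for example:

docker exec namenode /opt/hadoop-2.7.3/bin/hdfs namenode -format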
- etc/hadoop/yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>namenode:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>namenode:18030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>namenode:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>namenode:18141</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>namenode:18088</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://namenode:19888/jobhistory/logs</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
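With yarn.log-aggregation-enable set to true as above, the aggregated container logs of a finished application can also be fetched from the command line (a usage sketch; the application id is whatever YARN assigns to your job):

/opt/hadoop-2.7.3/bin/yarn logs -applicationId <application_id>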
- spark/conf/spark-env.sh
export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
- spark/conf/spark-defaults.conf
## Configure the Spark UI page; Spark job run results are viewed through the history service on the YARN node
## hdfs:///tmp/spark/events is a path on HDFS where Spark run information (event logs) is stored
spark.master=local
spark.yarn.historyServer.address=namenode:18080
spark.history.ui.port=18080
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///tmp/spark/events
spark.history.fs.logDirectory=hdfs:///tmp/spark/events
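Note that the event log directory must already exist on HDFS before the first application runs, otherwise the submit fails. Assuming the path above, it can be created with:

/opt/hadoop-2.7.3/bin/hdfs dfs -mkdir -p /tmp/spark/events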
- etc/hadoop/hadoop-env.sh
Modify JAVA_HOME here and set it to the absolute path of the Java installation.
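In this image the JDK lives at the path set by the Dockerfile in the appendix, so the line would read:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64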
Startup order
- HDFS: on namenode run sbin/hadoop-daemon.sh start namenode; on each datanode run sbin/hadoop-daemon.sh start datanode (passwordless SSH login is already set up; Docker shares the public_key file).
- YARN: on namenode run sbin/yarn-daemon.sh start resourcemanager; on each datanode run sbin/yarn-daemon.sh start nodemanager.
- History servers: on namenode run sbin/mr-jobhistory-daemon.sh start historyserver, then spark/sbin/start-history-server.sh.
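With HDFS, YARN, and both history servers running, a job can be submitted to the cluster. A minimal sketch, assuming the SparkPi example bundled with Spark 2.1.1 and the mounts described above (run inside the namenode container, or from the Mac's Spark client with the same HADOOP_CONF_DIR):

cd /opt/spark
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.1.1.jar 100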
VI. Browsing the Spark History Page
http://namenode:18080
(screenshot: Spark history UI)
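The URL above assumes the Mac resolves the hostname namenode, for example through an /etc/hosts entry pointing namenode at 172.18.0.11 (that mapping is my assumption, not stated earlier). The other web UIs configured in yarn-site.xml are reachable the same way:

http://namenode:18088   (YARN ResourceManager web UI)
http://namenode:19888   (MapReduce JobHistory Server)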
Appendix: Dockerfile
If you want to follow the author's approach and build your own Spark Docker cluster, you can start by creating the image from this Dockerfile.
FROM ubuntu:16.04
MAINTAINER wsn

# Base packages: JDK 8, editor, basic network tools, and an SSH server
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y vim
RUN apt install -y net-tools
RUN apt install -y iputils-ping
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:root' |chpasswd

# Allow root SSH login and disable strict host-key checking between containers
RUN sed -ri 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN sed -ri 's/#\s*StrictHostKeyChecking ask/StrictHostKeyChecking no/' /etc/ssh/ssh_config

# Generate a root key pair and authorize it, so containers built from this
# image can SSH to each other without a password
RUN mkdir /root/.ssh
RUN ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
RUN cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

# Java and Hadoop environment variables
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JRE_HOME /usr/lib/jvm/java-8-openjdk-amd64/jre
ENV PATH /opt/hadoop-2.7.3/bin:/opt/hadoop-2.7.3/sbin:/usr/lib/jvm/java-8-openjdk-amd64/bin:$PATH
ENV CLASSPATH ./:/usr/lib/jvm/java-8-openjdk-amd64/lib:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib

# Unpack Hadoop into the image; the tarball must be in the build context
ADD hadoop-2.7.3.tar.gz /opt/

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
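To build the image, place hadoop-2.7.3.tar.gz next to the Dockerfile (the ADD instruction pulls it from the build context) and run:

docker build -t dbp/hadoop .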