Contents
- Setting up the three virtual machines
- Passwordless SSH login
- JDK and Hadoop
- Startup
- Running on the distributed cluster
Setting up the three virtual machines
Clone three virtual machines, then adjust each one as follows.
# Change the hostname
vi /etc/hostname
hadoop01
# Map hostnames to IPs on every node (vi /etc/hosts)
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.210.121 hadoop01
192.168.210.122 hadoop02
192.168.210.123 hadoop03
# Assign a static IP on each node
vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
# ifcfg file for hadoop01; change IPADDR accordingly on hadoop02/hadoop03
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
NAME="eno16777736"
UUID="7490940a-d84f-4b04-b1dc-a00d63563bae"
DEVICE="eno16777736"
ONBOOT="yes"
IPADDR="192.168.210.121"
PREFIX="24"
GATEWAY="192.168.210.2"
DNS1="8.8.8.8"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_PRIVACY="no"
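The same name-to-IP mappings must be present on all three nodes. A minimal sketch that builds the entries from this article into a local file for review and prints the copy commands; the `hosts.cluster` filename and the `/tmp` staging path are illustrative, and `RUN=echo` makes it a dry run:

```shell
#!/bin/sh
# Sketch: the /etc/hosts mappings from this article must exist on all three
# nodes. Build them into ./hosts.cluster for review, then print the copy
# commands. RUN=echo (default) is a dry run; set RUN= to execute for real.
RUN=${RUN:-echo}
cat > hosts.cluster <<'EOF'
192.168.210.121 hadoop01
192.168.210.122 hadoop02
192.168.210.123 hadoop03
EOF
for node in hadoop02 hadoop03; do
  # Stage under /tmp on the remote node, then append to /etc/hosts as root.
  $RUN scp hosts.cluster "$node:/tmp/hosts.cluster"
done
```

After reviewing the printed commands, rerun with `RUN=` to perform the copies.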
Passwordless SSH login
ssh-keygen -t rsa
Press Enter at every prompt to generate a key pair.
Copy the public key to a remote host:
ssh-copy-id slave1
Log in to the remote host:
ssh slave1
Return to the local machine:
exit
Configure passwordless login on each of the three machines, and remember to copy the key to the local machine itself as well.
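The steps above can be wrapped in a small loop so the key reaches every node, including the local one. A sketch assuming this article's hostnames; `RUN=echo` (the default here) only prints the commands, and the `key_plan.txt` record file is illustrative:

```shell
#!/bin/sh
# Sketch: push the public key to every node, including this one. Hostnames
# are the article's; RUN=echo (default) only prints the commands, set RUN=
# to actually run them. key_plan.txt keeps a record of what was issued.
RUN=${RUN:-echo}
NODES="hadoop01 hadoop02 hadoop03"
# Generate a passphrase-less key pair once, if none exists yet.
[ -f "$HOME/.ssh/id_rsa" ] || $RUN ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
for node in $NODES; do
  $RUN ssh-copy-id "$node"
done | tee key_plan.txt
```

Run this once on each of the three machines so every node can reach every other without a password.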
JDK and Hadoop
Extract both archives and configure the environment variables (e.g. in /etc/profile):
export JAVA_HOME=/root/software/jdk1.8.0_91
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/root/software/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$PATH
Edit the Hadoop configuration files under /root/software/hadoop-2.6.0-cdh5.7.0/etc/hadoop:
vi hadoop-env.sh
export JAVA_HOME=/root/software/jdk1.8.0_91
vi core-site.xml (some setups use port 8020 here instead of 9000)
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
</configuration>
vi hdfs-site.xml (these paths apply on all three virtual machines)
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/root/data/tmp/dfs</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/root/data/tmp/dfs/data</value>
</property>
</configuration>
vi yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
</configuration>
vi mapred-site.xml (copy it from the template first: cp mapred-site.xml.template mapred-site.xml)
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
vi slaves (the worker nodes)
hadoop01
hadoop02
hadoop03
Distribute the configured Hadoop directory to hadoop02 and hadoop03:
scp -r hadoop-2.6.0-cdh5.7.0 hadoop02:$PWD
scp -r hadoop-2.6.0-cdh5.7.0 hadoop03:$PWD
Note the HDFS temporary directory: it must be created on every node.
data]# mkdir tmp
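The distribution and directory creation can be scripted from hadoop01 in one pass. A sketch assuming the paths and hostnames used in this article; `RUN=echo` (the default) prints the commands instead of executing them, and `dist_plan.txt` is an illustrative record file:

```shell
#!/bin/sh
# Sketch: copy the configured Hadoop directory from hadoop01 to the other
# nodes and create the HDFS data directory everywhere. Paths/hostnames are
# the article's; RUN=echo (default) is a dry run, set RUN= to execute.
RUN=${RUN:-echo}
HADOOP_DIR=/root/software/hadoop-2.6.0-cdh5.7.0
DATA_DIR=/root/data/tmp
{
  $RUN mkdir -p "$DATA_DIR"                       # on hadoop01 itself
  for node in hadoop02 hadoop03; do
    $RUN scp -r "$HADOOP_DIR" "$node:${HADOOP_DIR%/*}"
    $RUN ssh "$node" mkdir -p "$DATA_DIR"
  done
} | tee dist_plan.txt
```

This relies on the passwordless SSH configured earlier; review `dist_plan.txt`, then rerun with `RUN=` to execute.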
Startup
# Format the NameNode (only on hadoop01, and only before the first start)
cd /root/software/hadoop-2.6.0-cdh5.7.0/bin
./hdfs namenode -format
# Start all daemons (HDFS and YARN)
cd /root/software/hadoop-2.6.0-cdh5.7.0/sbin
./start-all.sh
Verify with jps:
hadoop01
3193 NameNode
3284 DataNode
3464 SecondaryNameNode
3705 NodeManager
3610 ResourceManager
hadoop02
3352 NodeManager
3257 DataNode
hadoop03
3352 NodeManager
3257 DataNode
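The listings above can be checked automatically by grepping `jps` output for the daemons each role should show. A sketch; the `check` helper is hypothetical, and the sample string is the hadoop02 output above:

```shell
#!/bin/sh
# Sketch: verify that the expected Hadoop daemons appear in `jps` output.
# Usage: check "<jps output>" DAEMON... ; prints OK when all are present.
check() {
  out=$1; shift
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || { echo "MISSING: $d"; return 1; }
  done
  echo OK
}
# Sample input: the hadoop02 listing above; on a live node pass "$(jps)".
sample="3352 NodeManager
3257 DataNode"
check "$sample" NameNode || true   # hadoop02 is a worker, NameNode is absent
check "$sample" DataNode NodeManager
```

On hadoop01 you would also require NameNode, SecondaryNameNode, and ResourceManager.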
Web UI check: HDFS at http://hadoop01:50070
YARN at http://hadoop01:8088/
Running on the distributed cluster
Deploy the log-analysis job on the distributed cluster.