Environment | OS: CentOS 7 | Java: 1.8 | Hadoop: 2.8.1
[root@master sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
The authenticity of host 'master (192.168.91.10)' can't be established.
RSA key fingerprint is 7e:3f:e7:5b:69:74:e3:0e:87:7b:2b:df:3d:64:b3:1e.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master,192.168.91.10' (RSA) to the list of known hosts.
master: Error: JAVA_HOME is not set and could not be found.
master: Error: JAVA_HOME is not set and could not be found.
slave0: Error: JAVA_HOME is not set and could not be found.
slave1: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 7e:3f:e7:5b:69:74:e3:0e:87:7b:2b:df:3d:64:b3:1e.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-master.out
slave0: Error: JAVA_HOME is not set and could not be found.
slave1: Error: JAVA_HOME is not set and could not be found.
master: Error: JAVA_HOME is not set and could not be found.
Seeing that wall of errors was pretty crushing....
I went through the configuration files one by one: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml were all fine.
The profile file was also already set up (change these paths to your own installation):
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:${HADOOP_HOME}/bin
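A quick sketch of making that change take effect, assuming the exports were added to /etc/profile (as is common on CentOS); source simply reloads the file into the current shell:
[root@master ~]# source /etc/profile
[root@master ~]# echo $HADOOP_HOME
/opt/hadoop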
First, edit this file:
[root@master hadoop]# vi hadoop-env.sh
In the file, find:
export JAVA_HOME=${JAVA_HOME}
and change it to (substitute your own JDK path):
export JAVA_HOME=/usr/jdk1.8
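If you are unsure of the exact JDK path, and since slave0 and slave1 reported the same JAVA_HOME error, something along these lines can help; the scp step is a sketch that assumes the slaves use the same /opt/hadoop layout with the default etc/hadoop config directory:
[root@master hadoop]# readlink -f $(which java)    # e.g. /usr/jdk1.8/bin/java, so JAVA_HOME is /usr/jdk1.8
[root@master hadoop]# scp /opt/hadoop/etc/hadoop/hadoop-env.sh root@slave0:/opt/hadoop/etc/hadoop/
[root@master hadoop]# scp /opt/hadoop/etc/hadoop/hadoop-env.sh root@slave1:/opt/hadoop/etc/hadoop/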
Restart again, and this time startup succeeds. Check the processes: the processes are (still) running (and staying alive).
Error resolved:
[root@master sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-slave1.out
master: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-master.out
slave0: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-slave0.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
resourcemanager running as process 3253. Stop it first.
slave1: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-slave1.out
master: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-master.out
slave0: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-slave0.out
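The line "resourcemanager running as process 3253. Stop it first." just means the ResourceManager survived the earlier failed attempt (it is the same PID 3253 that jps shows below), so the script skipped it. If you prefer a completely clean restart, a sketch using the non-deprecated scripts in the same sbin directory:
[root@master sbin]# ./stop-yarn.sh
[root@master sbin]# ./stop-dfs.sh
[root@master sbin]# ./start-dfs.sh
[root@master sbin]# ./start-yarn.sh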
[root@master sbin]# jps
4176 NodeManager
4289 Jps
3253 ResourceManager
3669 NameNode
3957 SecondaryNameNode
3771 DataNode
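The DataNode and NodeManager daemons on the slaves can be checked the same way. A sketch, assuming passwordless ssh to slave0/slave1 is already set up (the start scripts rely on it anyway); if jps is not on the non-interactive PATH, call it by full path, e.g. /usr/jdk1.8/bin/jps:
[root@master sbin]# ssh slave0 jps
[root@master sbin]# ssh slave1 jps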