A note before anything else: 1. The four files start-dfs.sh, stop-dfs.sh, start-yarn.sh and stop-yarn.sh need to be modified on both master and slave. 2. If you start Hadoop as some other user rather than root, remember to replace root below with that user.
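Since the same edits have to exist on every node, one convenient option is to patch the scripts once on master and copy them out. A minimal sketch, assuming passwordless SSH as root, the /hadoop install path used in this post, and a hypothetical worker list containing just slave1:

#!/usr/bin/env bash
# Hypothetical helper: push the four patched scripts from master to the other nodes.
SCRIPTS="start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh"
for host in slave1; do          # extend this list with your own worker hostnames
  for f in $SCRIPTS; do
    scp "/hadoop/sbin/$f" "root@${host}:/hadoop/sbin/$f"
  done
done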
After formatting HDFS, starting dfs fails with the following errors:
[root@master sbin]# ./start-dfs.sh
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [slave1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
A quick search on Baidu turned up a fellow blogger's FAQ covering exactly this problem, so I followed it and am recording the fix here. The root cause is that Hadoop 3.x refuses to operate HDFS daemons as root unless the corresponding *_USER variables are defined. Reference: https://blog.csdn.net/u013725455/article/details/70147331
Under the /hadoop/sbin directory, add the following variables to the top of both start-dfs.sh and stop-dfs.sh:
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
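A side note on HADOOP_SECURE_DN_USER: Hadoop 3 treats it as a legacy name and prints a deprecation warning at startup (you can see it in the output further down). If you prefer the newer name, a sketch of the same block, assuming you actually need a secure DataNode user at all (otherwise that line can simply be dropped):

#!/usr/bin/env bash
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_SECONDARYNAMENODE_USER=root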
Similarly, add the following to the top of start-yarn.sh and stop-yarn.sh:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
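An alternative worth mentioning: instead of patching the four sbin scripts on every node, these per-daemon user variables can also be defined once in the Hadoop environment file, which the start/stop scripts source. A minimal sketch, assuming the configuration lives under /hadoop/etc/hadoop and everything is run as root:

# appended to /hadoop/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root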
After making the changes, run ./start-dfs.sh again, and this time it succeeds:
[root@master sbin]# ./start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sun Jun  3 03:01:37 CST 2018 from slave1 on pts/2
master: Warning: Permanently added 'master,192.168.43.161' (ECDSA) to the list of known hosts.
Starting datanodes
Last login: Sun Jun  3 04:09:05 CST 2018 on pts/1
Starting secondary namenodes [slave1]
Last login: Sun Jun  3 04:09:08 CST 2018 on pts/1
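The remaining WARNING is just the deprecation notice discussed above and is harmless. To double-check that the daemons actually came up, jps can be run on each node; a quick sanity check, assuming the master/slave1 hostnames from this post (which daemons appear where depends on your cluster layout, e.g. NameNode on master and SecondaryNameNode on slave1 as in the startup output above):

[root@master sbin]# jps
[root@master sbin]# ssh slave1 jps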