1. System environment
CentOS 7.0 x64
192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
2. Pre-installation preparation
2.1 Disable the firewall
# systemctl status firewalld.service     --check the firewall status
# systemctl stop firewalld.service       --stop the firewall
# systemctl disable firewalld.service    --disable the firewall permanently
2.2 Check whether ssh is installed; install it if missing
# systemctl status sshd.service    --check the ssh service status
# yum install openssh-server openssh-clients
2.3 Install vim
# yum -y install vim
2.4 Configure a static IP address
# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
BOOTPROTO="static"
ONBOOT="yes"
IPADDR0="192.168.1.7"
PREFIX0="24"
GATEWAY0="192.168.1.1"
DNS1="61.147.37.1"
DNS2="101.226.4.6"
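For the new address to take effect, the network service usually has to be restarted (these two commands are an added check, not part of the original guide):
# systemctl restart network        --apply the static IP configuration
# ip addr show eno16777736         --verify the interface now carries 192.168.1.7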
2.5 Change the hostname
# vim /etc/sysconfig/network
HOSTNAME=master
# vim /etc/hosts
192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
# hostnamectl set-hostname master    --on CentOS 7 the old way of changing the hostname no longer works
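A quick way to confirm the change (added here as a check, not in the original):
# hostname              --should print master
# hostnamectl status    --shows the static hostname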
2.6 Create the hadoop user
# useradd hadoop    --create a user named hadoop
# passwd hadoop     --set a password for the hadoop user
2.7 Configure passwordless ssh login
-----------The following steps are done on master
# su hadoop    --switch to the hadoop user
$ cd ~         --go to the user's home directory
$ ssh-keygen -t rsa -P ''    --generate a key pair: /home/hadoop/.ssh/id_rsa and /home/hadoop/.ssh/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    --append id_rsa.pub to the authorized keys
$ chmod 600 ~/.ssh/authorized_keys    --fix the permissions
$ su    --switch back to root
# vim /etc/ssh/sshd_config    --edit the ssh configuration
RSAAuthentication yes                      # enable RSA authentication
PubkeyAuthentication yes                   # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys    # path of the authorized keys file
# su hadoop    --switch to the hadoop user
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.8:~/    --copy the public key to all slave machines
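The command above copies the key to 192.168.1.8 only; for the remaining slaves the same copy would be repeated (a sketch, since the original only shows the first slave):
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.9:~/
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.10:~/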
----------The following steps are done on slave1
# su hadoop    --switch to the hadoop user
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys    --append the key to "authorized_keys"
$ chmod 600 ~/.ssh/authorized_keys              --fix the permissions
$ su    --switch back to root
# vim /etc/ssh/sshd_config    --edit the ssh configuration
RSAAuthentication yes                      # enable RSA authentication
PubkeyAuthentication yes                   # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys    # path of the authorized keys file
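After editing sshd_config, sshd has to be restarted before the change applies, and passwordless login can then be verified from master (these two checks are assumptions added here, not steps in the original):
# systemctl restart sshd.service    --run on slave1 so the new sshd settings take effect
$ ssh slave1                        --run as hadoop on master; should log in without asking for a password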
3. Install required software
3.1 Install the JDK
# rpm -ivh jdk-7u67-linux-x64.rpm
Preparing...                ##################################### [100%]
   1:jdk                    ##################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile    --make the change take effect
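A quick check that the JDK is now on the PATH (added here, not part of the original steps):
# java -version    --should report java version "1.7.0_67"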
3.2 Install other required software
# yum install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
3.3 Install ant
# tar zxvf apache-ant-1.9.4-bin.tar.gz
# vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.4
export PATH=$PATH:$ANT_HOME/bin
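After reloading /etc/profile, ant should be callable (a check added here, assuming the archive was unpacked under /usr/local as ANT_HOME suggests):
# source /etc/profile
# ant -version    --should report Apache Ant version 1.9.4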
3.4 Install findbugs
# tar zxvf findbugs-3.0.0.tar.gz
# vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.0
export PATH=$PATH:$FINDBUGS_HOME/bin
3.5 Install protobuf
# tar zxvf protobuf-2.5.0.tar.gz    --must be version 2.5.0, otherwise compiling hadoop will fail
# cd protobuf-2.5.0
# ./configure --prefix=/usr/local
# make && make install
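To confirm that the expected version got installed (a check added here, not in the original):
# protoc --version    --should print libprotoc 2.5.0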
4. Compile the hadoop source code
# tar zxvf hadoop-2.5.0-src.tar.gz
# cd hadoop-2.5.0-src
# mvn package -Pdist,native,docs -DskipTests -Dtar
4.1 Configure the maven central repository (switch to the oschina mirror for faster access)
# vim /usr/share/maven/conf/settings.xml
<mirrors>
    <mirror>
        <id>nexus-osc</id>
        <mirrorOf>*</mirrorOf>
        <name>Nexus osc</name>
        <url>http://maven.oschina.net/content/groups/public/</url>
    </mirror>
</mirrors>

<profiles>
    <profile>
        <id>jdk17</id>
        <activation>
            <activeByDefault>true</activeByDefault>
            <jdk>1.7</jdk>
        </activation>
        <properties>
            <maven.compiler.source>1.7</maven.compiler.source>
            <maven.compiler.target>1.7</maven.compiler.target>
            <maven.compiler.compilerVersion>1.7</maven.compiler.compilerVersion>
        </properties>
        <repositories>
            <repository>
                <id>nexus</id>
                <name>local private nexus</name>
                <url>http://maven.oschina.net/content/groups/public/</url>
                <releases>
                    <enabled>true</enabled>
                </releases>
                <snapshots>
                    <enabled>false</enabled>
                </snapshots>
            </repository>
        </repositories>
        <pluginRepositories>
            <pluginRepository>
                <id>nexus</id>
                <name>local private nexus</name>
                <url>http://maven.oschina.net/content/groups/public/</url>
                <releases>
                    <enabled>true</enabled>
                </releases>
                <snapshots>
                    <enabled>false</enabled>
                </snapshots>
            </pluginRepository>
        </pluginRepositories>
    </profile>
</profiles>
4.2 After the build finishes, the output is in /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0
# ./bin/hadoop version
Hadoop 2.5.0
Subversion Unknown -r Unknown
Compiled by root on 2014-09-12T00:47Z
Compiled with protoc 2.5.0
From source with checksum 423dcd5a752eddd8e45ead6fd5ff9a24
This command was run using /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0.jar
# file lib//native/*
lib//native/libhadoop.a:        current ar archive
lib//native/libhadooppipes.a:   current ar archive
lib//native/libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'
lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x972b31264a1ce87a12cfbcc331c8355e32d0e774, not stripped
lib//native/libhadooputils.a:   current ar archive
lib//native/libhdfs.a:          current ar archive
lib//native/libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'
lib//native/libhdfs.so.0.0.0:   ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x200ccf97f44d838239db3347ad5ade435b472cfa, not stripped
5. Configure hadoop
5.1 Basic setup
# cp -r /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0 /opt/hadoop-2.5.0
# chown -R hadoop:hadoop /opt/hadoop-2.5.0
# vi /etc/profile
export HADOOP_HOME=/opt/hadoop-2.5.0
export PATH=$PATH:$HADOOP_HOME/bin
# su hadoop
$ cd /opt/hadoop-2.5.0
$ mkdir -p dfs/name
$ mkdir -p dfs/data
$ mkdir -p tmp
$ cd etc/hadoop
5.2 List all slave nodes
$ vim slaves
slave1
slave2
slave3
5.3 Edit hadoop-env.sh and yarn-env.sh
$ vim hadoop-env.sh    --set JAVA_HOME in this file
$ vim yarn-env.sh      --and the same in this one
export JAVA_HOME=/usr/java/jdk1.7.0_67
5.4 Edit core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop-2.5.0/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
5.5 Edit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
5.6 Edit mapred-site.xml
# cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
5.7 Configure yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
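The original configures master only; for the cluster to work, the same hadoop directory and configuration also have to be present on every slave. One way to get them there (an added sketch, not shown in the source, assuming root access to the slaves):
# scp -r /opt/hadoop-2.5.0 root@slave1:/opt/                    --repeat for slave2 and slave3
# ssh root@slave1 "chown -R hadoop:hadoop /opt/hadoop-2.5.0"    --give the hadoop user ownership on the slave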
5.8 Format the namenode
$ ./bin/hdfs namenode -format
5.9 Start hdfs and yarn
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
5.10 Check that everything is running
http://192.168.1.7:8088     --yarn resourcemanager web UI
http://192.168.1.7:50070    --hdfs namenode web UI
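Besides the web UIs, the running daemons can be listed with jps, which ships with the JDK (a verification step added here; the process names below are the usual ones for this layout):
$ jps    --on master: NameNode, SecondaryNameNode, ResourceManager
$ jps    --on each slave: DataNode, NodeManager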