MapReduce Tarball
You should be able to obtain the MapReduce tarball from the release page. If you cannot, build the tarball from the source:
$ mvn clean install -DskipTests
$ cd hadoop-mapreduce-project
$ mvn clean install assembly:assembly -Pnative
Setting Up the Environment

Assuming you have installed hadoop-common and hadoop-hdfs and have exported $HADOOP_COMMON_HOME and $HADOOP_HDFS_HOME, untar the hadoop-mapreduce tarball and set the environment variable $HADOOP_MAPRED_HOME to the installation directory. Set $HADOOP_YARN_HOME the same as $HADOOP_MAPRED_HOME.

NOTE: The following instructions assume you already have HDFS running.

Setting Up the Configuration

To start the ResourceManager and NodeManager, you have to update the configuration. Assuming that $HADOOP_CONF_DIR is the configuration directory and that it already has the installed configuration for HDFS and core-site.xml, there are two more configuration files you have to set up: mapred-site.xml and yarn-site.xml.

Setting Up mapred-site.xml

Add the following properties to your mapred-site.xml:
<property>
<name>mapreduce.cluster.temp.dir</name>
<value></value>
<description>No description</description>
<final>true</final>
</property>
<property>
<name>mapreduce.cluster.local.dir</name>
<value></value>
<description>No description</description>
<final>true</final>
</property>
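The two directory values above are left empty in the snippet; they need to point at writable directories on the local disk. A minimal sketch, assuming hypothetical scratch paths under /tmp/hadoop (these paths are not part of the original instructions), would be:

# Hypothetical local scratch directories for the two properties above
$ mkdir -p /tmp/hadoop/mapred/temp
$ mkdir -p /tmp/hadoop/mapred/local

You would then put /tmp/hadoop/mapred/temp into mapreduce.cluster.temp.dir and /tmp/hadoop/mapred/local into mapreduce.cluster.local.dir.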
Setting Up yarn-site.xml

Add the following properties to your yarn-site.xml:
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>host:port</value>
<description>host is the hostname of the resource manager and
port is the port on which the NodeManagers contact the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>host:port</value>
<description>host is the hostname of the resourcemanager and port is the port
on which the Applications in the cluster talk to the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>In case you do not want to use the default scheduler</description>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>host:port</value>
<description>the host is the hostname of the ResourceManager and the port is the port on
which the clients can talk to the Resource Manager. </description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value></value>
<description>the local directories used by the nodemanager</description>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>0.0.0.0:port</value>
<description>the nodemanagers bind to this port</description>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>10240</value>
<description>the amount of memory available on the NodeManager, in MB</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/app-logs</value>
<description>directory on hdfs where the application logs are moved to </description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value></value>
<description>the directories used by Nodemanagers as log directories</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
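As with the mapred-site.xml directories, yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs are left empty above and should point at writable local directories, while yarn.nodemanager.remote-app-log-dir is a path on HDFS. A sketch of creating that HDFS log directory, assuming the /app-logs value used above and a running HDFS:

# Create the HDFS directory that finished application logs are aggregated into
$ $HADOOP_HDFS_HOME/bin/hdfs dfs -mkdir /app-logs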
Setting Up capacity-scheduler.xml

Make sure you populate the root queues in capacity-scheduler.xml:
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>unfunded,default</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.unfunded.capacity</name>
<value>50</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>50</value>
</property>
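The two leaf queues above split the root capacity 50/50; the capacities of a queue's children must always add up to 100. If you change the queue definitions later while the ResourceManager is already running, the capacity scheduler configuration can be reloaded without a restart. A sketch, assuming bin/yarn ships in the same tarball used for the daemon commands below:

$ cd $HADOOP_MAPRED_HOME
# Ask the running ResourceManager to re-read capacity-scheduler.xml
$ bin/yarn rmadmin -refreshQueues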
Running the Daemons
Assume that the environment variables $HADOOP_COMMON_HOME, $HADOOP_HDFS_HOME, $HADOOP_MAPRED_HOME, $HADOOP_YARN_HOME, $JAVA_HOME and $HADOOP_CONF_DIR have been set appropriately. Set $YARN_CONF_DIR the same as $HADOOP_CONF_DIR.
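If they are not set yet, a minimal sketch of exporting them follows; the installation paths are placeholders, not values from the original instructions:

# Placeholder paths -- adjust them to wherever you actually installed each component
$ export JAVA_HOME=/usr/lib/jvm/java
$ export HADOOP_COMMON_HOME=/opt/hadoop/hadoop-common
$ export HADOOP_HDFS_HOME=/opt/hadoop/hadoop-hdfs
$ export HADOOP_MAPRED_HOME=/opt/hadoop/hadoop-mapreduce
$ export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME
$ export HADOOP_CONF_DIR=/opt/hadoop/conf
$ export YARN_CONF_DIR=$HADOOP_CONF_DIR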
Run the ResourceManager and NodeManager as follows:
$ cd $HADOOP_MAPRED_HOME
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
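To check that both daemons actually came up, a quick sketch; port 8088 is the default ResourceManager web UI port, and localhost assumes a single-node setup:

# ResourceManager and NodeManager should both appear in the JVM process list
$ jps
# The ResourceManager web UI answers on port 8088 by default
$ curl http://localhost:8088/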
You should be up and running. You can run the randomwriter example as follows:
$ $HADOOP_COMMON_HOME/bin/hadoop jar hadoop-examples.jar randomwriter out
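Once the job finishes, you can inspect its output in HDFS; a sketch (the out directory comes from the command above, and the exact examples jar name can differ between releases):

# List the files written by randomwriter
$ $HADOOP_COMMON_HOME/bin/hadoop fs -ls out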
Good luck.