I have recently been learning Hadoop, and I ran into quite a few problems running MapReduce programs from Eclipse on Windows against a Hadoop cluster running in virtual machines. After searching online and doing some analysis of my own, I finally got everything working. I'm sharing the fixes here as a reference for anyone who hits the same issues.
My Hadoop cluster environment:
4 virtual machines: 192.168.137.111 (master), 192.168.137.112 (slave1), 192.168.137.113 (slave2), 192.168.137.114 (slave3)
Hadoop cluster user name: hadoop
Hadoop version: hadoop-1.1.2
Development environment: Windows 7 + Eclipse + Hadoop plugin
Exception 1:
14/10/18 08:23:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 08:23:47 ERROR security.UserGroupInformation: PriviledgedActionException as:guilin cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-guilin\mapred\staging\guilin1651756173\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-guilin\mapred\staging\guilin1651756173\.staging to 0700
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:918)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at com.guilin.hadoop.mapreduce.WordCount.main(WordCount.java:75)
Cause: the WordCount program was submitting the job to the local Hadoop on Windows rather than to the cluster. Add conf.set("mapred.job.tracker", "master:9001") so the job is submitted to the cluster's JobTracker.
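For reference, here is a minimal sketch of the client-side setup for Hadoop 1.x. The ClusterConf helper name is hypothetical, and the fs.default.name line is my addition (the original only sets mapred.job.tracker), chosen to match the hdfs://master:9000 URIs used in the full code below:

import org.apache.hadoop.conf.Configuration;

public class ClusterConf {
    // Build a Configuration that submits jobs to the cluster instead of the local job runner
    public static Configuration create() {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "master:9001");      // cluster JobTracker address
        conf.set("fs.default.name", "hdfs://master:9000");  // assumed NameNode address
        return conf;
    }
}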
Exception 2:
14/10/18 08:37:14 ERROR security.UserGroupInformation: PriviledgedActionException as:guilin cause:org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1030)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:918)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at com.guilin.hadoop.mapreduce.WordCount.main(WordCount.java:75)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:155)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:125)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5468)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5447)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2168)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:888)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:230)
    at com.sun.proxy.$Proxy2.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy2.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1028)
    ... 12 more
Cause: the WordCount program connects to the cluster using the Windows account. My Windows account name is guilin, while the cluster account is hadoop, and the hadoop directory on HDFS is set so that only the hadoop user has read, write, and execute permission.
Solution: either rename the Windows administrator account to hadoop (the cluster user name), or create an account on the cluster with the same name as the Windows account and grant it read, write, and execute permission on the hadoop directory. The first option is recommended.
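If you go with the second option, the directory permissions can also be opened up programmatically. Below is a hedged sketch using the HDFS FileSystem API, roughly equivalent to running hadoop fs -chmod 755 /usr/hadoop on the cluster; the /usr/hadoop path and the 755 mode are my assumptions, and the program must be run as the directory owner (the hadoop user):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class OpenUpHdfsDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the cluster NameNode
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf);
        // Grant read/execute to group and others on the job user's directory
        // (must be run as the directory owner, i.e. the hadoop user)
        fs.setPermission(new Path("/usr/hadoop"), new FsPermission((short) 0755));
        fs.close();
    }
}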
Exception 3:
14/10/18 09:57:19 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/10/18 09:57:19 INFO input.FileInputFormat: Total input paths to process : 5
14/10/18 09:57:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 09:57:19 WARN snappy.LoadSnappy: Snappy native library not loaded
14/10/18 09:57:20 INFO mapred.JobClient: Running job: job_201410181754_0001
14/10/18 09:57:21 INFO mapred.JobClient: map 0% reduce 0%
14/10/18 09:57:29 INFO mapred.JobClient: Task Id : attempt_201410181754_0001_m_000004_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.guilin.hadoop.mapreduce.WordCount$TokenizerMapper
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Cause: running a MapReduce job on the cluster requires the job's classes to be shipped in a jar; without one, the tasks on the cluster cannot load the mapper and reducer classes.
Solution: add conf.set("mapred.jar", "hadoop-test.jar");
and package the project into a jar file named hadoop-test.jar, placed in the project root directory.
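Optionally, here is a hedged sketch of a small helper (the JobJarCheck name and the existence check are my additions) that fails fast when the exported jar is missing, instead of letting the tasks die on the cluster with ClassNotFoundException as above; it would be called as JobJarCheck.setJobJar(conf, "hadoop-test.jar"):

import java.io.File;

import org.apache.hadoop.conf.Configuration;

public class JobJarCheck {
    // Register the job jar on the configuration, verifying it was exported first
    public static void setJobJar(Configuration conf, String jarPath) {
        File jar = new File(jarPath);
        if (!jar.exists()) {
            throw new IllegalStateException("Job jar not found: " + jar.getAbsolutePath());
        }
        conf.set("mapred.jar", jarPath);
    }
}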
Complete WordCount code:
package com.guilin.hadoop.mapreduce;
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value,
                Mapper<Object, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, one);
            }
        }
    }
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                // accumulate the counts for this word
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }
    public static void main(String[] args) throws IOException,
            ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // submit to the cluster JobTracker instead of the local job runner (exception 1)
        conf.set("mapred.job.tracker", "master:9001");
        // ship the packaged job jar so the tasks can find the mapper/reducer classes (exception 3)
        conf.set("mapred.jar", "hadoop-test.jar");
        String[] ars = new String[] { "hdfs://master:9000/usr/hadoop/input",
                "hdfs://master:9000/usr/hadoop/newout1" };
        String[] otherArgs = new GenericOptionsParser(conf, ars)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Finally, the job runs successfully:
14/10/18 10:12:27 INFO input.FileInputFormat: Total input paths to process : 2
14/10/18 10:12:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 10:12:27 WARN snappy.LoadSnappy: Snappy native library not loaded
14/10/18 10:12:27 INFO mapred.JobClient: Running job: job_201410181754_0004
14/10/18 10:12:28 INFO mapred.JobClient: map 0% reduce 0%
14/10/18 10:12:32 INFO mapred.JobClient: map 100% reduce 0%
14/10/18 10:12:39 INFO mapred.JobClient: map 100% reduce 33%
14/10/18 10:12:40 INFO mapred.JobClient: map 100% reduce 100%
14/10/18 10:12:40 INFO mapred.JobClient: Job complete: job_201410181754_0004
14/10/18 10:12:40 INFO mapred.JobClient: Counters: 29
14/10/18 10:12:40 INFO mapred.JobClient:   Job Counters
14/10/18 10:12:40 INFO mapred.JobClient:     Launched reduce tasks=1
14/10/18 10:12:40 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4614
14/10/18 10:12:40 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/10/18 10:12:40 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/10/18 10:12:40 INFO mapred.JobClient:     Launched map tasks=2
14/10/18 10:12:40 INFO mapred.JobClient:     Data-local map tasks=2
14/10/18 10:12:40 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8329
14/10/18 10:12:40 INFO mapred.JobClient:   File Output Format Counters
14/10/18 10:12:40 INFO mapred.JobClient:     Bytes Written=31
14/10/18 10:12:40 INFO mapred.JobClient:   FileSystemCounters
14/10/18 10:12:40 INFO mapred.JobClient:     FILE_BYTES_READ=75
14/10/18 10:12:40 INFO mapred.JobClient:     HDFS_BYTES_READ=264
14/10/18 10:12:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=154204
14/10/18 10:12:40 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=31
14/10/18 10:12:40 INFO mapred.JobClient:   File Input Format Counters
14/10/18 10:12:40 INFO mapred.JobClient:     Bytes Read=44
14/10/18 10:12:40 INFO mapred.JobClient:   Map-Reduce Framework
14/10/18 10:12:40 INFO mapred.JobClient:     Map output materialized bytes=81
14/10/18 10:12:40 INFO mapred.JobClient:     Map input records=2
14/10/18 10:12:40 INFO mapred.JobClient:     Reduce shuffle bytes=81
14/10/18 10:12:40 INFO mapred.JobClient:     Spilled Records=12
14/10/18 10:12:40 INFO mapred.JobClient:     Map output bytes=78
14/10/18 10:12:40 INFO mapred.JobClient:     CPU time spent (ms)=1090
14/10/18 10:12:40 INFO mapred.JobClient:     Total committed heap usage (bytes)=241246208
14/10/18 10:12:40 INFO mapred.JobClient:     Combine input records=8
14/10/18 10:12:40 INFO mapred.JobClient:     SPLIT_RAW_BYTES=220
14/10/18 10:12:40 INFO mapred.JobClient:     Reduce input records=6
14/10/18 10:12:40 INFO mapred.JobClient:     Reduce input groups=4
14/10/18 10:12:40 INFO mapred.JobClient:     Combine output records=6
14/10/18 10:12:40 INFO mapred.JobClient:     Physical memory (bytes) snapshot=311574528
14/10/18 10:12:40 INFO mapred.JobClient:     Reduce output records=4
14/10/18 10:12:40 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1034760192
14/10/18 10:12:40 INFO mapred.JobClient:     Map output records=8