Configuring Hadoop 2.6.0 in Eclipse Luna Service Release 2 (4.4.2, 64-bit) on Windows 10 (64-bit)
1 System Configuration
Windows 10 (64-bit)
Eclipse Luna Service Release 2 (4.4.2, 64-bit)
Hadoop 2.6.0
JDK 1.8.0 (64-bit)
SVN 1.8.6
ANT 1.9.6
2 Building the Hadoop Plugin for Eclipse
For the detailed build steps, see http://my.oschina.net/muou/blog/408543. That article uses the Juno version of Eclipse, and the build works fine there, but with the Luna version the resulting plugin is not recognized by Eclipse. In the end I used a 2.2.0 plugin downloaded from the web.
Following those steps, place the built plugin into Eclipse's plugins directory and restart Eclipse. If Eclipse recognizes the plugin, its icon appears in Eclipse:
3 Eclipse Configuration
3.1 Adding the Map/Reduce View
Window --> Show View --> Other, select Map/Reduce Location, then right-click to create a new HDFS connection.
3.2 Configuring the Hadoop Path in Eclipse
4 Hadoop Configuration
4.1 Unpack the Hadoop distribution and place hadoop.dll and winutils.exe in its bin directory; both can be downloaded from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master. Also put a copy of winutils.exe in Windows' system32 directory, then reboot for the change to take effect.
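If Hadoop still fails to locate winutils.exe when a job is started from Eclipse, pointing the hadoop.home.dir system property at the unpacked directory before any Hadoop code runs usually helps. A minimal sketch, assuming a hypothetical install path of D:\hadoop-2.6.0 (use your actual directory, which must contain bin\winutils.exe and bin\hadoop.dll):

    // Sketch: tell Hadoop where winutils.exe lives when running inside Eclipse.
    // Has the same effect as the HADOOP_HOME environment variable, but only for this JVM.
    public class HadoopHomeSetup {
        public static void main(String[] args) {
            System.setProperty("hadoop.home.dir", "D:\\hadoop-2.6.0"); // hypothetical path
            // ...normal job setup continues here...
        }
    }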
4.2 Configure the Hadoop configuration files under etc/hadoop: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml. The parameters in these four files must match the cluster's configuration; otherwise MapReduce jobs cannot be submitted to the cluster directly from Eclipse.
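The same parameters can also be mirrored programmatically on the Configuration object used to submit the job, which makes mismatches easier to spot. A minimal sketch, assuming a hypothetical host name "master" and the default Hadoop 2.x HDFS port; the values must be copied from the cluster's own files:

    import org.apache.hadoop.conf.Configuration;

    public class ClusterConf {
        // Sketch: build a Configuration that mirrors the cluster's settings.
        // "master" and port 9000 are hypothetical; copy the real values from the
        // cluster's core-site.xml, mapred-site.xml and yarn-site.xml.
        public static Configuration create() {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://master:9000");      // core-site.xml
            conf.set("mapreduce.framework.name", "yarn");        // mapred-site.xml
            conf.set("yarn.resourcemanager.hostname", "master"); // yarn-site.xml
            return conf;
        }
    }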
5 WordCount Test
Create a simple Java project; with the configuration below it runs normally:
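For reference, the test job itself can simply be the standard WordCount program from the Hadoop MapReduce tutorial; a sketch, assuming the HDFS input and output paths are passed as the two program arguments:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Emits (word, 1) for every token in the input line.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Sums the counts for each word; also used as the combiner.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (must not exist)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }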
6 Testing with the Project's Own Code
When the project's own code was tested, the following error kept being thrown:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:173)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:160)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:94)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at service.app.mapreduce.WordCount2.WordCount2.main(WordCount2.java:104)
After some investigation this turned out to be a problem with the project's build-path configuration: the mahout and spark jar files must not be placed in a user library; when they are added to the build path directly instead, the job runs normally. The jar configuration is shown below: