Software to prepare:
1. Tomcat
2. solr-5.2.1.tgz
3. hadoop-2.7.2
Runtime environment:
CentOS 7
Hadoop should already be installed (see the earlier document).
Add the following to hdfs-site.xml under hadoop-2.7.2/etc/hadoop:
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
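After editing hdfs-site.xml, restart HDFS so the new settings take effect. A minimal sketch, assuming the cluster is managed with the standard start/stop scripts (adjust the Hadoop path to your installation):
cd /path/to/hadoop-2.7.2
sbin/stop-dfs.sh
sbin/start-dfs.sh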
WebHDFS exposes a REST interface built on HTTP operations such as GET, PUT, POST, and DELETE.
The REST URL format is:
http://<HOST>:<HTTP_PORT>/webhdfs/v1/<PATH>?[user.name=<USER>&]op=...
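For example, listing an HDFS directory over WebHDFS (a hedged sketch; the NameNode HTTP port is 50070 by default in Hadoop 2.x, and the host and user name here are assumptions to adjust for your cluster):
curl -i "http://172.xx.xx.xx:50070/webhdfs/v1/solr?op=LISTSTATUS&user.name=hadoop"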
Install Solr (see the earlier document).
Edit solrconfig.xml under tika/conf in the solr home directory.
Replace the existing directoryFactory configuration with the following:
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
    <str name="solr.hdfs.home">hdfs://172.xx.xx.xx:9000/solr/tika</str>
    <bool name="solr.hdfs.blockcache.enabled">true</bool>
    <int name="solr.hdfs.blockcache.slab.count">1</int>
    <bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
    <int name="solr.hdfs.blockcache.blocksperbank">16384</int>
    <bool name="solr.hdfs.blockcache.read.enabled">true</bool>
    <bool name="solr.hdfs.blockcache.write.enabled">true</bool>
    <bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
    <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
    <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
</directoryFactory>
Note: solr.hdfs.home is the HDFS access path (URI) of the Hadoop cluster you installed.
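Because solr.hdfs.blockcache.direct.memory.allocation is true, the block cache is allocated off-heap (with slab.count=1 and blocksperbank=16384 at 8 KB per block, about 128 MB), so the Tomcat JVM may need a larger direct-memory limit. A hedged sketch, assuming Tomcat picks up CATALINA_OPTS and that 1g is enough for this setup:
export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxDirectMemorySize=1g"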
Replace the existing lockType with the following:
<lockType>${solr.lock.type:hdfs}</lockType>
Replace the dataDir with:
<dataDir>${solr.data.dir:hdfs://172.xx.xx.xxx:9000/solr/tika/data}</dataDir>
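If the HDFS path does not exist yet, it can be created ahead of time. A sketch, assuming the hdfs client is on the PATH and the NameNode address matches the one configured above:
hdfs dfs -mkdir -p hdfs://172.xx.xx.xx:9000/solr/tika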
Replace the jars in /apache-tomcat7-solr/webapps/solr/WEB-INF/lib. First remove the old ones:
rm hadoop-*.jar
rm protobuf-java-*.jar
rm htrace-core-3.0.4.jar
From the share directory of the Hadoop installation, collect the following jars:
commons-collections-3.2.2.jar, hadoop-annotations-2.7.2.jar, hadoop-auth-2.7.2.jar,
hadoop-common-2.7.2.jar, hadoop-hdfs-2.7.2.jar, htrace-core-3.1.0-incubating.jar,
protobuf-java-2.5.0.jar
and copy them into /apache-tomcat7-solr/webapps/solr/WEB-INF/lib, for example with the sketch below.
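A hedged copy script, assuming the standard Hadoop 2.7.2 share/ layout; HADOOP_HOME and the Tomcat path are placeholders to adjust:
HADOOP_HOME=/path/to/hadoop-2.7.2
SOLR_LIB=/apache-tomcat7-solr/webapps/solr/WEB-INF/lib
for jar in commons-collections-3.2.2 hadoop-annotations-2.7.2 hadoop-auth-2.7.2 \
           hadoop-common-2.7.2 hadoop-hdfs-2.7.2 htrace-core-3.1.0-incubating \
           protobuf-java-2.5.0; do
    # locate each jar anywhere under share/hadoop and copy it into the Solr webapp
    find "$HADOOP_HOME/share/hadoop" -name "$jar.jar" -exec cp {} "$SOLR_LIB" \;
done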
Start Tomcat, and Solr can be accessed at:
http://172.xxx.xx.xxx:28080/solr/
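To confirm that the core's index files are actually written to HDFS, list the path after Solr has started (a sketch assuming the hdfs client is available):
hdfs dfs -ls -R /solr/tika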