Phoenix Installation (Installation Series, Part 10)

2023-06-29 13:58:05

This follows the Hadoop and HBase installments of this installation series.

Target deployment: CDH 5.5.1.

Install Flume

Download the installation package and extract it:

flume-ng-1.6.0-cdh5.5.1.tar.gz

Configure environment variables in ~/.bash_profile:

export FLUME_HOME=/itcast/flume-1.6.0

export PATH=$PATH:$FLUME_HOME/bin
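The two exports can be appended to the profile and reloaded in the current shell; a minimal sketch (the /itcast paths follow this series, adjust if your install root differs):

```shell
# Append the Flume variables to the shell profile, then reload it.
PROFILE="$HOME/.bash_profile"
cat >> "$PROFILE" <<'EOF'
export FLUME_HOME=/itcast/flume-1.6.0
export PATH=$PATH:$FLUME_HOME/bin
EOF
. "$PROFILE"
```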

Configure the flume-env.sh file

In $FLUME_HOME/conf:

vim flume-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_45

export HADOOP_HOME=/itcast/hadoop-2.6.0

Verify the version:

flume-ng version

Install Phoenix

Rebuild Phoenix

Modify pom.xml in the Phoenix source tree, adding the repositories below:

<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
  </repository>
  <repository>
    <id>conjars.org</id>
    <url>http://conjars.org/repo</url>
  </repository>
  <repository>
    <id>sonatype-nexus-snapshots</id>
    <name>Sonatype Nexus Snapshots</name>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

Then point the version properties at the matching CDH releases. Note that Maven version properties take bare version strings, without the artifact-name prefix:

<hbase.version>1.0.0-cdh5.5.1</hbase.version>
<hadoop-two.version>2.6.0-cdh5.5.1</hadoop-two.version>
<hive.version>1.1.0-cdh5.5.1</hive.version>
<hadoop.version>2.6.0-cdh5.5.1</hadoop.version>
<spark.version>1.5.0-cdh5.5.1</spark.version>
<scala.version>2.11.4</scala.version>
<scala.binary.version>2.11</scala.binary.version>

Build:

mvn clean package -DskipTests -Dcdh.flume.version=1.6.0

mvn clean install -DskipTests

The built tarball ends up under phoenix-for-cloudera-4.6-HBase-1.0-cdh5.5/phoenix-for-cloudera-4.6-HBase-1.0-cdh5.5/phoenix-assembly/

Extract the tar.

Environment variables:

export PHOENIX_HOME=/itcast/phoenix

export CLASSPATH=.:$PHOENIX_HOME/phoenix-4.6.0-client.jar
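As with Flume, these can be appended to the profile and reloaded. Adding $PHOENIX_HOME/bin to PATH is an extra convenience not in the steps above (an assumption), so the Phoenix scripts resolve without a full path:

```shell
# Append the Phoenix variables to the shell profile, then reload it.
# The PATH line is an added convenience (assumption), not from the original steps.
PROFILE="$HOME/.bash_profile"
cat >> "$PROFILE" <<'EOF'
export PHOENIX_HOME=/itcast/phoenix
export CLASSPATH=.:$PHOENIX_HOME/phoenix-4.6.0-client.jar
export PATH=$PATH:$PHOENIX_HOME/bin
EOF
. "$PROFILE"
```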

Make the scripts in $PHOENIX_HOME/bin executable:

chmod +x *.py

Copy phoenix-4.6.0-cdh5.5.1-server.jar and phoenix-4.6.0-client.jar from phoenix-4.6.0-cdh5.5.1 into /opt/cloudera/parcels/CDH/lib/hbase/lib on every RegionServer.
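Copying to every RegionServer can be scripted; a dry-run sketch, where the hostnames rs1, rs2, rs3 are placeholders for your own cluster:

```shell
# Hypothetical RegionServer hostnames -- replace with your cluster's.
RS_HOSTS="rs1 rs2 rs3"
JARS="phoenix-4.6.0-cdh5.5.1-server.jar phoenix-4.6.0-client.jar"
HBASE_LIB=/opt/cloudera/parcels/CDH/lib/hbase/lib

for host in $RS_HOSTS; do
  for jar in $JARS; do
    # Dry run: prints each copy command; drop 'echo' to actually copy.
    echo scp "$PHOENIX_HOME/$jar" "$host:$HBASE_LIB/"
  done
done
```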

Copy HBase's hbase-site.xml into phoenix-4.6.0-bin/bin/, replacing Phoenix's original configuration file.

hbase-site.xml:

<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>true</value>
</property>

Add the following to the hbase-site.xml of every RegionServer in the cluster:

<property>
  <name>hbase.regionserver.executor.openregion.threads</name>
  <value>100</value>
</property>

Configure HBase for Phoenix secondary indexes

Add the following properties to hbase-site.xml on every RegionServer:

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>

<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>

<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.regionserver.LocalIndexMerger</value>
</property>

Add the following properties to hbase-site.xml on every Master:

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>

<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>
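Once every RegionServer and Master is reconfigured, restart HBase to pick up the changes, then verify the installation with sqlline.py from $PHOENIX_HOME/bin. A dry-run sketch; the ZooKeeper quorum address is a placeholder:

```shell
# The ZooKeeper quorum is a placeholder -- substitute your own hosts.
ZK_QUORUM="zk1,zk2,zk3:2181"
# Dry run: prints the launch command; drop 'echo' to actually connect.
echo "$PHOENIX_HOME/bin/sqlline.py $ZK_QUORUM"
```

Inside sqlline, a quick smoke test for secondary indexes might be `CREATE TABLE t (id VARCHAR PRIMARY KEY, val VARCHAR);` followed by `CREATE INDEX t_val_idx ON t (val);`.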
