ImportTsv: An HBase Data Import Tool
Author: 幽鸿
1. Overview
HBase ships with MapReduce-based bulk import tooling: bulk load and ImportTsv. For bulk load, see my other blog post.
HBase users typically load data through the HBase client API, but pushing a large volume of data in one batch this way can tie up substantial RegionServer resources and degrade queries against other tables stored on the same RegionServers. This post walks through the ImportTsv source code to explore how to load data into HBase efficiently.
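For context, here is a minimal sketch of that ordinary client write path against the 0.98-era API; the table name "my_table" and the family/qualifier below are hypothetical placeholders, not anything ImportTsv requires:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutApiExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table"); // 0.98-era client API
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put); // each write goes through the RegionServer memstore/WAL
    } finally {
      table.close();
    }
  }
}

Every such Put travels through the RegionServer write path, which is exactly the resource cost the bulk-load alternative avoids.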
2. About ImportTsv
ImportTsv is a command-line tool shipped with HBase. With a single command it loads data files stored on HDFS, using a configurable field separator (tab, "\t", by default), into an HBase table, which makes it very practical for importing large data volumes. It offers two ways of getting the data into the table:
The first writes the data into the table through TableOutputFormat, i.e. via ordinary Put calls;
The second first generates files in HFile format and then runs a command called completebulkload, which moves the files under the table's directory in HBase and makes them immediately available to client queries.
3. Source Code Analysis
This analysis is based on CDH5 with HBase 0.98.1. The entry class of ImportTsv is org.apache.hadoop.hbase.mapreduce.ImportTsv:
String hfileOutPath = conf.get(BULK_OUTPUT_CONF_KEY);
String columns[] = conf.getStrings(COLUMNS_CONF_KEY);
if (hfileOutPath != null) {
  if (!admin.tableExists(tableName)) {
    LOG.warn(format("Table '%s' does not exist.", tableName));
    // TODO: this is backwards. Instead of depending on the existence of a table,
    // create a sane splits file for HFileOutputFormat based on data sampling.
    createTable(admin, tableName, columns);
  }
  HTable table = new HTable(conf, tableName);
  job.setReducerClass(PutSortReducer.class);
  Path outputDir = new Path(hfileOutPath);
  FileOutputFormat.setOutputPath(job, outputDir);
  job.setMapOutputKeyClass(ImmutableBytesWritable.class);
  if (mapperClass.equals(TsvImporterTextMapper.class)) {
    job.setMapOutputValueClass(Text.class);
    job.setReducerClass(TextSortReducer.class);
  } else {
    job.setMapOutputValueClass(Put.class);
    job.setCombinerClass(PutCombiner.class);
  }
  HFileOutputFormat.configureIncrementalLoad(job, table);
} else {
  if (mapperClass.equals(TsvImporterTextMapper.class)) {
    usage(TsvImporterTextMapper.class.toString()
        + " should not be used for non bulkloading case. use "
        + TsvImporterMapper.class.toString()
        + " or custom mapper whose value type is Put.");
    System.exit(-1);
  }
  // No reducers. Just write straight to table. Call initTableReducerJob
  // to set up the TableOutputFormat.
  TableMapReduceUtil.initTableReducerJob(tableName, null, job);
  job.setNumReduceTasks(0);
}
The logic starts in ImportTsv.createSubmittableJob, where the BULK_OUTPUT_CONF_KEY parameter (importtsv.bulk.output) is checked; this check decides which of the two paths the MapReduce job takes to load the data into HBase.
If the parameter is set and the user has not supplied a custom mapper implementation (parameter importtsv.mapper.class), PutSortReducer is used. It sorts the Puts of each row, so if a row carries many columns the reducer can consume a large amount of memory doing the sort.
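To see why wide rows are expensive, here is a simplified sketch of what PutSortReducer does (the real 0.98 class additionally spills to multiple context writes when a memory threshold is exceeded; this is an illustration, not the actual source):

import java.io.IOException;
import java.util.List;
import java.util.TreeSet;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class PutSortReducerSketch
    extends Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, KeyValue> {
  @Override
  protected void reduce(ImmutableBytesWritable row, Iterable<Put> puts, Context context)
      throws IOException, InterruptedException {
    // Every cell of the row is buffered in a sorted set: this is the
    // memory hotspot when a single row has very many columns.
    TreeSet<KeyValue> sorted = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
    for (Put p : puts) {
      for (List<Cell> cells : p.getFamilyCellMap().values()) {
        for (Cell cell : cells) {
          sorted.add(KeyValueUtil.ensureKeyValue(cell));
        }
      }
    }
    for (KeyValue kv : sorted) {
      context.write(row, kv); // HFileOutputFormat requires sorted KeyValues
    }
  }
}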
If the parameter is empty, the key lines inside TableMapReduceUtil.initTableReducerJob are:
Configuration conf = job.getConfiguration();
HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
job.setOutputFormatClass(TableOutputFormat.class);
That is, initTableReducerJob sets TableOutputFormat as the job's output. This path needs no actual reducers: the mapper's output format batches the Put calls and submits the data straight to the RegionServers (effectively executing the HBase Put API in parallel).
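To make this path concrete, here is a hedged sketch of a mapper in the spirit of TsvImporterMapper; the two-field line layout, column family "d", and qualifier "v" are illustrative assumptions, not ImportTsv's actual configuration:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TsvPutMapperSketch
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
  private static final byte[] CF = Bytes.toBytes("d");
  private static final byte[] QUAL = Bytes.toBytes("v");

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    String[] fields = line.toString().split("\t", -1); // ImportTsv's default separator
    byte[] row = Bytes.toBytes(fields[0]);
    Put put = new Put(row);
    put.add(CF, QUAL, Bytes.toBytes(fields[1])); // 0.98 API; addColumn(...) in later versions
    // With zero reduce tasks this Put goes straight to TableOutputFormat,
    // which buffers it and flushes batches to the RegionServers.
    context.write(new ImmutableBytesWritable(row), put);
  }
}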
4. Hands-On
1. Upload data with TableOutputFormat's Put API (no bulk loading):
$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c <tablename> <hdfs-inputdir>
2. Generate StoreFiles (HFiles) with bulk loading.
Step 1: generate the HFiles:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c -Dimporttsv.bulk.output=hdfs://storefile-outputdir <tablename> <hdfs-data-inputdir>
Step 2: complete the import:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://storefile-outputdir <tablename>
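The same completion step can also be driven programmatically. A minimal sketch against the 0.98 API, where "my_table" is a hypothetical table name and the path matches the bulk output directory from Step 1:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table");
    try {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Moves the generated HFiles into the table's region directories.
      loader.doBulkLoad(new Path("hdfs://storefile-outputdir"), table);
    } finally {
      table.close();
    }
  }
}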
5. Summary
When using ImportTsv, pay close attention to the importtsv.bulk.output parameter. The bulk output path is generally much friendlier to the RegionServers: loading this way consumes almost no RegionServer compute resources, because the final step merely moves HFiles within HDFS and then notifies the RegionServers hosting the affected region or regions to bring the new files online.