Purpose
Most enterprises today run a Hadoop big-data platform, and every day their ETL jobs write large numbers of files to HDFS. To keep unbounded data growth from outpacing physical resources, we need an effective way to monitor that data, control cost, and get the most out of limited capacity. This article describes one way to analyze how files on HDFS change over time, along with the perennial problem of monitoring small files.
Implementation Approaches
There are two options for this analysis:
- Use the HDFS API: call the listStatus method of a FileSystem instance to recursively enumerate every file and directory on HDFS with its important attributes, including path, owner, and size. Write this data to a local file, upload it to HDFS, create a Hive external table that maps onto it, and then run arbitrary SQL analyses.
- The second approach differs from the first only in how the source data is obtained: it uses the built-in command for parsing fsimage files, hdfs oiv -i <fsimage file> -o <output file> -p Delimited, which converts an fsimage into a readable CSV file. The remaining steps are the same: upload the file to HDFS, create an external table, and analyze the metrics with SQL.
Code Walkthrough
Method 1: Fetch the complete data in Java via the HDFS API
The source code is as follows:
package com.mljr.hdfs;
import java.io.*;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class HdfsStatus {
    public static void main(String[] args) {
        FileSystem hdfs = null;
        try {
            Configuration config = new Configuration();
            config.set("fs.defaultFS", "hdfs://nameservice1");
            // NameNode address or HA nameservice name
            hdfs = FileSystem.get(new URI("hdfs://nameservice1"), config, "hdfs");
            Path path = new Path("/"); // start the scan from the HDFS root
            String content_csv = "/tmp/content.csv";
            long startTime = System.currentTimeMillis(); // start time
            BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(new File(content_csv)));
            iteratorShowFiles(hdfs, path, out);
            out.close();
            long endTime = System.currentTimeMillis(); // end time
            long runTime = (endTime - startTime) / 1000 / 60;
            System.out.println("Run time: " + runTime + " min");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (hdfs != null) {
                try {
                    hdfs.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
    /**
     * Recursively walk the directory tree and write one CSV line per file or directory.
     *
     * @param hdfs the FileSystem instance
     * @param path the path to start from
     * @param out  the stream the CSV lines are written to
     */
    public static void iteratorShowFiles(FileSystem hdfs, Path path, BufferedOutputStream out) {
        String line = System.getProperty("line.separator");
        try {
            if (hdfs == null || path == null) {
                return;
            }
            // list the direct children of this path
            FileStatus[] files = hdfs.listStatus(path);
            // write one CSV line per entry
            for (int i = 0; i < files.length; i++) {
                try {
                    if (files[i].isDirectory()) {
                        // third CSV column: 0 = directory, 1 = file; directories are written with size 0
                        String text = (files[i].getPath().toString().replace("hdfs://nameservice1", "")
                                + "," + files[i].getOwner()
                                + "," + "0"
                                + "," + "0"
                                + "," + files[i].getBlockSize()
                                + "," + files[i].getPermission()
                                + "," + files[i].getAccessTime()
                                + "," + files[i].getModificationTime()
                                + "," + files[i].getReplication() + line);
                        out.write(text.getBytes());
                        // recurse into the subdirectory
                        iteratorShowFiles(hdfs, files[i].getPath(), out);
                    } else if (files[i].isFile()) {
                        String text = files[i].getPath().toString().replace("hdfs://nameservice1", "")
                                + "," + files[i].getOwner()
                                + "," + "1"
                                + "," + files[i].getLen()
                                + "," + files[i].getBlockSize()
                                + "," + files[i].getPermission()
                                + "," + files[i].getAccessTime()
                                + "," + files[i].getModificationTime()
                                + "," + files[i].getReplication() + line;
                        out.write(text.getBytes());
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Upload the local file to HDFS, then create a Hive external table.
#!/bin/bash
source /etc/profile
cd /home/dmp/hdfs
# generate the CSV of HDFS directory and file information
java -cp ./HdfsStatus-1.0-SNAPSHOT.jar com.mljr.hdfs.HdfsStatus
# upload the file to HDFS (the target directory must be created in advance)
hadoop fs -rm -r /tmp/dfs/content/content.csv
hadoop fs -put /tmp/content.csv /tmp/dfs/content
Create the external table in Hive. Note that the third column written by the Java code is 1 for files and 0 for directories, so it is named is_file below:
CREATE EXTERNAL TABLE `default.hdfs_info`(
`path` string,
`owner` string,
`is_file` string,
`filesize` string,
`blocksize` string,
`permission` string,
`acctime` string,
`modificatetime` string,
`replication` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'=',',
'serialization.format'=',')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://nameservice1/tmp/dfs/content';
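To sanity-check the mapping before running heavier analyses, a quick look at a few rows helps (a minimal sketch):
-- verify that the external table maps the CSV correctly
select * from default.hdfs_info limit 10;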
SQL Analysis
-- size of each first-level directory
select joinedpath, sumsize
from
(
select joinedpath,round(sum(filesize)/1024/1024/1024,2) as sumsize
from
(select concat('/',split(path,'/')[1]) as joinedpath,accTime,filesize,owner
from default.hdfs_info
)t
group by joinedpath
)h
order by sumsize desc;
-- size of each second-level directory
select joinedpath, sumsize
from
(
select joinedpath,round(sum(filesize)/1024/1024/1024,2) as sumsize
from
(select concat('/',split(path,'/')[1],'/',split(path,'/')[2]) as joinedpath,accTime,filesize,owner
from default.hdfs_info
)t
group by joinedpath
)h
order by sumsize desc;
-- Deeper directory levels can be analyzed the same way, so they are not spelled out here.
-- count of files smaller than 100 KB under each third-level directory
SELECT concat('/',split(path,'/')[1],'/',split(path,'/')[2],'/',split(path,'/')[3]) as path ,count(*) as small_file_num
FROM
(SELECT relative_size,path
FROM
(SELECT (case when filesize < 100*1024 then 'small' else 'large' end) AS relative_size, path
FROM default.hdfs_info WHERE is_file='1') tmp
WHERE
relative_size='small') tmp2
group by concat('/',split(path,'/')[1],'/',split(path,'/')[2],'/',split(path,'/')[3])
order by small_file_num desc;
-- Small-file counts at other levels follow the same pattern. Next: size and modification time of each table under a given Hive database.
SELECT joinedpath,
from_unixtime(ceil(acctime/1000),'yyyy-MM-dd HH:mm:ss') AS acctime,
from_unixtime(ceil(modificatetime/1000),'yyyy-MM-dd HH:mm:ss') AS modificatetime,
sumsize
FROM
(SELECT joinedpath,
min(accTime) AS acctime,
max(modificatetime) AS modificatetime,
round(sum(filesize)/1024/1024/1024,2) AS sumsize
FROM
(SELECT concat('/',split(path,'/')[1],'/',split(path,'/')[2],'/',split(path,'/')[3],'/',split(path,'/')[4],'/',split(path,'/')[5]) AS joinedpath,
accTime,
modificatetime,
filesize,
OWNER
FROM default.hdfs_info
WHERE concat('/',split(path,'/')[1],'/',split(path,'/')[2],'/',split(path,'/')[3],'/',split(path,'/')[4])='/user/hive/warehouse/default.db')t
WHERE joinedpath != 'null'
GROUP BY joinedpath)h
ORDER BY sumsize DESC;
There is far more that HDFS metadata can tell us; the SQL above is only meant as a starting point.
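For example, a per-user view of storage consumption can be built from the same table (a sketch against default.hdfs_info as defined above; the is_file filter restricts the sums to files):
-- total size (GB) and file count per owner
select owner,
round(sum(filesize)/1024/1024/1024,2) as sumsize,
count(*) as file_num
from default.hdfs_info
where is_file='1'
group by owner
order by sumsize desc;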
Method 2: Fetch the HDFS metadata image (FSImage) with a shell script
First, let's see what fields the HDFS metadata image file (FSImage) contains. The following command converts it into a readable CSV file:
nohup bin/hdfs oiv -i ./fsimage_XXXXX -o ./fsimage_0127.csv -p Delimited -delimiter ',' --temp /data02/tmp &
The first line of the output explains each field; print the head of the file to take a look:
# head -n 3 fsimage_0127.csv
Path,Replication,ModificationTime,AccessTime,PreferredBlockSize,BlocksCount,FileSize,NSQUOTA,DSQUOTA,Permission,UserName,GroupName
/,0,2020-03-26 16:00,1970-01-01 08:00,0,0,0,9223372036854775807,-1,drwxr-xr-x,hdfs,hdfs
/tmp,0,2020-01-08 14:40,1970-01-01 08:00,0,0,0,-1,-1,drwxrwxrwx,hdfs,hdfs
The fields are fairly self-explanatory, so they are not described one by one here. The complete script:
#!/bin/bash
prepare_operation()
{
    # get parameters
    t_save_fsimage_path=$1
    # delete historical fsimage files
    fsimage_tmp_file=`find ${t_save_fsimage_path} -name "fsimage*"`
    if [ ! -z "${fsimage_tmp_file}" ]
    then
        for file in ${fsimage_tmp_file}
        do
            rm -f ${file}
        done
    fi
    # with set -e, any command that exits non-zero aborts the script, so $? cannot be
    # inspected afterwards; guard commands that may fail with || or !
}
get_hdfs_fsimage()
{
    # get parameters
    t_save_fsimage_path=$1
    # download the fsimage from the NameNode
    hdfs dfsadmin -fetchImage ${t_save_fsimage_path}
    # resolve the path of the downloaded fsimage file
    t_fsimage_file=`ls ${t_save_fsimage_path}/fsimage*`
    # convert the fsimage into a readable CSV file (tab-delimited by default)
    hdfs oiv -i ${t_fsimage_file} -o ${t_save_fsimage_path}/fsimage.csv -p Delimited
    # drop the header line of fsimage.csv
    sed -i -e "1d" ${t_save_fsimage_path}/fsimage.csv
    # create the HDFS data directory if it does not exist
    hadoop fs -test -e ${t_save_fsimage_path}/fsimage || hdfs dfs -mkdir -p ${t_save_fsimage_path}/fsimage
    # copy fsimage.csv to the target path
    hdfs dfs -copyFromLocal -f ${t_save_fsimage_path}/fsimage.csv ${t_save_fsimage_path}/fsimage/
}
main()
{
    # start time
    begin_time=`date +%s`
    # local and HDFS working directory
    t_save_fsimage_path=/tmp/dfs
    # prepare the working directory and remove historical data
    prepare_operation ${t_save_fsimage_path}
    # fetch the HDFS FSImage
    hdfs_fsimage_update_time=`date "+%Y-%m-%d %H:%M:%S"`
    get_hdfs_fsimage ${t_save_fsimage_path}
    # end time
    end_time=`date +%s`
    # elapsed time in seconds
    result_time=$((end_time-begin_time))
    echo "******************************************************************"
    echo "The script has taken ${result_time} seconds..."
    echo "Result Table: default.hdfs_meta"
    echo "HDFS FSImage update-time before: ${hdfs_fsimage_update_time}"
    echo "******************************************************************"
}
# run the main function
main "$@"
After that, create the external table and run the SQL analyses, just as in method one.
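A possible DDL for that table is sketched below. It is an assumption-laden sketch: the table name default.hdfs_meta is taken from the script's closing message, the columns follow the fsimage header shown earlier, the location matches the script's t_save_fsimage_path, and since the script runs hdfs oiv without -delimiter, the fields are assumed to be tab-delimited:
CREATE EXTERNAL TABLE `default.hdfs_meta`(
`path` string,
`replication` int,
`modificationtime` string,
`accesstime` string,
`preferredblocksize` bigint,
`blockscount` int,
`filesize` bigint,
`nsquota` bigint,
`dsquota` bigint,
`permission` string,
`username` string,
`groupname` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION
'hdfs://nameservice1/tmp/dfs/fsimage';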
Besides the two approaches above, HDFS metadata can also be retrieved through the WebHDFS REST API, and Python even has a package that wraps that API, pywebhdfs, which makes it very convenient:
https://pythonhosted.org/pywebhdfs/
Summary
In fact there is much more analysis to be done on HDFS files and directories. For example: analyze the daily incremental change of each directory level to find out where most of the cluster's data growth comes from; or analyze the life cycle of HDFS files to separate hot data from cold, treating files that have not been accessed for a long time as cold. If a file has sat unchanged on HDFS for ages, does that data still have value? Using HDFS storage wisely can save a company significant cost.
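As a concrete illustration of the cold-data idea, a query along these lines flags candidates (a sketch against default.hdfs_info from method one; the 90-day threshold is an arbitrary assumption, and acctime is the millisecond epoch written by the Java scan):
-- files not accessed in the last 90 days (assumed threshold)
select path,
owner,
round(filesize/1024/1024,2) as size_mb,
from_unixtime(ceil(acctime/1000),'yyyy-MM-dd HH:mm:ss') as last_access
from default.hdfs_info
where is_file='1'
and acctime/1000 < unix_timestamp() - 90*24*3600
order by acctime
limit 100;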
Similarly, in a multi-tenant Hadoop cluster, analyzing each tenant's HDFS directory quota and utilization makes it possible to generate per-tenant bills.
Also, Hive tables are ultimately just files on HDFS. By analyzing small files on HDFS you can see which Hive tables are running without proper settings and producing large numbers of small files, and by looking at how often users access the HDFS directories behind Hive tables you can see which tables are accessed most frequently, which in turn reveals which business data and topics are hot. This metadata is well worth mining.
Parts of this article are based on https://www.jianshu.com/p/c1c32c4def6f, reproduced with the author's permission.