Interview Question | How Do You Delete Hundreds of Millions of Rows Without an Index?

2019-07-01 17:54:22

1 Business Requirement

A business table a holds roughly 470 million rows and occupies about 370 GB on disk; the rows matching a given xxid='xxx' account for roughly 200 million of them. Disk space is tight, so the invalid data has to be deleted without an index on that column. How do we delete it gracefully?

2 Approach

2.1 xxid has an index

With an index the task is straightforward: delete through the index in a loop, removing 500 rows per iteration. Sum the affected-row counts of each delete to track how many rows have been removed, and stop once a delete affects 0 rows.

delete from a where xxid='xxx' limit 500;
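The loop described above can be sketched as follows. This is a minimal demo against an in-memory SQLite stand-in for the MySQL table (SQLite lacks DELETE ... LIMIT by default, so the batch is picked by rowid in a subquery); against MySQL you would simply issue the delete ... limit 500 statement above through pymysql and read cursor.rowcount.

```python
import sqlite3

# In-memory stand-in for the MySQL table `a`: 2000 rows, half matching xxid='xxx'.
conn = sqlite3.connect(":memory:")
conn.execute("create table a (id integer primary key, xxid text)")
conn.executemany("insert into a values (?, ?)",
                 [(i, 'xxx' if i % 2 else 'other') for i in range(1, 2001)])

total = 0
while True:
    # MySQL equivalent: cursor.execute("delete from a where xxid='xxx' limit 500")
    cur = conn.execute(
        "delete from a where rowid in "
        "(select rowid from a where xxid = 'xxx' limit 500)")
    if cur.rowcount == 0:      # nothing left to delete: stop
        break
    total += cur.rowcount      # accumulate affected rows
conn.commit()

print(total)  # -> 1000
```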

So here is the question: what do we do if we are not allowed to create an index?

2.2 xxid has no index

The table already occupies a sizeable 370 GB, and adding an index would take even more space. Without an index we cannot simply delete with where xxid='xxx' as in the first approach: every such delete would scan the whole table, which is far slower and could trigger an incident.

Instead we divide and conquer: split the table into segments by primary key, say 1000–2000 ids per segment (if the primary keys are not contiguous, a segment will actually contain fewer rows than the nominal size). Then delete the target rows within each segment; such a delete is far more efficient than one that filters on xxid='xxx' alone.

select min(a.id) min_id, max(a.id) max_id from (select id from a where id > {init_id} order by id limit 1000) a;
delete from a where xxid='xxx' and id >= min_id and id <= max_id;
init_id = max_id
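The three steps above can be demonstrated end to end. Again a small SQLite stand-in rather than MySQL, with the ids deliberately non-contiguous to show that a 1000-id window may contain fewer than 1000 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table a (id integer primary key, xxid text)")
# Non-contiguous ids: only every third id exists (1, 4, 7, ..., 2998).
conn.executemany("insert into a values (?, ?)",
                 [(i, 'xxx' if i % 2 else 'keep') for i in range(1, 3000, 3)])

init_id, deleted = 0, 0
while True:
    # Step 1: find the id boundaries of the next window of up to 1000 rows.
    min_id, max_id = conn.execute(
        "select min(t.id), max(t.id) from "
        "(select id from a where id > ? order by id limit 1000) t",
        (init_id,)).fetchone()
    if min_id is None:          # no ids left above init_id: done
        break
    # Step 2: delete the target rows inside this primary-key range only.
    cur = conn.execute(
        "delete from a where xxid = 'xxx' and id >= ? and id <= ?",
        (min_id, max_id))
    deleted += cur.rowcount
    init_id = max_id            # Step 3: the next window starts after this one

print(deleted)  # -> 500
```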

The full script:

import os
import sys
import time
import traceback

import pymysql

IP = '127.0.0.1'      # connection info: fill in your own
PORT = 3306
USER = 'user'
FILE_NAME = '/tmp/del_aid.id'


def get_conn():
    """Create a short-lived connection; autocommit makes every delete its own transaction."""
    return pymysql.connect(host=IP, port=int(PORT), user=USER,
                           read_timeout=5, write_timeout=5,
                           charset='utf8', database='test', autocommit=True)


def get_current_max_id():
    """Get the current max id of the table."""
    get_max_id = "select max(a.id) max_id from a"
    mydb = None
    try:
        mydb = get_conn()
        cursor = mydb.cursor(pymysql.cursors.DictCursor)
        cursor.execute(get_max_id)
        data = cursor.fetchall()
    except Exception:
        print(traceback.format_exc())
        sys.exit(1)
    finally:
        if mydb:
            mydb.close()
    print("we get max id of table : %s" % data[0]['max_id'])
    return data[0]['max_id']


def get_min_max_id(min_id):
    """Return the id boundaries of the next batch of up to 2000 rows after min_id."""
    get_ids = ("select min(a.id) min_id, max(a.id) max_id from "
               "(select id from a where id > {init_id} order by id limit 2000) a"
               ).format(init_id=min_id)
    mydb = None
    try:
        mydb = get_conn()
        cursor = mydb.cursor(pymysql.cursors.DictCursor)
        cursor.execute(get_ids)
        data = cursor.fetchall()
    except Exception:
        print(traceback.format_exc())
        sys.exit(1)
    finally:
        if mydb:
            mydb.close()
    return data[0]['min_id'], data[0]['max_id']


def del_tokens(min_id, max_id):
    """Delete the target rows inside one primary-key range; return affected rows."""
    del_token = "delete from a where xxid = 'xxx' and id >= %s and id <= %s"
    mydb = None
    try:
        mydb = get_conn()
        cursor = mydb.cursor(pymysql.cursors.DictCursor)
        rows = cursor.execute(del_token, (min_id, max_id))
    except Exception:
        print(traceback.format_exc())
        sys.exit(1)
    finally:
        if mydb:
            mydb.close()
    return rows


def get_last_del_id(file_name):
    """Read the id where the previous run stopped, so the script can resume."""
    if not os.path.exists(file_name):
        print("{file} does not exist, exit.".format(file=file_name))
        sys.exit(-1)
    with open(file_name, 'r') as fh:
        del_id = fh.readline().strip()
    if not del_id.isdigit():
        print("it is '{delid}', not a num, exit".format(delid=del_id))
        sys.exit(-1)
    return int(del_id)


def main():
    rows_deleted = 0
    maxid = get_current_max_id()
    init_id = get_last_del_id(FILE_NAME)
    while True:
        min_id, max_id = get_min_max_id(init_id)
        if min_id is None or min_id > maxid:
            # no qualifying ids left between init_id and the starting max: done
            print("delete end at : {end_id}".format(end_id=init_id))
            break
        rows = del_tokens(int(min_id), int(max_id))
        init_id = max_id
        rows_deleted += rows
        # checkpoint after every batch so an interrupted run can resume
        with open(FILE_NAME, 'w') as f:
            f.write(str(init_id))
        print("delete at %d, and we have deleted %d rows" % (max_id, rows_deleted))
        time.sleep(0.3)   # throttle: controls how fast rows are deleted


if __name__ == '__main__':
    main()

The script records the id where the previous batch stopped and uses it as init_id for the next run. Before the first run you must initialize /tmp/del_aid.id by hand, e.g. write 0 or the smallest qualifying primary key id.
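Seeding the checkpoint file is a one-liner. Writing 0 works as long as the primary keys are positive, since the first window then starts from the smallest id in the table:

```python
# Seed the checkpoint before the first run; the script resumes from this id.
with open('/tmp/del_aid.id', 'w') as f:
    f.write('0')
```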

2.3 How to delete even faster

This part is left as an exercise; you may ignore replica lag. If you have a good approach, please share it.

Recommended Reading

Tools | Explain Usage and Analysis

What factors cause slow queries?

-The End-
