For one city's taxi data, a single day already holds 33,210,000 records. How do you pull each vehicle's records out into a dedicated file of its own?

The idea is simple: loop over all 33,210,000 records and copy each one into the file belonging to its vehicle.

But looping over 30-odd million records one by one is far too slow. It took me 2 hours to move 600,000 records, so the full 33 million would take about 100 hours, i.e. 4 to 5 days, and the machine would have to stay on without a single hiccup the whole time.
So we need a trick: running the for loop in parallel.

A CSV with 30 million rows won't even open, so I first used a file-splitting tool to cut it into 53 CSVs of 600,000 rows each.

My original plan was to read the folder, build a list of those 600,000-row CSVs, and process them one by one, which in essence is still 33,210,000 iterations. A parallel for loop instead works on several of the 600,000-row files at the same time, shrinking the total time by roughly the number of workers.
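The per-chunk approach described above can be sketched roughly as follows. The directory names and the assumption that each row's first column is the vehicle ID are hypothetical; adapt them to the real data:

```python
# A minimal sketch of the chunked parallel approach, assuming each row's
# first column is the vehicle ID. CHUNK_DIR and OUT_DIR are hypothetical.
import csv
import glob
import os
from collections import defaultdict
from multiprocessing.dummy import Pool as ThreadPool

CHUNK_DIR = "chunks"    # the 53 split CSVs, ~600k rows each
OUT_DIR = "by_vehicle"  # one output file per vehicle

def process_chunk(chunk_path, out_dir=OUT_DIR):
    # Group the chunk's rows by vehicle ID in memory first, then append
    # each group to its vehicle's file in a single write.
    groups = defaultdict(list)
    with open(chunk_path, newline="") as f:
        for row in csv.reader(f):
            groups[row[0]].append(row)  # assumes column 0 is the vehicle ID
    for vehicle_id, rows in groups.items():
        out_path = os.path.join(out_dir, f"{vehicle_id}.csv")
        with open(out_path, "a", newline="") as f:
            csv.writer(f).writerows(rows)

if __name__ == "__main__":
    os.makedirs(OUT_DIR, exist_ok=True)
    pool = ThreadPool()  # defaults to one worker per CPU core
    pool.map(process_chunk, glob.glob(os.path.join(CHUNK_DIR, "*.csv")))
    pool.close()
    pool.join()
```

One caveat: if the same vehicle appears in several chunks, two workers can append to its file at the same time and interleave their writes; a per-file lock, or writing each chunk's output to its own directory and merging afterwards, avoids that.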
The parallel for loop was inspired by the approach below.
My old code looked like this:

words = ["apple", "banana", "cake", "dumpling"]
for word in words:
    print(word)
The parallel version looks like this:

from multiprocessing.dummy import Pool as ThreadPool

items = list()
pool = ThreadPool()
pool.map(process, items)
pool.close()
pool.join()

Here, process is the function that handles a single item.
A complete example:

# -*- coding: utf-8 -*-
import time
from multiprocessing.dummy import Pool as ThreadPool

def process(item):
    print("inside the parallel for loop")
    print(item)
    time.sleep(5)

items = ["apple", "banana", "cake", "dumpling"]
pool = ThreadPool()
pool.map(process, items)
pool.close()
pool.join()
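Despite its name, multiprocessing.dummy provides a pool of threads behind the Pool API, which suits I/O-bound work like the file shuffling above. For CPU-bound loops the same four-line pattern can run on real processes by changing one import; a minimal sketch (the square function is just a placeholder workload):

```python
# Same Pool API, but with real worker processes instead of threads,
# which helps when the per-item work is CPU-bound.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":  # guard is required for process pools on Windows/macOS
    with Pool(4) as pool:   # 4 worker processes (an arbitrary choice)
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that functions passed to a process pool must be defined at module top level so they can be pickled and sent to the workers.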
Bonus: speeding up a Python 3 program by replacing for loops with threads.

The code before and after the optimization:
from git_tools.git_tool import get_collect_projects, QQNews_Git
from threading import Thread, Lock
import datetime

base_url = "http://git.xx.com"
project_members_commits_lang_info = {}
lock = Lock()
threads = []

"""
Author: zenkilan
"""

def count_time(func):
    def took_up_time(*args, **kwargs):
        start_time = datetime.datetime.now()
        ret = func(*args, **kwargs)
        end_time = datetime.datetime.now()
        took_up_time = (end_time - start_time).total_seconds()
        print(f"{func.__name__} execution took up time:{took_up_time}")
        return ret
    return took_up_time

def get_project_member_lang_code_lines(git, member, begin_date, end_date):
    global project_members_commits_lang_info
    global lock
    member_name = member["username"]
    r = git.get_user_info(member_name)
    if not r["id"]:
        return
    user_commits_lang_info = git.get_commits_user_lang_diff_between(r["id"], begin_date, end_date)
    if len(user_commits_lang_info) == 0:
        return
    lock.acquire()
    project_members_commits_lang_info.setdefault(git.project, dict())
    project_members_commits_lang_info[git.project][member_name] = user_commits_lang_info
    lock.release()

def get_project_lang_code_lines(project, begin_date, end_date):
    global threads
    git = QQNews_Git(project[1], base_url, project[0])
    project_members = git.get_project_members()
    if len(project_members) == 0:
        return
    for member in project_members:
        thread = Thread(target=get_project_member_lang_code_lines, args=(git, member, begin_date, end_date))
        threads.append(thread)
        thread.start()

@count_time
def get_projects_lang_code_lines(begin_date, end_date):
    """
    Per-project language line-count statistics: new approach (faster).
    Uses threads instead of a for loop, with the threads concurrently
    accessing a shared external resource.
    :return:
    """
    global project_members_commits_lang_info
    global threads
    for project in get_collect_projects():
        thread = Thread(target=get_project_lang_code_lines, args=(project, begin_date, end_date))
        threads.append(thread)
        thread.start()

@count_time
def get_projects_lang_code_lines_old(begin_date, end_date):
    """
    Per-project language line-count statistics: old approach (very slow).
    The most straightforward implementation, with two nested for loops
    and a slow operation at each level.
    :return:
    """
    project_members_commits_lang_info = {}
    for project in get_collect_projects():
        git = QQNews_Git(project[1], base_url, project[0])
        project_members = git.get_project_members()
        user_commits_lang_info_dict = {}
        if len(project_members) == 0:
            continue
        for member in project_members:
            member_name = member["username"]
            r = git.get_user_info(member_name, debug=False)
            if not r["id"]:
                continue
            try:
                user_commits_lang_info = git.get_commits_user_lang_diff_between(r["id"], begin_date, end_date)
                if len(user_commits_lang_info) == 0:
                    continue
                user_commits_lang_info_dict[member_name] = user_commits_lang_info
                project_members_commits_lang_info[git.project] = user_commits_lang_info_dict
            except Exception:
                pass
    return project_members_commits_lang_info

def test_results_equal(resultA, resultB):
    """
    Sanity check that both approaches produce the same result.
    :param resultA:
    :param resultB:
    :return:
    """
    print(resultA)
    print(resultB)
    assert len(str(resultA)) == len(str(resultB))

if __name__ == "__main__":
    from git_tools.config import begin_date, end_date
    get_projects_lang_code_lines(begin_date, end_date)
    for t in threads:
        t.join()
    old_result = get_projects_lang_code_lines_old(begin_date, end_date)
    test_results_equal(old_result, project_members_commits_lang_info)
In the old version, both the outer and the inner for loop contain a slow operation:

1) git.get_project_members()
2) git.get_user_info(member_name, debug=False)

The optimization therefore happens in two steps (inner loop first or outer loop first, either order works): replace each for loop with one thread per iteration, let the threads access the shared external resource concurrently, and take a lock around writes to the shared dict to avoid conflicts.
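Stripped of the Git-specific calls, the lock-around-a-shared-dict pattern the new code relies on looks like this (the worker and its data are hypothetical; only the locking pattern matters):

```python
# Minimal sketch: many threads writing to one dict, serialized by a Lock.
from threading import Thread, Lock

results = {}
lock = Lock()

def worker(key, value):
    with lock:  # "with" acquires the lock and releases it even on errors
        results[key] = value * 2

threads = [Thread(target=worker, args=(k, v)) for k, v in [("a", 1), ("b", 2)]]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all workers before reading results
print(results)
```

Joining every thread before reading the dict matters as much as the lock: without the joins, the main thread may read the dict before the workers have finished filling it.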
The equality test passes, and the timing decorator reports (in seconds):

get_projects_lang_code_lines execution took up time:1.85294
get_projects_lang_code_lines_old execution took up time:108.604177

That is a speedup of roughly 58x.
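As an aside, the same fan-out pattern can be written with the standard library's concurrent.futures, which hands each call's result back directly, so no shared dict, lock, or manual thread list is needed. A sketch, where fetch_user_info is a hypothetical stand-in for a slow call like git.get_user_info:

```python
# ThreadPoolExecutor.map fans calls out to worker threads and collects
# the return values, in input order, without any shared mutable state.
from concurrent.futures import ThreadPoolExecutor

def fetch_user_info(name):
    # hypothetical stand-in for a slow network call
    return {"username": name, "id": len(name)}

names = ["alice", "bob", "carol"]
with ThreadPoolExecutor(max_workers=8) as executor:
    results = list(executor.map(fetch_user_info, names))  # order matches names
print(results)
```

Because results come back as return values, this style sidesteps the write-conflict problem entirely rather than locking around it.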