A Safety-Helmet Detection System Based on TensorFlow + YOLOv3 [Beginner-Friendly]

2022-08-26 15:52:22


I recently built a new project that detects whether the people in an image or video are wearing safety helmets and displays the results on a website. It uses TensorFlow + YOLOv3, with Django as the backend framework. My machine has an AMD 4600 CPU and a 1650 graphics card.

Update, 2022-03-27:

Finished testing with an external camera; everything works correctly.

Update, 2022-03-20:

1. Rewrote the backend with Flask.

2. Video-recognition results are now streamed back to the web page instead of being shown in a traditional cv window.

Known issue: memory builds up. After the client stops the request, testing shows cv2 keeps reading video data for a while; starting another video or camera recognition during that window can cause a memory leak. Not solved yet; optimization is ongoing. One mitigation is sketched below.
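
A common way to handle this in Flask, shown here only as a minimal sketch rather than this project's actual code (the route name, frame source, and detect_frame hook are placeholders), is to stream MJPEG over multipart/x-mixed-replace and release the capture in a finally block; Werkzeug closes the generator when the client disconnects, so the finally clause runs and the capture stops reading:

import cv2
from flask import Flask, Response

app = Flask(__name__)

def gen_frames(source):
    vid = cv2.VideoCapture(source)
    try:
        while vid.isOpened():
            ok, frame = vid.read()
            if not ok:
                break
            # frame = detect_frame(frame)  # hypothetical hook for the detector
            ok, buf = cv2.imencode('.jpg', frame)
            if not ok:
                continue
            # Each multipart chunk replaces the previous frame in the browser
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + buf.tobytes() + b'\r\n')
    finally:
        vid.release()  # runs on normal exit and when the client disconnects

@app.route('/video_feed')
def video_feed():
    return Response(gen_frames(0),
                    mimetype='multipart/x-mixed-replace; boundary=frame')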

Update, 2022-03-14:

Optimized the backend recognition code; recognition is now faster.

Video demo: tensorflow yolo safety-helmet detection, optimized version (Bilibili):

https://www.bilibili.com/video/BV1Aq4y1q7Hk?p=2


First, the standard login and registration:

from django.shortcuts import render
from django.contrib.auth import authenticate, login

from .forms import LoginForm   # project form class (import path assumed)
from utils import restful      # project JSON-response helpers (import path assumed)


def my_login(request):
    if request.method == "GET":
        return render(request, 'auth/auth.html')
    else:
        form = LoginForm(request.POST)
        if form.is_valid():
            username = form.cleaned_data.get('username')
            password = form.cleaned_data.get('pwd')
            next = form.cleaned_data.get("next")
            # "next" arrives as a query string such as "next=/index/";
            # keep only the part after "=" as the redirect target
            if next:
                next_url = next.split("=")[1]
            else:
                next_url = ""
            user = authenticate(request, username=username, password=password)
            if user:
                login(request, user)
                # None falls back to the global SESSION_COOKIE_AGE setting
                request.session.set_expiry(None)
                data = {
                    "next_url": next_url
                }
                return restful.result(data=data)
            else:
                return restful.noauth(message="Incorrect username or password!")
        else:
            print(form.get_error())
            return restful.paramserror(form.get_error())
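
restful here is the project's own JSON-response helper module rather than a third-party library. A minimal sketch of what it plausibly looks like, assuming each helper wraps Django's JsonResponse in a code/message/data envelope (the exact field names are guesses):

from django.http import JsonResponse

def _json(code, message='', data=None):
    # Uniform envelope so the frontend can branch on "code"
    return JsonResponse({'code': code, 'message': message, 'data': data or {}})

def result(data=None, message=''):
    return _json(200, message, data)

def noauth(message=''):
    return _json(403, message)

def paramserror(message=''):
    return _json(400, message)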

There are plenty of ways to handle login and registration these days; this one is conventional, username plus password, and deliberately kept simple.
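
The LoginForm the view validates against is equally plain. A sketch of what it might contain, matching the field names the view reads; get_error() is an assumed convenience method that flattens form.errors into a single message:

from django import forms

class LoginForm(forms.Form):
    # Field names match what the view reads via cleaned_data
    username = forms.CharField(max_length=150)
    pwd = forms.CharField(max_length=128)
    next = forms.CharField(required=False)

    def get_error(self):
        # Assumed helper: return the first validation message as a string
        if self.errors:
            return list(self.errors.values())[0][0]
        return ''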

Next comes the main feature page.

Part of the detection code:

        frame_id = 0
        # fps, no_hat, tracker, encoder, nms_max_overlap, T, etc. are set up
        # earlier in the full function; only the main loop is shown here.
        while vid.isOpened():
            ret, image = vid.read()
            if not ret:  # end of stream or a failed read
                break
            frame_id += 1
            t1 = time.time()
            image_h, image_w, _ = image.shape
            bbox_thick = int(1.5 * (image_h + image_w) / 600)

            # YOLOv3 inference: person boxes, helmet boxes, and all raw boxes
            person_box, helmet_box, bboxes = T.detect_image(image)
            person_box = utils.xyxy2xywh(person_box)
            # Appearance features for Deep SORT re-identification
            features = encoder(image, person_box)
            detections = [Detection(bbox, 1.0, feature) for bbox, feature in zip(person_box, features)]
            boxes = np.array([d.tlwh for d in detections])
            scores = np.array([d.confidence for d in detections])
            indices = preprocessing.non_max_suppression(boxes, nms_max_overlap, scores)
            detections = [detections[i] for i in indices]
            # Pair every surviving person detection with a helmet box, if any
            for detection in detections:
                detection.match_helmet(helmet_box)

            tracker.predict()
            tracker.update(detections, frame_id)
            for track in tracker.tracks:
                if not track.is_confirmed() or track.time_since_update > 1:
                    continue
                bbox = track.to_tlbr()
                cv2.rectangle(image, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), track.color,
                              bbox_thick)
                cv2.putText(image, str(track.track_id), (int(bbox[0]), int(bbox[1])), 0, 0.0015 * image_h, (0, 255, 0),
                            bbox_thick // 2)
                if track.is_no_helmet:
                    cv2.putText(image, 'no helmet', (int(bbox[0] + bbox[2]) // 2, int(bbox[1])), 0,
                                0.0012 * image_h, (0, 0, 255), bbox_thick // 2)
                    no_hat += 1
                elif track.helmet is not None:
                    bbox = track.helmet
                    cv2.rectangle(image, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), track.color,
                                  bbox_thick)
                    cv2.putText(image, 'Helmet', (int(bbox[0] + bbox[2]) // 2, int(bbox[1])), 0,
                                0.0012 * image_h, (0, 0, 255), bbox_thick // 2)

                # Draw the track's motion trail between successive path points
                for i, (x, y) in enumerate(track.tracker_path[1:]):
                    pre_x, pre_y = track.tracker_path[i]
                    cv2.line(image, (int(pre_x), int(pre_y)), (int(x), int(y)), track.color, bbox_thick)
            cv2.imshow('VideoShow---Press q to exit!', image)

            # Smooth the FPS estimate by averaging with the previous value
            fps = (fps + (1. / (time.time() - t1))) / 2
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        vid.release()
        cv2.destroyAllWindows()
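
The helmet/person pairing happens in Detection.match_helmet, whose implementation isn't shown above. A plausible minimal version, assuming helmet boxes come back in xyxy format and that a person counts as wearing a helmet when a helmet's center falls in the top portion of the person box (the 0.3 head-region ratio is an arbitrary illustrative choice):

def match_helmet(self, helmet_boxes, head_ratio=0.3):
    # self.tlwh is the person box as (top-left x, top-left y, width, height)
    x, y, w, h = self.tlwh
    self.helmet = None
    self.is_no_helmet = True
    for hx1, hy1, hx2, hy2 in helmet_boxes:
        cx = (hx1 + hx2) / 2
        cy = (hy1 + hy2) / 2
        # Helmet center inside the upper head region of the person box?
        if x <= cx <= x + w and y <= cy <= y + head_ratio * h:
            self.helmet = (hx1, hy1, hx2, hy2)
            self.is_no_helmet = False
            break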

As the screenshot above shows, both image and video recognition are supported.

[Screenshot: recognition in progress]

The recognition code uses TensorFlow + OpenCV, and the results are decent.

[Screenshot: detection results]

The complete code for this project has been uploaded to 面包多 (Mianbaoduo); if you need it, you can download it there. Link:
