Preface
Google recently updated the detection_model_zoo in the Tensorflow Object-Detection API. The new models are very much state of the art, with performance at the leading edge of the field, as shown in the figure below:
Quoting Google's official descriptions of these models:
- MobileDets outperform MobileNetV3 SSDLite by 1.7 mAP at comparable mobile CPU inference latencies. MobileDets also outperform MobileNetV2 SSDLite by 1.9 mAP on mobile CPUs.
- MnasFPN with MobileNet-V2 backbone is the most accurate (26.6 mAP at 183ms on Pixel 1) mobile detection model we have released to date. With depth-multiplier, MnasFPN with MobileNet-V2 backbone is 1.8 mAP higher than MobileNet-V3-Large with SSDLite (23.8 mAP vs 22.0 mAP) at similar latency (120ms) on Pixel 1.
- SSDLite with MobileNet-V3-Large backbone, which is 27% faster than Mobilenet V2 SSDLite (119ms vs 162ms) on a Google Pixel phone CPU at the same mAP.
- SSDLite with MobileNet-V3-Small backbone, which is 37% faster than MnasNetSSDLite reduced with depth-multiplier (43ms vs 68ms) at the same mAP.
RK3399 and MNN 1.0 Environment
My RK3399 runs a desktop build of Xubuntu that already ships with OpenCV 4.0, so there is no need to install OpenCV separately. MNN can be compiled natively on the RK3399 board; no cross-compilation is required. The build consists of three main parts: model conversion, model inference, and model training. There are plenty of build tutorials online, so I will not repeat them here.
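If you also plan to drive MNN from Python on the board, a quick import check like the sketch below confirms that the system OpenCV and the MNN runtime are reachable from Python. This assumes the MNN Python bindings (pymnn) have been built or pip-installed in addition to the native library, which is separate from the three build steps above.

```python
# Quick sanity check on the RK3399 board itself (a minimal sketch; assumes the
# MNN Python bindings are installed alongside the native MNN build).
import cv2
import MNN

# The stock Xubuntu image ships OpenCV 4.0, so this should print a 4.x version.
print("OpenCV:", cv2.__version__)

# If this import succeeds and Interpreter is present, MNN is usable from Python.
print("MNN Interpreter available:", hasattr(MNN, "Interpreter"))
```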
Tensorflow Object-Detection API Environment
To use these latest models, you need to update both the detection_model_zoo repository and your TensorFlow version. Just grab the latest detection_model_zoo; I downloaded mine on 2020-07-04. My setup is tensorflow_gpu 1.15, Python 3.5, CUDA 10.0, and cuDNN 7.6; beyond that, install whatever else turns out to be missing.
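As a quick sanity check of this environment, something like the following confirms the TensorFlow version and GPU visibility (a minimal sketch; the expected version numbers are simply the ones listed above):

```python
import tensorflow as tf

# Expecting the 1.x branch (1.15 here); the Object Detection API setup used in
# this post is not compatible with TF 2.x out of the box.
print("TensorFlow:", tf.__version__)

# Should report True if CUDA 10.0 / cuDNN 7.6 are installed correctly.
print("GPU available:", tf.test.is_gpu_available())
```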
Model Export and Deployment
For exporting and deploying the TensorFlow model, I followed this Zhihu article: https://zhuanlan.zhihu.com/p/70610865. Following that article, you end up with the .mnn model that the MNN framework needs for deployment. The performance I measured on the RK3399 is shown below (no quantization was applied; these numbers only reflect my own tests). One caveat: the board heats up very easily while the model is running, which lengthens the inference time.
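For reference, below is a minimal timing/inference sketch using MNN's Python bindings on the board. The model filename, the 300×300 input size, the [-1, 1] normalization, and the single-output handling are all assumptions for a typical SSD-style .mnn model; the actual input and output layout depends on how the graph was exported and converted, so adjust accordingly.

```python
import time

import cv2
import numpy as np
import MNN

# Hypothetical paths and sizes -- adjust to your own exported model.
MODEL_PATH = "ssdlite_mobilenet_v3_large.mnn"
IMAGE_PATH = "test.jpg"
INPUT_SIZE = 300  # typical SSDLite input resolution (assumption)

interpreter = MNN.Interpreter(MODEL_PATH)
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Preprocess: BGR -> RGB, resize, normalize to [-1, 1] (assumed preprocessing),
# then HWC -> CHW to match the Caffe-style tensor layout used below.
image = cv2.imread(IMAGE_PATH)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (INPUT_SIZE, INPUT_SIZE)).astype(np.float32)
image = (image - 127.5) / 127.5
image = image.transpose((2, 0, 1))

tmp_input = MNN.Tensor((1, 3, INPUT_SIZE, INPUT_SIZE), MNN.Halide_Type_Float,
                       image, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp_input)

# Time a single forward pass; on RK3399 this number creeps up as the board heats.
start = time.time()
interpreter.runSession(session)
print("inference time: %.1f ms" % ((time.time() - start) * 1000.0))

# Copy the (default) output back to the host and inspect its shape; decoding
# boxes/scores depends on how the detection post-processing was exported.
output_tensor = interpreter.getSessionOutput(session)
shape = output_tensor.getShape()
tmp_output = MNN.Tensor(shape, MNN.Halide_Type_Float,
                        np.zeros(shape, dtype=np.float32),
                        MNN.Tensor_DimensionType_Caffe)
output_tensor.copyToHostTensor(tmp_output)
print("raw output shape:", tmp_output.getShape())
```

Running the forward pass several times in a row makes the thermal effect mentioned above easy to observe: the reported latency gradually increases as the board warms up.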
Finally, here are a few detection results from testing ssd_mobilenet_v3_large: