From simple to complex, and from complex back to simple. It's enough to drive you mad...
- LeNet: Gradient-Based Learning Applied to Document Recognition
- AlexNet: ImageNet Classification with Deep Convolutional Neural Networks
- ZFNet: Visualizing and Understanding Convolutional Networks
- VGGNet: Very Deep Convolutional Networks for Large-Scale Image Recognition
- NiN: Network in Network
- GoogLeNet: Going Deeper with Convolutions
- Inception-v3: Rethinking the Inception Architecture for Computer Vision
- ResNet: Deep Residual Learning for Image Recognition
- Stochastic_Depth: Deep Networks with Stochastic Depth
- WResNet: Weighted Residuals for Very Deep Networks
- Inception-ResNet: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- FractalNet: Ultra-Deep Neural Networks without Residuals
- WRN: Wide Residual Networks
- ResNeXt: Aggregated Residual Transformations for Deep Neural Networks
- DenseNet: Densely Connected Convolutional Networks
- PyramidNet: Deep Pyramidal Residual Networks
- DPN: Dual Path Networks
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
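Most of the architectures from ResNet onward in the list above revolve around the residual (shortcut) connection, y = F(x) + x. As a quick illustration only, here is a minimal PyTorch-style sketch of a basic residual block; the class name, layer sizes, and the shape-check values are illustrative assumptions, not details taken from any of the papers listed.

```python
import torch
import torch.nn as nn


class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut: y = F(x) + x."""

    def __init__(self, channels: int):
        super().__init__()
        # Layer sizes here are illustrative assumptions, not values from any paper.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x                      # identity shortcut
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + shortcut)  # add the shortcut before the final ReLU


# Quick shape check on a dummy batch.
block = BasicResidualBlock(channels=64)
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```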
Original article; when reposting, please credit URl-team.
Permalink: A one-line overview of CNN architectures: from LeNet to ShuffleNet