YOLOv5 Improvements --- Attention Mechanism: SKAttention, an Upgraded Version of SENet


Abstract: SKAttention boosts YOLOv5 as a plug-and-play module, with performance superior to SENet.

1. SKAttention

Paper: https://arxiv.org/pdf/1903.06586.pdf

Stacking multiple SK blocks yields SKNet, a name chosen partly in homage to SENet.

SKNet achieves state-of-the-art results on both the ImageNet and CIFAR datasets.

Detailed experimental analysis shows that the neurons in SKNet can capture target objects at different scales, verifying that the neurons adaptively adjust their receptive-field size according to the input.

The method consists of three parts: Split, Fuse, and Select. Split is a multi-branch operation that convolves the input with kernels of different sizes to produce different features. Fuse uses an SE-style structure to compute a channel-attention vector for each branch (N kernels yield N attention vectors, and this step shares its parameters across all branch features), so each kernel's output gets its own SE-derived weighting. Select then normalizes these attention vectors across branches with a softmax, weights each branch's features accordingly, and sums them, as sketched below.
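To make the data flow concrete before the code, here is a rough shape walkthrough of the three stages (my own sketch, matching the implementation in 1.1) for an input of shape (bs, C, H, W) and K branch kernels:

# Split : K parallel convs (kernel sizes 1,3,5,7,...)  -> K feature maps, each (bs, C, H, W)
# Fuse  : element-wise sum of the branches             -> U (bs, C, H, W)
#         global average pool                          -> S (bs, C)
#         FC reduction                                 -> Z (bs, d), with d = max(C // reduction, L)
# Select: one FC per branch applied to Z               -> K weight vectors (bs, C), softmax across the K branches
#         weighted sum of the K branch feature maps    -> output (bs, C, H, W)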

1.1 Add to common.py

###################### SKAttention     ####     start   by  AI&CV  ###############################

# torch and nn are already imported at the top of YOLOv5's common.py;
# they are repeated here only so the snippet is self-contained.
import torch
import torch.nn as nn
from torch.nn import init
from collections import OrderedDict


class SKAttention(nn.Module):

    def __init__(self, channel=512, kernels=[1, 3, 5, 7], reduction=16, group=1, L=32):
        super().__init__()
        self.d = max(L, channel // reduction)  # bottleneck dimension d = max(C / reduction, L)
        self.convs = nn.ModuleList([])
        for k in kernels:
            self.convs.append(
                nn.Sequential(OrderedDict([
                    ('conv', nn.Conv2d(channel, channel, kernel_size=k, padding=k // 2, groups=group)),
                    ('bn', nn.BatchNorm2d(channel)),
                    ('relu', nn.ReLU())
                ]))
            )
        self.fc = nn.Linear(channel, self.d)  # squeeze: channel -> d
        self.fcs = nn.ModuleList([])
        for i in range(len(kernels)):
            self.fcs.append(nn.Linear(self.d, channel))  # one excitation FC per branch: d -> channel
        self.softmax = nn.Softmax(dim=0)  # normalize attention across the k branches (stacked on dim 0)

    def forward(self, x):
        bs, c, _, _ = x.size()
        conv_outs = []
        ### split
        for conv in self.convs:
            conv_outs.append(conv(x))
        feats = torch.stack(conv_outs, 0)  # k,bs,channel,h,w

        ### fuse
        U = sum(conv_outs)  # bs,c,h,w

        ### reduction channel
        S = U.mean(-1).mean(-1)  # bs,c
        Z = self.fc(S)  # bs,d

        ### calculate attention weight
        weights = []
        for fc in self.fcs:
            weight = fc(Z)
            weights.append(weight.view(bs, c, 1, 1))  # bs,channel
        attention_weights = torch.stack(weights, 0)  # k,bs,channel,1,1
        attention_weights = self.softmax(attention_weights)  # softmax across the k branches

        ### fuse
        V = (attention_weights * feats).sum(0)
        return V

###################### SKAttention     ####     end   by  AI&CV  ###############################
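A quick sanity check (my own usage sketch, not part of the original post): feed a dummy feature map through the module and confirm the output shape matches the input.

if __name__ == '__main__':
    x = torch.randn(2, 512, 20, 20)            # dummy feature map: bs=2, c=512, 20x20
    sk = SKAttention(channel=512, reduction=16)
    y = sk(x)
    print(y.shape)                             # expected: torch.Size([2, 512, 20, 20])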

1.2 Add to yolo.py:

        # add inside parse_model(), alongside the existing module elif branches
        elif m is SKAttention:
            c1, c2 = ch[f], args[0]
            if c2 != nc:
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, *args[1:]]
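Here args[0] in the model yaml supplies the nominal output channel count (width-scaled through make_divisible), and args is then rebuilt so that SKAttention is constructed with the actual input channel count c1. Make sure names such as width, max_channels and nc match whatever your local parse_model defines (stock YOLOv5, for instance, names the width multiple gw). A yaml entry that this branch would parse could look like the line below; the channel value is only an illustration:

[-1, 1, SKAttention, [1024]]  # [from, number, module, args]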

1.3 yolov5s_SKAttention.yaml

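The yaml below is a minimal sketch of what yolov5s_SKAttention.yaml could look like, assuming SKAttention is placed after SPPF at the end of the backbone; the placement, the channel argument and the shifted head indices are illustrative assumptions, not the original post's exact configuration:

# YOLOv5 yolov5s with SKAttention (illustrative sketch)

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]       # P3/8
  - [30,61, 62,45, 59,119]      # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],   # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],     # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],     # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],     # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],    # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],       # 9
   [-1, 1, SKAttention, [1024]],   # 10 (assumed placement)
  ]

# head (layer indices shifted by 1 because of the extra backbone layer)
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],                 # cat backbone P4
   [-1, 3, C3, [512, False]],                 # 14

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],                 # cat backbone P3
   [-1, 3, C3, [256, False]],                 # 18 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],                # cat head P4
   [-1, 3, C3, [512, False]],                 # 21 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],                # cat head P5
   [-1, 3, C3, [1024, False]],                # 24 (P5/32-large)

   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]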