Improving YOLOv8 with the Lightweight EfficientViT: A New Real-Time Network Architecture Built on Cascaded Group Attention, Reshaping the Future of Real-Time Object Detection

Published: December 29, 2023

Contents

YOLOv8 Navigation

YOLOv8 (with links to detailed guides for various tasks)

Introduction

Background

Cascaded Group Attention Module

EfficientViT Resources

YOLOv8 Resources

EfficientViT-Improved YOLOv8 Implementation

EfficientViT Code and Overview

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Ready-to-Run YOLOv8-EfficientViT Project Source Code

Summary


YOLOv8 Navigation

If you would like to learn about other YOLOv8 tasks and related topics, follow the link below. I have collected explanatory posts for many other tasks there and will keep it updated, covering YOLOv8 model optimization, SAM, and more.

YOLOv8 (with links to detailed guides for various tasks)

Introduction

As deep learning continues to advance, real-time object detection is evolving rapidly. In this post we take a close look at a new real-time network architecture that combines the lightweight EfficientViT backbone, built around a cascaded group attention module, with YOLOv8, and examine how it improves detection efficiency and accuracy while keeping latency low.

This post is probably not the best starting point for complete beginners. If your foundations are still shaky, follow the YOLOv8 navigation link above and work through the basics first, then come back to this article.

My posts focus on getting the project working. Below I briefly summarize the relevant background, attach some reference material for further reading, and then concentrate on explaining, as concisely as possible, how to implement this improvement in YOLOv8. (I have also prepared a ready-to-run yolov8-efficientvit project that you can download directly.)

Background

  • YOLOv8 overview: the basic principles and characteristics of YOLOv8, to give readers the necessary background.
  • EfficientViT's lightweight advantage: how EfficientViT's lightweight design helps improve model efficiency and performance.

Cascaded Group Attention Module

  • How it works: the working principle of the cascaded group attention module and how it is combined with EfficientViT and YOLOv8.
  • Performance gains: how this structure improves object detection performance, especially in complex or dynamic scenes.

EfficientViT Resources

Below are some references and resources I have collected, with links, which I hope will provide some help or ideas:

  1. Research paper: "EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction" - describes the design and applications of EfficientViT in detail.
    EfficientViT paper
  2. GitHub repository: mit-han-lab/efficientvit - provides the EfficientViT implementation and usage instructions.
    EfficientViT code repository
  3. Research paper: "EfficientViT: Memory Efficient Vision Transformer with ..." - explores EfficientViT's improvements for efficient vision processing.
    EfficientViT improvement paper
  4. Paper walkthrough: arXiv Vanity - offers a readable rendering of the EfficientViT paper and its figures.
    EfficientViT paper walkthrough
  5. Research paper: "EfficientViT: Enhanced Linear Attention for High ..." - discusses the use of linear attention in EfficientViT.
    EfficientViT application discussion

YOLOv8 Resources

Below are some references and resources I have collected, with links, which I hope will provide some help or ideas:

  1. Research paper: "Real-Time Flying Object Detection with YOLOv8" - applies YOLOv8 to real-time flying-object detection.
    YOLOv8 flying-object detection application
  2. Research paper: "YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition" - discusses YOLOv8 improvements for remote-sensing object detection in detail.
    YOLOv8 object detection improvement
  3. Survey paper: "A Comprehensive Review of YOLO: From YOLOv1 and Beyond" - a comprehensive review from YOLOv1 through YOLOv8.
    YOLOv1 to YOLOv8 survey
  4. Applied research: "Underwater Object Detection in Marine Ranching Based on Improved YOLOv8" - discusses YOLOv8 for underwater object detection.
    YOLOv8 underwater object detection application
  5. Applied research: "YOLOv8-Based Visual Detection of Road Hazards: Potholes ..." - analyzes YOLOv8's effectiveness for road-hazard detection.
    YOLOv8 object detection evaluation

EfficientViT-Improved YOLOv8 Implementation

I will only summarize the code here rather than walk you through every configuration step, since plenty of open-source versions are available online. What I have done instead is prepare a package, with data included, that runs out of the box and can be downloaded directly.

EfficientViT Code and Overview

The code here is split into six parts, each of which I annotate and explain below.

Part 1

import torch
from timm.models.layers import SqueezeExcite
import numpy as np
import itertools

# Publicly accessible names exported by this module
__all__ = ['EfficientViT_M0', 'EfficientViT_M1', 'EfficientViT_M2', 'EfficientViT_M3', 'EfficientViT_M4', 'EfficientViT_M5']

# Sequential module that pairs a convolution layer with batch normalization
# (the resolution argument is accepted for API compatibility but unused)
class Conv2d_BN(torch.nn.Sequential):
    def __init__(self, a, b, ks=1, stride=1, pad=0, dilation=1, groups=1, bn_weight_init=1, resolution=-10000):
        super().__init__()
        # add the 2D convolution layer
        self.add_module('c', torch.nn.Conv2d(a, b, ks, stride, pad, dilation, groups, bias=False))
        # add the batch-normalization layer
        self.add_module('bn', torch.nn.BatchNorm2d(b))
        # initialize the batch-norm scale and bias
        torch.nn.init.constant_(self.bn.weight, bn_weight_init)
        torch.nn.init.constant_(self.bn.bias, 0)

    # Deployment-time conversion: fold the batch norm into the convolution and return a plain Conv2d
    @torch.no_grad()
    def switch_to_deploy(self):
        c, bn = self._modules.values()
        w = bn.weight / (bn.running_var + bn.eps)**0.5
        w = c.weight * w[:, None, None, None]
        b = bn.bias - bn.running_mean * bn.weight / (bn.running_var + bn.eps)**0.5
        m = torch.nn.Conv2d(w.size(1) * self.c.groups, w.size(0), w.shape[2:], stride=self.c.stride, padding=self.c.padding, dilation=self.c.dilation, groups=self.c.groups)
        m.weight.data.copy_(w)
        m.bias.data.copy_(b)
        return m

# Replace batch-normalization layers throughout the model: fuse each Conv2d_BN
# into a single convolution, and turn any remaining standalone BatchNorm2d into
# an identity. Note the fusion hook here must match the method name defined on
# Conv2d_BN (switch_to_deploy), otherwise BN layers get dropped without being
# folded into their convolutions.
def replace_batchnorm(net):
    for child_name, child in net.named_children():
        if hasattr(child, 'switch_to_deploy'):
            setattr(net, child_name, child.switch_to_deploy())
        elif isinstance(child, torch.nn.BatchNorm2d):
            setattr(net, child_name, torch.nn.Identity())
        else:
            replace_batchnorm(child)
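
To see what the fusion buys you, here is a minimal sketch (my addition, not part of the original project) that checks the folded convolution returned by switch_to_deploy against the original Conv2d_BN output in eval mode:

# Sanity check for Conv+BN fusion (assumes the Conv2d_BN class above is in scope)
m = Conv2d_BN(16, 32, ks=3, stride=1, pad=1)
m.eval()  # use BN running statistics so the fusion is exact

x = torch.randn(1, 16, 64, 64)
with torch.no_grad():
    y_ref = m(x)
    fused = m.switch_to_deploy()  # plain nn.Conv2d with BN folded in
    y_fused = fused(x)

print(torch.allclose(y_ref, y_fused, atol=1e-5))  # expected: True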

Part 2

# PatchMerging: reduces the spatial resolution of the input while increasing the channel count
class PatchMerging(torch.nn.Module):
    def __init__(self, dim, out_dim, input_resolution):
        super().__init__()
        hid_dim = int(dim * 4)
        self.conv1 = Conv2d_BN(dim, hid_dim, 1, 1, 0, resolution=input_resolution)
        self.act = torch.nn.ReLU()
        self.conv2 = Conv2d_BN(hid_dim, hid_dim, 3, 2, 1, groups=hid_dim, resolution=input_resolution)
        self.se = SqueezeExcite(hid_dim, .25)
        self.conv3 = Conv2d_BN(hid_dim, out_dim, 1, 1, 0, resolution=input_resolution // 2)

    def forward(self, x):
        x = self.conv3(self.se(self.act(self.conv2(self.act(self.conv1(x))))))
        return x

# Residual: residual-connection wrapper, with optional stochastic-depth-style dropout
class Residual(torch.nn.Module):
    def __init__(self, m, drop=0.):
        super().__init__()
        self.m = m
        self.drop = drop

    def forward(self, x):
        if self.training and self.drop > 0:
            return x + self.m(x) * torch.rand(x.size(0), 1, 1, 1, device=x.device).ge_(self.drop).div(1 - self.drop).detach()
        else:
            return x + self.m(x)

# FFN: feed-forward network made of two pointwise convolutions with a ReLU in between
class FFN(torch.nn.Module):
    def __init__(self, ed, h, resolution):
        super().__init__()
        self.pw1 = Conv2d_BN(ed, h, resolution=resolution)
        self.act = torch.nn.ReLU()
        self.pw2 = Conv2d_BN(h, ed, bn_weight_init=0, resolution=resolution)

    def forward(self, x):
        x = self.pw2(self.act(self.pw1(x)))
        return x

This part defines several key building blocks used by the model: PatchMerging, Residual, and FFN.
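
As a quick illustration (my addition), PatchMerging halves the spatial resolution through its stride-2 depthwise convolution while mapping dim to out_dim:

# Shape check for PatchMerging (assumes the classes above are in scope)
pm = PatchMerging(dim=64, out_dim=128, input_resolution=56)
x = torch.randn(1, 64, 56, 56)
print(pm(x).shape)  # expected: torch.Size([1, 128, 28, 28])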

Part 3

# CascadedGroupAttention: implements the cascaded group attention mechanism
class CascadedGroupAttention(torch.nn.Module):
    def __init__(self, dim, key_dim, num_heads=8, attn_ratio=4, resolution=14, kernels=[5, 5, 5, 5]):
        super().__init__()
        self.num_heads = num_heads
        self.scale = key_dim ** -0.5
        self.key_dim = key_dim
        self.d = int(attn_ratio * key_dim)
        self.attn_ratio = attn_ratio

        # build the per-head query/key/value projections and the depthwise convolutions applied to queries
        qkvs = []
        dws = []
        for i in range(num_heads):
            qkvs.append(Conv2d_BN(dim // num_heads, self.key_dim * 2 + self.d, resolution=resolution))
            dws.append(Conv2d_BN(self.key_dim, self.key_dim, kernels[i], 1, kernels[i]//2, groups=self.key_dim, resolution=resolution))
        self.qkvs = torch.nn.ModuleList(qkvs)
        self.dws = torch.nn.ModuleList(dws)
        self.proj = torch.nn.Sequential(torch.nn.ReLU(), Conv2d_BN(self.d * num_heads, dim, bn_weight_init=0, resolution=resolution))

        # precompute the relative-position attention-bias indices
        points = list(itertools.product(range(resolution), range(resolution)))
        N = len(points)
        attention_offsets = {}
        idxs = []
        for p1 in points:
            for p2 in points:
                offset = (abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))
                if offset not in attention_offsets:
                    attention_offsets[offset] = len(attention_offsets)
                idxs.append(attention_offsets[offset])
        self.attention_biases = torch.nn.Parameter(torch.zeros(num_heads, len(attention_offsets)))
        self.register_buffer('attention_bias_idxs', torch.LongTensor(idxs).view(N, N))

    # Cache the indexed attention biases for inference; drop the cache when
    # switching back to training so the live parameters are used instead
    @torch.no_grad()
    def train(self, mode=True):
        super().train(mode)
        if mode and hasattr(self, 'ab'):
            del self.ab
        else:
            self.ab = self.attention_biases[:, self.attention_bias_idxs]

    def forward(self, x):  # x (B,C,H,W)
        B, C, H, W = x.shape
        trainingab = self.attention_biases[:, self.attention_bias_idxs]
        feats_in = x.chunk(len(self.qkvs), dim=1)
        feats_out = []
        feat = feats_in[0]
        for i, qkv in enumerate(self.qkvs):
            if i > 0:  # cascade: add the previous head's output to this head's input
                feat = feat + feats_in[i]
            feat = qkv(feat)
            q, k, v = feat.view(B, -1, H, W).split([self.key_dim, self.key_dim, self.d], dim=1)
            q = self.dws[i](q)
            q, k, v = q.flatten(2), k.flatten(2), v.flatten(2) # B, C/h, N
            attn = ((q.transpose(-2, -1) @ k) * self.scale + (trainingab[i] if self.training else self.ab[i]))
            attn = attn.softmax(dim=-1)
            feat = (v @ attn.transpose(-2, -1)).view(B, self.d, H, W)
            feats_out.append(feat)
        x = self.proj(torch.cat(feats_out, 1))
        return x

# LocalWindowAttention: applies the attention mechanism within local windows
class LocalWindowAttention(torch.nn.Module):
    def __init__(self, dim, key_dim, num_heads=8, attn_ratio=4, resolution=14, window_resolution=7, kernels=[5, 5, 5, 5]):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.resolution = resolution
        assert window_resolution > 0, 'window_size must be greater than 0'
        self.window_resolution = window_resolution
        self.attn = CascadedGroupAttention(dim, key_dim, num_heads, attn_ratio=attn_ratio, resolution=window_resolution, kernels=kernels)

    def forward(self, x):
        B, C, H, W = x.shape
        # if the input is no larger than the window, apply attention directly
        if H <= self.window_resolution and W <= self.window_resolution:
            x = self.attn(x)
        else:
            # otherwise, partition the input into windows and attend within each window
            x = x.permute(0, 2, 3, 1)
            pad_b = (self.window_resolution - H % self.window_resolution) % self.window_resolution
            pad_r = (self.window_resolution - W % self.window_resolution) % self.window_resolution
            padding = pad_b > 0 or pad_r > 0
            if padding:
                x = torch.nn.functional.pad(x, (0, 0, 0, pad_r, 0, pad_b))
            pH, pW = H + pad_b, W + pad_r
            nH = pH // self.window_resolution
            nW = pW // self.window_resolution
            x = x.view(B, nH, self.window_resolution, nW, self.window_resolution, C).transpose(2, 3).reshape(B * nH * nW, self.window_resolution, self.window_resolution, C).permute(0, 3, 1, 2)
            x = self.attn(x)
            x = x.permute(0, 2, 3, 1).view(B, nH, nW, self.window_resolution, self.window_resolution, C).transpose(2, 3).reshape(B, pH, pW, C)
            if padding:
                x = x[:, :H, :W].contiguous()
            x = x.permute(0, 3, 1, 2)
        return x

This part implements cascaded group attention and local window attention, the core components the model uses to capture image features.
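
A quick shape check (my addition) shows how LocalWindowAttention handles inputs larger than its window by partitioning them; note that dim must be divisible by num_heads, since each attention head receives dim // num_heads input channels:

# Sanity check for windowed cascaded attention (classes above assumed in scope)
attn = LocalWindowAttention(dim=64, key_dim=16, num_heads=4, attn_ratio=4,
                            resolution=14, window_resolution=7, kernels=[5, 5, 5, 5])
attn.eval()  # populates the cached attention biases (self.ab) used in eval mode
x = torch.randn(1, 64, 28, 28)  # larger than the 7x7 window, so it gets partitioned
print(attn(x).shape)  # expected: torch.Size([1, 64, 28, 28])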

Part 4

# EfficientViTBlock: the basic building block of the EfficientViT model
class EfficientViTBlock(torch.nn.Module):
    def __init__(self, type, ed, kd, nh=8, ar=4, resolution=14, window_resolution=7, kernels=[5, 5, 5, 5]):
        super().__init__()
        self.dw0 = Residual(Conv2d_BN(ed, ed, 3, 1, 1, groups=ed, bn_weight_init=0., resolution=resolution))
        self.ffn0 = Residual(FFN(ed, int(ed * 2), resolution))

        # choose the token mixer according to the type parameter ('s' selects windowed attention)
        if type == 's':
            self.mixer = Residual(LocalWindowAttention(ed, kd, nh, attn_ratio=ar, resolution=resolution, window_resolution=window_resolution, kernels=kernels))
                
        self.dw1 = Residual(Conv2d_BN(ed, ed, 3, 1, 1, groups=ed, bn_weight_init=0., resolution=resolution))
        self.ffn1 = Residual(FFN(ed, int(ed * 2), resolution))

    def forward(self, x):
        return self.ffn1(self.dw1(self.mixer(self.ffn0(self.dw0(x)))))

# EfficientViT: assembles the full EfficientViT model
class EfficientViT(torch.nn.Module):
    def __init__(self, img_size=400, patch_size=16, frozen_stages=0, in_chans=3, stages=['s', 's', 's'],
                 embed_dim=[64, 128, 192], key_dim=[16, 16, 16], depth=[1, 2, 3], num_heads=[4, 4, 4], 
                 window_size=[7, 7, 7], kernels=[5, 5, 5, 5], down_ops=[['subsample', 2], ['subsample', 2], ['']],
                 pretrained=None, distillation=False):
        super().__init__()

        resolution = img_size
        # patch embedding (three stride-2 convolutions, for an overall stride of 8)
        self.patch_embed = torch.nn.Sequential(Conv2d_BN(in_chans, embed_dim[0] // 8, 3, 2, 1, resolution=resolution), torch.nn.ReLU(),
                           Conv2d_BN(embed_dim[0] // 8, embed_dim[0] // 4, 3, 2, 1, resolution=resolution // 2), torch.nn.ReLU(),
                           Conv2d_BN(embed_dim[0] // 4, embed_dim[0] // 2, 3, 2, 1, resolution=resolution // 4), torch.nn.ReLU(),
                           Conv2d_BN(embed_dim[0] // 2, embed_dim[0], 3, 1, 1, resolution=resolution // 8))

        # build the blocks for each stage
        self.blocks1, self.blocks2, self.blocks3 = [], [], []
        for i, (stg, ed, kd, dpth, nh, ar, wd, do) in enumerate(zip(stages, embed_dim, key_dim, depth, num_heads, [embed_dim[i] / (key_dim[i] * num_heads[i]) for i in range(len(embed_dim))], window_size, down_ops)):
            for d in range(dpth):
                getattr(self, 'blocks' + str(i + 1)).append(EfficientViTBlock(stg, ed, kd, nh, ar, resolution, wd, kernels))
            if do[0] == 'subsample':
                blk = getattr(self, 'blocks' + str(i + 2))
                resolution_ = (resolution - 1) // do[1] + 1
                blk.append(torch.nn.Sequential(Residual(Conv2d_BN(embed_dim[i], embed_dim[i], 3, 1, 1, groups=embed_dim[i], resolution=resolution)),
                                    Residual(FFN(embed_dim[i], int(embed_dim[i] * 2), resolution)),))
                blk.append(PatchMerging(*embed_dim[i:i + 2], resolution))
                resolution = resolution_
                blk.append(torch.nn.Sequential(Residual(Conv2d_BN(embed_dim[i + 1], embed_dim[i + 1], 3, 1, 1, groups=embed_dim[i + 1], resolution=resolution)),
                                    Residual(FFN(embed_dim[i + 1], int(embed_dim[i + 1] * 2), resolution)),))
        self.blocks1 = torch.nn.Sequential(*self.blocks1)
        self.blocks2 = torch.nn.Sequential(*self.blocks2)
        self.blocks3 = torch.nn.Sequential(*self.blocks3)

        # probe the output channel count of each stage with a dummy forward pass
        self.channel = [i.size(1) for i in self.forward(torch.randn(1, 3, 640, 640))]

    def forward(self, x):
        outs = []
        x = self.patch_embed(x)
        x = self.blocks1(x)
        outs.append(x)
        x = self.blocks2(x)
        outs.append(x)
        x = self.blocks3(x)
        outs.append(x)
        return outs

This part defines the main body of the EfficientViT model.
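
As a sketch of how the backbone behaves (my addition), a forward pass returns three feature maps, one per stage at strides 8, 16, and 32, which is exactly the multi-scale layout a YOLOv8 neck expects:

# Shape check for the full backbone (classes above assumed in scope)
model = EfficientViT(img_size=224, embed_dim=[64, 128, 192],
                     depth=[1, 2, 3], num_heads=[4, 4, 4])
for f in model(torch.randn(1, 3, 640, 640)):
    print(f.shape)
# expected: (1, 64, 80, 80), (1, 128, 40, 40), (1, 192, 20, 20)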

Part 5

# Configurations for the different EfficientViT variants
EfficientViT_m0 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [64, 128, 192],
    'depth': [1, 2, 3],
    'num_heads': [4, 4, 4],
    'window_size': [7, 7, 7],
    'kernels': [7, 5, 3, 3],
}

EfficientViT_m1 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [128, 144, 192],
    'depth': [1, 2, 3],
    'num_heads': [2, 3, 3],
    'window_size': [7, 7, 7],
    'kernels': [7, 5, 3, 3],
}

EfficientViT_m2 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [128, 192, 224],
    'depth': [1, 2, 3],
    'num_heads': [4, 3, 2],
    'window_size': [7, 7, 7],
    'kernels': [7, 5, 3, 3],
}

EfficientViT_m3 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [128, 240, 320],
    'depth': [1, 2, 3],
    'num_heads': [4, 3, 4],
    'window_size': [7, 7, 7],
    'kernels': [5, 5, 5, 5],
}

EfficientViT_m4 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [128, 256, 384],
    'depth': [1, 2, 3],
    'num_heads': [4, 4, 4],
    'window_size': [7, 7, 7],
    'kernels': [7, 5, 3, 3],
}

EfficientViT_m5 = {
    'img_size': 224,
    'patch_size': 16,
    'embed_dim': [192, 288, 384],
    'depth': [1, 3, 4],
    'num_heads': [3, 3, 4],
    'window_size': [7, 7, 7],
    'kernels': [7, 5, 3, 3],
}

# Constructor functions for each EfficientViT variant
def EfficientViT_M0(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m0):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

def EfficientViT_M1(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m1):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

def EfficientViT_M2(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m2):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

def EfficientViT_M3(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m3):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

def EfficientViT_M4(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m4):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

def EfficientViT_M5(pretrained='', frozen_stages=0, distillation=False, fuse=False, pretrained_cfg=None, model_cfg=EfficientViT_m5):
    model = EfficientViT(frozen_stages=frozen_stages, distillation=distillation, pretrained=pretrained, **model_cfg)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)['model']))
    if fuse:
        replace_batchnorm(model)
    return model

This part defines the differently configured EfficientViT variants, including loading pretrained weights and replacing (fusing) the batch-normalization layers.
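
For example (my addition), the smallest variant can be built without pretrained weights, optionally fusing Conv+BN for deployment, and its per-stage channel counts inspected:

# Hypothetical usage of the variant constructors defined above
backbone = EfficientViT_M0(fuse=True)  # no pretrained weights in this sketch
print(backbone.channel)  # per-stage output channels, e.g. [64, 128, 192] for M0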

Part 6

# update_weight: merge pretrained weights into a model state dict (used when loading checkpoints)
def update_weight(model_dict, weight_dict):
    idx, temp_dict = 0, {}
    for k, v in weight_dict.items():
        # copy a weight only if the key exists in the model dict and the shapes match
        if k in model_dict.keys() and np.shape(model_dict[k]) == np.shape(v):
            temp_dict[k] = v
            idx += 1
    model_dict.update(temp_dict)
    print(f'loading weights... {idx}/{len(model_dict)} items')
    return model_dict

That is the complete EfficientViT source, with comments explaining the main purpose of each class and function, which should help in understanding the structure and inner workings of the whole model.
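
As a quick illustration of update_weight (my addition), here is how a checkpoint would typically be merged; the checkpoint path is a placeholder, not a file shipped with the project:

# Hedged sketch: partial loading of a pretrained checkpoint
model = EfficientViT(**EfficientViT_m0)
ckpt = torch.load('efficientvit_m0.pth', map_location='cpu')  # hypothetical path
model.load_state_dict(update_weight(model.state_dict(), ckpt['model']))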

Ready-to-Run YOLOv8-EfficientViT Project Source Code

Note: this source package runs as-is, but the bundled sample data is just coco128, which is a very small dataset. If you need to run comparison experiments, try to find something larger.
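
Once the backbone is wired into your modified ultralytics repo, training follows the standard YOLOv8 workflow. A hedged sketch (my addition; the model yaml filename is a placeholder for whatever config your repo defines):

# Standard ultralytics training run against the bundled coco128 sample data
from ultralytics import YOLO

model = YOLO('yolov8-efficientvit.yaml')  # hypothetical model config name
model.train(data='coco128.yaml', epochs=100, imgsz=640)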

YOLOv8-EfficientViT: Runnable Project Source Code for an Efficient Object Detection Network Built on This Fusion

Summary

This article provides the source code of the YOLOv8-EfficientViT project, split into six main parts, each with detailed comments to help readers understand and apply it. The goal of this post is to give readers a comprehensive, practical guide to combining YOLOv8 with EfficientViT, particularly suited to practitioners who want to apply recent techniques to demanding vision tasks. After reading it you should have a working understanding of the EfficientViT and YOLOv8 combination, along with the knowledge needed to apply this model to real object-detection tasks. If anything is unclear, leave a comment and I will improve the post as soon as possible; I can also offer technical Q&A and support if needed.

Source: https://blog.csdn.net/qq_42452134/article/details/135249396