A Summary of CV Model Training Tips

Published: December 18, 2023

1. ResNet's residual idea, y = F(x) + x: this trick helps the model converge faster, as in the Encoder Block below.

import torch.nn as nn    # MHSA, Pooling, MLP and DropPath are assumed to be defined elsewhere

class Block(nn.Module):     # Encoder Block
    def __init__(self,
                 dim,       # dimension of each token (channel width)
                 drop_rate=0.1,
                 switch_flag=False,
                 num_heads=8):
        super(Block, self).__init__()
        self.switch_flag = switch_flag

        self.norm1 = nn.GroupNorm(1, dim)
        # self.norm1 = nn.BatchNorm2d(dim)
        if self.switch_flag:
            self.attn = MHSA(n_dims=dim, num_heads=num_heads)
        else:
            # self.attn = nn.AdaptiveAvgPool2d((16, 16))
            self.attn = Pooling()
        self.drop_path = DropPath(drop_rate) if drop_rate > 0. else nn.Identity()
        self.norm2 = nn.GroupNorm(1, dim)
        self.mlp = MLP(in_features=dim, drop=drop_rate)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.mlp(self.norm2(x))
        return x
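The Block above depends on MHSA, Pooling, MLP and DropPath, which the post does not show. Below is a minimal sketch of plausible stand-ins (a PoolFormer-style pooling mixer, attention over flattened spatial positions, a 1x1-conv MLP, and stochastic depth) so the residual pattern can be run end to end; these internals are assumptions for illustration, not the author's original implementations.

import torch
import torch.nn as nn


class Pooling(nn.Module):
    # PoolFormer-style token mixer: average pooling minus identity
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):
        return self.pool(x) - x


class MHSA(nn.Module):
    # multi-head self-attention over flattened spatial positions
    def __init__(self, n_dims, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(n_dims, num_heads, batch_first=True)

    def forward(self, x):                        # x: [N, C, H, W]
        n, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)         # [N, H*W, C]
        t, _ = self.attn(t, t, t)
        return t.transpose(1, 2).reshape(n, c, h, w)


class MLP(nn.Module):
    # two 1x1 convolutions with GELU, keeping the [N, C, H, W] layout
    def __init__(self, in_features, hidden_ratio=4, drop=0.):
        super().__init__()
        hidden = in_features * hidden_ratio
        self.fc1 = nn.Conv2d(in_features, hidden, 1)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden, in_features, 1)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        return self.drop(self.fc2(self.drop(self.act(self.fc1(x)))))


class DropPath(nn.Module):
    # stochastic depth: randomly drop the residual branch per sample during training
    def __init__(self, drop_prob=0.):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.drop_prob == 0. or not self.training:
            return x
        keep = 1 - self.drop_prob
        mask = x.new_empty((x.shape[0],) + (1,) * (x.ndim - 1)).bernoulli_(keep)
        return x / keep * mask


# quick shape check on a random feature map
if __name__ == "__main__":
    blk = Block(dim=64, switch_flag=True)
    print(blk(torch.randn(2, 64, 16, 16)).shape)   # torch.Size([2, 64, 16, 16])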

2. When the model produces its final classification output, it is best to include a normalization layer before the classifier, as in the model head printout below:

(head): Sequential(
    (global_pool): SelectAdaptivePool2d (pool_type=avg, flatten=Identity())
    (norm): LayerNorm2d((512,), eps=1e-06, elementwise_affine=True)
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (drop): Identity()
    (fc): Linear(in_features=512, out_features=1000, bias=True)
)

self.num_features = dims[-1]
self.head = nn.Sequential(
        nn.AdaptiveAvgPool2d((1, 1)),                    # [15,64,16,16] --> [15,64,1,1]
        nn.GroupNorm(1, self.num_features, eps=1e-06),
        # nn.BatchNorm2d(self.num_features),
        nn.Flatten(1),                                   # [15,64,1,1] --> [15,64]
        nn.Linear(self.num_features, num_classes)        # [15,64] --> [15,10]
)
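In the GroupNorm(1, C) version above, the normalization after global pooling acts on a [N, C, 1, 1] tensor, where it behaves like the channel-wise LayerNorm2d seen in the printed head. A quick shape check, using the illustrative sizes from the comments (64 features, 10 classes), might look like this:

import torch
import torch.nn as nn

num_features, num_classes = 64, 10                   # illustrative sizes from the comments above
head = nn.Sequential(
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.GroupNorm(1, num_features, eps=1e-06),        # one group == normalize across all channels
    nn.Flatten(1),
    nn.Linear(num_features, num_classes),
)
print(head(torch.randn(15, 64, 16, 16)).shape)       # torch.Size([15, 10])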

3. The smaller the feature map that the model's Blocks operate on, the faster the model runs.

(*) For example, with the two patch embeddings below, the 8×8 version runs faster than the 16×16 version (see the timing sketch after them).

1)self.embedding = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False)     # [N, C, 16, 16]

2)self.embedding = nn.Conv2d(3, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2))    # [N, C, 8, 8]
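A rough way to verify this is to time the same stack of Blocks on both stem outputs. Below is a minimal sketch, assuming a 32×32 input (which yields the 16×16 and 8×8 maps annotated above) and the Block class from point 1; the batch size, number of Blocks, and iteration count are arbitrary choices for illustration.

import time
import torch
import torch.nn as nn

def bench(embedding, steps=50):
    # stem + a small stack of Blocks, timed on the same random 32x32 input
    model = nn.Sequential(embedding, *[Block(dim=64) for _ in range(4)]).eval()
    x = torch.randn(8, 3, 32, 32)
    with torch.no_grad():
        model(x)                                     # warm-up
        t0 = time.perf_counter()
        for _ in range(steps):
            model(x)
    return (time.perf_counter() - t0) / steps

stem_16 = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False)       # [N, 64, 16, 16]
stem_8 = nn.Conv2d(3, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2))     # [N, 64, 8, 8]
print("16x16:", bench(stem_16), "s/iter")
print(" 8x8 :", bench(stem_8), "s/iter")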

Source: https://blog.csdn.net/gwd777/article/details/135040573