In deep learning, convolutional neural networks (CNNs) have become the dominant approach to tasks such as image recognition and object detection. As networks grow deeper, however, training runs into vanishing and exploding gradients, which degrade model performance. To address this, Kaiming He et al. proposed the Residual Network (ResNet), which introduces residual connections to ease the training of very deep networks. This article analyzes the ResNet module in detail using the PyTorch framework and walks through its implementation with a concrete example.
The core idea of ResNet is the residual connection: instead of forcing a stack of layers to learn a desired mapping H(x) directly, the layers learn the residual F(x) = H(x) - x, and the block outputs F(x) + x. Because the identity path gives gradients a direct route backward, this structure effectively mitigates the vanishing-gradient problem and allows much deeper networks to be trained.
The residual block is ResNet's basic building unit. Its structure is as follows:
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    expansion = 1  # channel expansion factor; the generic ResNet below reads block.expansion

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)  # reshape the shortcut to match the main path
        out += identity  # the residual connection
        out = self.relu(out)
        return out
Inside the residual block, the input x passes through two convolution and batch-normalization layers, and the result is added to the original input (the identity, passed through the downsample branch when the shapes differ); the sum then goes through a ReLU activation. This is exactly the structure that lets the network learn the residual mapping between input and output.
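As a quick shape check (a minimal sketch with illustrative sizes, not part of the original article), a block with stride 2 needs a matching 1x1 downsample on the shortcut:

# A stride-2 block halves the resolution and doubles the channels,
# so the shortcut needs a 1x1 convolution with the same stride
downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
block = ResidualBlock(64, 128, stride=2, downsample=downsample)
x = torch.randn(1, 64, 32, 32)  # (batch, channels, height, width)
print(block(x).shape)           # torch.Size([1, 128, 16, 16])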
A full ResNet is built by stacking residual blocks. Depending on depth, ResNet comes in several variants, such as ResNet-18, ResNet-34, and ResNet-50. Taking ResNet-18 as an example, the overall structure (written here as a generic class parameterized by block type and per-stage block counts) is as follows:
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        super(ResNet, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        # A 1x1 convolution matches the shortcut's shape whenever the stage
        # changes resolution or channel count
        if stride != 1 or self.in_channels != out_channels * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * block.expansion),
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
In ResNet, the input image is first processed by a 7x7 convolution and a max-pooling layer, then passes through four stages of residual blocks (layer1 through layer4) for feature extraction, and finally through global average pooling and a fully connected layer to produce the classification output.
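To confirm the wiring (a minimal sketch; the 224x224 input size assumed here follows the ImageNet convention), ResNet-18 corresponds to two blocks per stage:

# ResNet-18: four stages with two ResidualBlocks each
model = ResNet(ResidualBlock, [2, 2, 2, 2], num_classes=1000)
x = torch.randn(1, 3, 224, 224)
print(model(x).shape)  # torch.Size([1, 1000])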
Next, we walk through a simple example of implementing ResNet in PyTorch and training and testing it on the CIFAR-10 dataset.
First, we load the CIFAR-10 dataset and apply data augmentation and normalization. Note that the random augmentations are applied only to the training split; the test split is just converted to tensors and normalized, so that evaluation measures the model on unperturbed images.
import torch
import torchvision
import torchvision.transforms as transforms

transform_train = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
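A quick look at one batch (a minimal sketch, not in the original walkthrough) confirms the loaders produce tensors of the expected shape:

# Inspect one training batch
images, labels = next(iter(trainloader))
print(images.shape)  # torch.Size([128, 3, 32, 32])
print(labels.shape)  # torch.Size([128])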
Next, we define a simplified ResNet-18 model. Because CIFAR-10 images are only 32x32, the 7x7 stem convolution and max pooling of the original architecture are replaced by a single 3x3 convolution.
import torch.nn as nn

class ResNet18(nn.Module):
    def __init__(self, num_classes=10):
        super(ResNet18, self).__init__()
        self.in_channels = 64
        # 3x3 stem without max pooling, suited to CIFAR-10's small 32x32 images
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self._make_layer(ResidualBlock, 64, 2, stride=1)
        self.layer2 = self._make_layer(ResidualBlock, 128, 2, stride=2)
        self.layer3 = self._make_layer(ResidualBlock, 256, 2, stride=2)
        self.layer4 = self._make_layer(ResidualBlock, 512, 2, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        if stride != 1 or self.in_channels != out_channels:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for _ in range(1, blocks):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
net = ResNet18()
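Before training, it is worth sanity-checking the model's size (a minimal sketch; the exact count depends on the block configuration above):

# Count trainable parameters; this variant has roughly 11 million
num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(f'Trainable parameters: {num_params:,}')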
We train the model with the cross-entropy loss and a stochastic gradient descent (SGD) optimizer with momentum and weight decay.
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

for epoch in range(10):  # train for 10 epochs
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 100 == 99:  # print the average loss every 100 batches
            print(f'Epoch [{epoch + 1}, {i + 1:5d}] loss: {running_loss / 100:.3f}')
            running_loss = 0.0

print('Finished Training')
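Holding the learning rate fixed at 0.1 for all 10 epochs is a blunt choice; a common refinement, shown here as an optional sketch rather than part of the original recipe, is to decay it with a cosine-annealing schedule:

# Optional: cosine-annealing learning-rate decay over the 10 training epochs
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
# Inside the training loop, call scheduler.step() once per epoch,
# after the inner loop over batches has finished.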
Finally, we evaluate the model on the test set. Remember to switch the network to evaluation mode first, so that batch normalization uses its running statistics rather than per-batch statistics.
net.eval()  # put batch-norm layers into evaluation mode before measuring accuracy
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct / total:.2f}%')
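Because the classes tuple is already defined, a per-class breakdown (a sketch along the same lines as the overall accuracy loop above) shows which categories the model confuses most:

# Per-class accuracy on the test set
class_correct = [0] * 10
class_total = [0] * 10
with torch.no_grad():
    for images, labels in testloader:
        _, predicted = torch.max(net(images), 1)
        for label, pred in zip(labels, predicted):
            class_correct[label] += int(label == pred)
            class_total[label] += 1

for i, name in enumerate(classes):
    print(f'Accuracy of {name}: {100 * class_correct[i] / class_total[i]:.1f}%')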
This article walked through ResNet's basic structure and implemented a simplified ResNet-18 in PyTorch. Training and testing on CIFAR-10 demonstrates ResNet's effectiveness for image classification. By introducing residual connections, ResNet alleviates the vanishing-gradient problem in deep network training, allowing networks to be made substantially deeper and, in turn, to achieve better performance.