PyTorch is a Python-based scientific computing library used mainly for deep learning research, and in particular for neural networks. In PyTorch, you can use the torch.nn module to build a convolutional neural network (CNN). Here is a simple CNN example:
import torch
import torch.nn as nn
import torch.optim as optim
class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super(SimpleCNN, self).__init__()
        # Convolutional layer 1
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
        # Activation
        self.relu1 = nn.ReLU()
        # Pooling layer
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Convolutional layer 2
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        # Activation
        self.relu2 = nn.ReLU()
        # Pooling layer
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Fully connected layer
        # Note: 32 * 25 * 25 assumes 100x100 input images (two 2x2 poolings: 100 -> 50 -> 25);
        # adjust this value for your own input size.
        self.fc1 = nn.Linear(in_features=32 * 25 * 25, out_features=1024)
        self.relu3 = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        # Output layer
        self.fc2 = nn.Linear(in_features=1024, out_features=num_classes)
    def forward(self, x):
        # Convolution, activation, pooling (block 1)
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.pool1(x)
        # Convolution, activation, pooling (block 2)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.pool2(x)
        # Flatten the feature maps
        x = x.view(x.size(0), -1)
        # Fully connected layer, activation, dropout
        x = self.fc1(x)
        x = self.relu3(x)
        x = self.dropout(x)
        # Output logits
        x = self.fc2(x)
        return x
# Instantiate the network
num_classes = 10
model = SimpleCNN(num_classes)
# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train the network
# num_epochs and train_loader are assumed to be defined already (see the sketch below)
for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
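The training loop above needs num_epochs and a train_loader to be defined. A minimal sketch of that setup, using torchvision's FakeData purely as a stand-in for your real dataset (the dataset choice, batch size, and number of epochs here are illustrative assumptions, not part of the original example):

import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

# Stand-in dataset: 1000 random 3x100x100 images with 10 classes,
# sized to match the 32 * 25 * 25 assumption in fc1.
train_dataset = torchvision.datasets.FakeData(
    size=1000,
    image_size=(3, 100, 100),
    num_classes=10,
    transform=transforms.ToTensor(),
)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
num_epochs = 5  # illustrative value; tune for your task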
In this example, we defined a simple CNN with two convolutional layers, two pooling layers, and two fully connected layers. You can adjust the network structure for your own task and dataset; for instance, if your images are not 100x100, the hard-coded 32 * 25 * 25 in fc1 will no longer match the flattened feature size, as the sketch below addresses.
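One common way to decouple the classifier from the input resolution is to insert an adaptive pooling layer before flattening, so the first Linear layer always sees a fixed feature size. A minimal sketch of that variant (the SmallCNN name and the 4x4 pooled size are illustrative assumptions):

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # AdaptiveAvgPool2d forces a fixed 4x4 spatial size regardless of the
        # input resolution, so the Linear layer below works for any image size.
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x)
        return self.classifier(x)

# Works for different input sizes without changing the classifier:
for size in (64, 100, 224):
    out = SmallCNN()(torch.randn(2, 3, size, size))
    print(size, out.shape)  # always torch.Size([2, 10])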