Part 20: PyTorch, Using Autograd with Neural Networks

by digitaltech2.com

Autograd is integral to training neural networks in PyTorch. It automates the computation of gradients, which is essential for optimizing network parameters via backpropagation. This section covers how autograd interacts with neural network modules during training.

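Before working with full networks, the mechanism is easiest to see on a single tensor. The short sketch below (separate from the network example that follows) creates a tensor with requires_grad=True, builds a scalar result from it, and calls backward() to populate the .grad attribute.

  • Minimal Autograd Example:
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)  # leaf tensor tracked by autograd
y = (x ** 2).sum()                                 # scalar built from x

y.backward()   # computes dy/dx = 2 * x
print(x.grad)  # tensor([4., 6.])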
Defining a Neural Network

Neural networks in PyTorch are defined by subclassing torch.nn.Module. Layers are created as attributes in the __init__ method, and the forward computation (how data flows through those layers) is implemented in the forward method.

  • Simple Neural Network Example:
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(10, 50)
        self.fc2 = nn.Linear(50, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNN()
print(model)
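
Because fc1 and fc2 are assigned as attributes of the module, their weights and biases are automatically registered as parameters with requires_grad=True, so autograd tracks them without any extra setup. A quick way to verify this, using the model defined above:

  • Inspecting Registered Parameters:
for name, param in model.named_parameters():
    print(name, param.shape, param.requires_grad)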
Forward and Backward Pass

During the forward pass, data flows through the network layers to produce an output. The backward pass, triggered by calling backward() on the loss, computes gradients for all tensors in the graph that have requires_grad=True.

  • Forward and Backward Pass Example:
inputs = torch.randn(64, 10, requires_grad=True)  # batch of 64 samples, 10 features each
targets = torch.randn(64, 1)                      # matching regression targets

outputs = model(inputs)
loss_fn = nn.MSELoss()
loss = loss_fn(outputs, targets)

loss.backward()  # Computes gradients
print("Gradients of fc1 weights:", model.fc1.weight.grad)
Optimization

Optimizers in PyTorch, such as torch.optim.SGD, update the network parameters using the gradients computed by autograd. The typical training loop zeroes the gradients, performs the forward pass, computes the loss, performs the backward pass, and updates the weights.

  • Training Loop Example:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()  # Zero the gradients
    outputs = model(inputs)  # Forward pass
    loss = loss_fn(outputs, targets)  # Compute loss
    loss.backward()  # Backward pass (compute gradients)
    optimizer.step()  # Update weights

    if epoch % 10 == 0:
        print(f'Epoch [{epoch}/100], Loss: {loss.item():.4f}')
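
For plain SGD without momentum or weight decay, optimizer.step() amounts to subtracting the learning rate times each parameter's gradient from the parameter itself. The sketch below spells that update out manually, purely to illustrate what step() does; in practice the optimizer should be used:

  • Manual Parameter Update Sketch:
with torch.no_grad():  # the updates themselves should not be tracked by autograd
    for param in model.parameters():
        if param.grad is not None:
            param -= 0.01 * param.grad  # lr = 0.01, matching the optimizer above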
Example: Training a Neural Network with Autograd

Here is a complete example of defining and training a simple neural network, with autograd handling the gradient computation.

  • Complete Training Example:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Define the dataset
dataset = TensorDataset(torch.randn(100, 10), torch.randn(100, 1))
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Define the neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(10, 50)
        self.fc2 = nn.Linear(50, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNN()
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(100):
    for inputs, targets in dataloader:
        # No requires_grad needed on the inputs; the model's parameters already track gradients
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
        loss.backward()
        optimizer.step()

    if epoch % 10 == 0:
        print(f'Epoch [{epoch}/100], Loss: {loss.item():.4f}')

print("Training completed.")

This example shows the full process of creating a dataset, defining a neural network, and training it, with autograd providing the gradients that the optimizer uses to update the parameters.
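
After training, gradient tracking is no longer needed for inference. Wrapping evaluation in torch.no_grad() (and calling model.eval(), which matters once layers such as dropout or batch normalization are involved) skips building the computation graph. A brief evaluation sketch, using the trained model and loss_fn from the example above with placeholder test data:

  • Evaluation Sketch:
model.eval()
with torch.no_grad():
    test_inputs = torch.randn(20, 10)   # placeholder test data, same feature size as training
    test_targets = torch.randn(20, 1)
    test_loss = loss_fn(model(test_inputs), test_targets)
    print(f"Test loss: {test_loss.item():.4f}")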
