Part 15: Tensor Device Management

by digitaltech2.com

Managing tensors across devices such as CPUs and GPUs is essential for taking advantage of GPU acceleration in PyTorch. PyTorch provides straightforward methods for moving tensors and models between devices.

Checking Device Availability

Before moving tensors to a GPU, it’s important to check if a GPU is available.

Check for GPU Availability:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("Using device:", device)

Moving Tensors Between Devices

You can move tensors to a specific device using the to method, or with the cuda and cpu shortcut methods shown after the examples below.

Move Tensor to GPU:

tensor = torch.tensor([1, 2, 3])
tensor_gpu = tensor.to(device)
print(tensor_gpu)

Move Tensor Back to CPU:

tensor_cpu = tensor_gpu.to('cpu')
print(tensor_cpu)
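
The cuda and cpu methods are shorthands for the same moves, and every tensor carries a device attribute you can inspect. Here is a quick sketch, with the cuda call guarded so it also runs on CPU-only machines.

Shorthand Methods:

print(tensor_cpu.device)  # cpu

if torch.cuda.is_available():
    tensor_gpu = tensor_cpu.cuda()  # equivalent to .to('cuda')
    print(tensor_gpu.device)        # e.g. cuda:0
    tensor_cpu = tensor_gpu.cpu()   # equivalent to .to('cpu')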

Device-Agnostic Code

Writing device-agnostic code ensures that your code runs seamlessly on both CPUs and GPUs.

Device-Agnostic Example:

def train_model(model, data, target, device):
    # Move the model's parameters and the batch to the target device
    model.to(device)
    data, target = data.to(device), target.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.MSELoss()

    # One training step, entirely on the chosen device
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()

class SimpleNN(torch.nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = torch.nn.Linear(10, 50)
        self.fc2 = torch.nn.Linear(50, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNN()
data = torch.randn(64, 10)
target = torch.randn(64, 1)

train_model(model, data, target, device)
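
One detail worth knowing here: calling to on a module moves its parameters in place (which is why train_model can create the optimizer after the move), while calling to on a tensor returns a new tensor that must be reassigned. A small illustration, using throwaway names net and x:

net = SimpleNN()
net.to(device)          # modules are moved in place
x = torch.randn(4, 10)
x = x.to(device)        # tensors are not; keep the returned copy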

Example: Training a Model on GPU

Moving a neural network and its inputs to the GPU can significantly speed up training.

Training Example on GPU:

class SimpleNN(torch.nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = torch.nn.Linear(10, 50)
        self.fc2 = torch.nn.Linear(50, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNN().to(device)            # move parameters to the chosen device
data = torch.randn(64, 10).to(device)    # inputs on the same device
target = torch.randn(64, 1).to(device)   # targets on the same device

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()

print("Training on GPU completed.")

The training example above demonstrates how to define a simple neural network, move it to the GPU, and train it with data that lives on the same device, so every computation stays on the GPU.
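
A common pitfall to watch for: operations that mix tensors on different devices raise a RuntimeError. The sketch below triggers the error deliberately and only runs when a GPU is available.

Device Mismatch Example:

if torch.cuda.is_available():
    a = torch.randn(3, device='cuda')
    b = torch.randn(3)  # stays on the CPU
    try:
        a + b
    except RuntimeError as e:
        print("Device mismatch:", e)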
