Learn PyTorch - 10 Code Examples & CST Typing Practice Test
PyTorch is an open-source machine learning library developed by Meta AI (formerly Facebook AI Research, FAIR). It is widely used for deep learning research, model prototyping, and production deployment, offering dynamic computation graphs and a Pythonic interface.
Learn PyTorch with Real Code Examples
Updated Nov 24, 2025
Code Sample Descriptions
PyTorch Simple Linear Regression
import torch
import torch.nn as nn
# Sample data
x_train = torch.tensor([[1.0],[2.0],[3.0],[4.0]])
y_train = torch.tensor([[2.0],[4.0],[6.0],[8.0]])
# Define model
model = nn.Linear(1,1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Train the model
for epoch in range(500):
    optimizer.zero_grad()
    outputs = model(x_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
# Predict
with torch.no_grad():
    y_pred = model(torch.tensor([[10.0]]))
    print("Prediction for 10:", y_pred.item())
A minimal PyTorch example performing linear regression on sample data.
PyTorch Simple Neural Network
import torch
import torch.nn as nn
x_train = torch.randn(10,5)
y_train = torch.randn(10,1)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(5, 1)
    def forward(self, x):
        return self.fc(x)
model = Net()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    optimizer.zero_grad()
    outputs = model(x_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
Defines a simple feedforward neural network and trains on dummy data.
PyTorch Logistic Regression
import torch
import torch.nn as nn
x_train = torch.randn(10,3)
y_train = torch.randint(0,2,(10,1)).float()
model = nn.Linear(3,1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(200):
    optimizer.zero_grad()
    outputs = model(x_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
Performs binary classification using logistic regression.
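Because BCEWithLogitsLoss folds the sigmoid into the loss, the model above emits raw logits; at inference time you apply torch.sigmoid yourself to recover probabilities. A minimal sketch, using a fresh untrained nn.Linear as a stand-in for the trained model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained logistic-regression model above
model = nn.Linear(3, 1)

with torch.no_grad():
    logits = model(torch.randn(4, 3))  # raw scores, unbounded
    probs = torch.sigmoid(logits)      # probabilities in (0, 1)
    preds = (probs > 0.5).float()      # hard 0/1 class labels

print(probs.shape, preds.shape)
```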
PyTorch Convolutional Network Example
import torch
import torch.nn as nn
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 8, 3)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(8*13*13, 10)
    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = x.view(-1, 8*13*13)
        x = self.fc1(x)
        return x
model = SimpleCNN()
Defines a small CNN for image data.
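The 8*13*13 input size of fc1 assumes 28x28 single-channel images: the 3x3 convolution shrinks 28 to 26, and 2x2 pooling halves that to 13. A quick shape-check sketch under that assumption (the class is repeated here so the snippet runs on its own):

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):  # same architecture as above
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 8, 3)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(8*13*13, 10)
    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))  # 28 -> 26 -> 13
        x = x.view(-1, 8*13*13)                   # flatten to 1352 features
        return self.fc1(x)

model = SimpleCNN()
x = torch.randn(4, 1, 28, 28)  # batch of 4 single-channel 28x28 images
out = model(x)
print(out.shape)               # torch.Size([4, 10])
```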
PyTorch RNN Example
import torch
import torch.nn as nn
rnn = nn.RNN(input_size=5, hidden_size=3, num_layers=1, batch_first=True)
x = torch.randn(2,4,5)
h0 = torch.zeros(1,2,3)
out, hn = rnn(x,h0)
print(out.shape, hn.shape)
Creates a simple RNN and runs a forward pass on dummy data.
PyTorch Optimizer Example
import torch
import torch.nn as nn
model = nn.Linear(2,1)
x = torch.randn(5,2)
y = torch.randn(5,1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
optimizer.zero_grad()
output = model(x)
loss = criterion(output,y)
loss.backward()
optimizer.step()
Shows a single Adam optimization step on a linear model; swapping in a different optimizer (e.g. torch.optim.SGD) only changes the constructor line.
PyTorch Custom Loss Example
import torch
import torch.nn as nn
def custom_loss(y_pred, y_true):
    return torch.mean((y_pred - y_true)**2 + 0.1*torch.abs(y_pred))
x = torch.randn(5,1)
y = torch.randn(5,1)
model = nn.Linear(1,1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for i in range(50):
    optimizer.zero_grad()
    pred = model(x)
    loss = custom_loss(pred, y)
    loss.backward()
    optimizer.step()
Defines and uses a custom loss function.
PyTorch GPU Tensor Example
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(3,3).to(device)
y = torch.ones(3,3).to(device)
z = x + y
print(z.device)
Shows moving tensors to GPU and performing operations.
PyTorch Dataset and DataLoader Example
import torch
from torch.utils.data import TensorDataset, DataLoader
x = torch.randn(10,2)
y = torch.randint(0,2,(10,1)).float()
dataset = TensorDataset(x,y)
dataloader = DataLoader(dataset,batch_size=2,shuffle=True)
for batch_x, batch_y in dataloader:
    print(batch_x, batch_y)
Uses TensorDataset and DataLoader for batch processing.
PyTorch Transfer Learning Example
import torch
import torch.nn as nn
import torchvision.models as models
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained=True is deprecated
model.fc = nn.Linear(model.fc.in_features,10)
x = torch.randn(1,3,224,224)
y = model(x)
print(y.shape)
Uses a pretrained model and replaces the final layer.
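Transfer learning commonly also freezes the pretrained backbone so only the new head is updated. A sketch of that pattern, using a small nn.Sequential as a hypothetical stand-in for the backbone so it runs without downloading weights:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained backbone plus a new classification head
backbone = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
head = nn.Linear(8, 10)
model = nn.Sequential(backbone, head)

# Freeze the backbone: its weights keep their "pretrained" values
for p in backbone.parameters():
    p.requires_grad = False

# Only the head's parameters are handed to the optimizer
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)
print(len(trainable))  # head weight + bias -> 2 tensors
```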
Frequently Asked Questions about PyTorch
What is PyTorch?
PyTorch is an open-source machine learning library developed by Meta AI (formerly Facebook AI Research, FAIR). It is widely used for deep learning research, model prototyping, and production deployment, offering dynamic computation graphs and a Pythonic interface.
What are the primary use cases for PyTorch?
Deep learning for computer vision tasks (CNNs, object detection, segmentation). Natural language processing (RNNs, Transformers, BERT, GPT). Reinforcement learning and robotics. Time series forecasting and generative modeling. Rapid prototyping of custom neural networks for research or production.
What are the strengths of PyTorch?
Flexible and intuitive for dynamic-graph experimentation. Pythonic interface for ease of learning. Strong community support for research and tutorials. Seamless GPU support and efficient computation. Integration with production deployment via TorchScript and ONNX.
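As a concrete illustration of the TorchScript path mentioned above, a minimal sketch compiling a module with torch.jit.script and checking that it matches eager execution:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
scripted = torch.jit.script(model)  # compile the module to TorchScript

x = torch.randn(3, 4)
# Scripted and eager modules produce identical outputs for the same input
print(torch.allclose(model(x), scripted(x)))
```

The scripted module can then be serialized with torch.jit.save and loaded in a Python-free C++ runtime.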
What are the limitations of PyTorch?
Less mature deployment ecosystem than TensorFlow (though improving). Initially slower adoption in production environments. Some high-level tools require third-party libraries (such as PyTorch Lightning). Mobile deployment requires TorchScript or extra conversion steps rather than working out of the box. Smaller corporate support ecosystem compared to TensorFlow.
How can I practice PyTorch typing speed?
CodeSpeedTest offers 10+ real PyTorch code examples for typing practice. You can measure your WPM, track accuracy, and improve your coding speed with guided exercises.