Deep Learning PyTorch Course, Running Example Files on Colab

Hello! In this post, we will start with the basics of deep learning and PyTorch and write code that you can practice with. We will also guide you on how to run the code using Google Colab. PyTorch is a deep learning library well suited to learning and research, providing an intuitive and flexible dynamic computation graph. Because the graph is built step by step as operations execute, researchers and engineers can easily modify and optimize their models as needed.

1. Overview of Deep Learning

Deep learning is a field of machine learning that uses artificial neural networks to learn patterns from data. It is mainly used in image recognition, natural language processing, and speech recognition. Essentially, a deep learning model receives input data, processes it, and outputs a result. These models are composed of many neurons, each of which combines its inputs with learned weights to produce an output.

2. What is PyTorch?

PyTorch is a deep learning framework developed by Facebook, popular for its ability to write Pythonic code. One of the advantages of PyTorch is its intuitive interface and powerful GPU acceleration capabilities, allowing for efficient handling of large-scale data and complex models. Additionally, it supports Dynamic Computation Graphs, enabling flexible changes to the model’s structure.

3. Setting up Google Colab

Google Colab provides an online environment to run Python code. It supports GPU acceleration using CUDA, allowing you to complete model training in a short time.

  1. Log in with your Google account.
  2. Access Google Colab.
  3. Create a new notebook.
  4. Click ‘Runtime’ -> ‘Change Runtime Type’ in the top menu and select GPU.

Your Colab environment is now ready!

4. Basic Usage of PyTorch

4.1. Installing PyTorch

PyTorch is installed by default in Colab, but if you want the latest version, you can install it using the command below.

!pip install torch torchvision

4.2. Tensor

The tensor is the core data structure of PyTorch. It is essentially an N-dimensional array that supports mathematical operations, and it provides the following features:

  • Portability between CPU and GPU
  • Automatic differentiation capability

Creating Tensors

The code below is an example of creating basic tensors.

import torch

# Creating basic tensors
tensor_1d = torch.tensor([1.0, 2.0, 3.0])
tensor_2d = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(tensor_1d)
print(tensor_2d)
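
PyTorch tensors can also be moved between devices and can track gradients automatically. Below is a minimal sketch of the two features listed above, assuming the Colab GPU runtime from section 3 (it falls back to the CPU if no GPU is available).

# Move a tensor to the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor_gpu = tensor_1d.to(device)
print(tensor_gpu.device)

# Track gradients with autograd
x = torch.tensor([2.0], requires_grad=True)
y = x ** 2
y.backward()   # dy/dx = 2x
print(x.grad)  # tensor([4.])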

5. Building a Deep Learning Model

Now let’s build an actual deep learning model. We will implement a simple Deep Neural Network (DNN) and create a model to recognize handwritten digits using the MNIST dataset.

5.1. Downloading the MNIST Dataset

The MNIST dataset is a collection of handwritten digit images and is commonly used as a test dataset for deep learning models. You can easily download and load the dataset using PyTorch.

import torch
from torchvision import datasets, transforms

# Define dataset transformations
transform = transforms.Compose([
    transforms.ToTensor(),  # Convert to tensor
    transforms.Normalize((0.5,), (0.5,))  # Normalization
])

# Download MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

# Create data loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)

5.2. Defining the Model

The code below is an example of defining a simple deep neural network.

import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # Input layer
        self.fc2 = nn.Linear(128, 64)        # Hidden layer
        self.fc3 = nn.Linear(64, 10)         # Output layer

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Convert 2D to 1D
        x = torch.relu(self.fc1(x))  # Activation function
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = SimpleNN()
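
Before training, it is worth sanity-checking the architecture by passing a dummy batch through the model and inspecting the output shape:

# One fake MNIST-sized image; the model should return one score per digit class
dummy = torch.randn(1, 28, 28)
print(model(dummy).shape)  # torch.Size([1, 10])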

6. Model Training and Evaluation

6.1. Setting Loss Function and Optimizer

Define the loss function and optimizer for model training. Cross Entropy Loss is commonly used for classification problems.

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

6.2. Training the Model

The process of training the model is as follows:

num_epochs = 5

for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()  # Gradient initialization
        outputs = model(images)  # Model prediction
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Calculate gradient
        optimizer.step()  # Update weights
    
    print(f'Epoch [{epoch + 1}/{num_epochs}], Loss: {loss.item():.4f}')  # Loss of the last batch in this epoch
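
The loop above runs on the CPU. Since we enabled a GPU runtime in section 3, a common pattern is to move the model and each batch to the GPU. Below is a minimal sketch of the same loop, set up from scratch so that the optimizer sees the GPU parameters:

# Select the GPU if available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = SimpleNN().to(device)  # move the model before creating the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(num_epochs):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)  # move each batch
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()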

7. Evaluating Model Performance

After training the model, evaluate its performance using the test dataset.

correct = 0
total = 0

with torch.no_grad():  # Disable gradient calculations
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)  # Class with the highest probability
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the model on the test images: {100 * correct / total:.2f}%')  # Output accuracy

8. Conclusion

In this post, we learned how to build a simple deep learning model using PyTorch and how to run it on Google Colab. In the future, it would be beneficial to tackle more complex models and various datasets, and to learn advanced topics such as transfer learning or reinforcement learning. Challenge yourself with various projects to gain deeper understanding and experience!

Deep Learning PyTorch Course, What is Colab

In this lecture, we will take a detailed look at Google Colab, an essential tool for learning deep learning. Using Colab together with the deep learning library PyTorch makes it easy to train and experiment with machine learning and deep learning models. In this post, we will present an overview of Colab’s features and benefits, along with an example of building a simple deep learning model with PyTorch in Python.

1. What is Google Colab?

Google Colaboratory, commonly referred to as Colab, is a free Jupyter notebook environment that supports machine learning, data analysis, and education using Python. Colab is integrated with Google Drive, enabling users to easily store and share their data.
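
For example, a notebook can mount your Google Drive with the built-in google.colab helper (Colab will ask you to authorize access):

from google.colab import drive

# Mount Google Drive under /content/drive
drive.mount('/content/drive')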

1.1 Key Features

  • Support for GPU and TPU: Free access to NVIDIA GPUs and Google TPUs is provided, which speeds up the training of complex deep learning models.
  • Google Drive Integration: Users can easily manage and share their data and results.
  • Data Visualization Tools: Supports various visualization libraries such as Matplotlib and Seaborn for smooth data analysis.
  • Easy Library Installation: You can easily install libraries like TensorFlow and PyTorch as needed.

1.2 Benefits of Colab

There are several benefits to using Colab. First, because the work happens in a cloud environment, users can perform complex tasks without consuming local computing resources. This is particularly advantageous for large-scale deep learning projects that require a GPU. Furthermore, results appear alongside the code that produced them, which makes Colab useful for research and education.

2. What is PyTorch?

PyTorch is an open-source machine learning library primarily used for deep learning, implemented in Python and C++. PyTorch has the property of dynamic computational graphs, making it particularly suitable for research and prototyping. Additionally, it is highly compatible with Python, making the process of writing and debugging code easier.

2.1 Installation Method

PyTorch can be easily used in Colab. You can install the essential libraries related to PyTorch by running the cell below.

!pip install torch torchvision

3. A Simple Deep Learning Model Using PyTorch

Now, let’s implement a simple neural network model using PyTorch in Google Colab. In this example, we will create a digit recognizer using the MNIST dataset.

3.1 Preparing the Dataset

First, we prepare the MNIST dataset. MNIST consists of digit images of 28×28 pixels and is commonly used as a benchmark dataset to evaluate the performance of deep learning models.

import torch
import torchvision
import torchvision.transforms as transforms

# Define data transformations
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

# Download training set and test set
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)
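
Colab renders Matplotlib figures inline, so it is easy to inspect a sample before training. A quick sketch using the loader defined above:

import matplotlib.pyplot as plt

# Take one batch from the training loader and display the first image
images, labels = next(iter(trainloader))
plt.imshow(images[0].squeeze(), cmap='gray')  # shape [1, 28, 28] -> [28, 28]
plt.title(f'Label: {labels[0].item()}')
plt.show()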

3.2 Designing the Neural Network

We will define the neural network architecture as follows. Here we will use a simple model consisting of an input layer, two hidden layers, and an output layer.

import torch.nn as nn
import torch.optim as optim

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # Input layer (784 nodes) -> First hidden layer (128 nodes)
        self.fc2 = nn.Linear(128, 64)        # First hidden layer -> Second hidden layer (64 nodes)
        self.fc3 = nn.Linear(64, 10)         # Second hidden layer -> Output layer (10 nodes)

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Convert each image to a 1D vector
        x = torch.relu(self.fc1(x))  # First hidden layer
        x = torch.relu(self.fc2(x))  # Second hidden layer
        x = self.fc3(x)  # Output layer
        return x

model = SimpleNN()

3.3 Defining the Loss Function and Optimization Algorithm

We will use CrossEntropyLoss as the loss function and Adam Optimizer to train the model.

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

3.4 Training the Model

The next step is the process of training the model. We will update the model weights and reduce the loss over several epochs.

for epoch in range(5):  # Train for 5 epochs
    running_loss = 0.0
    for inputs, labels in trainloader:
        optimizer.zero_grad()  # Reset gradients
        outputs = model(inputs)  # Generate outputs by putting inputs into the model
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Backpropagation
        optimizer.step()  # Optimization
        running_loss += loss.item()  # Accumulate loss
        
    print(f'Epoch {epoch + 1}, Loss: {running_loss / len(trainloader):.4f}')  # Output average loss

3.5 Evaluating the Model

Finally, we will evaluate the model’s performance using the test set. We will calculate the accuracy while passing through the prepared test dataset.

correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in testloader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)  # Select the class with the highest probability
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy: {100 * correct / total:.2f}%')  # Output accuracy
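
If you want to keep the trained weights, for example in your mounted Google Drive, you can save the model’s state dict and reload it later (the file name here is just an example):

# Save the trained weights
torch.save(model.state_dict(), 'simple_nn_mnist.pt')

# Reload them into a fresh instance of the same architecture
restored = SimpleNN()
restored.load_state_dict(torch.load('simple_nn_mnist.pt'))
restored.eval()  # switch to evaluation mode before inference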

4. Conclusion

In this post, we explored the features and benefits of Google Colab and built a simple deep learning model using PyTorch. Colab gives data scientists and researchers a convenient environment for deep learning, and it pairs very well with PyTorch. We will return with a variety of advanced topics in the future!

Welcome to the world of deep learning. We hope you continue to learn new technologies and methods as you move forward!

Deep Learning PyTorch Course, What is Kaggle

The field of deep learning is advancing at an astonishing rate and plays a crucial role not only in commercial applications but also in research and education. One of the key platforms in this trend is Kaggle. In this post, we will take a detailed look at what Kaggle is and the role it plays, along with an example of implementing a deep learning model using PyTorch.

1. Introduction to Kaggle

Kaggle is a data science community and a platform where users can develop and compete with data analysis, machine learning, and deep learning models. Users can explore various datasets, develop models to share with others, or participate in competitions. Kaggle helps in building experience related to data science and machine learning and improving one’s skills.

1.1 Main Features of Kaggle

  • Datasets: Users can explore and download datasets on various topics.
  • Competitions: Participate in data science competitions to solve problems and win prizes.
  • Code Sharing: Users can share their code and learn from others’ code.
  • Community: Network with data scientists for collaboration or knowledge sharing.

2. What is PyTorch?

PyTorch is an open-source machine learning library suitable for building and training dynamic neural networks. PyTorch is particularly popular among researchers, offering flexible modeling capabilities and an easy debugging environment. Many of the latest deep learning research implementations utilize PyTorch.

2.1 Features of PyTorch

  • Flexibility: Easily create complex models using dynamic computation graphs.
  • GPU Support: Fast computation through CUDA is available.
  • User-Friendly API: Provides an API similar to NumPy, making it easy to learn.

3. Implementing a Deep Learning Model with PyTorch

Now, let’s implement a basic neural network using PyTorch. This example will address the MNIST handwritten digit recognition problem. The MNIST dataset consists of images of handwritten digits from 0 to 9.

3.1 Installing Required Libraries

!pip install torch torchvision

3.2 Loading the Dataset

import torch
from torchvision import datasets, transforms

# Define dataset transformations
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

# Load the MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

# Set up data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)

3.3 Defining a Neural Network Model

import torch.nn as nn
import torch.nn.functional as F

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

3.4 Training the Model

model = SimpleNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(5):  # Train for 5 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()  # Initialize gradients
        outputs = model(images)  # Model predictions
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Backpropagation
        optimizer.step()  # Update weights
    print(f'Epoch [{epoch + 1}/5], Loss: {loss.item():.4f}')

3.5 Evaluating the Model

correct = 0
total = 0

with torch.no_grad():  # Deactivate gradient computation
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the model: {100 * correct / total:.2f}%')
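
As a quick sanity check, you can also look at a single prediction from the test loader:

# Predict the class of one test image
images, labels = next(iter(test_loader))
with torch.no_grad():
    output = model(images[0:1])  # keep the batch dimension
predicted = output.argmax(dim=1).item()
print(f'Predicted: {predicted}, Actual: {labels[0].item()}')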

4. Conclusion

Kaggle is a crucial resource for data science and machine learning, offering a variety of datasets and learning opportunities. PyTorch is a powerful tool for building and experimenting with models on these datasets. In this tutorial, we explored the basic processes of data loading, modeling, training, and evaluation. Enhance your deep learning skills through the various challenges offered on Kaggle!

Deep Learning PyTorch Course, Start with Kaggle

With the advancement of deep learning, AI technology is rapidly evolving across many fields. Its application in data science is especially prominent, and many people turn to online platforms to study machine learning and deep learning. Among them, Kaggle is an all-in-one platform for data scientists and machine learning engineers that provides a variety of datasets and problems. In this article, we will explore how to gain practical experience on Kaggle using PyTorch.

1. What is PyTorch?

PyTorch is an open-source machine learning framework developed by Facebook AI Research (FAIR), and it is very useful for building and training deep learning models. In particular, it supports dynamic computation graphs, which provide flexibility and readability in code, making it easy to implement complex models.

1.1. Key Features of PyTorch

  • Dynamic Computation Graph: The computation graph is created during execution, allowing for flexible modification of the model’s structure.
  • Pythonic Design: It is very similar to the basic syntax of Python, enabling natural and intuitive code writing.
  • Strong GPU Support: Through CUDA, it supports powerful parallel processing, allowing for efficient handling of large datasets.

2. Introduction to Kaggle

Kaggle is a platform for data science competitions where participants analyze datasets and train models to solve various problems, ultimately submitting their prediction results. Kaggle serves as a competitive arena for everyone, from beginners to experts, providing various resources and tutorials to help build skills.

2.1. Creating a Kaggle Account

To get started with Kaggle, you first need to create an account. Visit the Kaggle website to sign up. After registering, you can set up your profile and participate in various competitions.

3. Basic Example Using PyTorch

Now let’s create a deep learning model through a simple PyTorch example. In this example, we will build a model to recognize handwritten digits using the MNIST digit data.

3.1. Installing Required Libraries

!pip install torch torchvision

3.2. Downloading the MNIST Dataset

The MNIST dataset consists of handwritten digit images. We will use the dataset provided by torchvision to download it.

import torch
from torchvision import datasets, transforms

# Data preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

# Download MNIST dataset
trainset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

3.3. Building the Model

We will build a neural network with an MLP (Multi-layer Perceptron) structure. The model can be defined using the code below.

import torch.nn as nn
import torch.nn.functional as F

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(784, 128)  # 28*28 = 784
        self.fc2 = nn.Linear(128, 10)    # 10 classes for digits 0-9

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten input
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNN()

3.4. Model Training

To train the model, we will define a loss function and an optimization technique, followed by training over several epochs.

import torch.optim as optim

# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Train the model
for epoch in range(5):  # 5 epochs
    running_loss = 0.0
    for images, labels in trainloader:
        optimizer.zero_grad()   # initialize gradients to zero
        outputs = model(images) # Forward pass
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Backward pass
        optimizer.step() # Update parameters
        running_loss += loss.item()
    print(f'Epoch {epoch+1}, Loss: {running_loss/len(trainloader):.4f}')

3.5. Model Evaluation

To evaluate whether the model has been well trained, we will calculate the accuracy on the test data.

# Model evaluation
testset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)

correct = 0
total = 0

with torch.no_grad():
    for images, labels in testloader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy: {100 * correct / total:.2f}%')

4. Participating in a Kaggle Competition

Having learned the basic usage of PyTorch through the MNIST example, let’s participate in a Kaggle competition. There are various competitions on Kaggle, and you can join one in a field that interests you. Each competition page provides dataset downloads and example code for you to review.

4.1. Understanding Competition Tasks

Before joining a competition, you need to fully understand the problem description and the structure of the dataset. For instance, in the Titanic Survival Prediction competition, you build a model that predicts which passengers survived, using passenger attributes as features and the survival column as the label.

4.2. Data Preprocessing

To improve model performance, data preprocessing is essential. This includes handling missing values, engineering useful features, and normalizing the data.
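
As a minimal sketch, assuming the Titanic competition’s train.csv with its standard columns (Age, Embarked, Sex, Fare, and so on), preprocessing with pandas might look like this:

import pandas as pd

# A minimal preprocessing sketch (assumes the Titanic train.csv is present)
df = pd.read_csv('train.csv')

# Handle missing values
df['Age'] = df['Age'].fillna(df['Age'].median())
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])

# Encode categorical features as numbers
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
df = pd.get_dummies(df, columns=['Embarked'])

# Normalize a continuous feature
df['Fare'] = (df['Fare'] - df['Fare'].mean()) / df['Fare'].std()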

4.3. Model Selection

You need to choose a suitable model based on the characteristics of the problem. CNNs (Convolutional Neural Networks) are generally used for image data, while RNNs (Recurrent Neural Networks) are utilized for time series data.

4.4. Submission Process

After training the model, save the prediction results as a CSV file for submission. The format of the file may vary depending on the competition, so be sure to check the submission guidelines.
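
The exact format differs per competition; as a hypothetical example for Titanic, where test_ids and predictions stand in for your own test IDs and model outputs:

import pandas as pd

# Hypothetical example: Titanic expects PassengerId and Survived columns.
# test_ids and predictions are placeholders for your own values.
submission = pd.DataFrame({'PassengerId': test_ids, 'Survived': predictions})
submission.to_csv('submission.csv', index=False)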

5. Communicating with the Community

One of the greatest advantages of Kaggle is the ability to receive help from the community. You can refer to other participants’ notebooks and learn a lot through questions and answers. Additionally, networking with experienced data scientists can greatly aid in your growth.

5.1. Utilizing Notebooks

Kaggle offers a Notebooks feature where you can share your code and process. It is a great place to organize your own know-how or learn from the insights of other participants.

5.2. Scripts and Kaggle API

Using the Kaggle API, you can easily download datasets and submit to competitions from the command line, which simplifies repetitive tasks through automation. Note that the API requires an authentication token (a kaggle.json file, downloadable from your Kaggle account settings).

!kaggle competitions download -c titanic
!kaggle kernels push

6. Conclusion

For many starting in deep learning, PyTorch and Kaggle are excellent starting points. They provide opportunities to gain practical project experience, learn modeling techniques, and understand how to communicate within the community. If you have learned the basic usage of PyTorch and how to participate in Kaggle competitions through this tutorial, you can now start incorporating various theories and techniques to create your own projects. The future of AI lies in your hands!

Deep Learning PyTorch Course, Supervised Learning

Deep learning is a field of artificial intelligence (AI) that uses multilayer neural networks to learn patterns from data. Today, we will take an in-depth look at supervised learning, one of the most commonly used learning methods in deep learning.

1. What is Supervised Learning?

Supervised learning is a method for learning predictive models from given data. Here, ‘supervised’ refers to the labeled training data. In supervised learning, the model learns the relationship between input data and output labels so that it can make predictions on new data.

1.1 Types of Supervised Learning

Supervised learning can be broadly divided into two types: classification and regression. A short code sketch contrasting the two follows the list below.

  • Classification: Predicts whether a given input data belongs to a specific class.
    For example, classifying whether an email is spam or not falls into this category.
  • Regression: Predicts continuous numeric values based on input data.
    For instance, predicting house prices based on the area of the house is an example of regression.
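
To make the distinction concrete, the sketch below contrasts the loss functions typically paired with each type, using dummy tensors only:

import torch
import torch.nn as nn

# Classification: class scores vs. an integer class label
logits = torch.tensor([[1.2, -0.3, 0.5]])  # scores for 3 classes
label = torch.tensor([0])                  # the true class index
print(nn.CrossEntropyLoss()(logits, label))

# Regression: a continuous prediction vs. a continuous target
prediction = torch.tensor([185000.0])  # e.g., a predicted house price
target = torch.tensor([200000.0])      # the actual price
print(nn.MSELoss()(prediction, target))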

2. Introduction to PyTorch

PyTorch is an open-source machine learning library developed by Facebook that provides a variety of useful features for deep learning researchers and developers. In particular, it supports dynamic computation graphs, which makes models easier to debug and modify.

2.1 Installing PyTorch

To install PyTorch, you can use pip with the following command:

pip install torch torchvision torchaudio

3. Creating a Deep Learning Model

Now, let’s create a simple deep learning model using PyTorch.
This example will address a classification problem, using the famous MNIST dataset to build a model that classifies handwritten digits.
The MNIST dataset consists of images of digits from 0 to 9.

3.1 Loading the Dataset

First, we will load the MNIST dataset and split it into training and testing data.

import torch
from torchvision import datasets, transforms

# Data transformation (normalization)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))  # Normalized to mean 0.5, standard deviation 0.5
])

# Download and load the MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)

3.2 Defining the Model

Deep learning models are defined by inheriting from nn.Module. Here, we will define a simple neural network model.

import torch.nn as nn
import torch.nn.functional as F

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # First layer
        self.fc2 = nn.Linear(128, 64)       # Second layer
        self.fc3 = nn.Linear(64, 10)        # Output layer

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Convert 2D image to 1D vector
        x = F.relu(self.fc1(x))  # Apply ReLU activation function
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Create an instance of the model
model = SimpleNN()
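
As a quick check, you can count the model’s trainable parameters:

# 784*128 + 128 + 128*64 + 64 + 64*10 + 10 = 109,386 parameters
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'Trainable parameters: {num_params}')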

3.3 Defining the Loss Function and Optimizer

Now we will define the loss function and optimizer. We will use the cross-entropy loss function and the Adam optimizer.

import torch.optim as optim

# Loss function
criterion = nn.CrossEntropyLoss()
# Optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)

3.4 Training the Model

Now it’s time to train the model. For each epoch, we will pass the data through the model iteratively, compute the loss, and update the weights.

num_epochs = 5

for epoch in range(num_epochs):
    for images, labels in train_loader:
        # Zero the gradients
        optimizer.zero_grad()
        # Pass images through the model
        outputs = model(images)
        # Calculate loss
        loss = criterion(outputs, labels)
        # Backpropagation
        loss.backward()
        # Update weights
        optimizer.step()
    
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

3.5 Evaluating the Model

Once training is complete, let’s evaluate the model using the test data.

correct = 0
total = 0

with torch.no_grad():  # Disable gradient calculation
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)  # Extract the index of the maximum value
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the model on the test images: {100 * correct / total:.2f}%')

4. Conclusion

In this lecture, we learned how to build a basic deep learning model using PyTorch and solve a classification problem using the MNIST dataset. This approach based on supervised learning can be applied in various fields and expanded into more complex models.

4.1 Additional Learning Resources

If you want to learn more about deep learning, the official PyTorch tutorials and documentation are good starting points.

Deep learning is a vast field of research that requires continuous learning.
We hope this will help you on your deep learning journey!