Deep Learning PyTorch Course, Deep Learning Training Algorithms

Deep learning is a field of machine learning based on artificial neural networks, which is used to learn patterns from data and perform tasks such as prediction or classification. In this course, we will explain the basic concepts of deep learning along with the learning algorithms using a deep learning framework called PyTorch.

Basic Concepts of Deep Learning

The core of deep learning is neural networks. A neural network is a structure composed of units called nodes that are connected in layers, receiving input data and applying weights and biases to generate output data.
Each node performs a nonlinear transformation, which is accomplished through an activation function.
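
For example, the computation of a single node can be sketched in a few lines of PyTorch; the input values, weights, and bias below are illustrative:

import torch

# A single node: weighted sum of the inputs plus a bias, then a nonlinearity
x = torch.tensor([0.5, -1.0, 2.0])   # input vector (3 features, illustrative)
w = torch.tensor([0.1, 0.4, -0.2])   # weights (illustrative)
b = torch.tensor(0.3)                # bias (illustrative)

z = torch.dot(w, x) + b              # linear combination
output = torch.relu(z)               # nonlinear activation (ReLU)
print(output)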

Structure of Neural Networks

Generally, neural networks consist of an input layer, hidden layers, and an output layer.

  • Input Layer: The place where the model receives data
  • Hidden Layers: Internal layers that process the input data, which can have multiple layers
  • Output Layer: The layer that outputs the final prediction value or class

Activation Functions

Activation functions introduce non-linearity at each node. Commonly used activation functions are listed below; a short PyTorch snippet after the list shows them in use.

  • ReLU (Rectified Linear Unit): $f(x) = \max(0, x)$
  • Sigmoid: $f(x) = \frac{1}{1 + e^{-x}}$
  • Tanh: $f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$
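
All three are available as built-in PyTorch functions; the short snippet below applies them to a few sample inputs:

import torch

x = torch.linspace(-2.0, 2.0, steps=5)  # sample inputs: -2, -1, 0, 1, 2

print(torch.relu(x))     # max(0, x) elementwise
print(torch.sigmoid(x))  # squashes values into (0, 1)
print(torch.tanh(x))     # squashes values into (-1, 1)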

Deep Learning Training Algorithms

To train a deep learning model, a dataset is required. The data consists of inputs and targets (outputs).
The learning process of the model proceeds through the following steps.

1. Forward Pass

The input data is passed through the model to compute the predicted values. During this forward pass, the network's weights and biases are used to generate the output.

2. Loss Calculation

The loss is calculated as the difference between the model’s predictions and the actual target values. Common loss functions include Mean Squared Error (MSE) and Cross-Entropy.
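
Both loss functions are available in torch.nn; a quick illustration with made-up values:

import torch
import torch.nn as nn

# MSE for regression: compares continuous predictions with targets
mse = nn.MSELoss()
pred = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])
print(mse(pred, target))  # mean of the squared differences

# Cross-entropy for classification: raw logits vs. integer class labels
ce = nn.CrossEntropyLoss()
logits = torch.tensor([[1.2, 0.3, -0.8]])  # one sample, three classes
label = torch.tensor([0])                   # true class index
print(ce(logits, label))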

3. Backpropagation

This process adjusts the weights and biases based on the loss. The backpropagation algorithm uses the chain rule to compute the gradient of the loss with respect to each weight, and gradient descent then uses these gradients to update the model's parameters.
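
PyTorch's autograd engine performs this chain-rule computation automatically. A minimal sketch with a single weight (the values are illustrative):

import torch

w = torch.tensor(2.0, requires_grad=True)  # a single trainable weight
x = torch.tensor(3.0)
y_true = torch.tensor(10.0)

y_pred = w * x                   # forward pass
loss = (y_pred - y_true) ** 2    # squared-error loss

loss.backward()                  # backpropagation via the chain rule
print(w.grad)                    # dloss/dw = 2 * (w*x - y_true) * x = -24.0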

4. Weight Update

The calculated gradients are used to update the weights and biases. For each parameter the update rule is the same, using that parameter's own gradient:

w = w - learning_rate * dL/dw
b = b - learning_rate * dL/db
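
Continuing the single-weight sketch above, the update can be written out by hand; in practice an optimizer such as torch.optim.SGD performs exactly this step:

import torch

learning_rate = 0.1
w = torch.tensor(2.0, requires_grad=True)
x, y_true = torch.tensor(3.0), torch.tensor(10.0)

loss = (w * x - y_true) ** 2
loss.backward()

with torch.no_grad():            # update outside the autograd graph
    w -= learning_rate * w.grad  # w = w - learning_rate * gradient
    w.grad.zero_()               # clear the gradient for the next step
print(w)                         # 2.0 - 0.1 * (-24.0) = 4.4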

Implementation in PyTorch

Now, based on the explanations above, let’s implement a simple deep learning model in PyTorch. This example uses the MNIST handwritten digit recognition dataset to classify handwritten digits.

Install and Import Required Libraries

pip install torch torchvision

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

Load and Preprocess Dataset

Load the MNIST dataset and perform normalization on the image data.

# Data Preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

# Load Dataset
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

# Create Data Loaders
train_loader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)

Define the Model

Define a simple neural network model. The input size is 28×28 (MNIST image size), and it has two hidden layers. The output layer is set to 10 (digits 0 to 9).

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # Input layer -> Hidden layer
        self.fc2 = nn.Linear(128, 64)        # Hidden layer -> Hidden layer
        self.fc3 = nn.Linear(64, 10)         # Hidden layer -> Output layer
        self.activation = nn.ReLU()          # Activation function

    def forward(self, x):
        x = x.view(-1, 28 * 28)              # Reshape image to 1D tensor
        x = self.activation(self.fc1(x))     # Forward pass
        x = self.activation(self.fc2(x))
        x = self.fc3(x)
        return x

Initialize the Model and Set Loss Function and Optimizer

# Initialize Model
model = SimpleNN()

# Set Loss Function and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

Train the Model

Train the model for several epochs, printing the loss at the end of each epoch (the value shown is the loss of the final batch in that epoch).

# Train the Model
num_epochs = 5

for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()                 # Initialize gradients
        outputs = model(images)               # Forward pass
        loss = criterion(outputs, labels)     # Calculate loss
        loss.backward()                       # Backpropagation
        optimizer.step()                      # Update weights

    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

Evaluate the Model

Evaluate the accuracy of the model using the test dataset.

# Evaluate the Model
model.eval()  # Set to evaluation mode
with torch.no_grad():  # Disable gradient calculation
    correct = 0
    total = 0
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)  # Predicted classes
        total += labels.size(0)                     # Total sample count
        correct += (predicted == labels).sum().item()  # Count correct predictions

print(f'Accuracy of the model on the test images: {100 * correct / total:.2f}%')

Conclusion

In this course, we covered the basic concepts of deep learning and implemented a simple neural network model using PyTorch. Through hands-on practice, we learned about data preprocessing, model definition, training, and evaluation processes.
This provided an opportunity to gain a deep understanding of how deep learning works.
Furthermore, we can explore the world of deep learning by dealing with complex architectures, advanced optimization techniques, and various datasets.

Thank you!

Deep Learning PyTorch Course, Deep Learning Training Process

Deep learning is a branch of artificial intelligence and a family of machine learning methods based on artificial neural networks. PyTorch is one of the most widely used deep learning frameworks today, popular among researchers and developers for its easy-to-use dynamic computation graph and powerful tensor operations. In this article, we will take a detailed look at the training process of deep learning using PyTorch.

1. Basics of Deep Learning

Deep learning is a method of analyzing and predicting data through artificial neural networks. An artificial neural network is a model that mimics the structure and function of biological neural networks, where each node represents a nerve cell and is connected to transmit information.

1.1 Structure of Artificial Neural Networks

Artificial neural networks mainly consist of an input layer, hidden layers, and an output layer:

  • Input Layer: The layer where data enters the neural network.
  • Hidden Layer: A layer that performs intermediate computations; a network can have one or more hidden layers.
  • Output Layer: The layer that generates the final result of the neural network.

1.2 Activation Function

The activation function determines whether each neuron in the neural network will be activated. Commonly used activation functions include:

  • Sigmoid: $f(x) = \frac{1}{1 + e^{-x}}$
  • ReLU: $f(x) = \max(0, x)$
  • Tanh: $f(x) = \tanh(x)$

2. Introduction to PyTorch

PyTorch is an open-source deep learning library developed by Facebook that works with Python and supports tensor operations, automatic differentiation, and GPU acceleration. The advantages of PyTorch include:

  • Support for dynamic computation graphs (illustrated in the short sketch after this list)
  • Intuitive API and thorough documentation
  • Active community and various available examples
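
As a brief illustration of the first point, the forward pass below contains ordinary Python control flow, and the computation graph is rebuilt on every call; the layer size and loop bound are illustrative:

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        # Ordinary Python control flow: a new graph is built on each call
        for _ in range(torch.randint(1, 4, (1,)).item()):
            x = torch.relu(self.fc(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 4))  # the graph depth can differ from run to run
print(out.shape)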

3. Deep Learning Training Process

The deep learning training process can be broadly divided into four stages: data preparation, model construction, training, and evaluation.

3.1 Data Preparation

To train a deep learning model, data must be prepared. This typically includes the following steps:

  • Data collection
  • Data preprocessing (normalization, sampling, etc.)
  • Splitting the data into training and test sets

3.2 Preparing Data in PyTorch

In PyTorch, packages like torchvision can be used to handle data. For example, the code to load the CIFAR-10 dataset is as follows:

import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

3.3 Model Construction

When constructing a model, the structure of the neural network must be defined. In PyTorch, user-defined models can be created by inheriting the torch.nn.Module class. Below is an example of a simple CNN model:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

3.4 Model Training

When training a model, a loss function and an optimization algorithm must be defined. Generally, the cross-entropy loss function is used for classification problems, and optimization algorithms such as SGD or Adam can be applied.

import torch.optim as optim

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # Loop over the dataset multiple times
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()  # Initialize gradients
        outputs = net(inputs)  # Forward pass
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Backward pass
        optimizer.step()  # Update weights

print('Finished Training')

3.5 Model Evaluation

After training the model, it needs to be evaluated; accuracy is typically calculated on the test dataset. Since only the training split was loaded above, we first load the CIFAR-10 test split.

# Load the test split (the code above defined only trainloader)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

correct = 0
total = 0
with torch.no_grad():  # Disable gradient calculation
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

4. Directions for the Advancement of Deep Learning

Deep learning is being utilized in various fields and will continue to evolve. In particular, it is expected to bring innovations in many areas, including autonomous vehicles, medical diagnosis, natural language processing, and image generation. PyTorch will also continue to evolve in line with these trends.

Conclusion

In this article, we started with the basics of deep learning and took a detailed look at the learning process of deep learning using PyTorch. Through the stages of data preparation, model construction, training, and evaluation, we confirmed the various functions and conveniences provided by PyTorch. I hope this guide helps broaden your understanding of deep learning and aids in applying it to real projects.

Deep Learning PyTorch Course, Deep Learning Training

Deep Learning is a field of machine learning based on artificial neural networks, focused on automatically learning useful patterns from various data. In this course, we will explain in detail the process of building and training deep learning models using PyTorch. Depending on the data that needs to be learned and the business requirements, various network architectures can be designed, and PyTorch is a very useful tool for this.

1. What is PyTorch?

PyTorch is an open-source machine learning library developed by the Facebook AI Research group, primarily used for deep learning research and production.
It provides tensor calculations and automatic differentiation features that facilitate model training and gradient-based optimization.
Additionally, it integrates well with Python to support intuitive code writing.

2. Installing PyTorch

There are several methods to install PyTorch, and you can install it using Conda or pip through the commands below.

            
# If using Anaconda
conda install pytorch torchvision torchaudio cpuonly -c pytorch

# If using pip
pip install torch torchvision torchaudio

After installation, run the following code to check if the installation has been completed successfully.

            
import torch
print(torch.__version__)

3. Basic Concepts of Deep Learning

The main concepts of deep learning are as follows:

  • Neural Network: A data processing structure composed of input layers, hidden layers, and output layers.
  • Tensor: The basic data structure in PyTorch, referring to multi-dimensional arrays.
  • Activation Function: Applies a nonlinear transformation that determines each node's output.
  • Loss Function: A function that measures the error between the model’s predictions and the actual values.
  • Optimizer: An algorithm that updates the weights of the network based on the loss function.

4. Building Deep Learning Models with PyTorch

Let’s build a simple neural network using PyTorch. The dataset we will use is the famous MNIST handwritten digit dataset. This dataset consists of black and white images containing digits from 0 to 9.

4.1 Downloading the Dataset

PyTorch makes it easy to download and preprocess various image datasets through the torchvision library.
The code for downloading the MNIST dataset and setting up the DataLoader is as follows.

            
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Data preprocessing: convert images to tensors and normalize
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

# Download dataset
train_dataset = datasets.MNIST(root='data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='data', train=False, download=True, transform=transform)

# Set up DataLoader
train_loader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)

4.2 Defining the Neural Network Model

Now let’s define a simple neural network model. The code below represents a neural network with two hidden layers.

            
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # Input layer -> hidden layer 1
        self.fc2 = nn.Linear(128, 64)       # Hidden layer 1 -> hidden layer 2
        self.fc3 = nn.Linear(64, 10)        # Hidden layer 2 -> output layer

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Flatten the image
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

4.3 Setting the Loss Function and Optimizer

To train the model, we need to define the loss function and optimizer. In this case, we will use cross-entropy loss as the loss function and the Adam optimizer as the optimizer.

            
model = SimpleNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

4.4 Training the Model

The code below shows the process of training the model. Data is fetched in mini-batches using the data loader, and for each batch, the model’s output is calculated, followed by loss calculation and weight updates.

            
num_epochs = 5

for epoch in range(num_epochs):
    for images, labels in train_loader:
        # Zero the gradients
        optimizer.zero_grad()

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass
        loss.backward()
        optimizer.step()

    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

4.5 Evaluating the Model

After the model is trained, we evaluate its performance using the test dataset. The code below shows how to measure accuracy on the test dataset.

            
model.eval()  # Switch to evaluation mode
correct = 0
total = 0

with torch.no_grad():  # Disable gradient calculation
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

accuracy = 100 * correct / total
print(f'Accuracy on the test set: {accuracy:.2f}%')

5. Hyperparameter Tuning in Deep Learning

Hyperparameter tuning is an important step in improving the performance of deep learning models. Hyperparameters include learning rate, batch size, size and number of hidden layers, type of activation function, dropout rate, etc.

Generally, techniques such as Grid Search, Random Search, and Bayesian Optimization are used for hyperparameter tuning; each evaluates different combinations of settings to find the one that performs best, as the sketch below illustrates.
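
As a minimal sketch of grid search, the loop below tries every combination of a few learning rates and batch sizes. Note that train_and_evaluate is a hypothetical helper that would train a model with the given settings and return a validation score:

from itertools import product

# train_and_evaluate is a hypothetical helper: it should train a model with
# the given hyperparameters and return a validation accuracy
learning_rates = [0.01, 0.001]
batch_sizes = [32, 64]

best_score, best_config = 0.0, None
for lr, batch_size in product(learning_rates, batch_sizes):
    score = train_and_evaluate(lr=lr, batch_size=batch_size)
    if score > best_score:
        best_score, best_config = score, (lr, batch_size)

print(f'Best config: {best_config}, validation accuracy: {best_score:.2f}')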

6. Conclusion

In this course, we introduced the process of building and training basic deep learning models using PyTorch. We covered key steps in deep learning such as dataset preparation, model definition, training, and evaluation.
Various theories and techniques were also explained to help understand deep learning, so we encourage you to take on more complex models and diverse applications based on this foundation.

Deep Learning PyTorch Course, Deep Learning Terminology

1. What is Deep Learning?

Deep Learning is a field of machine learning based on artificial neural networks, which learns patterns from data to make predictions. Inspired by the structure of the human brain, deep learning employs multi-layer neural networks to understand and learn from input data through appropriate nonlinear transformations. It is utilized in various fields such as image recognition, natural language processing, and speech recognition.

2. What is PyTorch?

PyTorch is an open-source machine learning library developed by Facebook’s artificial intelligence research team. Using PyTorch, one can easily implement the process of constructing and training deep learning models, and it supports dynamic computation graphs, allowing for more intuitive model development. PyTorch is primarily written in Python and enables high-speed computations using GPUs.

3. Key Terms in Deep Learning

  • 3.1 Artificial Neural Network (ANN)

    An artificial neural network is a model developed based on the structure of biological neural networks. It consists of multiple layers, each processing input signals and passing them to the next layer.

  • 3.2 Loss Function

    The loss function measures the difference between the predicted and actual values of the model. A lower value of the loss function indicates better model performance.

  • 3.3 Backpropagation

    Backpropagation is an algorithm used in neural networks to update weights in order to minimize the loss function. It adjusts the weights using gradient descent.

  • 3.4 Overfitting

    Overfitting is a phenomenon where the model fits the training data too well, resulting in poor generalization performance on new data. Regularization techniques such as dropout and weight decay are used to prevent this (see the sketch after this list).

  • 3.5 Hyperparameter

    Hyperparameters are parameters that must be set during the model training process, such as learning rate and batch size. The choice of hyperparameters can significantly affect the model’s performance.
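
As a brief illustration of the regularization techniques mentioned under overfitting, the sketch below adds a dropout layer to a small network and applies weight decay (an L2 penalty) in the optimizer; the layer sizes and rates are illustrative:

import torch.nn as nn
import torch.optim as optim

# Illustrative network: dropout randomly zeroes activations during training
model = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # drop 50% of activations while training
    nn.Linear(128, 10),
)

# weight_decay adds an L2 penalty on the weights during optimization
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

model.train()  # dropout is active in training mode
model.eval()   # dropout is disabled in evaluation mode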

4. PyTorch Example Code

4.1 Constructing a Basic Artificial Neural Network

The following is code that uses PyTorch to construct a basic artificial neural network and train it on the MNIST digit recognition dataset.


import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# Load dataset
transform = transforms.Compose([transforms.ToTensor()])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)

# Define artificial neural network class
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Convert to 1D
        x = torch.relu(self.fc1(x))  # Activation function
        x = self.fc2(x)  # Final output
        return x

# Define model, loss function and optimizer
model = SimpleNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Training
for epoch in range(5):  # Training for 5 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()  # Initialize gradients
        outputs = model(images)  # Model prediction
        loss = criterion(outputs, labels)  # Calculate loss
        loss.backward()  # Backpropagation
        optimizer.step()  # Update weights
    print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')

This code is an example of training a very simple artificial neural network model to classify handwritten digits from the MNIST dataset. The processes of loss calculation, backpropagation, and weight updates occur throughout the construction of the neural network.

5. Future Directions of Deep Learning

Deep learning has rapidly advanced in recent years, particularly demonstrating remarkable achievements in the fields of natural language processing and image processing. Technologies such as Transformer models, GANs (Generative Adversarial Networks), and Deep Reinforcement Learning are establishing themselves as cutting-edge technologies for the future and can be applied across various industries. Moreover, research on efficient resource use, environmentally friendly learning, and model lightweighting is being actively pursued.

6. Conclusion

Deep learning has established itself as a core technology in modern artificial intelligence and can be easily accessed through PyTorch. Based on the foundational concepts and terms provided in this course, you will be able to lay the groundwork for building and experimenting with your own deep learning models. I hope you deepen your understanding of the world of deep learning through various practical exercises in the future.

Deep Learning PyTorch Course, Deep Learning Structure

Deep learning is a field of artificial intelligence (AI) that involves creating machines that learn from data through artificial neural networks to perform prediction and classification tasks. The advancements in deep learning over the past few years have brought about revolutionary changes and achievements in the field of artificial intelligence. In this course, we will explore the fundamental structure of deep learning in detail using PyTorch.

1. Basic Concepts of Deep Learning

In deep learning, data is received as input, processed through multiple layers, and generates the final output. During this process, artificial neural networks (ANN) are used. Neural networks are composed of multiple connected units called nodes (or neurons), and each neuron receives input, multiplies it by weights, adds a bias, and applies a nonlinear activation function.

1.1 Basic Structure of Neural Networks

The basic structure of a neural network consists of an input layer, hidden layers, and an output layer. Each layer is connected to the neurons of the next layer; the input layer accepts data, and the output layer provides results.

import torch.nn as nn
import torch.nn.functional as F

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(2, 3)  # 2 inputs, 3 outputs
        self.fc2 = nn.Linear(3, 1)  # 3 inputs, 1 output

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

2. Introduction to PyTorch

PyTorch is a popular deep learning framework developed by Facebook AI Research, which offers easy-to-use and flexible features. Using PyTorch allows for simple GPU acceleration with tensor operations and supports dynamic computation graphs.

2.1 Basic Tensor

In deep learning, a tensor is the fundamental structure for representing data. A 1D tensor can be thought of as a vector, a 2D tensor as a matrix, and tensors with three or more dimensions as multidimensional arrays.

import torch

# 1D tensor (vector)
tensor_1d = torch.tensor([1, 2, 3])

# 2D tensor (matrix)
tensor_2d = torch.tensor([[1, 2], [3, 4]])

# 3D tensor
tensor_3d = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])

3. Building a Deep Learning Model

Now, let’s build a simple deep learning model. We will create a basic neural network model using various APIs provided by PyTorch.

3.1 Data Preprocessing

Data preprocessing plays an important role in deep learning. It is necessary to prepare the dataset and transform it into a suitable format for training.

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Data standardization
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

3.2 Model Definition

As mentioned earlier, the model is defined by inheriting from nn.Module. This time, let's use the sigmoid activation function on the hidden layer instead of ReLU; the output layer returns a raw logit so that it can be paired with BCEWithLogitsLoss in the next step.

import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(2, 3)
        self.fc2 = nn.Linear(3, 1)

    def forward(self, x):
        x = torch.sigmoid(self.fc1(x))  # sigmoid activation on the hidden layer
        x = self.fc2(x)                 # raw logit output for BCEWithLogitsLoss
        return x

3.3 Model Training

To train the model, we need to define the loss function and optimization algorithm. We use binary cross-entropy with logits (BCEWithLogitsLoss), which applies the sigmoid internally, as the loss function and Adam for optimization.

import torch.optim as optim

model = SimpleNN()
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

X_train_tensor = torch.tensor(X_train, dtype=torch.float32)
y_train_tensor = torch.tensor(y_train, dtype=torch.float32).view(-1, 1)

for epoch in range(1000):
    model.train()
    optimizer.zero_grad()
    outputs = model(X_train_tensor)
    loss = criterion(outputs, y_train_tensor)
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch + 1}/1000], Loss: {loss.item():.4f}')

3.4 Model Evaluation

After the model training is complete, we evaluate the model’s performance using the test data. Here, we measure accuracy.

model.eval()
with torch.no_grad():
    X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
    y_pred = model(X_test_tensor)
    y_pred = (y_pred > 0).float()  # logit > 0 corresponds to probability > 0.5
    accuracy = (y_pred.view(-1) == torch.tensor(y_test, dtype=torch.float32)).float().mean()
    print(f'Accuracy: {accuracy:.4f}')

4. Conclusion

In this lecture, we examined the basic concepts of deep learning and the process of building a simple neural network model using PyTorch. Deep learning can be applied to various fields, and more complex models require deeper structures and diverse techniques. In the next lecture, we will learn about more complex deep learning architectures such as CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks).