Deep Learning PyTorch Course, U-Net

U-Net is a deep learning model widely used for medical image segmentation, and it is particularly effective for tasks that require pixel-level segmentation of images. In this blog post, we will explore the concepts, structure, and implementation of U-Net with PyTorch in detail.

1. History of U-Net

U-Net was proposed in 2015 by Olaf Ronneberger, Philipp Fischer, and Thomas Brox, and achieved excellent performance in the ISBI cell tracking challenge. U-Net builds on the conventional Convolutional Neural Network (CNN) architecture but is designed to perform feature extraction and segmentation simultaneously, which is why it demonstrates high performance on specialized segmentation tasks.

2. Structure of U-Net

The structure of U-Net is broadly divided into two parts: the downsampling (contracting) path and the upsampling (expanding) path. The downsampling path gradually reduces the image size while extracting features, and the upsampling path gradually restores the image while generating a segmentation map.

2.1 Downsampling Path

The downsampling path consists of multiple convolutional blocks. Each block is composed of convolutional layers, activation functions, and pooling layers. As the data is processed in this way, the image size decreases and the features are emphasized.

2.2 Upsampling Path

The upsampling path utilizes upsampling layers to restore the image to its original size. During this time, it merges the features extracted from the downsampling path to provide segmented information. This enhances the prediction accuracy for each pixel.

2.3 Skip Connections

U-Net uses ‘Skip Connections’ to link the data from the downsampling path and the upsampling path. This minimizes information loss and yields more refined segmentation results.

3. Implementing U-Net (PyTorch)

Now, let’s implement the U-Net model using PyTorch. First, we need to install the necessary packages and prepare the data.

    
    # Import necessary packages
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    
    

3.1 Defining the U-Net Model

Below is the code that defines the basic structure of the U-Net model.

    
    class UNet(nn.Module):
        def __init__(self, in_channels, out_channels):
            super(UNet, self).__init__()

            self.encoder1 = self.conv_block(in_channels, 64)
            self.encoder2 = self.conv_block(64, 128)
            self.encoder3 = self.conv_block(128, 256)
            self.encoder4 = self.conv_block(256, 512)

            self.bottom = self.conv_block(512, 1024)

            self.decoder4 = self.upconv_block(1024, 512)
            self.decoder3 = self.upconv_block(512, 256)
            self.decoder2 = self.upconv_block(256, 128)
            self.decoder1 = self.upconv_block(128, 64)

            # Convolution blocks applied after each skip connection
            # (concatenation doubles the channel count, so these halve it again)
            self.dec_conv4 = self.conv_block(1024, 512)
            self.dec_conv3 = self.conv_block(512, 256)
            self.dec_conv2 = self.conv_block(256, 128)
            self.dec_conv1 = self.conv_block(128, 64)

            self.final_conv = nn.Conv2d(64, out_channels, kernel_size=1)

        def conv_block(self, in_channels, out_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)
            )

        def upconv_block(self, in_channels, out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2),
                nn.ReLU(inplace=True)
            )

        def forward(self, x):
            enc1 = self.encoder1(x)
            enc2 = self.encoder2(F.max_pool2d(enc1, kernel_size=2))
            enc3 = self.encoder3(F.max_pool2d(enc2, kernel_size=2))
            enc4 = self.encoder4(F.max_pool2d(enc3, kernel_size=2))

            bottleneck = self.bottom(F.max_pool2d(enc4, kernel_size=2))

            dec4 = self.decoder4(bottleneck)
            dec4 = torch.cat((dec4, enc4), dim=1)
            dec4 = self.dec_conv4(dec4)

            dec3 = self.decoder3(dec4)
            dec3 = torch.cat((dec3, enc3), dim=1)
            dec3 = self.dec_conv3(dec3)

            dec2 = self.decoder2(dec3)
            dec2 = torch.cat((dec2, enc2), dim=1)
            dec2 = self.dec_conv2(dec2)

            dec1 = self.decoder1(dec2)
            dec1 = torch.cat((dec1, enc1), dim=1)
            dec1 = self.dec_conv1(dec1)

            return self.final_conv(dec1)
    
    

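To check that the network is wired correctly, it helps to push a dummy tensor through it and inspect the output shape. The 3-channel input, single-channel output, and 128x128 resolution below are illustrative choices; the spatial size just needs to be divisible by 16 so the skip connections line up.

    # Sanity check: run a dummy batch through the model and inspect the output shape
    model = UNet(in_channels=3, out_channels=1)
    dummy = torch.randn(1, 3, 128, 128)  # (batch, channels, height, width)
    with torch.no_grad():
        out = model(dummy)
    print(out.shape)  # expected: torch.Size([1, 1, 128, 128])
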
3.2 Training the Model

Now we are ready to train the U-Net model. We will specify the loss function and optimization algorithm, and prepare the training data.

    
    # Define hyperparameters
    num_epochs = 25
    learning_rate = 0.001

    # Select a device and create the model
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = UNet(in_channels=3, out_channels=1).to(device)
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    # Load and preprocess data
    # Note: ImageFolder returns (image, class index) pairs and is only a placeholder here;
    # for real segmentation, use a Dataset that returns (image, mask) pairs.
    transform = transforms.Compose([
        transforms.Resize((128, 128)),
        transforms.ToTensor(),
    ])

    train_dataset = datasets.ImageFolder(root='your_dataset_path/train', transform=transform)
    train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True)

    # Train the model
    for epoch in range(num_epochs):
        for images, masks in train_loader:
            images = images.to(device)
            masks = masks.float().to(device)

            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, masks)

            # Backward pass and optimization
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')
    
    

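The trained model outputs raw logits, so a sigmoid followed by a threshold is needed to obtain a binary mask. Below is a minimal inference sketch that reuses a training batch purely for illustration; the 0.5 threshold is an arbitrary choice.

    # Inference sketch: convert logits into a binary segmentation mask
    model.eval()
    with torch.no_grad():
        images, _ = next(iter(train_loader))   # reusing a training batch for illustration
        logits = model(images.to(device))
        probs = torch.sigmoid(logits)          # per-pixel probabilities in [0, 1]
        pred_mask = (probs > 0.5).float()      # binary mask
    print(pred_mask.shape)
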
4. Applications of U-Net

U-Net is primarily used in the medical imaging field, but it can also be applied in various other fields. For example:

  • Medical Image Analysis: Accurately identifying tissues, tumors, and other structures in CT and MRI scans.
  • Satellite Image Analysis: Terrain segmentation, urban planning, etc.
  • Autonomous Vehicles: Road and obstacle detection, etc.
  • Video Processing: Object tracking, action recognition, etc.

5. Conclusion

Due to its structure, U-Net exhibits remarkable performance in various image segmentation tasks. In this post, we covered everything from the basics of U-Net to its implementation. U-Net is widely used in the field of medical imaging, but its applications extend far beyond that. As current deep learning technologies continue to evolve, various modifications of U-Net and new approaches utilizing similar network structures are anticipated.

References

  • Ronneberger, Olaf, et al. “U-Net: Convolutional Networks for Biomedical Image Segmentation.” Medical Image Computing and Computer-Assisted Intervention. 2015.
  • PyTorch Documentation: https://pytorch.org/docs/stable/index.html

Deep Learning PyTorch Course, seq2seq

Seq2Seq (Sequence-to-Sequence) models are deep learning models for solving sequence prediction problems. They are mainly used in natural language processing (NLP) and are useful for converting an input sequence into another sequence; for example, they are utilized in machine translation, text summarization, and chatbots. In this lecture, we will cover the basic concepts and structure of the Seq2Seq model, along with implementation examples using PyTorch.

1. Basic Concepts of Seq2Seq Model

The Seq2Seq model consists of two main components: the encoder and the decoder. The encoder encodes the input sequence into a fixed-length vector, while the decoder uses this vector to generate the target sequence.

1.1 Encoder

The encoder processes the given sequence, converting each word into a vector. The encoder's final hidden state is used as the initial hidden state of the decoder.

1.2 Decoder

The decoder generates the target sequence one word at a time: starting from the encoder's output (hidden state), it predicts the next word and feeds the previously predicted word back in as input. This process continues until a target sequence of the specified length is generated.

2. Structure of Seq2Seq Model

The Seq2Seq model is generally implemented using recurrent neural networks like RNN, LSTM, or GRU. Below is a typical structure of the Seq2Seq model.

  • Encoder: Processes the input sequence and returns the hidden state.
  • Decoder: Starts from the encoder's final hidden state and generates the target sequence.
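Before building the full example, the sketch below shows how this encoder-decoder split typically looks in PyTorch using nn.Embedding and nn.GRU. The vocabulary sizes, embedding size, and hidden size are placeholders to be chosen to match your data, so treat this as an illustrative sketch rather than the exact model used later.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_vocab_size, embed_size, hidden_size):
        super(Encoder, self).__init__()
        self.embedding = nn.Embedding(input_vocab_size, embed_size)
        self.gru = nn.GRU(embed_size, hidden_size, batch_first=True)

    def forward(self, src):
        embedded = self.embedding(src)        # (batch, src_len, embed_size)
        outputs, hidden = self.gru(embedded)  # hidden: (1, batch, hidden_size)
        return hidden                         # fixed-length summary of the input sequence

class Decoder(nn.Module):
    def __init__(self, output_vocab_size, embed_size, hidden_size):
        super(Decoder, self).__init__()
        self.embedding = nn.Embedding(output_vocab_size, embed_size)
        self.gru = nn.GRU(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_vocab_size)

    def forward(self, token, hidden):
        embedded = self.embedding(token)            # (batch, 1, embed_size), one step at a time
        output, hidden = self.gru(embedded, hidden)
        logits = self.fc(output.squeeze(1))         # (batch, output_vocab_size)
        return logits, hidden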

3. Implementation of Seq2Seq Model using PyTorch

Now, let’s implement the Seq2Seq model using PyTorch. In this example, we will create a sample machine translation model using a small dataset.

3.1 Preparing the Dataset

First, we prepare the dataset to be used in the example: a small English-to-French translation dataset. Simple string pairs are enough, as sketched below.
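As a rough illustration, a toy parallel corpus can be written directly as a list of string pairs. The sentences below are made-up examples; in practice each word would be tokenized and mapped to an integer index before being fed to the model.

# Toy English-French pairs (illustrative only)
pairs = [
    ("i am a student", "je suis etudiant"),
    ("he is a teacher", "il est professeur"),
    ("she likes apples", "elle aime les pommes"),
    ("we love deep learning", "nous aimons l'apprentissage profond"),
]

# Simple word-to-index vocabularies for both languages
src_to_idx = {word: i for i, word in enumerate(sorted({w for src, _ in pairs for w in src.split()}))}
tgt_to_idx = {word: i for i, word in enumerate(sorted({w for _, tgt in pairs for w in tgt.split()}))}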

Deep Learning PyTorch Course, RNN Cell Implementation

This article provides a detailed explanation of one of the core structures of deep learning, the Recurrent Neural Network (RNN), and demonstrates how to implement an RNN cell using PyTorch. RNNs are very useful for processing sequence data and are widely used in various fields such as natural language processing, speech recognition, and stock prediction. We will understand how RNNs work, their advantages and disadvantages, and implement a simple RNN cell through this discussion.

1. Overview of RNN

RNN is a type of neural network designed for processing sequence data. While traditional neural networks receive fixed-size inputs, RNNs have a structure that allows them to process information over multiple time steps. This enables them to handle temporally continuous data by using the output from a previous step as input for the current step.

1.1 Structure of RNN

The basic component of an RNN is the hidden state. At each time step, the RNN receives an input vector and combines it with the previous hidden state to compute the new hidden state. In mathematical terms, this can be expressed as follows:

    h_t = tanh(W_h * h_{t-1} + W_x * x_t + b)

Where:

  • h_t is the hidden state at time t
  • h_{t-1} is the hidden state at the previous time step
  • x_t is the current input vector
  • W_h is the weight for the previous hidden state, W_x is the weight for the input, and b is the bias.

1.2 Advantages and Disadvantages of RNN

RNNs have the following advantages and disadvantages:

  • Advantages:
    • Ability to process information that varies over time: RNNs can effectively handle sequence data.
    • Variable length input: RNNs can process inputs of varying lengths.
  • Disadvantages:
    • Long-term dependency problem: RNNs struggle to learn long-term dependencies.
    • Vanishing and exploding gradients: Gradients may vanish or explode during backpropagation, making learning difficult.

2. Implementing RNN Cell with PyTorch

Now that we have understood the basic structure of RNNs, let’s implement an RNN cell using PyTorch. PyTorch has become a powerful tool for deep learning research and prototyping.

2.1 Setting Up the Environment

First, ensure that Python and PyTorch are installed. You can install PyTorch using the command below:

pip install torch

2.2 Implementing the RNN Cell Class

Let’s start by writing a class to implement the RNN cell. This class will take an input vector and the previous hidden state to compute the new hidden state.


import torch
import torch.nn as nn

class SimpleRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(SimpleRNNCell, self).__init__()
        self.hidden_size = hidden_size
        self.W_h = nn.Parameter(torch.randn(hidden_size, hidden_size))  # Weight for the previous hidden state
        self.W_x = nn.Parameter(torch.randn(hidden_size, input_size))   # Weight for the input
        self.b = nn.Parameter(torch.zeros(hidden_size))                 # Bias

    def forward(self, x_t, h_t_1):
        # torch.mv performs matrix-vector multiplication, matching the 1-D x_t and h_t_1 used below
        h_t = torch.tanh(torch.mv(self.W_h, h_t_1) + torch.mv(self.W_x, x_t) + self.b)
        return h_t
    

2.3 How to Use the RNN Cell

We will now use the defined RNN cell to process sequence data. As a simple example, we will generate random input data and an initial hidden state, and compute the output through the RNN.


# Parameter settings
input_size = 3   # Size of the input vector
hidden_size = 2  # Size of the hidden state vector
sequence_length = 5

# Initialize the model
rnn_cell = SimpleRNNCell(input_size, hidden_size)

# Generate random input data and initial hidden state
x = torch.randn(sequence_length, input_size)  # (sequence_length, input_size)
h_t_1 = torch.zeros(hidden_size)               # Initial hidden state

# Process sequence through the RNN cell
for t in range(sequence_length):
    h_t = rnn_cell(x[t], h_t_1)  # Calculate new hidden state based on current input and previous hidden state
    h_t_1 = h_t  # Set the current hidden state as the previous hidden state for the next step
    print(f"Time step {t}: h_t = {h_t}")
    

3. Extending RNN: LSTM and GRU

The plain RNN cell is the most basic form; in real applications, LSTM (Long Short-Term Memory) or GRU (Gated Recurrent Unit) cells are often used instead to tackle the long-term dependency problem. LSTMs regulate information flow using cell states, while GRUs offer a simpler structure and provide similar performance to LSTMs.

3.1 LSTM Structure

LSTMs consist of input gates, forget gates, and output gates, allowing them to remember past information more effectively and to selectively forget it.

3.2 GRU Structure

GRUs simplify the structure of LSTMs using update gates and reset gates to control the information flow. GRUs often use fewer parameters than LSTMs and may exhibit similar or even better performance.
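For comparison with the hand-written cell above, PyTorch provides ready-made nn.LSTMCell and nn.GRUCell modules. Below is a minimal sketch that runs the same kind of step-by-step loop with both; the sizes are arbitrary, and note that the built-in cells expect inputs with a batch dimension.

import torch
import torch.nn as nn

input_size, hidden_size, sequence_length, batch_size = 3, 2, 5, 1

lstm_cell = nn.LSTMCell(input_size, hidden_size)
gru_cell = nn.GRUCell(input_size, hidden_size)

x = torch.randn(sequence_length, batch_size, input_size)
h_lstm = torch.zeros(batch_size, hidden_size)  # LSTM hidden state
c_lstm = torch.zeros(batch_size, hidden_size)  # LSTM cell state
h_gru = torch.zeros(batch_size, hidden_size)   # GRU hidden state

for t in range(sequence_length):
    h_lstm, c_lstm = lstm_cell(x[t], (h_lstm, c_lstm))  # LSTM step
    h_gru = gru_cell(x[t], h_gru)                       # GRU step
    print(f"Time step {t}: lstm h = {h_lstm.detach()}, gru h = {h_gru.detach()}")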

4. Conclusion

In this lecture, we introduced the basic concept of RNNs and the process of implementing an RNN cell in PyTorch. RNNs are effective for processing sequence data; however, due to long-term dependency issues and gradient vanishing problems, structures such as LSTMs and GRUs are widely used. We hope this lecture helped you understand the basics of RNNs and allowed you to practice implementing them.

In the future, we will cover the implementation of LSTMs and GRUs, as well as various projects utilizing RNNs. We hope to learn together in the continuously evolving world of deep learning.

Author: Deep Learning Course Team

Contact: [your-email@example.com]

Deep Learning PyTorch Course, RNN Layer and Cell

Deep learning is a technique, based on artificial neural networks, that learns complex patterns through nonlinear functions. In this article, we will explore the basic concepts of Recurrent Neural Networks (RNNs), which are specialized for processing sequence data, and how to implement them using PyTorch.

1. Concept of RNN

RNN stands for Recurrent Neural Network, a neural network structure suitable for processing sequence data. While typical neural networks process each element of the input data independently, an RNN learns dependencies within a sequence by feeding the hidden state from the previous step back in together with the current input.

1.1 Structure of RNN

The basic structure of an RNN has the following characteristics:

  • The input and output are in sequence form.
  • The model updates its state over time.
  • Information from the previous state influences the next state.

1.2 Advantages of RNN

RNN has several advantages:

  • It can handle the temporal dependencies of sequence data.
  • It can process inputs of variable lengths.

1.3 Disadvantages of RNN

However, RNN also has some disadvantages:

  • It struggles to learn long sequences due to the vanishing gradient problem.
  • Training is slow, because the sequence must be processed one step at a time.

2. Operating Principles of RNN

The operation of an RNN is as follows: each element of the input sequence is processed in turn, and the hidden state from the previous step is combined with the current input. This can be expressed in equations as follows:


    h_t = f(W_xh * x_t + W_hh * h_{t-1} + b_h)
    y_t = W_hy * h_t + b_y
    

Where:

  • h_t: Hidden state at the current time step t
  • x_t: Input at the current time step t
  • W_xh, W_hh, W_hy: Weight matrices
  • b_h, b_y: Bias vectors
  • f: Activation function (e.g., tanh, ReLU, etc.)

3. Implementation of RNN in PyTorch

Now, let’s implement RNN using PyTorch. The following is an example of creating an RNN layer for simple sequence learning.

3.1 Defining the RNN Model


import torch
import torch.nn as nn

class RNNModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNNModel, self).__init__()
        self.hidden_size = hidden_size
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        h0 = torch.zeros(1, x.size(0), self.hidden_size).to(x.device)  # Initial hidden state
        out, _ = self.rnn(x, h0)
        out = self.fc(out[:, -1, :])  # Output from the last time step
        return out
    

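As a quick check of this model's input and output conventions, one can pass a dummy batch through it; the batch size and sequence length below are arbitrary.

# Sanity check: 8 sequences of length 10 with 1 feature per step (batch_first=True)
model = RNNModel(input_size=1, hidden_size=16, output_size=1)
dummy = torch.randn(8, 10, 1)
print(model(dummy).shape)  # expected: torch.Size([8, 1])
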
3.2 Preparing the Data

Now we prepare the data to train the RNN model. For example, we can use the sine function for simple time series prediction.


import numpy as np

# Data generation
def create_dataset(seq_length):
    x = np.linspace(0, 100, seq_length)
    y = np.sin(x)
    return x, y

# Data transformation
def transform_data(x, y, seq_length):
    x_data = []
    y_data = []
    for i in range(len(x) - seq_length):
        x_data.append(y[i:i + seq_length])  # use the previous seq_length sine values as input
        y_data.append(y[i + seq_length])
    return np.array(x_data), np.array(y_data)

seq_length = 10
x, y = create_dataset(200)
x_data, y_data = transform_data(x, y, seq_length)

# Convert to PyTorch tensors
x_data = torch.FloatTensor(x_data).view(-1, seq_length, 1)
y_data = torch.FloatTensor(y_data).view(-1, 1)
    

3.3 Training the Model

To train the model, we define the loss function and optimization algorithm, and train the model for each epoch.


# Initialize the model
input_size = 1
hidden_size = 16
output_size = 1
model = RNNModel(input_size, hidden_size, output_size)

# Set the loss function and optimization algorithm
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 100
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()  # Initialize gradients

    outputs = model(x_data)
    loss = criterion(outputs, y_data)
    
    loss.backward()  # Compute gradients
    optimizer.step()  # Update weights

    if (epoch+1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')
    

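After training, it is helpful to compare the model's outputs with the true sine values. The following sketch plots predictions on the training tensors purely for illustration; a proper evaluation would use a held-out split.

import matplotlib.pyplot as plt

# Compare predictions with the ground truth on the training data
model.eval()
with torch.no_grad():
    predicted = model(x_data)

plt.plot(y_data.numpy(), label='True')
plt.plot(predicted.numpy(), label='Predicted')
plt.legend()
plt.show()
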
4. Variations of RNN

There are several variations of RNN. The most notable are Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU).

4.1 LSTM

LSTM is a structure designed to solve the gradient vanishing problem in RNN. LSTM has the ability to selectively remember or forget information through cell states and several gates, making it more effective in handling long-term dependencies.

4.2 GRU

GRU has a simpler structure than LSTM and shows similar performance. GRU uses two gates (reset gate and update gate) to control the flow of information.
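Since nn.LSTM and nn.GRU accept the same (batch, sequence, feature) inputs as nn.RNN when batch_first=True, the RNNModel above can be converted to one of these variants by swapping the recurrent layer; the minimal LSTM version below is a sketch of that change.

class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTMModel, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.lstm(x)         # the LSTM also returns (h_n, c_n), which we ignore here
        out = self.fc(out[:, -1, :])  # output from the last time step
        return out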

5. Applications of RNN

RNN is applied in various fields:

  • Speech Recognition: Processes continuous speech data to understand sentences.
  • Natural Language Processing: Analyzes the meaning of sentences in machine translation, sentiment analysis, etc.
  • Time Series Prediction: Models time series data like financial data or weather predictions.

6. Conclusion

In this article, we explored the basic concepts of RNN, implementation methods using PyTorch, variations, and application areas. RNN reflects the characteristics of sequence data well and plays an important role in the field of deep learning. As you study deep learning, it is essential to learn the various variations of RNN and choose models suitable for specific problems.

References

  • Deep Learning Book – Ian Goodfellow, Yoshua Bengio, Aaron Courville
  • PyTorch Documentation – https://pytorch.org/docs/stable/index.html

Deep Learning PyTorch Course, RNN, LSTM, GRU Performance Comparison

Deep learning has become an essential technology in the fields of data science and artificial intelligence today.
In this course, we will discuss in depth the key artificial neural network structures for processing sequence data: RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), and GRU (Gated Recurrent Unit), and compare the performance of each model.

1. Understanding RNN (Recurrent Neural Network)

RNN is a type of neural network designed to process sequential input data. Unlike traditional neural networks, an RNN can learn the temporal dependencies of sequence data by feeding the previous hidden state back in along with the current input.

1.1. RNN Structure

The basic structure of an RNN is as follows:


    h_t = f(W_hh * h_{t-1} + W_xh * x_t)
    

Here, h_t is the current state, h_{t-1} is the previous state, x_t is the current input, W_hh and W_xh are weight parameters, and f is the activation function.

1.2. Limitations of RNN

RNNs struggle with the long-term dependency problem: in long sequences, it is difficult for the network to retain information from far in the past, because the gradients carrying that information shrink as they flow back through many time steps.

2. Introduction to LSTM (Long Short-Term Memory)

LSTM is a structure devised to overcome the limitations of RNN, demonstrating strong performance in learning long sequence data.

2.1. LSTM Structure

LSTM performs the role of selectively remembering and forgetting information through cell states and gate mechanisms. The basic equations for LSTM are as follows:


    f_t = σ(W_f * [h_{t-1}, x_t] + b_f)  // Forget gate
    i_t = σ(W_i * [h_{t-1}, x_t] + b_i)  // Input gate
    o_t = σ(W_o * [h_{t-1}, x_t] + b_o)  // Output gate
    C_t = f_t * C_{t-1} + i_t * tanh(W_c * [h_{t-1}, x_t] + b_c)  // Cell state update
    h_t = o_t * tanh(C_t)  // Final output
    

2.2. Advantages of LSTM

LSTM can maintain the flow of information smoothly, even in long sequences, and is a powerful tool for improving the performance of deep learning models.

3. GRU (Gated Recurrent Unit)

GRU is a simplified model of LSTM that achieves similar performance with fewer parameters.

3.1. GRU Structure


    z_t = σ(W_z * [h_{t-1}, x_t] + b_z)  // Update gate
    r_t = σ(W_r * [h_{t-1}, x_t] + b_r)  // Reset gate
    h_t = (1 - z_t) * h_{t-1} + z_t * tanh(W_h * [r_t * h_{t-1}, x_t] + b_h)  // Final output
    

3.2. Advantages of GRU

GRU can be trained with fewer resources while maintaining similar performance to LSTM. Additionally, its relatively simpler structure improves computational efficiency.

4. Practical Comparison of RNN, LSTM, and GRU Performance

Now, we will implement RNN, LSTM, and GRU models using PyTorch and compare their performance. We will proceed with a simple time series prediction problem.

4.1. Data Preparation

The code below generates simple time series data.


import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Generate time series data
def create_dataset(seq, time_step=1):
    X, Y = [], []
    for i in range(len(seq) - time_step - 1):
        X.append(seq[i:(i + time_step)])
        Y.append(seq[i + time_step])
    return np.array(X), np.array(Y)

# Time series data
data = np.sin(np.arange(0, 100, 0.1))
time_step = 10
X, Y = create_dataset(data, time_step)

# Convert to PyTorch tensors
X = torch.FloatTensor(X).view(-1, time_step, 1)
Y = torch.FloatTensor(Y)
    

4.2. Model Implementation

Now we will implement each model. The RNN model is as follows:


class RNNModel(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(RNNModel, self).__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.rnn(x)
        out = self.fc(out[:, -1, :])
        return out

# Initialize model
rnn_model = RNNModel(input_size=1, hidden_size=5)
    

Next, let’s implement the LSTM model:


class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(LSTMModel, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        out = self.fc(out[:, -1, :])
        return out

# Initialize model
lstm_model = LSTMModel(input_size=1, hidden_size=5)
    

Finally, we will implement the GRU model:


class GRUModel(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc(out[:, -1, :])
        return out

# Initialize model
gru_model = GRUModel(input_size=1, hidden_size=5)
    

4.3. Model Training

We will train the models and compare their performance.


def train_model(model, X_train, Y_train, num_epochs=100, learning_rate=0.01):
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    for epoch in range(num_epochs):
        model.train()
        optimizer.zero_grad()
        outputs = model(X_train)
        loss = criterion(outputs, Y_train.view(-1, 1))
        loss.backward()
        optimizer.step()

    return model

# Train models
rnn_trained = train_model(rnn_model, X, Y)
lstm_trained = train_model(lstm_model, X, Y)
gru_trained = train_model(gru_model, X, Y)
    

4.4. Performance Evaluation

We will evaluate the performance of each model.


def evaluate_model(model, X_test):
    model.eval()
    with torch.no_grad():
        predictions = model(X_test)
    return predictions

# Predictions
rnn_predictions = evaluate_model(rnn_trained, X)
lstm_predictions = evaluate_model(lstm_trained, X)
gru_predictions = evaluate_model(gru_trained, X)

# Visualization of results
plt.figure(figsize=(12, 8))
plt.plot(Y.numpy(), label='True')
plt.plot(rnn_predictions.numpy(), label='RNN Predictions')
plt.plot(lstm_predictions.numpy(), label='LSTM Predictions')
plt.plot(gru_predictions.numpy(), label='GRU Predictions')
plt.legend()
plt.show()
    

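To attach a number to the visual comparison, one can compute each model's mean squared error against the true values. This is a minimal sketch evaluated on the same data the models were trained on, so it only gives a rough indication; a held-out test split would be fairer.

# Mean squared error of each model (illustrative, not a held-out test set)
mse = nn.MSELoss()
print(f'RNN  MSE: {mse(rnn_predictions, Y.view(-1, 1)).item():.6f}')
print(f'LSTM MSE: {mse(lstm_predictions, Y.view(-1, 1)).item():.6f}')
print(f'GRU  MSE: {mse(gru_predictions, Y.view(-1, 1)).item():.6f}')
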
5. Conclusion

In this course, we understood the basic concepts of RNN, LSTM, and GRU, their implementation methods, and compared their performance to grasp the characteristics of these models. RNN is the most basic form, while LSTM and GRU are powerful tools that can be selected based on specific needs. It is important to choose the appropriate model according to the business problem.

References

For further learning, please refer to the following resources: