Hugging Face Transformers Tutorial: BART Tokenization Before Model Training

Recently, the fields of artificial intelligence and natural language processing (NLP) have made remarkable advancements.
In particular, the transformer architecture has brought innovative results to various NLP tasks.
In this article, we will take a closer look at the tokenization process, which plays a crucial role in data preprocessing, focusing on the
BART (Bidirectional and Auto-Regressive Transformers) model from the Hugging Face library.
BART combines the strengths of BERT and GPT: it is pre-trained with an unsupervised denoising objective and can then be fine-tuned on supervised tasks,
and it is used for diverse applications such as text summarization, translation, and question generation.

1. Understanding the BART Model

BART has an encoder-decoder structure: a bidirectional encoder (as in BERT) and an auto-regressive decoder (as in GPT), and it is pre-trained with a denoising objective, learning to reconstruct text that has been corrupted with noise.
Thanks to this structure, BART copes well with varied and noisy text and performs strongly in NLP tasks such as
text generation, summarization, and translation.
The key features of BART can be summarized as follows, and a short sketch after the list shows how to inspect this structure in code:

  • Both an encoder and a decoder are present, allowing flexible use across many tasks
  • The decoder generates each word auto-regressively, conditioned on the previous context
  • Pre-training on large amounts of text lets the model learn a wide range of linguistic features
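
As a quick way to see this encoder-decoder structure, you can inspect the model's configuration.
The sketch below is a minimal example; it assumes the 'facebook/bart-base' checkpoint used throughout this article and simply prints the number of encoder and decoder layers.


from transformers import BartConfig

# Load the configuration of the pre-trained BART model
config = BartConfig.from_pretrained('facebook/bart-base')

# The configuration exposes both halves of the encoder-decoder architecture
print("Encoder layers:", config.encoder_layers)
print("Decoder layers:", config.decoder_layers)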

2. The Necessity of Tokenization

The first step in training natural language processing models is to convert the data into an appropriate format.
A tokenizer splits the text into smaller units, or tokens, to help the model understand it.
This allows the model to better comprehend the relationship between sentences and words.
Tokenization is an essential process in BART and plays a significant role in preparing text data.
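
To make this concrete, the short sketch below shows how the tokenizer splits words into subword tokens.
It assumes the Transformers library is installed (see the next section) and uses the same 'facebook/bart-base' tokenizer that appears later in this article;
a common word usually stays whole, while a rarer word is typically broken into smaller pieces, which is how the model can handle words it has never seen as a single unit.


from transformers import BartTokenizer

# Load the pre-trained BART tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# A frequent word tends to remain a single token,
# while a rarer word is usually split into subword pieces
print(tokenizer.tokenize("learning"))
print(tokenizer.tokenize("tokenization"))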

3. Installing the Hugging Face Library

Before starting the tokenization process, you need to install Hugging Face’s Transformers library.
You can easily install it with the command below.

pip install transformers
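
If you want to confirm that the installation succeeded, a quick check is to import the library and print its version:


python -c "import transformers; print(transformers.__version__)"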

4. Using the BART Tokenizer

Now, let’s use the BART model’s tokenizer to tokenize some text.
Here, we will load BART’s pre-trained tokenizer and tokenize an example sentence,
printing out the resulting tokens and their indices.

4.1. Python Code Example


from transformers import BartTokenizer

# Load the BART tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# Test text
text = "Deep learning is a field of artificial intelligence."

# Tokenization
tokens = tokenizer.tokenize(text)
print("Tokenization result:", tokens)

# Checking the token indices
token_ids = tokenizer.convert_tokens_to_ids(tokens)
print("Token IDs:", token_ids)
    

Running the above code will print output similar to the following (the exact token IDs depend on the vocabulary; BART uses a byte-level BPE tokenizer, so a leading 'Ġ' marks a token that begins with a space):


Tokenization result: ['Deep', 'Ġlearning', 'Ġis', 'Ġa', 'Ġfield', 'Ġof', 'Ġartificial', 'Ġintelligence', '.']
Token IDs: [nine integers, one per token]
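
In practice, you will often call the tokenizer directly instead of chaining tokenize and convert_tokens_to_ids.
A minimal sketch of this shorthand follows; note that it also adds BART's special start and end tokens around the sentence,
which the manual two-step approach above does not.


# Calling the tokenizer object directly returns input IDs with special tokens added
encoded = tokenizer(text)
print("Input IDs:", encoded['input_ids'])

# The IDs of the special tokens that wrap the sentence
print("BOS/EOS IDs:", tokenizer.bos_token_id, tokenizer.eos_token_id)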
    

5. The Decoding Process

After tokenization, you can use the decoding process to restore the original sentence from the token IDs, which is a useful check before feeding the data into the model.
The following code demonstrates how to decode indices back into the original sentence.


# Decoding the token IDs to the original sentence
decoded_text = tokenizer.decode(token_ids)
print("Decoding result:", decoded_text)
    

This recovers the original sentence and confirms the round trip:
the text is converted into input the model can understand and can then be restored to its original form.
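
If the token IDs were produced by calling the tokenizer directly (as in the shorthand shown earlier), they include the special start and end tokens,
and passing skip_special_tokens=True removes them from the decoded text. A minimal sketch, reusing the same tokenizer and text:


# IDs obtained by calling the tokenizer include <s> and </s>
ids_with_specials = tokenizer(text)['input_ids']

# skip_special_tokens=True drops them from the decoded string
print(tokenizer.decode(ids_with_specials, skip_special_tokens=True))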

6. Text Summarization Using BART

After tokenization, the next step is to use the BART model to summarize the input text.
Users can provide input text to the model and obtain summarized results.
Below is a simple example of text summarization using BART.


from transformers import BartForConditionalGeneration

# Load the BART model
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')

# Input text
input_text = "Artificial intelligence refers to tasks performed by machines that imitate human intelligence. This technology has made remarkable advancements in recent years."

# Tokenize the text and convert to indices
input_ids = tokenizer.encode(input_text, return_tensors='pt')

# Generate the summary
summary_ids = model.generate(input_ids, max_length=50, num_beams=5, early_stopping=True)

# Decode the summary result
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("Summary result:", summary)
    

Running the above code will generate a summary of the input text: the BART model reads the input sentences and condenses them into a shorter form.
Keep in mind that 'facebook/bart-base' is only pre-trained and not fine-tuned for summarization, so its output may stay very close to the input.
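
For stronger summaries, the sketch below uses the high-level pipeline API with 'facebook/bart-large-cnn', a BART checkpoint fine-tuned on the CNN/DailyMail summarization dataset (note that it is a larger download than bart-base):


from transformers import pipeline

# Load a BART checkpoint fine-tuned for summarization
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Summarize the same input text as above
result = summarizer(input_text, max_length=50, min_length=10, do_sample=False)
print("Summary result:", result[0]['summary_text'])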

7. Conclusion

In this article, we covered the text tokenization process using the Hugging Face BART model and provided a simple summarization example.
Transformer models such as BART deliver strong performance across a wide range of natural language processing tasks.
Understanding the role of tokenization in data preprocessing helps the model train more effectively.
In the next article, we will discuss use cases of BART and additional application methods.

Thank you!