{"id":36239,"date":"2024-11-01T09:46:55","date_gmt":"2024-11-01T09:46:55","guid":{"rendered":"http:\/\/atmokpo.com\/w\/?p=36239"},"modified":"2024-11-01T09:46:55","modified_gmt":"2024-11-01T09:46:55","slug":"hugging-face-transformers-practical-course-fine-tuning-and-bert-classification","status":"publish","type":"post","link":"https:\/\/atmokpo.com\/w\/36239\/","title":{"rendered":"Hugging Face Transformers Practical Course, Fine-tuning and BERT Classification"},"content":{"rendered":"<p><body><\/p>\n<p>Deep learning and Natural Language Processing (NLP) play a crucial role in modern artificial intelligence technologies. Among them, BERT (Bidirectional Encoder Representations from Transformers) is a powerful language model developed by Google, demonstrating outstanding performance in many NLP tasks. In this course, we will take a detailed look at how to fine-tune the BERT model using Hugging Face&#8217;s Transformers library and perform text classification tasks.<\/p>\n<h2>1. Introduction to Hugging Face and BERT<\/h2>\n<p>Hugging Face is a library that provides various models and tools to make natural language processing models easily accessible. In particular, it allows easy use of transformer-based models like BERT. The BERT model enables a deeper understanding by considering information from both sides of the context. This is why it can achieve superior performance compared to traditional RNN or LSTM-based models.<\/p>\n<h2>2. Basic Structure of the BERT Model<\/h2>\n<p>BERT is a transformer model with an encoder-decoder structure, primarily utilizing the encoder part. The main features of BERT are as follows:<\/p>\n<ul>\n<li><strong>Bidirectional Attention:<\/strong> BERT can learn bidirectional contexts, allowing a richer understanding of the meaning of specific words.<\/li>\n<li><strong>Masked Language Model:<\/strong> During training, some words are masked, and the model is trained to predict the masked words.<\/li>\n<li><strong>Next Sentence Prediction:<\/strong> Given two sentences, it predicts whether the two sentences are actually consecutive.<\/li>\n<\/ul>\n<h2>3. Installing Hugging Face Transformers<\/h2>\n<p>First, you need to install Hugging Face&#8217;s Transformers library. You can use the following command to install it:<\/p>\n<pre><code>pip install transformers<\/code><\/pre>\n<h2>4. Preparing the Dataset<\/h2>\n<p>To train a deep learning model, an appropriate dataset is required. In this course, we will use the IMDB movie review dataset for a simple text classification task. This dataset consists of positive and negative reviews.<\/p>\n<pre><code>\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Load IMDB dataset\nurl = 'https:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/aclImdb_v1.tar.gz'\n# Download and extract the data.\n!wget {url}\n!tar -xvf aclImdb_v1.tar.gz\n\n# Load positive and negative reviews\npos_reviews = pd.read_csv('aclImdb\/train\/pos\/*.txt', delimiter=\"\\n\", header=None)\nneg_reviews = pd.read_csv('aclImdb\/train\/neg\/*.txt', delimiter=\"\\n\", header=None)\n\n# Prepare the data\npositive = [(1, review) for review in pos_reviews[0]]\nnegative = [(0, review) for review in neg_reviews[0]]\n\ndata = positive + negative\ndf = pd.DataFrame(data, columns=['label', 'review'])\ndf['label'] = df['label'].map({0: 'negative', 1: 'positive'})\n\n# Split into training and testing data\ntrain_df, test_df = train_test_split(df, test_size=0.2, random_state=42)\n<\/code><\/pre>\n<h2>5. 
<h2>5. Data Preprocessing</h2>
<p>To use the BERT model, we need to convert the data into the format the model expects. We will use the BERT tokenizer provided by Hugging Face's Transformers library.</p>
<pre><code>
from transformers import BertTokenizer

# Load the pre-trained BERT tokenizer.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize the reviews with padding and truncation.
def tokenize_data(data):
    return tokenizer(data['review'].tolist(), padding=True, truncation=True, return_tensors='pt')

train_encodings = tokenize_data(train_df)
test_encodings = tokenize_data(test_df)
</code></pre>

<h2>6. Creating the Dataset</h2>
<p>We wrap the tokenized data in a PyTorch Dataset class so the Trainer can iterate over it.</p>
<pre><code>
import torch

class IMDbDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Gather the encoded inputs and the label for one example.
        item = {key: val[idx] for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

train_dataset = IMDbDataset(train_encodings, train_df['label'].tolist())
test_dataset = IMDbDataset(test_encodings, test_df['label'].tolist())
</code></pre>

<h2>7. Model Setup and Fine-tuning</h2>
<p>Now we load the pre-trained BERT model with a classification head and fine-tune it for our task.</p>
<pre><code>
from transformers import BertForSequenceClassification, Trainer, TrainingArguments

# Load BERT with a two-class classification head.
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Set the training arguments.
training_args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Create the Trainer.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
)

# Train the model.
trainer.train()
</code></pre>

<h2>8. Evaluation and Prediction</h2>
<p>After training, we evaluate the model on the test dataset and generate predictions.</p>
<pre><code>
# Evaluate the model.
trainer.evaluate()

# Predict on the test set; take the argmax over the two class logits.
predictions = trainer.predict(test_dataset)
predicted_labels = predictions.predictions.argmax(-1)
</code></pre>

<h2>9. Interpreting Results</h2>
<p>We compare the predicted labels with the true labels to compute accuracy and identify areas to improve. A confusion matrix gives a detailed view of where the model succeeds and where it fails.</p>
<pre><code>
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt

confusion_mtx = confusion_matrix(test_df['label'].tolist(), predicted_labels)
plt.figure(figsize=(10, 7))
sns.heatmap(confusion_mtx, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()
</code></pre>
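<p>Before moving on, it can be helpful to sanity-check the fine-tuned model on a fresh sentence. The snippet below is a minimal sketch that reuses the <code>tokenizer</code> and <code>model</code> objects from the sections above; the example review is, of course, made up.</p>
<pre><code>
import torch

def predict_sentiment(text):
    # Tokenize a single review and move it to the model's device.
    inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)
    inputs = {key: val.to(model.device) for key, val in inputs.items()}

    # Run a forward pass without tracking gradients.
    model.eval()
    with torch.no_grad():
        logits = model(**inputs).logits

    # Class 1 was defined as positive, class 0 as negative.
    return 'positive' if logits.argmax(-1).item() == 1 else 'negative'

print(predict_sentiment('A moving story with terrific performances.'))
</code></pre>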
<h2>10. Conclusion</h2>
<p>In this course, we explored how to fine-tune the BERT model with the Hugging Face Transformers library and apply it to text classification. Starting from a pre-trained model such as BERT saves time and compute while delivering strong performance, and the same workflow carries over to many other NLP tasks.</p>

<h2>References</h2>
<ul>
<li>Hugging Face Transformers official documentation: <a href="https://huggingface.co/docs/transformers/index" target="_blank" rel="noopener">Link</a></li>
<li>BERT original paper: <a href="https://arxiv.org/abs/1810.04805" target="_blank" rel="noopener">Link</a></li>
</ul>