<article>
<header>
</header>
<section>
<h2>1. Introduction</h2>
<p>
                Advances in deep learning have produced remarkable results, especially in natural language processing (NLP). At the center of these advances are pre-trained models. Hugging Face provides a powerful library called <strong>Transformers</strong> that makes these pre-trained models easy to use. In this course, we will learn in detail how to load pre-trained models with Hugging Face's Transformers library.
            </p>
</section>
<section>
<h2>2. What is the Hugging Face Transformers Library?</h2>
<p>
                The Hugging Face Transformers library provides a wide range of NLP models, including <strong>BERT</strong>, <strong>GPT</strong>, <strong>RoBERTa</strong>, and <strong>T5</strong>. With it, developers can easily load pre-trained language models and build various NLP tasks on top of them.
            </p>
</section>
<section>
<h2>3. Environment Setup</h2>
<p>
                Before we get started, we need to install the required libraries. 
You can install them with the command below.
            </p>
<pre><code>pip install transformers torch</code></pre>
<p>
                Here, <strong>transformers</strong> is the Hugging Face library and <strong>torch</strong> is the PyTorch framework. If you prefer TensorFlow over PyTorch, install TensorFlow instead.
            </p>
</section>
<section>
<h2>4. Loading Pre-trained Models</h2>
<p>
                Now let's load a pre-trained model. For example, we can obtain contextual representations of text with the <strong>BERT</strong> model. The code below loads BERT in Python.
            </p>
<pre><code>from transformers import BertTokenizer, BertModel

# Load the BERT tokenizer and model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

# Example sentence
sentence = "Hugging Face is creating a tool that democratizes AI."

# Tokenize the sentence and convert it to input tensors
inputs = tokenizer(sentence, return_tensors='pt')
outputs = model(**inputs)

# Inspect the output
print(outputs)
</code></pre>
<p>
                In the code above, the <strong>BertTokenizer</strong> class converts the input sentence into a format the BERT model understands, and the <strong>BertModel</strong> class loads the actual model and passes the encoded input through it to produce the output.
            </p>
</section>
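The same checkpoint can also be loaded through the Auto classes, which infer the correct architecture from the checkpoint name, so one snippet works across BERT, RoBERTa, and many other models. A minimal sketch using the same public 'bert-base-uncased' checkpoint:

```python
from transformers import AutoTokenizer, AutoModel

# AutoTokenizer/AutoModel pick the right classes from the checkpoint's config,
# so this code does not need to change when you switch model families.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

inputs = tokenizer("Hugging Face is creating a tool that democratizes AI.",
                   return_tensors='pt')
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```

Swapping `'bert-base-uncased'` for another checkpoint name is all it takes to load a different model with the same code.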
<section>
<h2>5. Analyzing Output Results</h2>
<p>
                The <strong>outputs</strong> variable in the code above contains two main pieces of information:
            </p>
<ul>
<li><strong>last_hidden_state</strong>: the final hidden states, i.e. the vector representation of each token.</li>
<li><strong>pooler_output</strong>: a single vector summarizing the whole input sequence, mainly used for classification tasks.</li>
</ul>
<p>
                The per-token vector representations are very useful for natural language processing. The hidden state of each token can be accessed as follows.
            </p>
<pre><code># Access the last hidden state
hidden_states = outputs.last_hidden_state
print(hidden_states.shape)  # (batch size, sequence length, hidden dimension)
</code></pre>
</section>
<section>
<h2>6. Using Various Pre-trained Models</h2>
<p>
                Hugging Face supports many models besides BERT, so you can choose the one that suits your task. Models such as <strong>GPT-2</strong>, <strong>RoBERTa</strong>, and <strong>T5</strong> are used in much the same way. For example, the <strong>GPT-2</strong> model is loaded as follows.
            </p>
<pre><code>from transformers import GPT2Tokenizer, GPT2Model

# Load the GPT-2 tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

# Example sentence
sentence = "Hugging Face has become a leader in NLP."

# Tokenize the sentence and convert it to input tensors
inputs = tokenizer(sentence, return_tensors='pt')
outputs = model(**inputs)

print(outputs)
</code></pre>
</section>
<section>
<h2>7. Training Your Own Model</h2>
<p>
                Beyond using pre-trained models as-is, you can also fine-tune them on your own dataset. This process involves the following steps:
            </p>
<ol>
<li>Data preparation and preprocessing</li>
<li>Loading a pre-trained model</li>
<li>Setting up the loss function and optimizer</li>
<li>Training the model</li>
</ol>
<h3>7.1 Data Preparation and Preprocessing</h3>
<p>
                The data can be prepared in a format such as a CSV file, which then needs to be loaded and preprocessed.
            </p>
<pre><code>import pandas as pd

# Load the dataset
data = pd.read_csv('data.csv')
print(data.head())  # Check the first 5 rows of the dataset
</code></pre>
<h3>7.2 Loading a Pre-trained Model</h3>
<p>
                Load the model as explained above.
            </p>
<h3>7.3 Setting Up the Loss Function and Optimizer</h3>
<p>
                For training, a loss function and an optimizer need to be configured. For example, you can use the <strong>AdamW</strong> optimizer with the <strong>CrossEntropyLoss</strong> loss function.
            </p>
<pre><code>import torch
from torch.optim import AdamW  # AdamW has been removed from recent versions of transformers

optimizer = AdamW(model.parameters(), lr=5e-5)  # Set the learning rate
loss_fn = torch.nn.CrossEntropyLoss()
</code></pre>
<h3>7.4 Training the Model</h3>
<p>
                You can now train the model with the preprocessed data, the loss function, and the optimizer. Typically you set the number of epochs and iterate to optimize the model.
            </p>
<pre><code>model.train()
for epoch in range(num_epochs):
    outputs = model(**inputs)               # Forward pass on the preprocessed batch
    loss = loss_fn(outputs.logits, labels)  # Compute the loss (requires a model with a classification head)
    loss.backward()                         # Backpropagation
    optimizer.step()                        # Update the weights
    optimizer.zero_grad()                   # Reset the gradients
</code></pre>
</section>
<section>
<h2>8. Conclusion</h2>
<p>
                In this course we learned how to load pre-trained models with Hugging Face's Transformers library and build various tasks on top of them. The library is a powerful tool for natural language processing, exposing many models through a consistent, easy-to-use API. You are now equipped to use Hugging Face Transformers in your own projects.
            </p>
</section>
<footer>
<p>This article is part of a deep learning course using Hugging Face Transformers. For more courses, please refer to the related materials.</p>
</footer>
</article>