{"id":36087,"date":"2024-11-01T09:45:40","date_gmt":"2024-11-01T09:45:40","guid":{"rendered":"http:\/\/atmokpo.com\/w\/?p=36087"},"modified":"2024-11-01T09:45:40","modified_gmt":"2024-11-01T09:45:40","slug":"introduction-to-using-hugging-face-transformers-bert-ensemble-learning-library-setup","status":"publish","type":"post","link":"https:\/\/atmokpo.com\/w\/36087\/","title":{"rendered":"Introduction to Using Hugging Face Transformers, BERT Ensemble Learning Library Setup"},"content":{"rendered":"<p><body><\/p>\n<p>Recently, natural language processing (NLP) has become a major challenge in the field of artificial intelligence, with models like BERT (Bidirectional Encoder Representations from Transformers) leading innovations in this area. The BERT model provides the ability to understand the context of words in both directions, enabling more sophisticated approaches to solving natural language problems. In this course, we will explore how to set up the BERT model using Hugging Face&#8217;s Transformers library and implement ensemble learning.<\/p>\n<h2>1. What is BERT Ensemble Learning?<\/h2>\n<p>Ensemble learning is a methodology that combines the predictions of multiple models to create a final prediction. This can be done by averaging the predictions of several models or by using majority voting, which helps reduce the bias of a single model and improves generalization performance. Leveraging multiple powerful language models such as BERT in an ensemble can maximize the learning and prediction performance of the models.<\/p>\n<h2>2. Environment Setup<\/h2>\n<p>To use Hugging Face&#8217;s Transformers library, you first need to install the necessary packages. 
You can install them using the following command.</p>
<pre><code>pip install transformers torch</code></pre>
<p>We will also use <code>pandas</code> for data handling and <code>scikit-learn</code> for evaluating model performance.</p>
<pre><code>pip install pandas scikit-learn</code></pre>
<h2>3. Data Preparation</h2>
<p>In this course, we will use a movie review sentiment analysis dataset. It contains review texts together with sentiment labels that distinguish positive from negative reviews, and it can be loaded with pandas.</p>
<pre><code>import pandas as pd

# Load the dataset (columns: 'review' and 'label')
data = pd.read_csv('movie_reviews.csv')
print(data.head())</code></pre>
<h2>4. BERT Model Setup</h2>
<p>We will set up the BERT model using Hugging Face&#8217;s Transformers library. To use BERT, we first load a pretrained model and its tokenizer, which converts raw text into the input format the model expects.</p>
<pre><code>from transformers import BertTokenizer, BertForSequenceClassification
import torch

# Load the pretrained BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Tokenize a batch of sentences with padding and truncation
def tokenize_data(sentences):
    return tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Tokenize every review in the dataset
tokens = tokenize_data(data['review'].tolist())
print(tokens)  # Inspect the tokenized data</code></pre>
<h2>5. Data Preprocessing</h2>
<p>Data preprocessing is necessary for model training. Each review is tokenized and converted into tensors that the model can consume.
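</p>
<p>In practice you would also hold out a validation split before batching, so that training can be monitored on unseen reviews. A minimal sketch with scikit-learn, using a tiny in-memory stand-in for <code>movie_reviews.csv</code> (the <code>review</code> and <code>label</code> column names follow the dataset above):</p>

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Tiny in-memory stand-in for movie_reviews.csv; illustrative rows only
data = pd.DataFrame({
    'review': ['great film', 'terrible plot', 'loved it',
               'boring', 'a masterpiece', 'awful acting'],
    'label':  [1, 0, 1, 0, 1, 0],
})

# Hold out a third of the rows, keeping the label distribution balanced
train_df, val_df = train_test_split(
    data, test_size=1/3, stratify=data['label'], random_state=42)

print(len(train_df), len(val_df))  # 4 2
```

<p>The rest of the course trains on the full dataset for simplicity, but the same split would slot in just before building the data loader.</p>
<p>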
We also set a batch size and wrap the tensors in a <code>DataLoader</code> so that training on the GPU is efficient. (The code in this course assumes a CUDA-capable GPU is available.)</p>
<pre><code>from torch.utils.data import DataLoader, TensorDataset

# Input IDs, attention masks, and labels
inputs = tokens['input_ids']
attn_masks = tokens['attention_mask']
labels = torch.tensor(data['label'].tolist())

# Build a tensor dataset
dataset = TensorDataset(inputs, attn_masks, labels)

# Build the data loader
batch_size = 16
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)</code></pre>
<h2>6. Model Training</h2>
<p>To train the BERT model, we need an optimizer and a loss function. Here we use AdamW and <code>CrossEntropyLoss</code>. Note that recent versions of Transformers no longer provide <code>AdamW</code>; it is imported from <code>torch.optim</code> instead.</p>
<pre><code>from torch.optim import AdamW
from torch import nn

# Set up the optimizer
optimizer = AdamW(model.parameters(), lr=5e-5)

# Set up the loss function
loss_fn = nn.CrossEntropyLoss()

# Train a model for a given number of epochs
def train_model(dataloader, model, optimizer, loss_fn, epochs=3):
    model.train()
    for epoch in range(epochs):
        total_loss = 0
        for batch in dataloader:
            input_ids, attention_masks, labels = batch

            # Move the batch to the GPU
            input_ids = input_ids.to('cuda')
            attention_masks = attention_masks.to('cuda')
            labels = labels.to('cuda')

            # Reset gradients from the previous step
            optimizer.zero_grad()

            # Forward pass
            outputs = model(input_ids, attention_mask=attention_masks)
            loss = loss_fn(outputs.logits, labels)

            # Backpropagation and parameter update
            total_loss += loss.item()
            loss.backward()
            optimizer.step()
        print(f'Epoch: {epoch+1}, Loss: {total_loss/len(dataloader):.4f}')

# Train the model
train_model(dataloader, model.to('cuda'), optimizer, 
loss_fn)</code></pre>
<h2>7. Ensemble Model Setup</h2>
<p>With the single BERT model in place, we now ensemble several BERT models to improve performance. Here we train two BERT models and average their predictions to form the final prediction.</p>
<pre><code>def create_ensemble_model(num_models=2):
    models = []
    for _ in range(num_models):
        model = BertForSequenceClassification.from_pretrained(
            'bert-base-uncased', num_labels=2).to('cuda')
        models.append(model)
    return models

# Create the ensemble members
ensemble_models = create_ensemble_model()</code></pre>
<h2>8. Ensemble Training and Prediction</h2>
<p>We train each ensemble member with its own optimizer (a single shared optimizer would only ever update the parameters of the model it was built with), run predictions on new data, and average the logits to obtain the final prediction.</p>
<pre><code>def train_ensemble(models, dataloader, loss_fn, epochs=3):
    for model in models:
        # Each model needs its own optimizer over its own parameters
        optimizer = AdamW(model.parameters(), lr=5e-5)
        train_model(dataloader, model, optimizer, loss_fn, epochs)

def ensemble_predict(models, input_ids, attention_masks):
    preds = []
    for model in models:
        model.eval()
        with torch.no_grad():
            outputs = model(input_ids, attention_mask=attention_masks)
            preds.append(outputs.logits)
    # Average the logits across models (averaging softmax probabilities is a common alternative)
    return sum(preds) / len(preds)

# Train the ensemble
train_ensemble(ensemble_models, dataloader, loss_fn)

# Predict the first review as an example
inputs = tokenize_data([data['review'].iloc[0]])
average_logits = ensemble_predict(ensemble_models,
                                  inputs['input_ids'].to('cuda'),
                                  inputs['attention_mask'].to('cuda'))
prediction = torch.argmax(average_logits, dim=1).item()
print(f'Predicted label: {prediction}')  # Check the prediction</code></pre>
<h2>9. Model Performance Evaluation</h2>
<p>Finally, we evaluate the ensemble on a held-out test dataset.
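</p>
<p>Before running the evaluation, here is a self-contained reminder of how the metrics involved behave, using toy labels (the values are illustrative only):</p>

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Toy ground truth and predictions (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

print(accuracy_score(y_true, y_pred))   # 5 of 6 correct -> 0.8333...
print(precision_score(y_true, y_pred))  # every predicted positive is correct -> 1.0
print(recall_score(y_true, y_pred))     # 3 of the 4 true positives found -> 0.75
```

<p>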
We will measure accuracy, precision, recall, and related metrics to review performance.</p>
<pre><code>from sklearn.metrics import accuracy_score, classification_report

# Load the test data
test_data = pd.read_csv('movie_reviews_test.csv')
test_tokens = tokenize_data(test_data['review'].tolist())
test_inputs = test_tokens['input_ids'].to('cuda')
test_masks = test_tokens['attention_mask'].to('cuda')

# Ensemble prediction
test_logits = ensemble_predict(ensemble_models, test_inputs, test_masks)
test_predictions = torch.argmax(test_logits, dim=1)

# Accuracy and the full classification report
accuracy = accuracy_score(test_data['label'].tolist(), test_predictions.cpu().numpy())
report = classification_report(test_data['label'].tolist(), test_predictions.cpu().numpy())

print(f'Accuracy: {accuracy}\n')
print(report)</code></pre>
<h2>Conclusion</h2>
<p>In this course, we explored how to apply the BERT model to a natural language processing problem and how to improve performance by ensembling several models. Hugging Face&#8217;s Transformers library makes applying BERT straightforward, and even a simple custom ensemble can deliver stronger results. I hope you can apply these techniques to many other natural language processing problems.</p>