<h1>Deep Learning PyTorch Course, Implementing GRU Cell</h1>
<p>In deep learning, recurrent neural networks (RNNs) are widely used to model sequential data such as time series and natural language. Among RNN variants, the Gated Recurrent Unit (GRU) was developed to address the long-term dependency problem and is structurally similar to Long Short-Term Memory (LSTM). In this post, we will explain the fundamental concepts of the GRU and implement one from scratch in PyTorch.</p>
<h2>1. What is GRU?</h2>
<p>The GRU, proposed by Kyunghyun Cho et al. in 2014, combines the current input with the previous hidden state to compute the new state, using a simpler and computationally cheaper design than the LSTM. A GRU uses two gates:</p>
<ul>
<li><strong>Reset Gate</strong>: determines how much of the previous hidden state to forget when forming the candidate state.</li>
<li><strong>Update Gate</strong>: determines how to blend the previous hidden state with the candidate state.</li>
</ul>
<p>The main equations of the GRU are as follows.</p>
<h3>1.1 Equation Definition</h3>
<p>1. For the input vector <code>x_t</code> and the previous hidden state <code>h_{t-1}</code>, define the reset gate <code>r_t</code> and the update gate <code>z_t</code>:</p>
<pre><code>r_t = σ(W_r * x_t + U_r * h_{t-1})
z_t = σ(W_z * x_t + U_z * h_{t-1})</code></pre>
<p>Here, <code>W_r</code> and <code>W_z</code> are weights applied to the input, <code>U_r</code> and <code>U_z</code> are weights applied to the previous state, and <code>σ</code> is the sigmoid function, so both gates take values in (0, 1).</p>
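These two gate equations can be checked with a tiny standalone sketch. The weights below are random stand-ins for learned parameters, and the sizes (5 inputs, 3 hidden units) are just illustrative:

```python
import torch

torch.manual_seed(0)
input_size, hidden_size = 5, 3

x_t = torch.randn(input_size)       # input vector
h_prev = torch.zeros(hidden_size)   # previous hidden state

# Random stand-ins for the learned weight matrices (no bias, as in the formulas)
W_r = torch.randn(hidden_size, input_size)
U_r = torch.randn(hidden_size, hidden_size)
W_z = torch.randn(hidden_size, input_size)
U_z = torch.randn(hidden_size, hidden_size)

r_t = torch.sigmoid(W_r @ x_t + U_r @ h_prev)
z_t = torch.sigmoid(W_z @ x_t + U_z @ h_prev)

print(r_t.shape, z_t.shape)  # torch.Size([3]) torch.Size([3])
# Both gates are element-wise values strictly between 0 and 1.
```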
<p>2. The new hidden state <code>h_t</code> is a gate-controlled mix of the previous state and a candidate state:</p>
<pre><code>h_t = (1 - z_t) * h_{t-1} + z_t * tanh(W_h * x_t + U_h * (r_t * h_{t-1}))</code></pre>
<p>Here, <code>W_h</code> and <code>U_h</code> are the weights of the candidate state. When <code>z_t</code> is close to 0, the previous state is kept almost unchanged; when it is close to 1, the candidate replaces it.</p>
<h2>2. Advantages of GRU</h2>
<ul>
<li>With its simpler structure, it has fewer parameters than the LSTM, allowing faster training.</li>
<li>It learns long-term dependencies well and performs strongly on many NLP tasks.</li>
</ul>
<h2>3. Implementing GRU Cell</h2>
<p>Now, let's implement the GRU cell using PyTorch. The code below implements the cell's forward step directly from the equations above.</p>
<h3>3.1 GRU Cell Implementation</h3>
<pre><code>import torch
import torch.nn as nn

class GRUSimple(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(GRUSimple, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size

        # Weight initialization (no bias terms, matching the equations above)
        self.Wz = nn.Parameter(torch.Tensor(hidden_size, input_size))
        self.Uz = nn.Parameter(torch.Tensor(hidden_size, hidden_size))
        self.Wr = nn.Parameter(torch.Tensor(hidden_size, input_size))
        self.Ur = nn.Parameter(torch.Tensor(hidden_size, hidden_size))
        self.Wh = nn.Parameter(torch.Tensor(hidden_size, input_size))
        self.Uh = nn.Parameter(torch.Tensor(hidden_size, hidden_size))

        self.reset_parameters()

    def reset_parameters(self):
        # Uniform initialization in [-1/sqrt(hidden_size), 1/sqrt(hidden_size)]
        stdv = 1.0 / self.hidden_size ** 0.5
        for param in self.parameters():
            param.data.uniform_(-stdv, stdv)

    def forward(self, x_t, h_prev):
        # x_t: (input_size,), h_prev: (hidden_size,) -- unbatched vectors
        r_t = torch.sigmoid(self.Wr @ x_t + self.Ur @ h_prev)
        z_t = torch.sigmoid(self.Wz @ x_t + self.Uz @ h_prev)
        h_hat_t = torch.tanh(self.Wh @ x_t + self.Uh @ (r_t * h_prev))

        h_t = (1 - z_t) * h_prev + z_t * h_hat_t
        return h_t
</code></pre>
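To build intuition for the final interpolation step in <code>forward</code>, the sketch below fixes the update gate at a few values by hand. The tensors are random and <code>h_hat</code> is a hypothetical candidate state, not one computed by the cell:

```python
import torch

torch.manual_seed(0)
h_prev = torch.randn(3)  # previous hidden state
h_hat = torch.randn(3)   # hypothetical candidate state

# Sweep the update gate: z = 0 keeps the previous state exactly,
# z = 1 replaces it with the candidate exactly.
for z in (0.0, 0.5, 1.0):
    z_t = torch.full((3,), z)
    h_t = (1 - z_t) * h_prev + z_t * h_hat
    print(z, h_t)
```

This is why the update gate is described as controlling how much of the previous state survives into the new one.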
<p>This <code>GRUSimple</code> class implements a single GRU cell. The <code>__init__</code> method stores the input and hidden sizes and defines the weight parameters, <code>reset_parameters</code> initializes the weights, and <code>forward</code> computes the new hidden state from the input and the previous state.</p>
<h3>3.2 Testing GRU Cell</h3>
<p>Now, let's write a short test for the GRU cell.</p>
<pre><code>input_size = 5
hidden_size = 3
x_t = torch.randn(input_size)      # random input vector
h_prev = torch.zeros(hidden_size)  # initial hidden state

gru_cell = GRUSimple(input_size, hidden_size)
h_t = gru_cell(x_t, h_prev)

print("Current hidden state h_t:", h_t)
</code></pre>
<p>This generates a random input, sets the initial hidden state to zero, and prints the hidden state <code>h_t</code> produced by one step of the GRU cell.</p>
<h2>4. RNN Model Using GRU</h2>
<p>Now, let's build a complete RNN model on top of the GRU cell.</p>
<pre><code>class GRUModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(GRUModel, self).__init__()
        self.gru = GRUSimple(input_size, hidden_size)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        h_t = torch.zeros(self.gru.hidden_size)  # initial hidden state

        for t in range(x.size(0)):
            h_t = self.gru(x[t], h_t)  # apply the GRU cell at each time step
        output = self.fc(h_t)  # map the last hidden state to the output
        return output
</code></pre>
<p>The <code>GRUModel</code> class processes sequential data with the GRU cell: <code>forward</code> iterates over the time steps of the input, updating the hidden state at each one.</p>
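As an aside, PyTorch ships a built-in <code>nn.GRUCell</code> that plays the same role as <code>GRUSimple</code>. It adds bias terms, and its update gate has the opposite convention (<code>h' = (1 - z) * n + z * h</code>, so <code>z</code> near 1 keeps the old state). A minimal sketch of the same time-step loop using it, with an explicit batch dimension of 1:

```python
import torch
import torch.nn as nn

input_size, hidden_size, seq_length = 5, 3, 10

cell = nn.GRUCell(input_size, hidden_size)
x = torch.randn(seq_length, 1, input_size)  # (time, batch, features)
h = torch.zeros(1, hidden_size)             # initial hidden state

for t in range(seq_length):
    h = cell(x[t], h)  # one GRU step per time step

print(h.shape)  # torch.Size([1, 3])
```

For long sequences, the batched <code>nn.GRU</code> module runs this loop internally and is usually faster than stepping a cell in Python.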
<p>The last hidden state is passed through the linear layer to produce the final output.</p>
<h3>4.1 Testing the RNN Model</h3>
<p>Now, let's test the GRU model.</p>
<pre><code>input_size = 5
hidden_size = 3
output_size = 2
seq_length = 10

x = torch.randn(seq_length, input_size)  # random sequence data

model = GRUModel(input_size, hidden_size, output_size)
output = model(x)

print("Model output:", output)
</code></pre>
<p>This shows the GRU model producing an output for the given sequence data.</p>
<h2>5. Application of GRU</h2>
<p>GRUs are used in many fields. They are especially effective in natural language processing (NLP) tasks such as machine translation, sentiment analysis, and text generation. Recurrent structures like the GRU are powerful tools for modeling temporal dependencies.</p>
<p>Since a GRU is simpler than an LSTM yet often performs comparably, choose between them based on the characteristics of your data and problem.</p>
<h2>6. Conclusion</h2>
<p>In this post, we covered the fundamental concepts of the GRU and implemented a GRU cell and an RNN model in PyTorch. The GRU is a useful building block for processing sequential data and can be integrated into larger deep learning models. Understanding it provides a foundation for natural language processing and time series analysis and helps in solving practical problems.</p>
<p>Now, we hope you will apply the GRU to your own projects!</p>
<footer>
<p>Author: Deep Learning Researcher</p>
<p>Date: October 2023</p>
</footer>