<h1>Using PyTorch for GAN Deep Learning, Transformers</h1>
<p>The advancement of deep learning over the past few years has significantly influenced artists, researchers, and developers across many fields. Generative Adversarial Networks (GANs) and Transformer architectures in particular are widely used, and combining the two technologies is producing remarkable results. In this article, we explain in detail how to implement GANs and Transformers using PyTorch.</p>
<h2>1. Basics of GAN</h2>
<p>A GAN consists of two neural networks: a Generator and a Discriminator. The Generator aims to produce fake images, while the Discriminator tries to distinguish real images from fake ones. The two networks compete with each other, and through this competition the Generator produces increasingly realistic images.</p>
<h3>1.1 How GAN Works</h3>
<p>The training process of a GAN is as follows:</p>
<ol>
<li>The Generator produces a fake image from random noise.</li>
<li>The fake image and a real image are fed into the Discriminator.</li>
<li>The Discriminator scores each image as real (1) or fake (0).</li>
<li>The Discriminator's loss is computed from these scores and used to update the Discriminator; the Generator's loss is computed from the Discriminator's scores on the fake images and used to update the Generator.</li>
<li>Repeating this process pushes the Generator to produce increasingly realistic images.</li>
</ol>
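<p>The label convention in steps 3 and 4 can be sketched with binary cross-entropy on dummy Discriminator outputs (the probability values here are made up purely for illustration):</p>

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# Pretend Discriminator outputs for a batch of 4 real and 4 fake images
real_outputs = torch.tensor([[0.9], [0.8], [0.95], [0.7]])
fake_outputs = torch.tensor([[0.1], [0.3], [0.2], [0.4]])

real_labels = torch.ones(4, 1)   # real images are labeled 1
fake_labels = torch.zeros(4, 1)  # fake images are labeled 0

# Discriminator loss: push real outputs toward 1 and fake outputs toward 0
d_loss = criterion(real_outputs, real_labels) + criterion(fake_outputs, fake_labels)

# Generator loss: the Generator wants fakes to be classified as real (label 1)
g_loss = criterion(fake_outputs, real_labels)
print(d_loss.item(), g_loss.item())
```

Here the Generator's loss is large because the fakes were confidently rejected; the full training loop in section 1.2 applies exactly this convention.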
<h3>1.2 Implementing GAN</h3>
<p>Below is a basic example of implementing a GAN on MNIST using PyTorch. First, the imports, hyperparameters, data loading, and the Generator:</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Hyperparameters
latent_size = 64
batch_size = 128
learning_rate = 0.0002
num_epochs = 50

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load MNIST, scaled to [-1, 1] to match the Generator's Tanh output
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])
mnist = torchvision.datasets.MNIST(root='./data', train=True, transform=transform, download=True)
data_loader = torch.utils.data.DataLoader(mnist, batch_size=batch_size, shuffle=True)

# The Generator maps a latent noise vector to a 28x28 image
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_size, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, 1024),
            nn.ReLU(),
            nn.Linear(1024, 784),
            nn.Tanh()
        )

    def forward(self, z):
        return self.model(z).view(-1, 1, 28, 28)
</code></pre>
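<p>Before wiring up the training loop, it can help to sanity-check the Generator's output shape and value range. The snippet below re-declares the same layer stack as a compact <code>nn.Sequential</code> so it runs standalone:</p>

```python
import torch
import torch.nn as nn

latent_size = 64

# Same layer stack as the Generator above, re-declared compactly
generator = nn.Sequential(
    nn.Linear(latent_size, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 784), nn.Tanh(),
)

z = torch.randn(16, latent_size)          # a batch of 16 noise vectors
imgs = generator(z).view(-1, 1, 28, 28)   # reshape to MNIST image format

print(imgs.shape)  # torch.Size([16, 1, 28, 28])
print(float(imgs.min()) >= -1.0, float(imgs.max()) <= 1.0)  # True True: Tanh keeps outputs in [-1, 1]
```

The `[-1, 1]` range matches the normalized MNIST images, which is why the `Normalize((0.5,), (0.5,))` transform is used above.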
<p>Next, the Discriminator and the adversarial training loop:</p>
<pre><code># The Discriminator classifies a 28x28 image as real (1) or fake (0)
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, img):
        return self.model(img)

# Initialize the models
generator = Generator().to(device)
discriminator = Discriminator().to(device)

# Loss and optimizers
criterion = nn.BCELoss()
optimizer_g = optim.Adam(generator.parameters(), lr=learning_rate)
optimizer_d = optim.Adam(discriminator.parameters(), lr=learning_rate)

# Training the GAN
for epoch in range(num_epochs):
    for i, (imgs, _) in enumerate(data_loader):
        imgs = imgs.to(device)
        bs = imgs.size(0)  # the last batch may be smaller than batch_size

        # Labels for real and fake images
        real_labels = torch.ones(bs, 1).to(device)
        fake_labels = torch.zeros(bs, 1).to(device)

        # Train the Discriminator on real and fake images
        optimizer_d.zero_grad()
        outputs = discriminator(imgs)
        d_loss_real = criterion(outputs, real_labels)

        z = torch.randn(bs, latent_size).to(device)
        fake_imgs = generator(z)
        # detach() so this pass does not backpropagate into the Generator
        outputs = discriminator(fake_imgs.detach())
        d_loss_fake = criterion(outputs, fake_labels)

        d_loss = d_loss_real + d_loss_fake
        d_loss.backward()
        optimizer_d.step()

        # Train the Generator: it wants fakes to be classified as real
        optimizer_g.zero_grad()
        outputs = discriminator(fake_imgs)
        g_loss = criterion(outputs, real_labels)
        g_loss.backward()
        optimizer_g.step()

    print(f'Epoch [{epoch+1}/{num_epochs}], d_loss: {d_loss.item():.4f}, g_loss: {g_loss.item():.4f}')
</code></pre>
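<p>After training, images can be sampled from the Generator. The sketch below uses an untrained stand-in generator so it runs standalone; the key step is undoing the <code>Normalize((0.5,), (0.5,))</code> transform before viewing or saving (e.g. with <code>torchvision.utils.save_image</code>):</p>

```python
import torch
import torch.nn as nn

latent_size = 64

# Stand-in for the trained generator (same idea, untrained here)
generator = nn.Sequential(
    nn.Linear(latent_size, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)
generator.eval()

with torch.no_grad():                          # no gradients needed for sampling
    z = torch.randn(25, latent_size)
    samples = generator(z).view(-1, 1, 28, 28)

# Undo the Normalize((0.5,), (0.5,)) transform: map [-1, 1] back to [0, 1]
samples = samples * 0.5 + 0.5
print(samples.shape)
print(float(samples.min()) >= 0.0, float(samples.max()) <= 1.0)  # True True
```

With a trained model, `torchvision.utils.save_image(samples, 'samples.png', nrow=5)` would write the grid of generated digits to disk.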
<h2>2. Basics of the Transformer</h2>
<p>The Transformer is a model originally developed for natural language processing (NLP) that now shows strong performance in many other fields thanks to its ability to capture relationships within data. Unlike recurrent models, it processes all positions of a sequence in parallel. The core of the Transformer is the Attention Mechanism.</p>
<h3>2.1 Components of a Transformer</h3>
<p>The Transformer consists of an Encoder and a Decoder. The Encoder processes the input sequence, while the Decoder generates the final output based on the Encoder's representation.</p>
<h3>2.2 Attention Mechanism</h3>
<p>The Attention Mechanism weighs how important each part of the input is when processing a given position, which is useful when every part of the input may need to be attended to.</p>
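<p>At the heart of the Attention Mechanism is scaled dot-product attention: each query is compared against every key, the scores are normalized into a probability distribution, and the values are averaged with those weights. A minimal standalone sketch:</p>

```python
import math
import torch

def scaled_dot_product_attention(query, key, value):
    # query/key/value: (batch, seq_len, d_model)
    d_k = query.size(-1)
    # Similarity score between every query position and every key position
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    # Each query's scores over the keys become a probability distribution
    weights = torch.softmax(scores, dim=-1)
    # Output: attention-weighted sum of the values
    return torch.matmul(weights, value), weights

q = torch.randn(2, 5, 16)
k = torch.randn(2, 5, 16)
v = torch.randn(2, 5, 16)
out, weights = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 16])
print(torch.allclose(weights.sum(dim=-1), torch.ones(2, 5), atol=1e-5))  # True
```

The multi-head attention module in section 2.3 applies this same computation several times in parallel on smaller per-head slices of the embedding.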
<h3>2.3 Implementing a Transformer</h3>
<p>Below is example code for a multi-head attention module, the core building block of a Transformer, using PyTorch:</p>
<pre><code>class MultiHeadAttention(nn.Module):
    def __init__(self, embed_size, heads):
        super(MultiHeadAttention, self).__init__()
        self.embed_size = embed_size
        self.heads = heads
        self.head_dim = embed_size // heads

        assert (
            self.head_dim * heads == embed_size
        ), "Embedding size needs to be divisible by heads"

        self.values = nn.Linear(embed_size, embed_size, bias=False)
        self.keys = nn.Linear(embed_size, embed_size, bias=False)
        self.queries = nn.Linear(embed_size, embed_size, bias=False)
        self.fc_out = nn.Linear(embed_size, embed_size)

    def forward(self, query, key, value, mask=None):
        N = query.shape[0]
        value_len, key_len, query_len = value.shape[1], key.shape[1], query.shape[1]

        # Project, then split the embedding into multiple heads
        value = self.values(value).view(N, value_len, self.heads, self.head_dim)
        key = self.keys(key).view(N, key_len, self.heads, self.head_dim)
        query = self.queries(query).view(N, query_len, self.heads, self.head_dim)

        # Energy scores per head: (N, heads, query_len, key_len)
        energy = torch.einsum("nqhd,nkhd->nhqk", [query, key])

        if mask is not None:
            # Positions where mask == 0 are excluded from attention
            energy = energy.masked_fill(mask == 0, float("-1e10"))

        # Scale by sqrt(head_dim) and normalize over the key dimension
        attention = torch.softmax(energy / (self.head_dim ** 0.5), dim=3)

        # Weighted sum of the values, then merge the heads back together
        out = torch.einsum("nhqk,nkhd->nqhd", [attention, value]).reshape(
            N, query_len, self.heads * self.head_dim
        )

        return self.fc_out(out)
</code></pre>
<p>For a complete Transformer, you would add positional encodings and build the Encoder and Decoder blocks on top of this attention module.</p>
<h2>3. Integration of GAN and Transformer</h2>
<p>Combining GANs and Transformers opens up several new applications. For example, a Transformer can serve as the Generator or the Discriminator of a GAN, which can be particularly useful when dealing with sequence data.</p>
<h3>3.1 Transformer GAN</h3>
<p>Using a Transformer as the Generator of a GAN makes it possible to model more complex data structures.</p>
<p>This can be especially effective for image generation.</p>
<h3>3.2 Real Example: Implementing a Transformer GAN</h3>
<p>The basic structure of a GAN that uses Transformers is shown below. One simple, illustrative choice is to treat each of the 28 rows of a 28x28 image as a token:</p>
<pre><code>class TransformerGenerator(nn.Module):
    # Expand the latent vector into 28 row tokens, refine them with a
    # Transformer encoder, then project each token back to 28 pixels.
    def __init__(self, latent_size=64, embed_size=64, heads=4, num_layers=2):
        super(TransformerGenerator, self).__init__()
        self.embed_size = embed_size
        self.proj = nn.Linear(latent_size, 28 * embed_size)
        layer = nn.TransformerEncoderLayer(d_model=embed_size, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_pixels = nn.Linear(embed_size, 28)

    def forward(self, z):
        tokens = self.proj(z).view(-1, 28, self.embed_size)
        tokens = self.encoder(tokens)
        img = torch.tanh(self.to_pixels(tokens))   # (N, 28, 28) in [-1, 1]
        return img.unsqueeze(1)                    # (N, 1, 28, 28)

class TransformerDiscriminator(nn.Module):
    # Encode the 28 row tokens, mean-pool them, and classify real vs. fake.
    def __init__(self, embed_size=64, heads=4, num_layers=2):
        super(TransformerDiscriminator, self).__init__()
        self.embed = nn.Linear(28, embed_size)
        layer = nn.TransformerEncoderLayer(d_model=embed_size, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classify = nn.Sequential(nn.Linear(embed_size, 1), nn.Sigmoid())

    def forward(self, img):
        tokens = self.embed(img.squeeze(1))        # (N, 28, embed_size)
        encoded = self.encoder(tokens)
        return self.classify(encoded.mean(dim=1))  # (N, 1)
</code></pre>
<p>These classes can be dropped into the training loop from section 1.2 in place of <code>Generator</code> and <code>Discriminator</code>, since they consume and produce the same shapes.</p>
<h2>4. Conclusion</h2>
<p>In this article, we explained how to implement GANs and Transformers using PyTorch. GANs are powerful tools for generating images, while Transformers excel at modeling relationships in data. Combining the two can lead to higher-quality data generation and will continue to drive innovation in deep learning.</p>
<p>Try the example code above; through further experiments and research, we hope you can develop even more advanced models!</p>
<h2>References</h2>
<ul>
<li>Ian Goodfellow et al., "Generative Adversarial Networks", 2014.</li>
<li>Ashish Vaswani et al., "Attention Is All You Need", 2017.</li>
<li>PyTorch Documentation: https://pytorch.org/docs/stable/index.html</li>
</ul>