{"id":36353,"date":"2024-11-01T09:47:45","date_gmt":"2024-11-01T09:47:45","guid":{"rendered":"http:\/\/atmokpo.com\/w\/?p=36353"},"modified":"2024-11-01T11:00:15","modified_gmt":"2024-11-01T11:00:15","slug":"deep-learning-and-reinforcement-learning-using-pytorch","status":"publish","type":"post","link":"https:\/\/atmokpo.com\/w\/36353\/","title":{"rendered":"Deep Learning and Reinforcement Learning using PyTorch"},"content":{"rendered":"<h2>1. Introduction<\/h2>\n<p>Generative Adversarial Networks (GANs) are models proposed by Ian Goodfellow in 2014 that generate data through competition between two neural networks. GANs are widely used, particularly in image generation, style transfer, and data augmentation. In this post, we will introduce the basic structure of GANs, how to implement them using PyTorch, the basic concepts of reinforcement learning, and various applications.<\/p>\n<h2>2. Basic Structure of GANs<\/h2>\n<p>GANs consist of two neural networks: a Generator and a Discriminator. The Generator takes random noise as input and generates new data, while the Discriminator distinguishes whether the input data is real or generated. The two networks learn by competing with each other.<\/p>\n<h3>2.1 Generator<\/h3>\n<p>The Generator takes a noise vector and produces data that looks real. Its goal is to deceive the Discriminator.<\/p>\n<h3>2.2 Discriminator<\/h3>\n<p>The Discriminator assesses the authenticity of the input data. It outputs values close to 1 for real data and close to 0 for generated data.<\/p>\n<h3>2.3 Loss Function of GANs<\/h3>\n<p>The loss function of GANs is defined as follows:<\/p>\n<pre><code>min_G max_D V(D, G) = E[log(D(x))] + E[log(1 - D(G(z)))]<\/code><\/pre>\n<p>Here, <code>E<\/code> represents expectation, <code>x<\/code> is real data, <code>z<\/code> is a noise vector, and <code>G(z)<\/code> is the data generated by the Generator. The Generator tries to minimize this objective while the Discriminator tries to maximize it.<\/p>\n<h2>3. 
Implementing GANs Using PyTorch<\/h2>\n<p>Now, let\u2019s implement a GAN using PyTorch. We will train it on the MNIST handwritten digits dataset.<\/p>\n<h3>3.1 Preparing the Dataset<\/h3>\n<pre><code>import torch\nimport torchvision\nfrom torchvision import datasets, transforms\n\n# Data transformation and download\ntransform = transforms.Compose([\n    transforms.ToTensor(),\n    transforms.Normalize((0.5,), (0.5,))  # scale pixels to [-1, 1], matching the Generator's Tanh output\n])\n\n# MNIST dataset\ntrain_dataset = datasets.MNIST(root='.\/data', train=True, download=True, transform=transform)\ntrain_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)<\/code><\/pre>\n<h3>3.2 Defining the Generator Model<\/h3>\n<pre><code>import torch.nn as nn\n\nclass Generator(nn.Module):\n    def __init__(self):\n        super(Generator, self).__init__()\n        self.layer1 = nn.Sequential(\n            nn.Linear(100, 256),\n            nn.ReLU(True)\n        )\n        self.layer2 = nn.Sequential(\n            nn.Linear(256, 512),\n            nn.ReLU(True)\n        )\n        self.layer3 = nn.Sequential(\n            nn.Linear(512, 1024),\n            nn.ReLU(True)\n        )\n        self.layer4 = nn.Sequential(\n            nn.Linear(1024, 28*28),\n            nn.Tanh()  # Pixel values are between -1 and 1\n        )\n    \n    def forward(self, z):\n        out = self.layer1(z)\n        out = self.layer2(out)\n        out = self.layer3(out)\n        out = self.layer4(out)\n        return out.view(-1, 1, 28, 28)  # Reshape to image format<\/code><\/pre>\n<h3>3.3 Defining the Discriminator Model<\/h3>\n<pre><code>class Discriminator(nn.Module):\n    def __init__(self):\n        super(Discriminator, self).__init__()\n        self.layer1 = nn.Sequential(\n            nn.Linear(28*28, 1024),\n            nn.LeakyReLU(0.2, inplace=True)\n        )\n        self.layer2 = nn.Sequential(\n            nn.Linear(1024, 512),\n            nn.LeakyReLU(0.2, inplace=True)\n        )\n        self.layer3 
= nn.Sequential(\n            nn.Linear(512, 256),\n            nn.LeakyReLU(0.2, inplace=True)\n        )\n        self.layer4 = nn.Sequential(\n            nn.Linear(256, 1),\n            nn.Sigmoid()  # Output value is between 0 and 1\n        )\n    \n    def forward(self, x):\n        out = self.layer1(x.view(-1, 28*28))  # Flatten\n        out = self.layer2(out)\n        out = self.layer3(out)\n        out = self.layer4(out)\n        return out<\/code><\/pre>\n<h3>3.4 Model Training<\/h3>\n<pre><code>import torch.optim as optim\n\n# Initialize models\ngenerator = Generator()\ndiscriminator = Discriminator()\n\n# Set loss function and optimizers\ncriterion = nn.BCELoss()  # Binary Cross Entropy Loss\noptimizer_g = optim.Adam(generator.parameters(), lr=0.0002)\noptimizer_d = optim.Adam(discriminator.parameters(), lr=0.0002)\n\n# Training\nnum_epochs = 200\nfor epoch in range(num_epochs):\n    for i, (images, _) in enumerate(train_loader):\n        # Real data labels\n        real_labels = torch.ones(images.size(0), 1)\n        fake_labels = torch.zeros(images.size(0), 1)\n\n        # Train Discriminator\n        optimizer_d.zero_grad()\n        outputs = discriminator(images)\n        d_loss_real = criterion(outputs, real_labels)\n        d_loss_real.backward()\n        \n        z = torch.randn(images.size(0), 100)\n        fake_images = generator(z)\n        outputs = discriminator(fake_images.detach())\n        d_loss_fake = criterion(outputs, fake_labels)\n        d_loss_fake.backward()\n        \n        optimizer_d.step()\n        \n        # Train Generator\n        optimizer_g.zero_grad()\n        outputs = discriminator(fake_images)\n        g_loss = criterion(outputs, real_labels)\n        g_loss.backward()\n        optimizer_g.step()\n    \n    if (epoch+1) % 10 == 0:\n        print(f'Epoch [{epoch+1}\/{num_epochs}], d_loss: {d_loss_real.item() + d_loss_fake.item():.4f}, g_loss: {g_loss.item():.4f}')<\/code><\/pre>\n<h3>3.5 Visualizing the 
Results<\/h3>\n<pre><code>import matplotlib.pyplot as plt\n\n# Function to visualize generated images\ndef plot_generated_images(generator, n=10):\n    z = torch.randn(n, 100)\n    with torch.no_grad():\n        generated_images = generator(z).cpu()\n    generated_images = generated_images.view(-1, 28, 28)\n    \n    plt.figure(figsize=(10, 1))\n    for i in range(n):\n        plt.subplot(1, n, i+1)\n        plt.imshow(generated_images[i], cmap='gray')\n        plt.axis('off')\n    plt.show()\n\n# Generate images\nplot_generated_images(generator)<\/code><\/pre>\n<h2>4. Basic Concepts of Reinforcement Learning<\/h2>\n<p>Reinforcement Learning (RL) is a field of machine learning where an agent learns optimal actions through interaction with the environment. The agent observes states, selects actions, receives rewards, and learns the optimal policy.<\/p>\n<h3>4.1 Components of Reinforcement Learning<\/h3>\n<ul>\n<li><strong>State:<\/strong> Information representing the current environment for the agent.<\/li>\n<li><strong>Action:<\/strong> The task that the agent can perform in the current state.<\/li>\n<li><strong>Reward:<\/strong> Feedback received from the environment after the agent performs an action.<\/li>\n<li><strong>Policy:<\/strong> The probability distribution of the actions the agent can take in each state.<\/li>\n<\/ul>\n<h3>4.2 Reinforcement Learning Algorithms<\/h3>\n<ul>\n<li><strong>Q-Learning:<\/strong> A value-based method that learns Q values to derive optimal policies.<\/li>\n<li><strong>Policy Gradient:<\/strong> A method that directly learns policies.<\/li>\n<li><strong>Actor-Critic:<\/strong> A method that learns value functions and policies simultaneously.<\/li>\n<\/ul>\n<h3>4.3 Implementing Reinforcement Learning Using PyTorch<\/h3>\n<p>We will use OpenAI&#8217;s Gym library for a simple reinforcement learning implementation. 
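<\/p>\n<p>Before moving to the Gym example, the Q-Learning update from section 4.2 can be sketched with a tiny tabular example. The 5-state chain environment, learning rate, and reward values below are illustrative assumptions for this sketch, not part of the Gym code that follows:<\/p>\n<pre><code>import numpy as np\n\nnp.random.seed(0)\n\n# Toy chain environment (assumed for illustration): 5 states in a row,\n# action 0 = left, action 1 = right; reaching state 4 gives reward 1 and ends the episode.\nn_states, n_actions = 5, 2\nQ = np.zeros((n_states, n_actions))\nalpha, gamma = 0.1, 0.9  # learning rate and discount factor\n\nfor episode in range(500):\n    state = 0\n    done = False\n    while not done:\n        action = np.random.randint(n_actions)  # explore randomly\n        next_state = max(0, state - 1) if action == 0 else state + 1\n        reward = 1.0 if next_state == n_states - 1 else 0.0\n        done = next_state == n_states - 1\n        # Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))\n        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])\n        state = next_state\n\nprint(Q.argmax(axis=1))  # the greedy policy should move right in every state<\/code><\/pre>\n<p>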
Here, we will work with the CartPole environment.<\/p>\n<h4>4.3.1 Setting up the Gym Environment<\/h4>\n<pre><code>import gym\n\n# Create Gym environment\nenv = gym.make('CartPole-v1')  # CartPole environment<\/code><\/pre>\n<h4>4.3.2 Defining the DQN Model<\/h4>\n<pre><code>import torch.nn.functional as F\n\nclass DQN(nn.Module):\n    def __init__(self, input_size, num_actions):\n        super(DQN, self).__init__()\n        self.fc1 = nn.Linear(input_size, 24)\n        self.fc2 = nn.Linear(24, 24)\n        self.fc3 = nn.Linear(24, num_actions)\n\n    def forward(self, x):\n        x = F.relu(self.fc1(x))\n        x = F.relu(self.fc2(x))\n        return self.fc3(x)<\/code><\/pre>\n<h4>4.3.3 Model Training<\/h4>\n<pre><code>def train_dqn(env, num_episodes, gamma=0.99, epsilon=0.1):\n    model = DQN(input_size=env.observation_space.shape[0], num_actions=env.action_space.n)\n    optimizer = optim.Adam(model.parameters())\n    criterion = nn.MSELoss()\n\n    for episode in range(num_episodes):\n        state = env.reset()  # classic Gym API; in gym&gt;=0.26, env.reset() returns (obs, info)\n        state = torch.FloatTensor(state)\n        done = False\n        total_reward = 0\n\n        while not done:\n            q_values = model(state)\n\n            # epsilon-greedy action selection\n            if torch.rand(1).item() &lt; epsilon:\n                action = env.action_space.sample()\n            else:\n                action = torch.argmax(q_values).item()\n\n            next_state, reward, done, _ = env.step(action)\n            next_state = torch.FloatTensor(next_state)\n\n            total_reward += reward\n\n            # DQN update (no replay buffer or target network, for simplicity):\n            # the target for the chosen action is r + gamma * max_a' Q(s', a')\n            with torch.no_grad():\n                target = q_values.clone()\n                target[action] = reward if done else reward + gamma * model(next_state).max().item()\n\n            loss = criterion(model(state), target)\n            optimizer.zero_grad()\n            loss.backward()\n            optimizer.step()\n\n            state = next_state\n\n        print(f'Episode {episode+1}, Total Reward: {total_reward}')\n\n    return model\n\n# Start DQN training\ntrain_dqn(env, num_episodes=1000)<\/code><\/pre>\n<h2>5. Conclusion<\/h2>\n<p>In this post, we explored the basic concepts of GANs and reinforcement learning as well as implementation methods using PyTorch. GANs are very useful models for data generation, and reinforcement learning is a technique that helps agents learn optimal policies. 
These technologies can be applied in various fields, and further research and development are expected.<\/p>\n<h2>6. References<\/h2>\n<ul>\n<li>Ian Goodfellow et al. (2014). <a href=\"https:\/\/arxiv.org\/abs\/1406.2661\">Generative Adversarial Nets<\/a><\/li>\n<li>OpenAI Gym: <a href=\"https:\/\/gym.openai.com\/\">OpenAI Gym<\/a><\/li>\n<li>PyTorch Documentation: <a href=\"https:\/\/pytorch.org\/docs\/stable\/index.html\">PyTorch Documentation<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>1. Introduction Generative Adversarial Networks (GANs) are models proposed by Ian Goodfellow in 2014 that generate data through competition between two neural networks. GANs are widely used particularly in image generation, style transfer, and data augmentation. In this post, we will introduce the basic structure of GANs, how to implement them using PyTorch, the basic &hellip; <a href=\"https:\/\/atmokpo.com\/w\/36353\/\" class=\"more-link\">\ub354 \ubcf4\uae30<span class=\"screen-reader-text\"> &#8220;Deep Learning and Reinforcement Learning using PyTorch&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[113],"tags":[],"class_list":["post-36353","post","type-post","status-publish","format-standard","hentry","category-gan-deep-learning-course"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/atmokpo.com\/w\/36353\/\" \/>\n<meta property=\"og:locale\" content=\"ko_KR\" \/>\n<meta 
property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"og:description\" content=\"1. Introduction Generative Adversarial Networks (GANs) are models proposed by Ian Goodfellow in 2014 that generate data through competition between two neural networks. GANs are widely used particularly in image generation, style transfer, and data augmentation. In this post, we will introduce the basic structure of GANs, how to implement them using PyTorch, the basic &hellip; \ub354 \ubcf4\uae30 &quot;Deep Learning and Reinforcement Learning using PyTorch&quot;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/atmokpo.com\/w\/36353\/\" \/>\n<meta property=\"og:site_name\" content=\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"article:published_time\" content=\"2024-11-01T09:47:45+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-11-01T11:00:15+00:00\" \/>\n<meta name=\"author\" content=\"root\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:site\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:label1\" content=\"\uae00\uc4f4\uc774\" \/>\n\t<meta name=\"twitter:data1\" content=\"root\" \/>\n\t<meta name=\"twitter:label2\" content=\"\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04\" \/>\n\t<meta name=\"twitter:data2\" content=\"5\ubd84\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/\"},\"author\":{\"name\":\"root\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7\"},\"headline\":\"Deep Learning and Reinforcement Learning using 
PyTorch\",\"datePublished\":\"2024-11-01T09:47:45+00:00\",\"dateModified\":\"2024-11-01T11:00:15+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/\"},\"wordCount\":464,\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"articleSection\":[\"GAN deep learning course\"],\"inLanguage\":\"ko-KR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/\",\"url\":\"https:\/\/atmokpo.com\/w\/36353\/\",\"name\":\"Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#website\"},\"datePublished\":\"2024-11-01T09:47:45+00:00\",\"dateModified\":\"2024-11-01T11:00:15+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/#breadcrumb\"},\"inLanguage\":\"ko-KR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/atmokpo.com\/w\/36353\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/atmokpo.com\/w\/36353\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"\ud648\",\"item\":\"https:\/\/atmokpo.com\/w\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Deep Learning and Reinforcement Learning using 
PyTorch\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/atmokpo.com\/w\/#website\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/atmokpo.com\/w\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"ko-KR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"contentUrl\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"width\":400,\"height\":400,\"caption\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\"},\"image\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/bebubo4\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7\",\"name\":\"root\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"caption\":\"root\"},\"sameAs\":[\"http:\/\/atmokpo.com\/w\"],\"url\":\"https:\/\/atmokpo.com\/w\/author\/root\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/atmokpo.com\/w\/36353\/","og_locale":"ko_KR","og_type":"article","og_title":"Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","og_description":"1. Introduction Generative Adversarial Networks (GANs) are models proposed by Ian Goodfellow in 2014 that generate data through competition between two neural networks. GANs are widely used particularly in image generation, style transfer, and data augmentation. In this post, we will introduce the basic structure of GANs, how to implement them using PyTorch, the basic &hellip; \ub354 \ubcf4\uae30 \"Deep Learning and Reinforcement Learning using PyTorch\"","og_url":"https:\/\/atmokpo.com\/w\/36353\/","og_site_name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","article_published_time":"2024-11-01T09:47:45+00:00","article_modified_time":"2024-11-01T11:00:15+00:00","author":"root","twitter_card":"summary_large_image","twitter_creator":"@bebubo4","twitter_site":"@bebubo4","twitter_misc":{"\uae00\uc4f4\uc774":"root","\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04":"5\ubd84"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/atmokpo.com\/w\/36353\/#article","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/36353\/"},"author":{"name":"root","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7"},"headline":"Deep Learning and Reinforcement Learning using PyTorch","datePublished":"2024-11-01T09:47:45+00:00","dateModified":"2024-11-01T11:00:15+00:00","mainEntityOfPage":{"@id":"https:\/\/atmokpo.com\/w\/36353\/"},"wordCount":464,"publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"articleSection":["GAN deep 
learning course"],"inLanguage":"ko-KR"},{"@type":"WebPage","@id":"https:\/\/atmokpo.com\/w\/36353\/","url":"https:\/\/atmokpo.com\/w\/36353\/","name":"Deep Learning and Reinforcement Learning using PyTorch - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/#website"},"datePublished":"2024-11-01T09:47:45+00:00","dateModified":"2024-11-01T11:00:15+00:00","breadcrumb":{"@id":"https:\/\/atmokpo.com\/w\/36353\/#breadcrumb"},"inLanguage":"ko-KR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/atmokpo.com\/w\/36353\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/atmokpo.com\/w\/36353\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"\ud648","item":"https:\/\/atmokpo.com\/w\/en\/"},{"@type":"ListItem","position":2,"name":"Deep Learning and Reinforcement Learning using PyTorch"}]},{"@type":"WebSite","@id":"https:\/\/atmokpo.com\/w\/#website","url":"https:\/\/atmokpo.com\/w\/","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","description":"","publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/atmokpo.com\/w\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"ko-KR"},{"@type":"Organization","@id":"https:\/\/atmokpo.com\/w\/#organization","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","url":"https:\/\/atmokpo.com\/w\/","logo":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/","url":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","contentUrl":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","width":400,"height":400,"caption":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8"},"image":{"@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/bebubo4"]},{"@type":"Person","@id":"https:\/\/atmokpo.com\/
w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7","name":"root","image":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","caption":"root"},"sameAs":["http:\/\/atmokpo.com\/w"],"url":"https:\/\/atmokpo.com\/w\/author\/root\/"}]}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36353","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/comments?post=36353"}],"version-history":[{"count":1,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36353\/revisions"}],"predecessor-version":[{"id":36354,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36353\/revisions\/36354"}],"wp:attachment":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/media?parent=36353"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/categories?post=36353"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/tags?post=36353"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}