{"id":36321,"date":"2024-11-01T09:47:31","date_gmt":"2024-11-01T09:47:31","guid":{"rendered":"http:\/\/atmokpo.com\/w\/?p=36321"},"modified":"2024-11-01T11:00:24","modified_gmt":"2024-11-01T11:00:24","slug":"using-pytorch-for-gan-deep-learning-drawing-monets-paintings-with-cyclegan","status":"publish","type":"post","link":"https:\/\/atmokpo.com\/w\/36321\/","title":{"rendered":"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN"},"content":{"rendered":"<p>The field of deep learning has made significant achievements thanks to advances in data and computational power. Among its models, the GAN (Generative Adversarial Network) is one of the most innovative. In this article, we show how to train a CycleGAN model with PyTorch to generate paintings in the style of Monet.<\/p>\n<h2>1. Overview of CycleGAN<\/h2>\n<p>CycleGAN is a type of GAN used for translation between two image domains. For instance, it can transform real photos into artistic styles or convert daytime scenes into nighttime scenes. Its key feature is a &#8216;cycle consistency&#8217; loss that keeps translations between the two domains consistent, which allows CycleGAN to be trained on unpaired data.<\/p>\n<h3>1.1 CycleGAN Structure<\/h3>\n<p>CycleGAN consists of two generators and two discriminators. 
Each generator translates images from one domain to the other, while each discriminator distinguishes real images from generated ones.<\/p>\n<ul>\n<li><strong>Generator G:<\/strong> Translates from domain X (e.g., photos) to domain Y (e.g., Monet-style paintings)<\/li>\n<li><strong>Generator F:<\/strong> Translates from domain Y to domain X<\/li>\n<li><strong>Discriminator D_X:<\/strong> Distinguishes between real and generated images in domain X<\/li>\n<li><strong>Discriminator D_Y:<\/strong> Distinguishes between real and generated images in domain Y<\/li>\n<\/ul>\n<h3>1.2 Loss Function<\/h3>\n<p>CycleGAN is trained with a combination of the following loss terms.<\/p>\n<ul>\n<li><strong>Adversarial Loss:<\/strong> Pushes each generator to produce images that the corresponding discriminator judges to be real<\/li>\n<li><strong>Cycle Consistency Loss:<\/strong> Penalizes the difference between an image and its reconstruction after being translated to the other domain and back<\/li>\n<\/ul>\n<p>The total loss, with \u03bb weighting the cycle terms, is defined as follows:<\/p>\n<pre><code>L = L<sub>GAN<\/sub>(G, D<sub>Y<\/sub>, X, Y) + L<sub>GAN<\/sub>(F, D<sub>X<\/sub>, Y, X) + \u03bb(L<sub>cyc<\/sub>(G, F) + L<sub>cyc<\/sub>(F, G))<\/code><\/pre>\n<h2>2. Environment Setup<\/h2>\n<p>For this project, Python, PyTorch, and a few supporting libraries (NumPy, Matplotlib) must be installed. The command to install the required libraries is as follows:<\/p>\n<pre><code>pip install torch torchvision numpy matplotlib<\/code><\/pre>\n<h2>3. Dataset Preparation<\/h2>\n<p>You will need a dataset of Monet-style paintings and one of photographs. For instance, the <strong>Monet Style<\/strong> paintings can be downloaded from the <a href=\"https:\/\/www.kaggle.com\/c\/monet-style\/images\" target=\"_blank\" rel=\"noopener\">Kaggle Monet Style Dataset<\/a>. 
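<\/p>\n<p>For the loading code later in this article, the downloaded images are assumed to be organized as follows (these folder names are a choice made for this tutorial, not a requirement of the dataset):<\/p>\n<pre><code>data\/\n\u251c\u2500\u2500 monet\/     # Monet-style paintings (*.jpg)\n\u2514\u2500\u2500 photos\/    # real photographs (*.jpg)\n<\/code><\/pre>\n<p>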
Additionally, general photograph images can be obtained from various public image databases.<\/p>\n<p>Once the image datasets are prepared, they need to be loaded and preprocessed into the appropriate format.<\/p>\n<h3>3.1 Data Loading and Preprocessing<\/h3>\n<pre><code>import os\nimport glob\nfrom PIL import Image\nimport torchvision.transforms as transforms\n\ndef load_data(image_path, image_size=(256, 256)):\n    # Build the transform once instead of once per image\n    transform = transforms.Compose([\n        transforms.Resize(image_size),\n        transforms.ToTensor(),\n        # Scale images to [-1, 1] to match the generator's Tanh output\n        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n    ])\n    images = glob.glob(os.path.join(image_path, '*.jpg'))\n    dataset = []\n    for img in images:\n        image = Image.open(img).convert('RGB')\n        dataset.append(transform(image))\n    return dataset\n\n# Set the image paths\nmonet_path = '.\/data\/monet\/'\nphoto_path = '.\/data\/photos\/'\n\nmonet_images = load_data(monet_path)\nphoto_images = load_data(photo_path)\n<\/code><\/pre>\n<h2>4. Building the CycleGAN Model<\/h2>\n<p>To build the CycleGAN model, we define a basic generator and discriminator.<\/p>\n<h3>4.1 Generator Definition<\/h3>\n<p>Here, we define a generator based on the U-Net architecture, with additive skip connections between encoder and decoder stages.<\/p>\n<pre><code>import torch\nimport torch.nn as nn\n\nclass UNetGenerator(nn.Module):\n    def __init__(self):\n        super(UNetGenerator, self).__init__()\n        self.encoder1 = self.contracting_block(3, 64)\n        self.encoder2 = self.contracting_block(64, 128)\n        self.encoder3 = self.contracting_block(128, 256)\n        self.encoder4 = self.contracting_block(256, 512)\n        self.decoder1 = self.expansive_block(512, 256)\n        self.decoder2 = self.expansive_block(256, 128)\n        self.decoder3 = self.expansive_block(128, 64)\n        # The final stage must also upsample (stride 2) so the output matches\n        # the input resolution; Tanh maps the result to [-1, 1]\n        self.decoder4 = nn.Sequential(\n            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),\n            nn.Tanh()\n        )\n\n    def contracting_block(self, in_channels, out_channels):\n        return nn.Sequential(\n            nn.Conv2d(in_channels, 
out_channels, kernel_size=4, stride=2, padding=1),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True)\n        )\n    \n    def expansive_block(self, in_channels, out_channels):\n        return nn.Sequential(\n            nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True)\n        )\n    \n    def forward(self, x):\n        e1 = self.encoder1(x)\n        e2 = self.encoder2(e1)\n        e3 = self.encoder3(e2)\n        e4 = self.encoder4(e3)\n        d1 = self.decoder1(e4)\n        d2 = self.decoder2(d1 + e3)  # Skip connection\n        d3 = self.decoder3(d2 + e2)  # Skip connection\n        output = self.decoder4(d3 + e1)  # Skip connection\n        return output\n<\/code><\/pre>\n<h3>4.2 Discriminator Definition<\/h3>\n<p>The discriminator is defined using a patch-based structure.<\/p>\n<pre><code>class PatchDiscriminator(nn.Module):\n    def __init__(self):\n        super(PatchDiscriminator, self).__init__()\n        self.model = nn.Sequential(\n            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),\n            nn.LeakyReLU(0.2, inplace=True),\n            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),\n            nn.BatchNorm2d(128),\n            nn.LeakyReLU(0.2, inplace=True),\n            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),\n            nn.BatchNorm2d(256),\n            nn.LeakyReLU(0.2, inplace=True),\n            nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1),\n            nn.BatchNorm2d(512),\n            nn.LeakyReLU(0.2, inplace=True),\n            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1)\n        )\n\n    def forward(self, x):\n        return self.model(x)\n<\/code><\/pre>\n<h2>5. 
Implementing the Loss Function<\/h2>\n<p>We implement the CycleGAN losses, covering both the generators&#8217; loss and the discriminators&#8217; loss.<\/p>\n<pre><code>def compute_gan_loss(predictions, targets):\n    # Binary cross-entropy on the discriminator's raw (pre-sigmoid) logits\n    return nn.BCEWithLogitsLoss()(predictions, targets)\n\ndef compute_cycle_loss(real_image, cycled_image, lambda_cycle):\n    # L1 distance between an image and its reconstruction after a full cycle\n    return lambda_cycle * nn.L1Loss()(real_image, cycled_image)\n\ndef compute_total_loss(real_images_X, real_images_Y, \n                       fake_images_Y, fake_images_X, \n                       cycled_images_X, cycled_images_Y, \n                       D_X, D_Y, lambda_cycle):\n    # Adversarial targets must match the discriminator's output shape,\n    # not the image shape\n    pred_fake_Y = D_Y(fake_images_Y)\n    pred_fake_X = D_X(fake_images_X)\n    loss_GAN_G = compute_gan_loss(pred_fake_Y, torch.ones_like(pred_fake_Y))\n    loss_GAN_F = compute_gan_loss(pred_fake_X, torch.ones_like(pred_fake_X))\n    loss_cycle = compute_cycle_loss(real_images_X, cycled_images_X, lambda_cycle) + \\\n                 compute_cycle_loss(real_images_Y, cycled_images_Y, lambda_cycle)\n    return loss_GAN_G + loss_GAN_F + loss_cycle\n<\/code><\/pre>\n<h2>6. Training Process<\/h2>\n<p>Now it&#8217;s time to train the model. 
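<\/p>\n<p>Before defining the training loop, the tensor lists returned by <code>load_data<\/code> can be wrapped in PyTorch DataLoaders; a list of equally sized image tensors is a valid map-style dataset. A minimal sketch (the random stand-in tensors and the batch size of 1 are illustrative assumptions, not part of the original pipeline):<\/p>\n

```python
import torch
from torch.utils.data import DataLoader

# Stand-ins for the tensor lists produced by load_data(); in the real
# pipeline each entry is a preprocessed (3, 256, 256) image tensor.
monet_images = [torch.rand(3, 256, 256) for _ in range(4)]
photo_images = [torch.rand(3, 256, 256) for _ in range(4)]

# DataLoader accepts the lists directly and stacks samples into batches.
monet_loader = DataLoader(monet_images, batch_size=1, shuffle=True)
photo_loader = DataLoader(photo_images, batch_size=1, shuffle=True)

batch = next(iter(monet_loader))
print(batch.shape)  # torch.Size([1, 3, 256, 256])
```

\n<p>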
Set up the data loaders, initialize the models and optimizers, and run the training loop.<\/p>\n<pre><code>from torch.utils.data import DataLoader\n\ndef train_cyclegan(monet_loader, photo_loader, epochs=200, lambda_cycle=10):\n    G = UNetGenerator()   # X (photos) to Y (Monet)\n    F = UNetGenerator()   # Y (Monet) to X (photos)\n    D_X = PatchDiscriminator()\n    D_Y = PatchDiscriminator()\n\n    # Set up optimizers\n    optimizer_G = torch.optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))\n    optimizer_F = torch.optim.Adam(F.parameters(), lr=0.0002, betas=(0.5, 0.999))\n    optimizer_D_X = torch.optim.Adam(D_X.parameters(), lr=0.0002, betas=(0.5, 0.999))\n    optimizer_D_Y = torch.optim.Adam(D_Y.parameters(), lr=0.0002, betas=(0.5, 0.999))\n\n    for epoch in range(epochs):\n        # Domain X is photos, domain Y is Monet paintings\n        for real_images_X, real_images_Y in zip(photo_loader, monet_loader):\n            # Train the generators\n            fake_images_Y = G(real_images_X)      # X to Y\n            cycled_images_X = F(fake_images_Y)    # X to Y to X\n            fake_images_X = F(real_images_Y)      # Y to X\n            cycled_images_Y = G(fake_images_X)    # Y to X to Y\n\n            optimizer_G.zero_grad()\n            optimizer_F.zero_grad()\n            total_loss = compute_total_loss(real_images_X, real_images_Y, \n                                             fake_images_Y, fake_images_X, \n                                             cycled_images_X, cycled_images_Y, \n                                             D_X, D_Y, lambda_cycle)\n            total_loss.backward()\n            optimizer_G.step()\n            optimizer_F.step()\n\n            # Train the discriminators; generated images are detached so\n            # only the discriminators receive gradients here\n            optimizer_D_X.zero_grad()\n            optimizer_D_Y.zero_grad()\n            pred_real_X = D_X(real_images_X)\n            pred_fake_X = D_X(fake_images_X.detach())\n            loss_D_X = compute_gan_loss(pred_real_X, torch.ones_like(pred_real_X)) + \\\n                        compute_gan_loss(pred_fake_X, torch.zeros_like(pred_fake_X))\n            pred_real_Y = D_Y(real_images_Y)\n            pred_fake_Y = D_Y(fake_images_Y.detach())\n            loss_D_Y = compute_gan_loss(pred_real_Y, torch.ones_like(pred_real_Y)) + \\\n                        compute_gan_loss(pred_fake_Y, torch.zeros_like(pred_fake_Y))\n            loss_D_X.backward()\n            
loss_D_Y.backward()\n            optimizer_D_X.step()\n            optimizer_D_Y.step()\n\n        print(f'Epoch [{epoch+1}\/{epochs}], G loss: {total_loss.item():.4f}, '\n              f'D_X loss: {loss_D_X.item():.4f}, D_Y loss: {loss_D_Y.item():.4f}')\n<\/code><\/pre>\n<h2>7. Generating Results<\/h2>\n<p>Once the model has finished training, you can generate new images. Let&#8217;s check the generated Monet-style paintings using test images.<\/p>\n<pre><code>def generate_images(test_loader, model_G):\n    model_G.eval()\n    for real_images in test_loader:\n        with torch.no_grad():\n            fake_images = model_G(real_images)\n            # Add code to save or visualize the images\n<\/code><\/pre>\n<p>Matplotlib can be used to visualize the results:<\/p>\n<pre><code>import matplotlib.pyplot as plt\n\ndef visualize_results(real_image, fake_image):\n    # Expects single images shaped (C, H, W); undo the [-1, 1] normalization\n    def to_numpy(img):\n        return (img * 0.5 + 0.5).clamp(0, 1).permute(1, 2, 0).cpu().numpy()\n\n    plt.figure(figsize=(10, 5))\n    plt.subplot(1, 2, 1)\n    plt.title('Real Image')\n    plt.imshow(to_numpy(real_image))\n    \n    plt.subplot(1, 2, 2)\n    plt.title('Fake Image (Monet Style)')\n    plt.imshow(to_numpy(fake_image))\n    plt.show()\n<\/code><\/pre>\n<h2>8. Conclusion<\/h2>\n<p>In this article, we walked through generating Monet-style paintings with CycleGAN. The same methodology applies to many other domain translation problems, and the cycle consistency idea carries over to various GAN variants, making future research directions exciting.<\/p>\n<p>We hope this example has helped you grasp the basics of implementing CycleGAN in PyTorch. GANs hold great potential for generating high-quality images, and this technology is likely to find applications in many more fields.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The field of deep learning has made significant achievements thanks to advancements in data and computational power. 
Among them, GAN (Generative Adversarial Network) is one of the most innovative models. In this article, we will introduce how to train the CycleGAN model using PyTorch, one of the deep learning frameworks, to generate paintings in the &hellip; <a href=\"https:\/\/atmokpo.com\/w\/36321\/\" class=\"more-link\">\ub354 \ubcf4\uae30<span class=\"screen-reader-text\"> &#8220;Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[113],"tags":[],"class_list":["post-36321","post","type-post","status-publish","format-standard","hentry","category-gan-deep-learning-course"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Using PyTorch for GAN Deep Learning, Drawing Monet&#039;s Paintings with CycleGAN - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/atmokpo.com\/w\/36321\/\" \/>\n<meta property=\"og:locale\" content=\"ko_KR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using PyTorch for GAN Deep Learning, Drawing Monet&#039;s Paintings with CycleGAN - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"og:description\" content=\"The field of deep learning has made significant achievements thanks to advancements in data and computational power. Among them, GAN (Generative Adversarial Network) is one of the most innovative models. 
In this article, we will introduce how to train the CycleGAN model using PyTorch, one of the deep learning frameworks, to generate paintings in the &hellip; \ub354 \ubcf4\uae30 &quot;Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN&quot;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/atmokpo.com\/w\/36321\/\" \/>\n<meta property=\"og:site_name\" content=\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"article:published_time\" content=\"2024-11-01T09:47:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-11-01T11:00:24+00:00\" \/>\n<meta name=\"author\" content=\"root\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:site\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:label1\" content=\"\uae00\uc4f4\uc774\" \/>\n\t<meta name=\"twitter:data1\" content=\"root\" \/>\n\t<meta name=\"twitter:label2\" content=\"\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04\" \/>\n\t<meta name=\"twitter:data2\" content=\"7\ubd84\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/\"},\"author\":{\"name\":\"root\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7\"},\"headline\":\"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN\",\"datePublished\":\"2024-11-01T09:47:31+00:00\",\"dateModified\":\"2024-11-01T11:00:24+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/\"},\"wordCount\":563,\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"articleSection\":[\"GAN deep learning 
course\"],\"inLanguage\":\"ko-KR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/\",\"url\":\"https:\/\/atmokpo.com\/w\/36321\/\",\"name\":\"Using PyTorch for GAN Deep Learning, Drawing Monet's Paintings with CycleGAN - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#website\"},\"datePublished\":\"2024-11-01T09:47:31+00:00\",\"dateModified\":\"2024-11-01T11:00:24+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/#breadcrumb\"},\"inLanguage\":\"ko-KR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/atmokpo.com\/w\/36321\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/atmokpo.com\/w\/36321\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"\ud648\",\"item\":\"https:\/\/atmokpo.com\/w\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/atmokpo.com\/w\/#website\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/atmokpo.com\/w\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"ko-KR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"contentUrl\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"wid
th\":400,\"height\":400,\"caption\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\"},\"image\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/bebubo4\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7\",\"name\":\"root\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"caption\":\"root\"},\"sameAs\":[\"http:\/\/atmokpo.com\/w\"],\"url\":\"https:\/\/atmokpo.com\/w\/author\/root\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Using PyTorch for GAN Deep Learning, Drawing Monet's Paintings with CycleGAN - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/atmokpo.com\/w\/36321\/","og_locale":"ko_KR","og_type":"article","og_title":"Using PyTorch for GAN Deep Learning, Drawing Monet's Paintings with CycleGAN - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","og_description":"The field of deep learning has made significant achievements thanks to advancements in data and computational power. Among them, GAN (Generative Adversarial Network) is one of the most innovative models. 
In this article, we will introduce how to train the CycleGAN model using PyTorch, one of the deep learning frameworks, to generate paintings in the &hellip; \ub354 \ubcf4\uae30 \"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN\"","og_url":"https:\/\/atmokpo.com\/w\/36321\/","og_site_name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","article_published_time":"2024-11-01T09:47:31+00:00","article_modified_time":"2024-11-01T11:00:24+00:00","author":"root","twitter_card":"summary_large_image","twitter_creator":"@bebubo4","twitter_site":"@bebubo4","twitter_misc":{"\uae00\uc4f4\uc774":"root","\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04":"7\ubd84"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/atmokpo.com\/w\/36321\/#article","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/36321\/"},"author":{"name":"root","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7"},"headline":"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN","datePublished":"2024-11-01T09:47:31+00:00","dateModified":"2024-11-01T11:00:24+00:00","mainEntityOfPage":{"@id":"https:\/\/atmokpo.com\/w\/36321\/"},"wordCount":563,"publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"articleSection":["GAN deep learning course"],"inLanguage":"ko-KR"},{"@type":"WebPage","@id":"https:\/\/atmokpo.com\/w\/36321\/","url":"https:\/\/atmokpo.com\/w\/36321\/","name":"Using PyTorch for GAN Deep Learning, Drawing Monet's Paintings with CycleGAN - 
\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/#website"},"datePublished":"2024-11-01T09:47:31+00:00","dateModified":"2024-11-01T11:00:24+00:00","breadcrumb":{"@id":"https:\/\/atmokpo.com\/w\/36321\/#breadcrumb"},"inLanguage":"ko-KR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/atmokpo.com\/w\/36321\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/atmokpo.com\/w\/36321\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"\ud648","item":"https:\/\/atmokpo.com\/w\/en\/"},{"@type":"ListItem","position":2,"name":"Using PyTorch for GAN Deep Learning, Drawing Monet&#8217;s Paintings with CycleGAN"}]},{"@type":"WebSite","@id":"https:\/\/atmokpo.com\/w\/#website","url":"https:\/\/atmokpo.com\/w\/","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","description":"","publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/atmokpo.com\/w\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"ko-KR"},{"@type":"Organization","@id":"https:\/\/atmokpo.com\/w\/#organization","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","url":"https:\/\/atmokpo.com\/w\/","logo":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/","url":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","contentUrl":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","width":400,"height":400,"caption":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8"},"image":{"@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/bebubo4"]},{"@type":"Person","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7","name":"root","image":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\
/","url":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","caption":"root"},"sameAs":["http:\/\/atmokpo.com\/w"],"url":"https:\/\/atmokpo.com\/w\/author\/root\/"}]}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/comments?post=36321"}],"version-history":[{"count":1,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36321\/revisions"}],"predecessor-version":[{"id":36322,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36321\/revisions\/36322"}],"wp:attachment":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/media?parent=36321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/categories?post=36321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/tags?post=36321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}