{"id":36495,"date":"2024-11-01T09:48:55","date_gmt":"2024-11-01T09:48:55","guid":{"rendered":"http:\/\/atmokpo.com\/w\/?p=36495"},"modified":"2024-11-01T11:52:58","modified_gmt":"2024-11-01T11:52:58","slug":"deep-learning-pytorch-course-markov-reward-process","status":"publish","type":"post","link":"https:\/\/atmokpo.com\/w\/36495\/","title":{"rendered":"Deep Learning PyTorch Course, Markov Reward Process"},"content":{"rendered":"<p><body><\/p>\n<article>\n<section>\n<p>\n                This course covers the basics of deep learning and introduces the Markov Decision Process (MDP),<br \/>\n                explaining how to implement it using PyTorch. Note that the Markov Reward Process (MRP) mentioned in the title is simply an MDP without actions, i.e. the tuple (S, P, R, \u03b3); here we treat the more general MDP.<br \/>\n                The MDP is a crucial concept in reinforcement learning and serves as the standard<br \/>\n                mathematical model for finding optimal actions to achieve goals.\n            <\/p>\n<h2>1. What is a Markov Decision Process?<\/h2>\n<p>\n                A Markov Decision Process (MDP) is a mathematical framework that defines the elements an agent (the acting entity)<br \/>\n                must consider in order to make optimal decisions in a given environment. An MDP consists of the following five<br \/>\n                key elements:\n            <\/p>\n<ul>\n<li><strong>State Set (S)<\/strong>: The set of all possible states of the environment.<\/li>\n<li><strong>Action Set (A)<\/strong>: The set of all actions the agent can take in each state.<\/li>\n<li><strong>Transition Probability (P)<\/strong>: The probability of moving to the next state after taking a specific action in the current state.<\/li>\n<li><strong>Reward Function (R)<\/strong>: The immediate reward obtained by taking a specific action in a specific state.<\/li>\n<li><strong>Discount Factor (\u03b3)<\/strong>: A value between 0 and 1 that determines how much future rewards count relative to immediate rewards.<\/li>\n<\/ul>\n<\/section>\n<section>\n<h2>2. 
Mathematical Definition of MDP<\/h2>\n<p>\n                An MDP is generally defined as a tuple (S, A, P, R, \u03b3), and agents learn policies (rules for selecting better actions) based on this information. The goal of an MDP is to find the optimal policy that maximizes long-term rewards.\n            <\/p>\n<h3>Relationship Between States and Actions<\/h3>\n<p>\n                When taking action a \u2208 A in state s \u2208 S, the probability of transitioning to the next state s&#8217; \u2208 S is represented as P(s&#8217;|s, a). The reward function is expressed as R(s, a), which signifies the immediate reward received by the agent for taking action a in state s.\n            <\/p>\n<h3>Policy \u03c0<\/h3>\n<p>\n                The policy \u03c0 defines the probability of taking action a in state s. This allows the agent to choose the optimal action for a given state.\n            <\/p>\n<\/section>\n<section>\n<h2>3. Implementing MDP with PyTorch<\/h2>\n<p>\n                Now, let&#8217;s implement the Markov Decision Process using PyTorch. The code below defines the MDP and shows the process<br \/>\n                in which the agent learns the optimal policy. 
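To make the tuple (S, A, P, R, \u03b3) and the policy \u03c0 from Section 2 concrete, here is a minimal NumPy sketch of a two-state, two-action MDP (all numbers are illustrative, not taken from this course). It evaluates a fixed stochastic policy by solving the Bellman expectation equation V = R_\u03c0 + \u03b3 P_\u03c0 V in closed form, which is feasible for such a tiny state space:

```python
import numpy as np

# Illustrative MDP: 2 states, 2 actions.
# P[s, a, s'] = transition probability, R[s, a] = immediate reward.
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.0, 1.0]],   # transitions from state 1 under actions 0, 1
])
R = np.array([
    [1.0, 0.0],                 # R(s=0, a=0), R(s=0, a=1)
    [0.0, 2.0],                 # R(s=1, a=0), R(s=1, a=1)
])
gamma = 0.9                     # discount factor

# A stochastic policy pi[s, a]: probability of taking action a in state s.
pi = np.array([
    [0.5, 0.5],
    [0.0, 1.0],
])

# Policy-averaged dynamics:
# P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s'],  R_pi[s] = sum_a pi[s, a] * R[s, a]
P_pi = np.einsum('sa,sat->st', pi, P)
R_pi = np.einsum('sa,sa->s', pi, R)

# Bellman expectation equation V = R_pi + gamma * P_pi @ V, solved directly:
V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
print(V)  # state values under policy pi
```

For large state spaces this linear solve becomes impractical, which is why the grid-world example below learns Q-values iteratively instead.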
In this example, we simulate the agent&#8217;s journey to reach the goal<br \/>\n                point in a simple grid environment.\n            <\/p>\n<h3>Installing Required Libraries<\/h3>\n<pre>\n                <code>\n                pip install torch numpy matplotlib\n                <\/code>\n            <\/pre>\n<h3>Code Example<\/h3>\n<pre>\n                <code>\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport matplotlib.pyplot as plt\n\n# Environment Definition\nclass GridWorld:\n    def __init__(self, grid_size):\n        self.grid_size = grid_size\n        self.state = (0, 0)  # Initial state\n        self.goal = (grid_size - 1, grid_size - 1)  # Goal state\n        self.actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # Right, Left, Down, Up\n\n    def step(self, action):\n        next_state = (self.state[0] + action[0], self.state[1] + action[1])\n        # If exceeding boundaries, state remains unchanged\n        if 0 <= next_state[0] < self.grid_size and 0 <= next_state[1] < self.grid_size:\n            self.state = next_state\n        \n        # Reward and completion condition\n        if self.state == self.goal:\n            return self.state, 1, True  # Goal reached\n        return self.state, 0, False\n\n    def reset(self):\n        self.state = (0, 0)\n        return self.state\n\n# Q-Network Definition\nclass QNetwork(nn.Module):\n    def __init__(self, input_dim, output_dim):\n        super(QNetwork, self).__init__()\n        self.fc1 = nn.Linear(input_dim, 24)  # First hidden layer\n        self.fc2 = nn.Linear(24, 24)  # Second hidden layer\n        self.fc3 = nn.Linear(24, output_dim)  # Output layer\n\n    def forward(self, x):\n        x = nn.functional.relu(self.fc1(x))\n        x = nn.functional.relu(self.fc2(x))\n        return self.fc3(x)\n\n# Q-learning Learner\nclass QLearningAgent:\n    def __init__(self, state_space, action_space):\n        self.q_network = QNetwork(state_space, 
action_space)\n        self.optimizer = optim.Adam(self.q_network.parameters(), lr=0.001)\n        self.criterion = nn.MSELoss()\n        self.gamma = 0.99  # Discount factor\n        self.epsilon = 1.0  # Exploration rate\n        self.epsilon_min = 0.01\n        self.epsilon_decay = 0.995\n\n    def choose_action(self, state):\n        if np.random.rand() <= self.epsilon:\n            return np.random.randint(0, 4)  # Random (exploratory) action\n        q_values = self.q_network(torch.FloatTensor(state)).detach().numpy()\n        return np.argmax(q_values)  # Greedy action\n\n    def train(self, state, action, reward, next_state, done):\n        target = reward\n        if not done:\n            target = reward + self.gamma * np.max(self.q_network(torch.FloatTensor(next_state)).detach().numpy())\n\n        # Target vector: only the taken action's Q-value is moved toward the target\n        target_f = self.q_network(torch.FloatTensor(state)).detach().numpy()\n        target_f[action] = target\n\n        # Gradient step\n        self.optimizer.zero_grad()\n        output = self.q_network(torch.FloatTensor(state))\n        loss = self.criterion(output, torch.FloatTensor(target_f))\n        loss.backward()\n        self.optimizer.step()\n\n        # Decay exploration rate (note: once per training step, so it shrinks quickly)\n        if self.epsilon > self.epsilon_min:\n            self.epsilon *= self.epsilon_decay\n\n# Main Loop\ndef main():\n    env = GridWorld(grid_size=5)\n    agent = QLearningAgent(state_space=2, action_space=4)\n    episodes = 1000\n    rewards = []\n    steps_per_episode = []\n\n    for episode in range(episodes):\n        state = env.reset()\n        done = False\n        total_reward = 0\n        steps = 0\n\n        while not done:\n            action = agent.choose_action(state)\n            next_state, reward, done = env.step(env.actions[action])\n            agent.train(state, action, reward, next_state, done)\n            state = next_state\n            total_reward += reward\n            steps += 1\n            if steps >= 200:  # Safety cap: an untrained greedy policy can cycle forever\n                break\n\n        rewards.append(total_reward)\n        steps_per_episode.append(steps)\n\n    # Visualization of results\n    plt.plot(steps_per_episode)\n    plt.xlabel('Episode')\n    plt.ylabel('Steps to goal')\n    plt.title('Steps per Episode during Training')\n    plt.show()\n\nif __name__ == \"__main__\":\n    main()\n                <\/code>\n            <\/pre>\n<\/section>\n<section>\n<h2>4. Code Explanation<\/h2>\n<p>\n                The above code implements an MDP in a 5&#215;5 grid environment.<br \/>\n                The <strong>GridWorld<\/strong> class defines the grid environment in which the agent can move. The agent moves<br \/>\n                according to the action set and receives a reward of 1 when it reaches the goal point.\n            <\/p>\n<p>\n                The <strong>QNetwork<\/strong> class defines the neural network used in Q-learning.<br \/>\n                It takes the state (the agent&#8217;s grid coordinates) as input and returns a Q-value for each of the four actions.<br \/>\n                The <strong>QLearningAgent<\/strong> class performs the learning process in reinforcement learning:<br \/>\n                it chooses actions with an \u03b5-greedy policy and updates the Q-values toward the one-step bootstrapped target.\n            <\/p>\n<p>\n                The <strong>main<\/strong> function initializes the environment and contains the main loop executing the episodes.<br \/>\n                In each episode, the agent selects an action for the current state, observes the next state and reward from the environment,<br \/>\n                and trains on that transition. After training, the per-episode statistics can be plotted to assess the agent\u2019s performance.\n            <\/p>\n<\/section>\n<section>\n<h2>5. Analysis of Learning Results<\/h2>\n<p>\n                Observing the learning process, we find that the agent explores the environment and gradually learns to navigate toward the goal.<br \/>\n                Because this environment pays a reward of 1 only upon reaching the goal, the total reward per episode saturates at 1 almost immediately,<br \/>\n                so the steps-per-episode curve is the more informative diagnostic: it should shrink toward the optimal path length<br \/>\n                (8 steps in the 5&#215;5 grid) as training progresses.\n            <\/p>\n<\/section>\n<section>\n<h2>6. 
Conclusion and Future Directions<\/h2>\n<p>\n                In this course, we have explained the basic concepts of deep learning, PyTorch,<br \/>\n                and the Markov Decision Process. Through practical implementation of MDP using PyTorch,<br \/>\n                participants could gain a deeper understanding of the related concepts.<br \/>\n                Reinforcement learning is an extensive field with various algorithms and applicable environments.<br \/>\n                Future courses will cover more complex environments and diverse policy learning algorithms (e.g., DQN, Policy Gradients).\n            <\/p>\n<\/section>\n<\/article>\n<p><\/body><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This course will cover the basics of deep learning and introduce the Markov Decision Process (MDP), explaining how to implement it using PyTorch. MDP is a crucial concept in the field of reinforcement learning and serves as an important mathematical model for finding optimal actions to achieve goals. 1. What is a Markov Decision Process? 
&hellip; <a href=\"https:\/\/atmokpo.com\/w\/36495\/\" class=\"more-link\">\ub354 \ubcf4\uae30<span class=\"screen-reader-text\"> &#8220;Deep Learning PyTorch Course, Markov Reward Process&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[149],"tags":[],"class_list":["post-36495","post","type-post","status-publish","format-standard","hentry","category-pytorch-study"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Deep Learning PyTorch Course, Markov Reward Process - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/atmokpo.com\/w\/36495\/\" \/>\n<meta property=\"og:locale\" content=\"ko_KR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Learning PyTorch Course, Markov Reward Process - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"og:description\" content=\"This course will cover the basics of deep learning and introduce the Markov Decision Process (MDP), explaining how to implement it using PyTorch. MDP is a crucial concept in the field of reinforcement learning and serves as an important mathematical model for finding optimal actions to achieve goals. 1. What is a Markov Decision Process? 
&hellip; \ub354 \ubcf4\uae30 &quot;Deep Learning PyTorch Course, Markov Reward Process&quot;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/atmokpo.com\/w\/36495\/\" \/>\n<meta property=\"og:site_name\" content=\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\" \/>\n<meta property=\"article:published_time\" content=\"2024-11-01T09:48:55+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-11-01T11:52:58+00:00\" \/>\n<meta name=\"author\" content=\"root\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:site\" content=\"@bebubo4\" \/>\n<meta name=\"twitter:label1\" content=\"\uae00\uc4f4\uc774\" \/>\n\t<meta name=\"twitter:data1\" content=\"root\" \/>\n\t<meta name=\"twitter:label2\" content=\"\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04\" \/>\n\t<meta name=\"twitter:data2\" content=\"2\ubd84\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/\"},\"author\":{\"name\":\"root\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7\"},\"headline\":\"Deep Learning PyTorch Course, Markov Reward Process\",\"datePublished\":\"2024-11-01T09:48:55+00:00\",\"dateModified\":\"2024-11-01T11:52:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/\"},\"wordCount\":630,\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"articleSection\":[\"PyTorch Study\"],\"inLanguage\":\"ko-KR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/\",\"url\":\"https:\/\/atmokpo.com\/w\/36495\/\",\"name\":\"Deep Learning PyTorch Course, Markov Reward Process - 
\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"isPartOf\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#website\"},\"datePublished\":\"2024-11-01T09:48:55+00:00\",\"dateModified\":\"2024-11-01T11:52:58+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/#breadcrumb\"},\"inLanguage\":\"ko-KR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/atmokpo.com\/w\/36495\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/atmokpo.com\/w\/36495\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"\ud648\",\"item\":\"https:\/\/atmokpo.com\/w\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Deep Learning PyTorch Course, Markov Reward Process\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/atmokpo.com\/w\/#website\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/atmokpo.com\/w\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"ko-KR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/atmokpo.com\/w\/#organization\",\"name\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\",\"url\":\"https:\/\/atmokpo.com\/w\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"contentUrl\":\"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png\",\"width\":400,\"height\":400,\"caption\":\"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8\"},\"image\":{\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/bebubo4\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\
/91b6b3b138fbba0efb4ae64b1abd81d7\",\"name\":\"root\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"ko-KR\",\"@id\":\"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g\",\"caption\":\"root\"},\"sameAs\":[\"http:\/\/atmokpo.com\/w\"],\"url\":\"https:\/\/atmokpo.com\/w\/author\/root\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Deep Learning PyTorch Course, Markov Reward Process - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/atmokpo.com\/w\/36495\/","og_locale":"ko_KR","og_type":"article","og_title":"Deep Learning PyTorch Course, Markov Reward Process - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","og_description":"This course will cover the basics of deep learning and introduce the Markov Decision Process (MDP), explaining how to implement it using PyTorch. MDP is a crucial concept in the field of reinforcement learning and serves as an important mathematical model for finding optimal actions to achieve goals. 1. What is a Markov Decision Process? 
&hellip; \ub354 \ubcf4\uae30 \"Deep Learning PyTorch Course, Markov Reward Process\"","og_url":"https:\/\/atmokpo.com\/w\/36495\/","og_site_name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","article_published_time":"2024-11-01T09:48:55+00:00","article_modified_time":"2024-11-01T11:52:58+00:00","author":"root","twitter_card":"summary_large_image","twitter_creator":"@bebubo4","twitter_site":"@bebubo4","twitter_misc":{"\uae00\uc4f4\uc774":"root","\uc608\uc0c1 \ub418\ub294 \ud310\ub3c5 \uc2dc\uac04":"2\ubd84"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/atmokpo.com\/w\/36495\/#article","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/36495\/"},"author":{"name":"root","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7"},"headline":"Deep Learning PyTorch Course, Markov Reward Process","datePublished":"2024-11-01T09:48:55+00:00","dateModified":"2024-11-01T11:52:58+00:00","mainEntityOfPage":{"@id":"https:\/\/atmokpo.com\/w\/36495\/"},"wordCount":630,"publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"articleSection":["PyTorch Study"],"inLanguage":"ko-KR"},{"@type":"WebPage","@id":"https:\/\/atmokpo.com\/w\/36495\/","url":"https:\/\/atmokpo.com\/w\/36495\/","name":"Deep Learning PyTorch Course, Markov Reward Process - \ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","isPartOf":{"@id":"https:\/\/atmokpo.com\/w\/#website"},"datePublished":"2024-11-01T09:48:55+00:00","dateModified":"2024-11-01T11:52:58+00:00","breadcrumb":{"@id":"https:\/\/atmokpo.com\/w\/36495\/#breadcrumb"},"inLanguage":"ko-KR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/atmokpo.com\/w\/36495\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/atmokpo.com\/w\/36495\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"\ud648","item":"https:\/\/atmokpo.com\/w\/en\/"},{"@type":"ListItem","position":2,"name":"Deep Learning PyTorch Course, Markov Reward 
Process"}]},{"@type":"WebSite","@id":"https:\/\/atmokpo.com\/w\/#website","url":"https:\/\/atmokpo.com\/w\/","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","description":"","publisher":{"@id":"https:\/\/atmokpo.com\/w\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/atmokpo.com\/w\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"ko-KR"},{"@type":"Organization","@id":"https:\/\/atmokpo.com\/w\/#organization","name":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8","url":"https:\/\/atmokpo.com\/w\/","logo":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/","url":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","contentUrl":"https:\/\/atmokpo.com\/w\/wp-content\/uploads\/2024\/11\/logo.png","width":400,"height":400,"caption":"\ub77c\uc774\ube0c\uc2a4\ub9c8\ud2b8"},"image":{"@id":"https:\/\/atmokpo.com\/w\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/bebubo4"]},{"@type":"Person","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/91b6b3b138fbba0efb4ae64b1abd81d7","name":"root","image":{"@type":"ImageObject","inLanguage":"ko-KR","@id":"https:\/\/atmokpo.com\/w\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/708197b41fc6435a7ce22d951b25d4a47e9e904270cb1f04682d4f025066f80c?s=96&d=mm&r=g","caption":"root"},"sameAs":["http:\/\/atmokpo.com\/w"],"url":"https:\/\/atmokpo.com\/w\/author\/root\/"}]}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36495","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts"}],"about":[{"h
ref":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/comments?post=36495"}],"version-history":[{"count":1,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36495\/revisions"}],"predecessor-version":[{"id":36496,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/posts\/36495\/revisions\/36496"}],"wp:attachment":[{"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/media?parent=36495"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/categories?post=36495"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/atmokpo.com\/w\/wp-json\/wp\/v2\/tags?post=36495"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}