OpenAI Gym Intermediate Tutorial: A Deep Dive into Reinforcement Learning Algorithms


OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. It provides a range of environments, from classic control problems to Atari games. This post takes a closer look at reinforcement learning algorithms in OpenAI Gym, covering the Deep Q-Network (DQN) and the Deep Deterministic Policy Gradient (DDPG).

1. Installing OpenAI Gym

First, make sure OpenAI Gym is installed. The examples in this post use the classic Gym API, where reset() returns the state and step() returns four values, so pinning a pre-0.26 release is recommended:

pip install "gym==0.25.2"
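
To verify the installation, you can create an environment and print its observation and action spaces (a minimal sanity check, not specific to this tutorial):

import gym

env = gym.make('CartPole-v1')
print(gym.__version__)        # installed Gym version
print(env.observation_space)  # Box with 4 dimensions for CartPole
print(env.action_space)       # Discrete(2) for CartPole
env.close()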
2. A Brief Introduction to Reinforcement Learning

Reinforcement learning is a branch of machine learning whose goal is to learn an optimal action policy through an agent's interaction with an environment. In OpenAI Gym, the agent executes an action in the environment, observes the environment's feedback (the next state and a reward), and adjusts its policy based on that feedback, as the minimal loop below illustrates.
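
Before turning to any particular algorithm, here is the bare interaction loop that every Gym agent follows. This sketch assumes the classic pre-0.26 Gym API and uses a random policy in place of a learned one:

import gym

env = gym.make('CartPole-v1')
state = env.reset()                      # initial observation
done = False
total_reward = 0
while not done:
    action = env.action_space.sample()   # random policy: sample any valid action
    next_state, reward, done, info = env.step(action)
    total_reward += reward
    state = next_state
print(f"Return of a random policy: {total_reward}")
env.close()

Both DQN and DDPG below keep this exact skeleton and only change how the action is chosen and what happens after each step.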

3. Deep Q-Network (DQN)

DQN is a reinforcement learning algorithm for problems with discrete action spaces. Below is a simple DQN example using Gym's CartPole environment:

import gym
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Create the CartPole environment
env = gym.make('CartPole-v1')

# Define the deep Q-network model
model = Sequential()
model.add(Dense(24, input_shape=(env.observation_space.shape[0],), activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(env.action_space.n, activation='linear'))
model.compile(optimizer=Adam(), loss='mse')

# Define the DQN agent
class DQNAgent:
    def __init__(self, model, gamma=0.99, epsilon=1.0, epsilon_decay=0.995, epsilon_min=0.01):
        self.model = model
        self.gamma = gamma
        self.epsilon = epsilon
        self.epsilon_decay = epsilon_decay
        self.epsilon_min = epsilon_min

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return np.random.choice(env.action_space.n)
        q_values = self.model.predict(state, verbose=0)
        return np.argmax(q_values[0])

    def train(self, state, action, reward, next_state, done):
        # Q-learning target: reward, plus discounted max Q of the next state if not terminal
        target = reward
        if not done:
            target = reward + self.gamma * np.amax(self.model.predict(next_state, verbose=0)[0])
        target_f = self.model.predict(state, verbose=0)
        target_f[0][action] = target
        self.model.fit(state, target_f, epochs=1, verbose=0)
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

# Initialize the DQN agent
dqn_agent = DQNAgent(model)

# Train the DQN agent
for episode in range(1000):
    state = env.reset()
    state = np.reshape(state, [1, env.observation_space.shape[0]])
    for time in range(500):
        # env.render()
        action = dqn_agent.act(state)
        next_state, reward, done, _ = env.step(action)
        reward = reward if not done else -10
        next_state = np.reshape(next_state, [1, env.observation_space.shape[0]])
        dqn_agent.train(state, action, reward, next_state, done)
        state = next_state
        if done:
            print(f"Episode: {episode 1}, Score: {time 1}, Epsilon: {dqn_agent.epsilon}")
            break

env.close()

In this example, we use Keras to build a simple deep Q-network and implement a DQN agent. The agent selects actions with an epsilon-greedy policy and updates the model toward the Q-learning target (reward plus the discounted maximum Q-value of the next state). Note that, for clarity, the agent trains on each transition as it arrives; the full DQN algorithm also relies on experience replay, sketched next.
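
As a hedged extension (not part of the code above), here is one way a minimal experience replay buffer could look; the ReplayBuffer class and its method names are illustrative rather than a fixed API:

import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are dropped automatically

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the correlation between consecutive steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

With a buffer in place, the training loop would call add() after every step and, once the buffer holds more than a batch worth of transitions, train on a random minibatch instead of only the latest transition.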

4. Deep Deterministic Policy Gradient (DDPG)

DDPG is a reinforcement learning algorithm for problems with continuous action spaces. Below is a simple DDPG example using Gym's Pendulum environment:

import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Lambda, concatenate
from tensorflow.keras.optimizers import Adam

# Create the Pendulum environment ('Pendulum-v0' was renamed to 'Pendulum-v1' in later Gym releases)
env = gym.make('Pendulum-v1')

# Define the deep deterministic policy gradient (DDPG) model
class ActorCritic:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.action_low = env.action_space.low
        self.action_high = env.action_space.high
        self.actor = self.build_actor()
        self.critic = self.build_critic()
        self.actor_optimizer = Adam(learning_rate=0.001)  # optimizer for the manual actor update in train()

    def build_actor(self):
        state_input = Input(shape=(self.state_size,))
        h = Dense(24, activation='relu')(state_input)
        h = Dense(48, activation='relu')(h)
        h = Dense(24, activation='relu')(h)
        output = Dense(self.action_size, activation='tanh')(h)
        # Rescale tanh output from [-1, 1] to [action_low, action_high]
        output = Lambda(lambda x: x * (self.action_high - self.action_low) / 2 + (self.action_high + self.action_low) / 2)(output)
        model = Model(inputs=state_input, outputs=output)
        return model

    def build_critic(self):
        state_input = Input(shape=(self.state_size,))
        action_input = Input(shape=(self.action_size,))
        state_h = Dense(24, activation='relu')(state_input)
        state_h = Dense(48)(state_h)
        action_h = Dense(48)(action_input)
        h = concatenate([state_h, action_h])
        h = Dense(24, activation='relu')(h)
        output = Dense(1, activation='linear')(h)
        model = Model(inputs=[state_input, action_input], outputs=output)
        model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')  # compiled so train_on_batch can be used
        return model

    def act(self, state):
        return self.actor.predict(state, verbose=0)

    def train(self, states, actions, rewards, next_states, dones):
        # Critic update: regress Q(s, a) toward the one-step TD target
        target_actions = self.actor.predict(next_states, verbose=0)
        target_q_values = self.critic.predict([next_states, target_actions], verbose=0)
        targets = rewards + 0.99 * target_q_values * (1 - dones)
        self.critic.train_on_batch([states, actions], targets)
        # Actor update: maximize the critic's value of the actor's own actions,
        # i.e. minimize the negative mean Q-value, via a gradient tape
        states_tensor = tf.convert_to_tensor(states, dtype=tf.float32)
        with tf.GradientTape() as tape:
            actor_loss = -tf.reduce_mean(self.critic([states_tensor, self.actor(states_tensor)]))
        grads = tape.gradient(actor_loss, self.actor.trainable_variables)
        self.actor_optimizer.apply_gradients(zip(grads, self.actor.trainable_variables))

# Initialize the DDPG agent
ddpg_agent = ActorCritic(env.observation_space.shape[0], env.action_space.shape[0])

# Train the DDPG agent
for episode in range(1000):
    state = env.reset()
    state = np.reshape(state, [1, env.observation_space.shape[0]])
    total_reward = 0
    for time in range(500):
        # env.render()
        action = ddpg_agent.act(state)
        next_state, reward, done, _ = env.step(action[0])  # actor output has shape (1, action_size)
        next_state = np.reshape(next_state, [1, env.observation_space.shape[0]])
        ddpg_agent.train(state, action, reward, next_state, done)
        state = next_state
        total_reward += reward
        if done:
            print(f"Episode: {episode 1}, Total Reward: {total_reward}")
            break

env.close()

In this example, we define an actor and a critic and use Keras to build a simple DDPG model. The agent selects actions with the actor network and optimizes the policy by training the actor and the critic together. For readability, the example trains on single transitions and leaves out two ingredients of the full DDPG algorithm, exploration noise and target networks; both are sketched below.
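
Two standard DDPG components are deliberately missing from the example above. The sketch below shows one common way to add them; OUNoise and soft_update are illustrative names, and the parameter values are typical defaults rather than tuned choices:

import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise for continuous actions."""
    def __init__(self, action_size, mu=0.0, theta=0.15, sigma=0.2):
        self.mu = mu * np.ones(action_size)
        self.theta = theta
        self.sigma = sigma
        self.state = self.mu.copy()

    def reset(self):
        self.state = self.mu.copy()

    def sample(self):
        # Mean-reverting drift plus Gaussian diffusion
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(*self.state.shape)
        self.state = self.state + dx
        return self.state

def soft_update(target_model, source_model, tau=0.005):
    """Polyak averaging: let a target network slowly track the online network."""
    new_weights = [tau * w + (1.0 - tau) * tw
                   for w, tw in zip(source_model.get_weights(), target_model.get_weights())]
    target_model.set_weights(new_weights)

During training, the executed action would be the actor's output plus noise.sample(), clipped to the action bounds, and soft_update would be applied to target copies of the actor and critic after each learning step.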

5. Summary

This post walked through examples of applying the Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) algorithms in OpenAI Gym. These algorithms provide a foundation for reinforcement learning problems with discrete and continuous action spaces, respectively. In practice, you will need to adapt the network architecture and hyperparameters to the problem at hand and train extensively to reach good performance. I hope this post helps you gain a deeper understanding of reinforcement learning algorithms in OpenAI Gym.
