Description:
Proximal Policy Optimization (PPO) performs comparably to today's state-of-the-art approaches while being much simpler to implement and tune. Because of this ease of use and strong performance, PPO has become the default reinforcement learning algorithm at OpenAI.
PPO lets us train AI policies in challenging environments, such as the Roboschool scene shown above, in which an agent tries to reach a target (the pink ball on the field). The agent has to learn on its own to walk, run, and turn, to use its momentum to recover from light hits (Translator's note: in the video, small white cubes can be seen flying at the humanoid, making it stagger or even fall over), and to stand back up from the ground after being knocked down.
Policy gradient methods are effective: the breakthroughs achieved with deep neural networks in video game playing, 3D locomotion control, and the game of Go are all built on policy gradient methods. Getting good results with them is nevertheless difficult, because they are sensitive to the choice of step size: too small, and progress is hopelessly slow; too large, and the signal is overwhelmed by noise, so training may fail to converge or suffer catastrophic drops in performance. Policy gradient methods also tend to have poor sample efficiency, and even simple tasks can take an enormous number of timesteps to learn.
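To make the step-size issue concrete, here is a minimal sketch (not part of the original post or the script below) of a vanilla REINFORCE-style policy-gradient update; the function and the arrays it takes are purely illustrative placeholders.

import numpy as np

def policy_gradient_step(theta, log_prob_grads, returns, step_size):
    """One REINFORCE-style update: theta <- theta + step_size * mean_t[grad log pi(a_t|s_t) * R_t]."""
    # log_prob_grads: (T, dim) array of grad_theta log pi(a_t | s_t)
    # returns:        (T,) array of discounted returns from each timestep
    grad_estimate = (log_prob_grads * returns[:, None]).mean(axis=0)
    # All of the trouble described above lives in step_size: too small and
    # learning crawls, too large and a noisy gradient estimate can wreck the
    # policy in a single update.
    return theta + step_size * grad_estimate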
Researchers have sought to eliminate these flaws with approaches such as TRPO and ACER, which constrain or otherwise optimize the size of a policy update. These methods come with their own trade-offs: ACER needs extra code for off-policy corrections and a replay buffer, making it considerably more complicated than PPO, and on the Atari benchmark this added complexity buys only a marginal advantage; TRPO, while very useful for continuous control tasks, is not easily compatible with algorithms that share parameters between a policy and a value function or auxiliary losses, which are common in approaches to Atari and other domains where image input dominates.
We have used locomotion policies trained with PPO to build interactive agents: in the Roboschool environment we can set new target positions for the agent with the keyboard, and even though the input sequence differs from what the model saw during training, the agent still manages to reach the targets.
We have also used PPO to teach a complicated simulated robot to walk, such as the 'Atlas' model from Boston Dynamics shown below. This model has 30 joints, compared with 17 for the humanoid above. Other researchers have reported that simulated robots trained with PPO can even pull off surprising parkour-style moves while clearing obstacles.
Supervised learning, by contrast, is easy: we implement a cost function, run gradient descent on it, and with very little hyperparameter tuning we can be confident of excellent results. In reinforcement learning things are not that simple: RL algorithms have many hard-to-debug moving parts, and it takes real effort to get good results. To balance ease of implementation, sample complexity, and ease of tuning, while computing an update at every step that improves the policy yet keeps it relatively close to the previous one, we need a new optimization method, and that method is PPO.
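Before the script itself, here is a minimal NumPy sketch of the clipped surrogate objective from the PPO paper (https://arxiv.org/abs/1707.06347), the quantity that the script's epsilon parameter controls; the function and argument names are illustrative and do not come from the code below.

import numpy as np

def ppo_clipped_objective(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    """L^CLIP = mean_t[ min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t) ],
    with r_t = pi_new(a_t|s_t) / pi_old(a_t|s_t)."""
    ratio = np.exp(new_log_probs - old_log_probs)                 # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # The elementwise minimum removes the incentive to push the new policy far
    # from the old one, which is exactly the "small, safe update" property the
    # paragraph above asks for; training maximizes this objective.
    return np.mean(np.minimum(unclipped, clipped))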
import os
import shutil
import time
from time import sleep
from ppo.renderthread import RenderThread
from ppo.models import *
from ppo.trainer import Trainer
from agents import GymEnvironment
# ## Proximal Policy Optimization (PPO)
# Contains an implementation of PPO as described [here](https://arxiv.org/abs/1707.06347).
# Algorithm parameters
# batch-size=<n> How many experiences per gradient descent update step [default: 64].
batch_size = 128
# beta=<n> Strength of entropy regularization [default: 2.5e-3].
beta = 2.5e-3
# buffer-size=<n> How large the experience buffer should be before gradient descent [default: 2048].
buffer_size = batch_size * 32
# epsilon=<n> Acceptable threshold around ratio of old and new policy probabilities [default: 0.2].
epsilon = 0.2
# gamma=<n> Reward discount rate [default: 0.99].
gamma = 0.99
# hidden-units=<n> Number of units in hidden layer [default: 64].
hidden_units = 128
# lambd=<n> Lambda parameter for GAE [default: 0.95].
lambd = 0.95
# learning-rate=<rate> Model learning rate [default: 3e-4].
learning_rate = 4e-5
# max-steps=<n> Maximum number of steps to run environment [default: 1e6].
max_steps = 15e6
# normalize Activate state normalization for this many steps and freeze statistics afterwards.
normalize_steps = 0
# num-epoch=<n> Number of gradient descent steps per batch of experiences [default: 5].
num_epoch = 10
# num-layers=<n> Number of hidden layers between state/observation and outputs [default: 2].
num_layers = 1
# time-horizon=<n> How many steps to collect per agent before adding to buffer [default: 2048].
time_horizon = 2048
# General parameters
# keep-checkpoints=<n> How many model checkpoints to keep [default: 5].
keep_checkpoints = 5
# load Whether to load the model or randomly initialize [default: False].
load_model = True
# run-path=<path> The sub-directory name for model and summary statistics.
summary_path = './PPO_summary'
model_path = './models'
# summary-freq=<n> Frequency at which to save training statistics [default: 10000].
summary_freq = buffer_size * 5
# save-freq=<n> Frequency at which to save model [default: 50000].
save_freq = summary_freq
# train Whether to train model, or only run inference [default: False].
train_model = False
# render environment to display progress
render = True
# save recordings of episodes
record = True
os.environ["CUDA_VISIBLE_DEVICES"] = "-1" # GPU is not efficient here
env_name = 'RocketLander-v0'
env = GymEnvironment(env_name=env_name, log_path="./PPO_log", skip_frames=6)
env_render = GymEnvironment(env_name=env_name, log_path="./PPO_log_render", render=True, record=record)
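# Two environment instances are used: `env` collects training experience headless,
# while `env_render` is only driven by the RenderThread below to display (and
# optionally record) the current policy.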
fps = env_render.env.metadata.get('video.frames_per_second', 30)
print(str(env))
brain_name = env.external_brain_names[0]
tf.reset_default_graph()
ppo_model = create_agent_model(env, lr=learning_rate,
                               h_size=hidden_units, epsilon=epsilon,
                               beta=beta, max_step=max_steps,
                               normalize=normalize_steps, num_layers=num_layers)
is_continuous = env.brains[brain_name].action_space_type == "continuous"
use_observations = False
use_states = True
if not load_model:
    shutil.rmtree(summary_path, ignore_errors=True)
if not os.path.exists(model_path):
    os.makedirs(model_path)
if not os.path.exists(summary_path):
    os.makedirs(summary_path)
tf.set_random_seed(np.random.randint(1024))
init = tf.global_variables_initializer()
saver = tf.train.Saver(max_to_keep=keep_checkpoints)
with tf.Session() as sess:
    # Instantiate model parameters
    if load_model:
        print('Loading Model...')
        ckpt = tf.train.get_checkpoint_state(model_path)
        if ckpt is None:
            print('The model {0} could not be found. Make sure you specified the right --run-path'.format(model_path))
        saver.restore(sess, ckpt.model_checkpoint_path)
    else:
        sess.run(init)
    steps, last_reward = sess.run([ppo_model.global_step, ppo_model.last_reward])
    summary_writer = tf.summary.FileWriter(summary_path)
    info = env.reset()[brain_name]
    trainer = Trainer(ppo_model, sess, info, is_continuous, use_observations, use_states, train_model)
    trainer_monitor = Trainer(ppo_model, sess, info, is_continuous, use_observations, use_states, False)
    render_started = False
    while steps <= max_steps or not train_model:
        if env.global_done:
            info = env.reset()[brain_name]
            trainer.reset_buffers(info, total=True)
        # Decide and take an action
        if train_model:
            info = trainer.take_action(info, env, brain_name, steps, normalize_steps, stochastic=True)
            trainer.process_experiences(info, time_horizon, gamma, lambd)
        else:
            sleep(1)
        if len(trainer.training_buffer['actions']) > buffer_size and train_model:
            if render:
                renderthread.pause()
            print("Optimizing...")
            t = time.time()
            # Perform gradient descent with experience buffer
            trainer.update_model(batch_size, num_epoch)
            print("Optimization finished in {:.1f} seconds.".format(float(time.time() - t)))
            if render:
                renderthread.resume()
        if steps % summary_freq == 0 and steps != 0 and train_model:
            # Write training statistics to tensorboard.
            trainer.write_summary(summary_writer, steps)
        if steps % save_freq == 0 and steps != 0 and train_model:
            # Save Tensorflow model
            save_model(sess=sess, model_path=model_path, steps=steps, saver=saver)
        if train_model:
            steps += 1
            sess.run(ppo_model.increment_step)
            if len(trainer.stats['cumulative_reward']) > 0:
                mean_reward = np.mean(trainer.stats['cumulative_reward'])
                sess.run(ppo_model.update_reward, feed_dict={ppo_model.new_reward: mean_reward})
                last_reward = sess.run(ppo_model.last_reward)
        if not render_started and render:
            renderthread = RenderThread(sess=sess, trainer=trainer_monitor,
                                        environment=env_render, brain_name=brain_name,
                                        normalize=normalize_steps, fps=fps)
            renderthread.start()
            render_started = True
    # Final save Tensorflow model
    if steps != 0 and train_model:
        save_model(sess=sess, model_path=model_path, steps=steps, saver=saver)
env.close()
export_graph(model_path, env_name)
os.system("shutdown")