OpenAI has announced that it will adopt an algorithm called PPO (Proximal Policy Optimization) as the organization's standard algorithm (https://blog.openai.com/openai-baselines-ppo/). The code has also been released, so I'll give it a try. It is included in their reinforcement learning package called baselines.
I tried it on OS X 10.11.6, Python 3.5.1, and TensorFlow 1.2.1.
The installation procedure is described later; let's try running it first, using the sample run_atari.py from here.
python run_atari.py
It started running, but the Atari environment looks like it will take a while on my MacBook Pro, so as usual let's go with something light: the inverted pendulum. We'll use Pendulum-v0 from OpenAI Gym. You might ask just how much I like inverted pendulum swing-up, but it's easy and yet gives just the right sense of accomplishment...
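For reference, here is a minimal check of the environment (nothing repo-specific, just the standard Gym API); note that both the observation and the action are continuous:

import gym

# Pendulum-v0: swing up and balance an inverted pendulum
env = gym.make("Pendulum-v0")
print(env.observation_space)   # Box(3,): cos(theta), sin(theta), angular velocity
print(env.action_space)        # Box(1,): continuous torque

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())   # one random step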
That's fine, but the part that is the most fun from a user's point of view, such as saving the learning results and experimenting with trained agents, isn't included. Well, let's write it ourselves. Since handling the parameters individually is troublesome, I took the rough approach of saving and restoring the entire TensorFlow session. The code is here: https://github.com/ashitani/PPO_pendulmn.
python run_pendulmn.py train
python run_pendulmn.py replay
These train the model and replay the trained model, respectively.
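The save/restore part amounts to something like the following sketch. This is only an illustration of the approach, with hypothetical function names and paths; the actual code in the repo is organized differently.

import tensorflow as tf

CHECKPOINT = "model/pendulum.ckpt"   # hypothetical path

def save_session(sess, path=CHECKPOINT):
    # dump every variable in the current session to a checkpoint
    tf.train.Saver().save(sess, path)

def restore_session(sess, path=CHECKPOINT):
    # restore all variables; the graph must already be built the same way
    tf.train.Saver().restore(sess, path)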
The learning process is written to monitor.json, so let's plot the transition of reward. The horizontal axis is the number of iterations, and the vertical axis is reward.
python plot_log.py
It spits out a PNG.
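plot_log.py is essentially the following sketch. I'm assuming here that each line of monitor.json is a JSON record whose "r" field is the episode reward (header lines and anything that doesn't parse are skipped); adjust if your monitor format differs.

import json
import matplotlib
matplotlib.use("Agg")                 # render to file, no display needed
import matplotlib.pyplot as plt

rewards = []
with open("monitor.json") as f:
    for line in f:
        line = line.lstrip("#").strip()   # the first line may be a commented header
        if not line:
            continue
        try:
            record = json.loads(line)
        except ValueError:
            continue
        if "r" in record:
            rewards.append(record["r"])

plt.plot(rewards)
plt.xlabel("iteration")
plt.ylabel("reward")
plt.savefig("reward.png")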
Hmm. As usual, the behavior is not stable after it achieves its best score. Well, that's about what you can expect from reinforcement learning applied to an unstable system, but it would be nice to be able to keep the agents that ranked highest in reward.
Let's tweak the hyperparameters to see if we can do even a little better. learn() takes a schedule argument, and this time I attenuated the learning rate linearly with schedule="linear", but a purely linear schedule has no period where the rate stays small to let things settle down after it has fully decayed. So I used the custom decay below, which waits at a very small value for a while after the multiplier becomes tiny. This part lives in a file called pposgd_simple.py, so I modified it there.
# decay the learning-rate multiplier to zero over the first half of training,
# then clamp it to a small constant instead of letting it hit zero
cur_lrmult = max(1.0 - float(timesteps_so_far) / (max_timesteps / 2), 0)
if cur_lrmult < 1e-5:
    cur_lrmult = 1e-5
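To see what this actually changes, here is a small standalone comparison with the built-in linear schedule (as far as I can tell, schedule="linear" in pposgd_simple.py decays the multiplier to zero over max_timesteps; the numbers below are just for illustration):

max_timesteps = 1000000

def lrmult_linear(t):
    # the stock "linear" schedule: decay to zero over max_timesteps
    return max(1.0 - float(t) / max_timesteps, 0)

def lrmult_modified(t):
    # decay over the first half, then hold at a small constant
    return max(max(1.0 - float(t) / (max_timesteps / 2), 0), 1e-5)

for t in range(0, max_timesteps + 1, 250000):
    print(t, lrmult_linear(t), lrmult_modified(t))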
How about now?
Yes, it's a little better. Maybe I just need to run it for longer. Also, Pendulum-v0 seems to start from a random initial state, and I think that is affecting the results too.
Let's replay the learning results.
It looks good. Unlike in my previous entry, this time the agent outputs a continuous quantity, so the behavior after it comes to rest is especially beautiful.
I can only be thankful for the code release, since I got to run the PPO paper without reading a single line of it. That said, if it is going to be elevated to the standard algorithm, I think it should be made a bit more accessible. Well, we'll see how it goes from here.
OpenAI Gym itself is a great piece of work in the sense that it unified the interface on the environment side. Couldn't a unified interface be created for the agent side in the same way? Roughly speaking, even DQN and PPO within baselines are not unified (well, I do understand that generalizing is hard).
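Just to make the idea concrete, this is a purely hypothetical sketch of what a common agent-side interface could look like; nothing like this exists in baselines, it's only my illustration of the Gym-style analogy:

class Agent(object):
    """Hypothetical common agent interface, the counterpart of gym.Env."""

    def act(self, observation):
        # return an action for the given observation
        raise NotImplementedError

    def observe(self, observation, reward, done):
        # receive feedback from the environment (used during training)
        raise NotImplementedError

    def save(self, path):
        raise NotImplementedError

    def load(self, path):
        raise NotImplementedError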
I haven't benchmarked against DQN, but I suspect the difference won't show unless I try a harder problem.
By the way, the following is the installation procedure as of today (July 22, 2017). Eventually a single pip install should be enough.
First, you need TensorFlow 1.0.0 or higher. To install TensorFlow, see the documentation.
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py3-none-any.whl
Then install the latest version of baselines from git. It looks like pip alone should work, but as of today there seems to be an inconsistency in the pip package.
git clone https://github.com/openai/baselines.git
Add an __init__.py with the following contents to baselines/baselines/pposgd/.
from baselines.pposgd import *
Now install it.
cd baselines
python setup.py install
Install any other dependencies.
brew install openmpi
pip install mpi4py
pip install atari_py
With this, at least the sample run_atari.py runs.