This article was first published on: Walker AI
Policy Optimization is a family of reinforcement learning algorithms whose basic idea differs from that of value-based algorithms; for this reason, many textbooks divide model-free RL into two categories, policy optimization and value-based methods. This blog series follows OpenAI's hands-on tutorial Spinning Up [1], which is an excellent resource for getting started with policy optimization, especially for beginners. The Policy Gradient (PG) algorithm is the core concept of policy optimization. In this post, we start from the simplest PG derivation and demystify policy optimization step by step.
1. Intuitive understanding
An intuitive explanation of the policy gradient can be summed up in one sentence: "if an action increases the final return, increase the probability of that action occurring; otherwise, decrease it." This sentence conveys two points:
- We focus on the effect of the action on the return, not on the state or anything else.
- We adjust the probability of an action occurring rather than assigning a score to the action, which distinguishes this from value-based algorithms.
2. Policy gradient derivation
In this section, we derive the basic policy gradient formula step by step. This section is very important: if you understand the derivation, you will have grasped the core idea of the policy gradient. So be patient with this section, ideally until you can reproduce the derivation yourself.
- Maximizing the return function
We use a parameterized neural network to represent our policy $\pi_\theta$, so our goal can be expressed as adjusting $\theta$ to maximize the expected return, written as:
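A sketch following the Spinning Up convention, with $R(\tau)$ denoting the return of a trajectory $\tau$:

$$
J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \right]
$$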
In formula (1), $\tau$ denotes a complete trajectory from start to finish. In general, for a maximization problem, we can use gradient ascent to find the maximum.
To improve the parameters step by step, we need $\nabla_\theta J(\pi_\theta)$, and then apply gradient ascent; the core idea really is that simple.
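Concretely, a gradient ascent update takes the familiar form (a sketch; $\alpha$ denotes the learning rate, a symbol assumed here rather than defined above):

$$
\theta_{k+1} = \theta_k + \alpha \left. \nabla_\theta J(\pi_\theta) \right|_{\theta_k}
$$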
- Policy gradient
The key is to find the gradient of the return function $J(\pi_\theta)$ with respect to $\theta$; this is the policy gradient. Algorithms that solve RL problems by optimizing along the policy gradient are called policy gradient algorithms; methods such as TRPO belong to this family. Deriving $\nabla_\theta J(\pi_\theta)$ is the core of this post.
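As a sketch of how this derivation typically runs in Spinning Up (writing $P(\tau \mid \theta)$ for the probability of trajectory $\tau$ under $\pi_\theta$):

$$
\begin{aligned}
\nabla_\theta J(\pi_\theta)
&= \nabla_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \right] \\
&= \nabla_\theta \int_\tau P(\tau \mid \theta) \, R(\tau) \\
&= \int_\tau \nabla_\theta P(\tau \mid \theta) \, R(\tau) \\
&= \int_\tau P(\tau \mid \theta) \, \nabla_\theta \log P(\tau \mid \theta) \, R(\tau) \\
&= \mathbb{E}_{\tau \sim \pi_\theta}\left[ \nabla_\theta \log P(\tau \mid \theta) \, R(\tau) \right]
\end{aligned}
$$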
The derivation above uses the log-derivative trick: the derivative of $\log x$ with respect to $x$ is $\frac{1}{x}$. Therefore, we can write:
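A sketch of the identity being invoked:

$$
\nabla_\theta P(\tau \mid \theta) = P(\tau \mid \theta) \, \nabla_\theta \log P(\tau \mid \theta)
$$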
This is how we get from formula (5) to formula (6). Next we expand formula (7) further, focusing on $\nabla_\theta \log P(\tau \mid \theta)$. Let's look at $P(\tau \mid \theta)$:
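A sketch of the standard factorization, where $\rho_0$ is the initial-state distribution and $P(s_{t+1} \mid s_t, a_t)$ the environment dynamics (symbols from the usual MDP setup, assumed here rather than defined above):

$$
P(\tau \mid \theta) = \rho_0(s_0) \prod_{t=0}^{T} P(s_{t+1} \mid s_t, a_t) \, \pi_\theta(a_t \mid s_t)
$$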
Taking the logarithm turns the product into a sum:
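A sketch of the resulting expansion:

$$
\log P(\tau \mid \theta) = \log \rho_0(s_0) + \sum_{t=0}^{T} \Big( \log P(s_{t+1} \mid s_t, a_t) + \log \pi_\theta(a_t \mid s_t) \Big)
$$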
Taking the gradient of the log with respect to $\theta$, the terms that do not depend on $\theta$ drop out:
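Since neither $\rho_0(s_0)$ nor the dynamics $P(s_{t+1} \mid s_t, a_t)$ depends on $\theta$, only the policy terms survive (sketch):

$$
\nabla_\theta \log P(\tau \mid \theta) = \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)
$$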
Therefore, combining formula (7) with formula (9), we arrive at the final expression:
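In standard form (sketch), this gives the core policy gradient expression:

$$
\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, R(\tau) \right]
$$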
Formula (10) is the core expression of the PG algorithm. It shows that the policy gradient we need is actually an expectation, so in practice we can use the Monte Carlo idea to estimate it: sample and average to approximate the expectation. We collect a set of trajectories $\mathcal{D} = \{\tau_i\}_{i=1,\ldots,N}$, each obtained by the agent interacting with the environment using policy $\pi_\theta$; the policy gradient can then be estimated as:
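A sketch of the sample-mean estimator (writing $\hat{g}$ for the gradient estimate):

$$
\hat{g} = \frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, R(\tau)
$$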
Here $|\mathcal{D}|$ denotes the number of sampled trajectories. With that, we have completed the detailed derivation of the policy gradient; we can take a breath and now make some modifications on top of formula (10).
Before making those simple changes, let's summarize formula (10), which is, after all, the core formula of the PG algorithm:
- In common supervised learning, we define a loss function, differentiate it with respect to the parameters, and use gradient descent to keep minimizing the loss. For the PG algorithm, our "loss function" is essentially the expected return, expressed through the log-probabilities of the sampled actions, and our goal is to maximize it, so gradient ascent is used here (a minimal code sketch of this pseudo-loss follows this list).
- In common supervised learning, training and test samples come from the same distribution: the loss is computed on samples from a fixed distribution and is independent of the parameters we optimize. The PG algorithm, however, involves a sampling step: different policies yield different samples, so the data distribution shifts as the policy changes and the computed loss can vary widely, which makes the network prone to overfitting the current batch. Later, when we get to the more advanced Actor-Critic framework, we will see how its adversarial-style interplay helps address this problem.
- In common supervised learning, a smaller loss is generally better, and the loss is a very effective indicator of whether training is complete. For the PG algorithm, however, this "loss function" carries little meaning, mainly because the expected return here only applies to the dataset generated by the current policy. A lower loss therefore does not mean the model is performing better.
- We can think of $R(\tau)$ as the weight on $\log \pi_\theta(a_t \mid s_t)$. A small return indicates that taking action $a_t$ in state $s_t$ did not work out well, so the probability of $a_t$ given $s_t$ is decreased; conversely, a larger return increases the probability of the action, which is how the most appropriate actions end up being selected.
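To make the pseudo-loss point above concrete, here is a minimal PyTorch sketch. It assumes a discrete action space and uses hypothetical names (`policy_net`, `obs`, `acts`, `returns`) that are not part of the original text; it is an illustration of the idea, not Spinning Up's implementation.

```python
import torch

def pg_pseudo_loss(policy_net, obs, acts, returns):
    """Sketch of the PG pseudo-loss: -(mean of log pi_theta(a_t|s_t) * R(tau)).

    The sign is flipped so that minimizing it with a standard optimizer
    performs gradient ascent on the expected return.
    """
    logits = policy_net(obs)                              # (batch, n_actions)
    dist = torch.distributions.Categorical(logits=logits)
    logp = dist.log_prob(acts)                            # log pi_theta(a_t | s_t)
    return -(logp * returns).mean()

# Usage sketch: after sampling a batch of trajectories with the current policy,
#   loss = pg_pseudo_loss(policy_net, obs, acts, returns)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
# Note: as discussed above, the value of this loss is not a meaningful measure
# of progress; only the gradient it produces matters.
```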
3. Improving the return function
Looking back at formula (10), using $R(\tau)$, the return of the entire trajectory, is not entirely reasonable. Applying the same return to every action in a trajectory amounts to giving each action the same weight. Clearly, an action sequence contains both good and bad actions, and giving them all the same return cannot serve the purpose of reward and punishment. So how should we represent the reward for performing an action in a particular state?
A more intuitive idea is that the current action affects the subsequent states and the immediate rewards that follow, so we only need the discounted cumulative return from the current time step onward to represent the return of the current action, which can be written as:
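A sketch of this quantity, with $\gamma$ the discount factor and $r_{t'}$ the immediate reward at step $t'$ (symbols assumed here rather than defined above):

$$
\hat{R}_t = \sum_{t'=t}^{T} \gamma^{t'-t} \, r_{t'}
$$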
This is called the reward-to-go in Spinning Up, so formula (10) can be rewritten as:
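A sketch of formula (10) with $R(\tau)$ replaced by the reward-to-go $\hat{R}_t$:

$$
\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, \hat{R}_t \right]
$$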
Of course, weighting with the reward-to-go is still fairly rudimentary, and more advanced weighting schemes can further reduce the variance of the estimate. Due to space constraints, we will cover them later.
4. Summary
In this chapter, we spent considerable effort deriving the core formula of the policy gradient (PG) and obtained the key expression, formula (10). Understanding this formula is very helpful for understanding the whole PG family of algorithms, so we hope you work through its derivation carefully.
PS: For more practical technical content, follow our official account [xingzhe_ai] and join the discussion with Walker AI!
- [1] OpenAI Spinning Up: spinningup.openai.com/