A Crash Course on Reinforcement Learning

Farnaz Adib Yaghmaie*, Department of Electrical Engineering, Linköping University, Linköping, Sweden
Lennart Ljung†, Department of Electrical Engineering, Linköping University, Linköping, Sweden

arXiv:2103.04910v1 [cs.LG], March 9, 2021

* email: farnaz.adib.yaghmaie@liu.se
† email: lennart.ljung@liu.se

Abstract

The emerging field of Reinforcement Learning (RL) has led to impressive results in varied domains like strategy games, robotics, etc. This handout aims to give a simple introduction to RL from a control perspective and to discuss three possible approaches to solve an RL problem: Policy Gradient, Policy Iteration, and Model-building. Dynamical systems might have a discrete action space, like the cartpole where the two possible actions are +1 and -1, or a continuous action space, like linear Gaussian systems. Our discussion covers both cases.

1 Introduction

Machine Learning (ML) has surpassed human performance in many challenging tasks like pattern recognition [1] and playing video games [2]. With recent progress in ML, specifically using deep networks, there is a renewed interest in applying ML techniques to the control of dynamical systems interacting with a physical environment [3, 4], and to more demanding tasks like autonomous driving, agile robotics [5], solving decision-making problems [6], etc.

Reinforcement Learning (RL) is one of the main branches of Machine Learning and has led to impressive results in varied domains like strategy games, robotics, etc. RL is concerned with intelligent decision making in a complex environment in order to maximize some notion of reward. Because of its generality, RL is studied in many disciplines such as control theory [7-10], multi-agent systems [11-15, 15-20], etc. RL algorithms have shown impressive performance in many challenging problems, including playing Atari games [2], robotics [5, 21-23], control of continuous-time systems [3, 7, 8, 24-31], and distributed control of multi-agent systems [11-13, 17].

From a control-theory perspective, a closely related topic to RL is adaptive control, which studies data-driven approaches for the control of unknown dynamical systems [32, 33]. If we consider some notion of optimality along with adaptivity, we end up in the RL setting, where it is desired to control an unknown system adaptively and optimally. The history of RL dates back decades [34, 35], but with recent progress in ML, specifically using deep networks, the RL field has been reinvented.

In a typical RL setting, the model of the system is unknown and the aim is to learn how to interact with the system to optimize the performance. There are three possible approaches to solve an RL problem [9]:

1- Dynamic Programming (DP)-based solutions: This approach relies on the principle of optimality, and the celebrated Q-learning algorithm [36] is an example of this category.

2- Policy Gradient: The most ambitious method of solving an RL problem is to directly optimize the performance index [37].

3- Model-building RL: The idea is to estimate a model (possibly recursively) [38] and then solve the optimal control problem for the estimated model. This concept is known as adaptive control [33] in the control community, and there is a vast literature around it.

In the RL setting, it is important to distinguish between systems with discrete and continuous action spaces. A system with a discrete action space has a finite number of actions in each state. An example is the cartpole environment, where a pole is attached by an un-actuated joint to a cart [39]. The system is controlled by applying a force of +1 or -1 to the cart.
A system with a continuous action space has an infinite number of possible actions in each state. Linear Quadratic (LQ) control is a well-studied example where a continuous action space can be considered [24, 25]. The finiteness or infiniteness of the number of possible actions makes the RL formulation different for these two categories, and as such it is not straightforward to carry an approach from one to the other directly. In this document, we give a simple introduction to RL from a control perspective and discuss three popular approaches to solve RL problems: Policy Gradient, Q-learning (as an example of the Dynamic Programming-based approach), and the model-building method. Our discussion covers both systems with discrete and continuous action spaces, while usually the formulation is done for only one of these cases. Complementary to this document is a repository called A Crash Course on RL, where one can run the Policy Gradient and Q-learning algorithms on the cartpole and linear quadratic problems.

1.1 How to use this handout?

This handout aims to act as a simple document to explain possible approaches for RL. We do not give expressions and equations in their most exact and elegant mathematical forms. Instead, we try to focus on the main concepts, so the equations and expressions may seem sloppy. If you are interested in contributing to the RL field, please consider this handout as a start and deploy exact notation from excellent RL references like [34, 40].

An important part of understanding RL is the ability to translate concepts into code. In this document, we provide some sample code (given in shaded areas) to illustrate how a concept or function is coded. Except for one example in the model-building approach on page 23, which is given in MATLAB syntax (since it uses the System Identification Toolbox in MATLAB), the coding language in this report is Python. The reason is that Python is currently the most popular programming language in RL. We use TensorFlow (TF2) and Keras as the machine learning platforms. TensorFlow is an end-to-end, open-source machine learning platform, and Keras is the high-level API of TensorFlow 2: an approachable, highly productive interface for solving machine learning problems, with a focus on modern deep learning. Keras empowers engineers and researchers to take full advantage of the scalability and cross-platform capabilities of TensorFlow. The best reference for understanding the deep learning elements in this handout is the Keras API reference. We use the OpenAI Gym library, which is a toolkit for developing and comparing reinforcement learning algorithms in Python [41].
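Before moving on, it may help to check that these libraries are available. The snippet below is a minimal sanity check written for this summary (it is not part of the repository); it assumes TensorFlow 2 and the classic OpenAI Gym API, and uses the CartPole-v0 environment discussed throughout the handout.

```python
# Minimal sanity check for the libraries used in this handout.
# Assumes TensorFlow 2.x and the classic OpenAI Gym API (gym < 0.26).
import tensorflow as tf
from tensorflow import keras
import gym

print("TensorFlow version:", tf.__version__)
print("Gym version:", gym.__version__)

env = gym.make("CartPole-v0")   # cartpole: discrete action space with two actions
state = env.reset()
print("State dimension:", env.observation_space.shape[0])
print("Number of actions:", env.action_space.n)
```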
The Python code provided in this document is actually part of a repository called A Crash Course on RL:

https://github.com/FarnazAdib/Crash_course_on_RL

You can run the code either in your web browser or in a Python IDE like PyCharm.

How to run the code in a web browser?

A Jupyter notebook is a free, interactive web document, known as a computational notebook, in which researchers can combine Python code and text. One can run Jupyter notebooks (files ending with .ipynb) on Google Colab using a web browser. You can run the code by following the steps below:

1. Go to https://colab.research.google.com/notebooks/intro.ipynb and sign in with a Google account.
2. Click "File" and select "Upload Notebook". If you get the webpage in Swedish, click "Arkiv" and then "Ladda upp anteckningsbok".
3. A window will pop up. Select GitHub, paste the following link, and click search: https://github.com/FarnazAdib/Crash_course_on_RL
4. A list of files of type .ipynb appears. They are Jupyter notebooks. Jupyter notebooks can contain both text and code, and it is possible to run the code. As an example, scroll down and open "pg_on_cartpole_notebook.ipynb".
5. The file contains some cells with text and some cells with code. The cells which contain code have [ ] on the left. If you move your mouse over [ ], a play button appears. You can click on it to run the cell. Make sure not to skip a cell, as doing so causes fatal errors. You can continue like this and run all code cells one by one up to the end.

How to run the code in PyCharm?

You can follow these steps to run the code in a Python IDE (preferably PyCharm):

1. Go to https://github.com/FarnazAdib/Crash_course_on_RL and clone the project.
2. Open PyCharm. From PyCharm, click "File" and open the project by navigating to the project folder.
3. Follow the Preparation.ipynb notebook in the "A Crash Course on RL" repository to build a virtual environment and install the required libraries.
4. Run the Python file (ending with .py) that you want.
1.2 Important notes to the reader

It is important to keep in mind that the code provided in this document is for illustration purposes; it shows, for example, how a concept or function is coded. So do not get lost in Python-related details. Try to focus on how a function is written: what are the inputs? what are the outputs? how is this concept coded? and so on. The complete code can be found in the A Crash Course on RL repository.

The repository contains code for two classical control problems. The first problem is the cartpole environment, which is an example of a system with a discrete action space [39]. The second problem is the Linear Quadratic problem, which is an example of a system with a continuous action space [24, 25]. Take the Linear Quadratic problem as a simple example where you can do the mathematical derivations with some simple (but careful) handwriting. Summaries and simple implementations of the discussed RL algorithms for the cartpole and LQ problems are given in Appendices A-B. The appendices are optional; you can skip reading them and study the code directly.

We have summarized the frequently used notation in Table 1.

Table 1: Notation

General:
  [.]†               Transpose operator.
  <S, A, P, R, γ>    A Markov Decision Process with state set S, action set A, transition probability set P, immediate reward set R, and discount factor γ.
  n_s                Number of states for a discrete state space, or the dimension of the state for a continuous state space.
  n_a                Number of actions for a discrete action space, or the dimension of the action for a continuous action space.
  θ                  The parameter vector to be learned.
  π_θ                Deterministic policy, or probability density function of the policy (with parameter vector θ).
  t                  The time step (as a subscript).
  s_t, a_t           The state and action at time t.
  r_t = r(s_t, a_t)  The immediate reward.
  c_t = -r_t         The immediate cost.
  R(T)               Total reward, in discounted (3), undiscounted (6), or averaged (4) form.
  τ, T               A trajectory and the trajectory length.

Policy Gradient:
  P(τ|θ)             Probability of the trajectory τ conditioned on θ.
  p(a_t|θ)           Evaluation of the parametric pdf π_θ at a_t (the likelihood).

Q-learning:
  V, Q               The value function and the Q-function.
  G                  The kernel of the quadratic Q-function, Q = z†Gz.
  vecs(G) = [g_11, ..., g_1n, g_22, ..., g_2n, ..., g_nn]†
                     The vectorization of the upper-triangular part of a symmetric matrix G ∈ R^{n×n}.
  vecv(v) = [v_1^2, 2v_1v_2, ..., 2v_1v_n, v_2^2, ..., 2v_2v_n, ..., v_n^2]†
                     The quadratic vector of the vector v ∈ R^n.

Figure 1: An RL framework. Photo credit: https://en.wikipedia.org/wiki/Reinforcement_learning

2 What is Reinforcement Learning?

Machine learning can be divided into three categories: 1- supervised learning, 2- unsupervised learning, and 3- Reinforcement Learning (RL). Reinforcement Learning is concerned with decision-making problems. The main thing that makes RL different from supervised and unsupervised learning is that the data has a dynamic nature, in contrast to the static data sets used in supervised and unsupervised learning. The dynamic nature of the data means that the data is generated by a system, and the new data depends on the previous actions that the system has received. The most famous definition of RL is given by Sutton and Barto [34]:

"Finding suitable actions to take in a given situation in order to maximize a reward."

The idea is best described by Fig. 1. We start the loop from the agent. The agent selects an action and applies it to the environment. As a result of this action, the environment changes and reveals a new state, a representation of its internal behavior. The environment also reveals a reward, which quantifies how good the action was in the given state. The agent receives the state and the reward and tries to select a better action, in order to receive the maximum total reward in the future. This loop continues forever, or until the environment reveals a final state, from which the environment will not move anymore. As we noted earlier, there are three main components in an RL problem: the environment, the reward, and the agent. In the sequel, we introduce these terms briefly.
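To make this loop concrete, here is a minimal sketch (not taken from the repository) of the agent-environment loop on the cartpole environment with a purely random agent. It assumes the classic OpenAI Gym API, where env.step returns the next state, the reward, a done flag, and an info dictionary.

```python
import gym

# A minimal agent-environment loop on the cartpole environment (random agent).
# Assumes the classic Gym API (gym < 0.26): reset() returns the state and
# step() returns (next_state, reward, done, info).
env = gym.make("CartPole-v0")
state = env.reset()
total_reward = 0.0

done = False
while not done:
    action = env.action_space.sample()                   # the agent picks an action (here: randomly)
    next_state, reward, done, info = env.step(action)    # the environment reveals a new state and a reward
    total_reward += reward
    state = next_state

print("Total reward collected by the random agent:", total_reward)
```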
2.1 Environment

The environment is our dynamical system that produces data. Examples of environments are robots, linear and nonlinear dynamical systems (in control-theory terminology), and games like Atari and Go. The environment receives an action as the input and generates a variable, namely the state, based on its own rules. The rules govern the dynamical model and are assumed to be unknown. An environment is usually represented by a Markov Decision Process (MDP). In the next section, we will define MDPs.

2.2 Reward

Along with each state-action pair, the environment reveals a reward r_t. The reward is a scalar measurement that shows how good the action was at that state. In RL, we aim to maximize some notion of reward; for example, the total reward

    R = Σ_{t=1}^{T} γ^t r_t,

where 0 ≤ γ ≤ 1 is the discount or forgetting factor.

2.3 Agent

The agent is what we code. It is the decision-making center that produces the action. The agent receives the state and the reward and produces the action based on some rules. We call such rules the policy, and the agent updates these rules to obtain a better one.

2.3.1 Agent's components

An RL agent can have up to three main components. Note that the agent need not have all of them, but at least one.

• Policy: The policy is the agent's rule to select an action in a given state. So, the policy is a map π : S → A from the set of states S to the set of actions A. Though not conceptually correct, it is common to use the terms "agent" and "policy" interchangeably.

• Value function: The value function quantifies the performance of the given policy. It quantifies the expected total reward if we start in a state and always act according to the policy.

• Model: The agent's interpretation of the environment.

2.3.2 Categorizing RL agents

There are many ways to categorize an RL agent, like model-free and model-based, online or offline agents, and so on. One possible approach is to categorize RL agents based on the main component that the RL agent is built upon. Then, we have the following classification:

• Policy gradient
• Dynamic Programming (DP)-based solutions
• Model building

Policy gradient approaches are built upon defining a policy for the agent, DP-based solutions require estimating value functions, and model-building approaches try to estimate a model of the environment. This is a coarse classification of approaches; indeed, by combining different features of the approaches, we get many useful variations which we do not discuss in this handout.

All aforementioned approaches reduce to some sort of function approximation from data obtained from the dynamical system. In policy gradient, we fit a function to the policy; i.e., we consider the policy as a function of the state, π = network(state). In the DP-based approach, we fit a model to the value function to characterize the cost-to-go. In the model-building approach, we fit a model to the state transition of the environment. As you can see, in all approaches there is a modeling assumption. The thing which makes one approach different from another is "where" to put the modeling assumption: the policy, the value function, or the dynamical system. The reader should not be confused by the term "model-free" and think that no model is built in RL. The term "model-free" in the RL community is simply used to describe the situation where no model of the dynamical system is built.
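As a schematic illustration of where the modeling assumption can sit, the sketch below writes the three choices as three Keras models. It is a hypothetical sketch, not code from the repository; the layer widths and the cartpole-like dimensions n_s = 4, n_a = 2 are arbitrary assumptions.

```python
from tensorflow import keras

n_s, n_a = 4, 2   # assumed state dimension and number of actions (cartpole-like)

# Policy gradient: the model is the policy, mapping a state to action probabilities.
policy = keras.Sequential([
    keras.layers.Dense(32, input_dim=n_s, activation="relu"),
    keras.layers.Dense(n_a, activation="softmax"),
])

# DP-based (e.g. Q-learning): the model is the value function, here Q(s, a) for each action a.
q_function = keras.Sequential([
    keras.layers.Dense(32, input_dim=n_s, activation="relu"),
    keras.layers.Dense(n_a),
])

# Model-building: the model is the environment itself, mapping (state, action) to the next state.
dynamics = keras.Sequential([
    keras.layers.Dense(32, input_dim=n_s + n_a, activation="relu"),
    keras.layers.Dense(n_s),
])
```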
3 Markov Decision Process

A Markov Decision Process (MDP) provides a mathematical framework for modeling decision-making problems. MDPs are commonly used to describe dynamical systems and represent the environment in the RL framework. An MDP is a tuple <S, A, P, R, γ>:

• S: The set of states.
• A: The set of actions.
• P: The set of transition probabilities.
• R: The set of immediate rewards associated with the state-action pairs.
• 0 ≤ γ ≤ 1: The discount factor.

3.1 States

It is difficult to define the concept of state, but we can say that a state describes the internal status of the MDP. Let S represent the set of states. If the MDP has a finite number of states, |S| = n_s denotes the number of states. Otherwise, if the MDP has a continuous state space, n_s denotes the dimension of the state vector.

In RL, it is common to define a Boolean variable done for each state s visited in the MDP:

    done(s) = True,  if s is a final state or the MDP needs to be restarted after s,
              False, otherwise.

This variable is True only if the state is a final state of the MDP: if the MDP goes to this state, the MDP stays there forever, or the MDP needs to be restarted. The variable done is False otherwise. Defining done comes in handy when developing RL algorithms.

3.2 Actions

Actions are the possible choices in each state. If there is no choice at all to make, then we have a Markov Process. Let A represent the set of actions. If the MDP has a finite number of actions, |A| = n_a denotes the number of actions. Otherwise, if the MDP has a continuous action space, n_a denotes the dimension of the action. In RL, it is crucial to distinguish between MDPs with discrete or continuous action spaces, as the solution methodology will be different.

3.3 Transition probability

The transition probability describes the dynamics of the MDP. It gives the probability of transitioning from each state s to each successor state s' for each action a. P is the set of transition probabilities, consisting of n_a matrices, each of dimension n_s × n_s, whose (s, s') entry reads

    [P^a]_{ss'} = p[s_{t+1} = s' | s_t = s, a_t = a].    (1)

One can verify that each row sum is equal to one.

3.4 Reward

The immediate reward, or reward in short, is a measure of the goodness of the action a_t at the state s_t, and it is represented by

    r_t = E[r(s_t, a_t)],    (2)

where t is the time index and the expectation is calculated over the possible rewards. R represents the set of immediate rewards associated with all state-action pairs. In the sequel, we give an example where r(s_t, a_t) is stochastic, but throughout this handout we assume that the immediate reward is deterministic and no expectation is involved in (2).

The total reward is defined as

    R(T) = Σ_{t=1}^{T} γ^t r_t,    (3)

where γ is the discount factor, which will be introduced shortly.

Figure 2: A Markov Decision Process. The figure is a modified version of the one at https://en.wikipedia.org/wiki/Markov_decision_process

3.5 Discount factor

The discount factor 0 ≤ γ ≤ 1 quantifies how much we care about the immediate reward versus future rewards. We have two extreme cases, γ → 0 and γ → 1:

• γ → 0: We only care about the current reward, not what we will receive in the future.
• γ → 1: We care about all rewards equally.

The discount factor might be given, or we might select it ourselves in the RL problem. Usually, we consider 0 < γ < 1, chosen close to one. We can select γ = 1 in two cases: 1) There exists an absorbing state in the MDP, such that if the MDP is in the absorbing state, it will never move from it. 2) We care about the average cost; e.g., the average energy consumed by a robotic system. In that case, we can define the average cost as

    R(T) = lim_{T→∞} (1/T) Σ_{t=1}^{T} r_t.    (4)
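As a small numerical illustration of (3) and (4), the snippet below computes the discounted total reward and a finite-horizon average reward for a made-up reward sequence; the rewards and the discount factor are arbitrary choices, not values from the handout.

```python
import numpy as np

# Made-up sequence of immediate rewards r_1, ..., r_T, for illustration only.
rewards = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
gamma = 0.9  # discount factor, 0 <= gamma <= 1

T = len(rewards)
# Discounted total reward as in (3): R(T) = sum_{t=1}^{T} gamma^t r_t
discounted_return = np.sum(gamma ** np.arange(1, T + 1) * rewards)

# Averaged reward as in (4), here with a finite T instead of the limit T -> infinity
average_reward = np.mean(rewards)

print("Discounted total reward:", discounted_return)
print("Average reward:", average_reward)
```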
Example 3.1. Consider the MDP in Fig. 2. This MDP has three states S = {s_0, s_1, s_2} and two actions A = {a_0, a_1}. The rewards for some of the transitions are shown by orange arrows. For example, if we start at state s_1 and take action a_0, we will end up in one of the following cases:

• With probability 0.1, the reward is -1 and the next state is s_1.
• With probability 0.7, the reward is +5 and the next state is s_0.
• With probability 0.2, the reward is +5 and the next state is s_2.

As a result, the expected reward for state s_1 and action a_0 reads

    E[r(s_1, a_0)] = 0.1 × (-1) + 0.7 × (+5) + 0.2 × (+5) = 4.4.

The transition probability matrices are given by

                | 0.5  0    0.5  |                  | 0    0     1    |
    P^{a_0} =   | 0.7  0.1  0.2  | ,     P^{a_1} =  | 0    0.95  0.05 |
                | 0.4  0    0.6  |                  | 0.3  0.3   0.4  |

Observe that the sum of each row in P^{a_0} and P^{a_1} equals one.

3.6 Revisiting the agent's components

Now that we have defined the MDP, we can revisit the agent's components and define them better. As we mentioned, an RL agent can have up to three main components:

• Policy: The policy is the agent's rule to select an action in a given state. So, the policy is a map π : S → A. We can have a deterministic policy a = π(s) or a stochastic policy defined by a pdf π(a|s) = P[a_t = a | s_t = s].

• Value function: The value function quantifies the performance of the given policy in the states,

    V(s) = E[r_t + γ r_{t+1} + γ^2 r_{t+2} + ... | s_t = s].

• Model: The agent's interpretation of the environment, [P^a]_{ss'}, which might be different from the true value.

We categorize possible approaches to solve an RL problem based on the main component on which the agent is built. We start with the policy gradient approach in the next section, which relies on building/estimating the policy.

4 Policy Gradient

The most ambitious method of solving an RL problem is to directly learn the policy by optimizing the total reward. We do not build a model of the environment and we do not appeal to the Bellman equation. Indeed, our modeling assumption is in considering a parametric probability density function for the policy, and we aim to learn the parameter to maximize the expected total reward

    J = E_{τ∼π_θ}[R(T)],    (5)

where

• π_θ is the probability density function (pdf) of the policy and θ is the parameter vector.

• τ is a trajectory obtained by sampling the policy, given by τ = (s_1, a_1, r_1, s_2, a_2, r_2, s_3, ..., s_{T+1}), where s_t, a_t, r_t are the state, action, and reward at time t and T is the trajectory length. τ ∼ π_θ means that the trajectory τ is generated by sampling actions from the pdf π_θ.

• R(T) is the undiscounted finite-time total reward

    R(T) = Σ_{t=1}^{T} r_t.    (6)

• The expectation is defined over the probability of the trajectory.

We would like to directly optimize the policy by a gradient approach. So, we aim to obtain the gradient of J with respect to the parameter θ, ∇_θ J. The algorithms that optimize the policy in this way are called Policy Gradient (PG) algorithms.

The log-derivative trick helps us to obtain the policy gradient ∇_θ J. The trick relies on the simple rule ∇_p log p = 1/p. Assume that p is a function of θ. Then, using the chain rule, we have

    ∇_θ log p = ∇_p log p ∇_θ p = ∇_θ p / p.

Rearranging the above equation,

    ∇_θ p = p ∇_θ log p.    (7)

Equation (7) is called the log-derivative trick and helps us to get rid of the dynamics in PG. You will see an application of (7) in Subsection 4.3. In the sequel, we define the main components in PG.
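Before moving on, the log-derivative trick (7) can be checked numerically. The sketch below is our own illustration, not part of the handout's code: it parameterizes a pdf over three actions with a softmax, computes p ∇_θ log p analytically, and compares it with a finite-difference approximation of ∇_θ p.

```python
import numpy as np

def softmax(theta):
    # A simple parametric pdf over a finite set of actions: p(a|theta) = softmax(theta)[a]
    e = np.exp(theta - np.max(theta))
    return e / np.sum(e)

theta = np.array([0.2, -1.0, 0.5])
a = 0        # the action whose probability we differentiate
eps = 1e-6

p = softmax(theta)[a]

# Analytic gradient of log p(a|theta) for the softmax: e_a - softmax(theta)
grad_log_p = -softmax(theta)
grad_log_p[a] += 1.0

# Finite-difference approximation of grad_theta p(a|theta)
grad_p_fd = np.zeros_like(theta)
for i in range(len(theta)):
    theta_pert = theta.copy()
    theta_pert[i] += eps
    grad_p_fd[i] = (softmax(theta_pert)[a] - p) / eps

# Log-derivative trick (7): grad_theta p = p * grad_theta log p
print("p * grad log p       :", p * grad_log_p)
print("finite-difference d p:", grad_p_fd)
```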
4.1 Defining a probability density function for the policy

In PG, we consider the class of stochastic policies. One may ask why we consider stochastic policies when we know that the optimal policy for an MDP is deterministic [9, 42]? The reason is that in PG, no value function and no model of the dynamics are built. The only way to evaluate a policy is to deviate from it and observe the total reward. So, the burden of the optimization is shifted onto sampling the policy: by perturbing the policy and observing the result, we can improve the policy parameters. If we consider a deterministic policy in PG, the agent gets trapped in a local minimum. The reason is that the agent has "no" way of examining other possible actions and, furthermore, there is no value function to show how "good" the current policy is. Considering a stochastic policy is essential in PG.

As a result, our modeling assumption in PG is in considering a probability density function (pdf) for the policy. As we can see in Fig. 3, the pdf is defined differently for discrete and continuous random variables. For discrete random variables, the pdf is given as a probability for each possible outcome, while for continuous random variables it is given as a function. This tiny technical point makes the coding completely different for the discrete and continuous action space cases, so we treat discrete and continuous action spaces separately in the sequel.

Figure 3: Pdf for discrete and continuous random variables. Photo credit: https://towardsdatascience.com/probability-distributions-discrete-and-continuous-7a94ede66dc0

4.1.1 Discrete action space

As we said earlier, our modeling assumption in PG is in considering a parametric pdf for the policy. We represent the pdf by π_θ, where θ is the parameter. The pdf π_θ maps the state to the probability of each action. So, if there are n_a actions, the policy network has n_a outputs, each representing the probability of one action. Note that the outputs should sum to 1. Here, we give a simple class implementing PG for an environment with a discrete action space in Python:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


class PG:
    def __init__(self, hparams):
        self.hparams = hparams
        np.random.seed(hparams['Rand_Seed'])
        tf.random.set_seed(hparams['Rand_Seed'])
        # The policy network: maps a state to a probability for each action.
        self.network = keras.Sequential([
            keras.layers.Dense(self.hparams['hidden_size'],
                               input_dim=self.hparams['num_state'],
                               activation='relu',
                               kernel_initializer=keras.initializers.he_normal(),
                               dtype='float64'),
            keras.layers.Dense(self.hparams['hidden_size'],
                               activation='relu',
                               kernel_initializer=keras.initializers.he_normal(),
                               dtype='float64'),
            keras.layers.Dense(self.hparams['num_actions'],
                               activation='softmax',
                               dtype='float64')])
        self.network.compile(
            loss='categorical_crossentropy',
            optimizer=keras.optimizers.Adam(
                epsilon=self.hparams['adam_eps'],
                learning_rate=self.hparams['learning_rate_adam']))

    def get_action(self, state, env):
        # Building the pdf for the given state
        softmax_out = self.network(state.reshape((1, -1)))
        # Sampling an action according to the pdf
        selected_action = np.random.choice(
            self.hparams['num_actions'], p=softmax_out.numpy()[0])
        return selected_action

    def update_network(self, states, actions, rewards):
        reward_sum = 0
        rewards_to_go = []
        for reward in rewards[::-1]:  # reverse the reward buffer
            reward_sum = reward + self.hparams['GAMMA'] * reward_sum
            rewards_to_go.append(reward_sum)
        rewards_to_go.reverse()
        rewards_to_go = np.array(rewards_to_go)
        # standardise the rewards
        rewards_to_go -= np.mean(rewards_to_go)
        rewards_to_go /= np.std(rewards_to_go)
        states = np.vstack(states)
        target_actions = tf.keras.utils.to_categorical(
            np.array(actions), self.hparams['num_actions'])
        loss = self.network.train_on_batch(
            states, target_actions, sample_weight=rewards_to_go)
        return loss
```
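A hypothetical usage sketch of this class on the cartpole environment is given below. It is not taken from the repository: the hyperparameter values and the episode loop are illustrative choices, although the hparams keys match the ones used by the PG class above, and the classic Gym API is assumed.

```python
import gym
import numpy as np

# Illustrative hyperparameters; the key names follow the PG class above,
# but the values here are arbitrary choices.
hparams = {
    'Rand_Seed': 0,
    'num_state': 4,            # cartpole state dimension
    'num_actions': 2,          # cartpole has two discrete actions
    'hidden_size': 64,
    'adam_eps': 1e-7,
    'learning_rate_adam': 1e-3,
    'GAMMA': 0.99,
}

agent = PG(hparams)             # the PG class defined above
env = gym.make("CartPole-v0")   # assumes the classic Gym API (gym < 0.26)

for episode in range(10):
    states, actions, rewards = [], [], []
    state = env.reset()
    done = False
    while not done:
        action = agent.get_action(np.array(state), env)
        next_state, reward, done, _ = env.step(action)
        states.append(state)
        actions.append(action)
        rewards.append(reward)
        state = next_state
    loss = agent.update_network(states, actions, rewards)
    print("Episode", episode, "return:", sum(rewards))
```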
You can take a look at the integrated implementation of PG on the cartpole problem in the A Crash Course on RL repository.

A.3 Q-learning algorithm for the cartpole problem

Here is a summary of the Q-learning algorithm for the cartpole problem (it can be used for any other RL problem with a discrete action space).

We build a network to represent Q(s, a) (see subsection 5.1.1) and assign a mean-squared-error loss function (see subsection 5.2.1):

```python
# n_s and n_a are the state dimension and the number of actions;
# hidden_size is a placeholder for the hidden-layer width.
network = keras.Sequential([
    keras.layers.Dense(hidden_size, input_dim=n_s, activation='relu'),
    keras.layers.Dense(hidden_size, activation='relu'),
    keras.layers.Dense(hidden_size, activation='relu'),
    keras.layers.Dense(n_a)])
network.compile(loss='mean_squared_error',
                optimizer=keras.optimizers.Adam())
```

Then, we iteratively improve the network. In each iteration of the algorithm, we do the following:

1. We sample a trajectory from the environment to collect data for Q-learning by following these steps:

   (a) We initialize empty histories: states = [], actions = [], rewards = [], next_states = [], dones = [].

   (b) We observe the state s and select the action a (see subsection 5.3.1):

       if np.random.random()
