Natural Computation Methods for Machine Learning Note 07

February 14, 2020


From today, we are going to introduce reinforcement learning (RL).

Forget ANNs for a while - RL is a field in its own right, independent of ANNs. However, as we will see, ANNs are sometimes used in RL.

Definition: Learning by interaction with an environment, to maximize some long-term scalar value (or to minimize a cost). Trial-and-error.

Some classic RL problems: robot navigation, pole balancing, TD-Gammon, AlphaGo, Crites' elevators.

MENACE

MENACE learns to play the game Tic-Tac-Toe (Noughts and Crosses), using a number of matchboxes and coloured beads.

The machine gets no feedback until the game ends.

It is very difficult to decide which individual moves during a game were most responsible for the result. Here: all moves made during a game are considered good if we won, and bad if we lost.

All moves are credited/blamed by the same amount, regardless of how early or late they were made during the game. We would perhaps want to give more credit to recent moves than to early ones. This can be done by initializing the boxes with a different number of beads of each colour, depending on the distance from the starting position.

The rules are given as a prior (by the initialization procedure). MENACE can therefore focus on learning, not how to play, but how to play well.

All beads of a certain colour may eventually be removed from a box, in which case that move is never selected from that position again.
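
To make the mechanism concrete, here is a minimal Python sketch of a MENACE-style learner. The class name, initial bead counts, and reward deltas are illustrative assumptions, not Michie's original values; the point is one box of beads per board state, moves drawn in proportion to bead counts, and all moves of a finished game reinforced by the same amount.

```python
import random

class MenaceLike:
    """Minimal MENACE-style learner: one 'matchbox' of beads per board state.
    Bead counts and reward deltas below are illustrative, not Michie's originals."""

    def __init__(self, initial_beads=4):
        self.initial_beads = initial_beads
        self.boxes = {}      # state -> {move: bead count}
        self.history = []    # (state, move) pairs played in the current game

    def choose_move(self, state, legal_moves):
        box = self.boxes.setdefault(state, {m: self.initial_beads for m in legal_moves})
        moves, counts = zip(*box.items())
        move = random.choices(moves, weights=counts)[0]  # draw a bead at random
        self.history.append((state, move))
        return move

    def learn(self, outcome):
        """After the game ends, credit/blame every move of the game equally."""
        delta = {"win": +3, "draw": +1, "loss": -1}[outcome]
        for state, move in self.history:
            # a colour can die out completely (see above), so clamp at zero
            self.boxes[state][move] = max(self.boxes[state][move] + delta, 0)
        self.history.clear()
```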

General issues

State

A description of the current state of the environment.

(Menace: the current board position)

Must contain enough information for the agent to solve its task, but does not have to be (seldom is) a complete description of the environment. (Think of it as a vector of sensor values)

Reward (reinforcement signal, grade)

A quality measure of the environment's state. Only an indirect measure of the agent's state (maybe incomplete) and actions (since the environment may be affected by other things as well). This is what we want to minimize/maximize.

Note: Don’t assume that negative rewards are bad and positive rewards are good – the reward is simply a scalar to be maximized. The maximum can be negative!

Action

Affects the environment and hence, indirectly, the next state and reward. [The environment may be affected by other factors as well, e.g. other agents] Discrete or continuous (finite or infinite number of possible actions for each state). (Menace: discrete, Learning to steer a car: continuous).

The Agent

Action selection should be stochastic! The agent must be allowed to explore, i.e. sometimes take actions which may seem sub-optimal at the time.

Common example: Epsilon-greedy: With probability \(\epsilon\), select a random action (i.e. ignore the suggestions from the learning system). Otherwise, be greedy.

Greedy, in this context, means to trust the learning system - to select the action that is currently believed to be the best, with probability \(1-\epsilon\). (Menace: the action which has the largest number of associated beads in the box.)
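
A minimal sketch of this selection rule, assuming the agent's current preferences are available as a dict of action values (for MENACE, bead counts would play this role); the function name and interface are illustrative:

```python
import random

def epsilon_greedy(action_values, epsilon=0.1):
    """With probability epsilon pick a random action (explore);
    otherwise pick the action with the highest current estimate (greedy)."""
    if random.random() < epsilon:
        return random.choice(list(action_values))      # explore
    return max(action_values, key=action_values.get)   # exploit / be greedy
```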

Delayed vs Immediate RL

Rewards should be associated with a sequence of actions (delayed RL), not only the latest one (immediate RL), since earlier decisions may also have contributed to the result. (Menace: delayed).

The difference is not only in how often we get rewarded, but also in how the rewards are used/interpreted.

Core issue: how to distribute credit or blame over the sequence of actions. This is the temporal credit assignment problem. (Menace: give more credit/blame to recent moves than to earlier ones.)
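
One simple way to realize such recency weighting is sketched below; the exponential decay and the function itself are illustrative assumptions, not part of MENACE:

```python
def assign_credit(num_moves, final_reward, decay=0.9):
    """Spread a single end-of-game reward over the moves that produced it,
    giving exponentially more credit/blame to later moves than to earlier ones."""
    return [final_reward * decay ** (num_moves - 1 - t) for t in range(num_moves)]

# A win (+1) after 5 moves: roughly [0.656, 0.729, 0.81, 0.9, 1.0]
print(assign_credit(5, 1.0))
```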

Comparison to supervised learning

Supervised learning imitates (and can never exceed the performance of its teacher). RL explores (and therefore can). The two can be combined, as in AlphaGo.

Supervised learning does not (usually) interact. RL does.

Feedback in supervised learning is specific and instructive (w.r.t. the agent’s actions). In RL it is not. The reward does not say which other actions would have been better (which is why we must explore).

Goals, values and policies

Goal

To maximize the long-term reward, i.e. to select actions which maximize expected future (long-term) rewards. E.g., at time t, to maximize

\(R_{t}=E\left(\sum_{k=0}^{h} r_{t+k}\right)\)

(discrete time, finite horizon, looks h steps ahead)

Problem: We often need to do this for an infinite horizon (\(h=\infty\)). Then, we must ensure that the sum is still finite (since it is supposed to be maximized).

Solution: Maximize the value, V – the expected value of a discounted (exponentially decreasing) sum of future rewards

\(V_{t}=E\left(\sum_{k=0}^{\infty} \gamma^{k} r_{t+k}\right)\), where \(\gamma \in[0,1)\) is a discount factor.

Continuous-time algorithms exist but are out of scope for this course.
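
A small numerical sketch of the discounted return above: the function estimates \(V_t\) from a single observed reward sequence (the expectation in the formula would be taken over many such runs); the function name is an illustrative assumption.

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of gamma**k * r_{t+k} over one observed reward sequence."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# A constant reward of 1 gives 1 / (1 - gamma) = 10 in the infinite-horizon limit:
print(discounted_return([1.0] * 100, gamma=0.9))  # ~9.9997, close to 10
```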

A policy is a function \(\pi(s)\) from states to actions (i.e. the learning system).

Goal (reformulated): Find an optimal policy, \(\pi^*(s)\), which for each state selects the action that maximizes V.

These types of optimization problems are called Markov Decision Problems (MDPs), and most methods for solving them are closely related to Dynamic Programming (DP). RL can be seen as a generalization of DP, applicable to much larger problems.
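
To illustrate the DP connection, here is a minimal value-iteration sketch for a small, fully specified MDP. The function name and the P/R dictionary layout are assumptions for illustration; in RL proper the transition probabilities and rewards are not known to the agent, which is exactly why learning by interaction is needed.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Dynamic-programming solution of a small, fully known MDP.
    P[s][a]: list of (probability, next_state) pairs; R[s][a]: expected reward.
    Returns the optimal value function V and a greedy (optimal) policy pi."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup for state s
            best = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # The optimal policy is greedy with respect to the optimal values
    pi = {s: max(actions[s],
                 key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
          for s in states}
    return V, pi
```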
