Natural Computation Methods for Machine Learning Note 02

February 4, 2020

Neural basics


A question: what is AI?

Artificial intelligence is not the art of making computers behave as they do in the movies. Neural networks are motivated by the brain, but the analogy should not be overdone. We usually regard the brain as a black box; opening the box reveals a network of simple processing units (nodes/neurons) working in parallel. The key word here is 'parallel'. According to the 100-step rule (a human reacts in a few hundred milliseconds while individual neurons take on the order of milliseconds to fire, so only about 100 sequential steps fit in), parallelism is what matters, not the speed of individual units.

An artificial neuron consists of:

  • An equation (the weighted sum S = \sum_{i=1}^{n} w_i x_i of its inputs)
  • An activation function / step function (it can be discrete or continuous)

For example, a binary neuron looks like this:

[Figure: a binary neuron]

where f is a binary activation function, x_1, x_2, \cdots, x_n are the inputs, and w_1, w_2, \cdots, w_n are the weights. The neuron computes the weighted sum S = \sum_{i=1}^{n} w_i x_i and applies f to it. A binary neuron may well have more than two inputs.

The activation function can be,

f(S)=
\begin{cases}
0 & S < 0 \\
1 & S \geq 0
\end{cases}

//TODO: figures for AND, OR, and NAND neuron node structures.

Common properties of ANNs:

  • Information is stored in the connections (as weights), not in the nodes.
  • ANNs are trained (by modifying the weights), not programmed. [Motivate the advantages of this (e.g. first lab, Volvo's car engines)]
  • Ability to generalize, i.e. to work in situations slightly different from before (without retraining).
  • Adaptivity, i.e. the ability to adapt to new circumstances (by retraining).
  • Parallelism
  • Fault tolerance

Training strategies

Two examples: Hebb's rule (if two connected nodes are active, reinforce the connection between them) and Rosenblatt's Perceptron Convergence Procedure. A sketch of the Hebbian update follows.
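A minimal sketch of Hebb's rule (the learning rate, node activities, and network size are assumptions for the example, not from the lecture):

```python
import numpy as np

# Hebb's rule: if two connected nodes are active together,
# strengthen the weight between them:
#   delta_w[i, j] = eta * x[i] * x[j]
# eta and x below are illustrative values.

eta = 0.1                          # learning rate (assumed value)
x = np.array([1.0, 0.0, 1.0])      # example node activities
W = np.zeros((3, 3))               # weight matrix

W += eta * np.outer(x, x)          # Hebbian update: co-active pairs grow
np.fill_diagonal(W, 0.0)           # no self-connections

print(W)                           # only w[0,2] and w[2,0] were reinforced
```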

Supervised learning

Learning to imitate

Examples: PCP (backpropagation), the intro lab, learning to walk by copying a teacher's gait.

[Figure: supervised learning]

Reinforcement learning

Learning by trial and error

Examples: Q-learning; playing a game (you may learn the rules from a teacher, but you learn to play well by playing over and over again).

[Figure: reinforcement learning]
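A minimal sketch of the Q-learning update, where trial and error plus a reward signal drive learning (the states, actions, and constants are illustrative assumptions, not from the lecture):

```python
from collections import defaultdict

# Q-learning update rule:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# alpha, gamma, and the tiny state/action space are assumed for illustration.

alpha, gamma = 0.1, 0.9            # learning rate and discount (assumed)
Q = defaultdict(float)             # Q-values, keyed by (state, action)
actions = ["left", "right"]

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# one illustrative transition: act, observe a reward, update the estimate
update(state=0, action="right", reward=1.0, next_state=1)
print(Q[(0, "right")])             # 0.1 after a single update
```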

Unsupervised learning (UL)

Self-organization, clustering
Examples: Hebb, recognizing similarities, topological maps

[Figure: unsupervised learning]
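A minimal sketch of self-organization through competitive learning, a simplified relative of topological maps (the data, number of units, and learning rate are illustrative assumptions):

```python
import numpy as np

# Unsupervised competitive learning: the winning unit (the one whose
# weight vector is closest to the input) moves toward each input, so
# units drift toward cluster centres without any labels.
# Data and constants are illustrative.

rng = np.random.default_rng(0)
data = rng.random((100, 2))        # unlabeled 2-D points
units = rng.random((3, 2))         # 3 cluster units with 2-D weight vectors
eta = 0.05                         # learning rate (assumed value)

for x in data:
    winner = np.argmin(np.linalg.norm(units - x, axis=1))
    units[winner] += eta * (x - units[winner])   # move winner toward x

print(units)
```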

Connection strategies (architectures)

Feedforward networks (focus of this course)

Description, information flow

Applications: Classification, function approximation, perception

Training: Most often supervised using some variant of backprop (overview).

Common issues: Dimensioning, weight information

//TODO need figure
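Pending the figure, here is a minimal sketch of the one-way information flow in a feedforward network (layer sizes and weights are illustrative assumptions; in practice the weights would be learned with some variant of backprop rather than drawn at random):

```python
import numpy as np

# Feedforward network with one hidden layer: information flows
# strictly from inputs to outputs, with no cycles.
# Sizes and random weights are illustrative only.

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # input(3) -> hidden(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # hidden(4) -> output(2)

def forward(x):
    h = np.tanh(W1 @ x + b1)       # hidden layer activations
    return W2 @ h + b2             # output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```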

Recurrent networks

Layered networks with recurrent connections between layers.

Short-term memory; also used for sequential problems.

LSTMs (Long Short-Term Memory), commonly used now, are also recurrent, but not layered in quite the same way.

Applications: Recognizing/generating sequences of patterns. Linguistics.

Training: Supervised
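A minimal sketch of how a recurrent connection gives the network short-term memory over a sequence (sizes and weights are illustrative assumptions):

```python
import numpy as np

# Simple recurrent layer: the hidden state h at each time step depends
# on the current input AND on the previous hidden state, so earlier
# inputs influence later outputs. Sizes and weights are illustrative.

rng = np.random.default_rng(0)
W_xh = rng.standard_normal((4, 3)) * 0.5   # input -> hidden
W_hh = rng.standard_normal((4, 4)) * 0.5   # hidden -> hidden (recurrent)

h = np.zeros(4)                            # initial hidden state
sequence = rng.standard_normal((5, 3))     # 5 time steps, 3 features each

for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h)     # h carries short-term memory

print(h)
```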

Fully interconnected recurrent networks

Description, information flow

Applications: associative memories, combinatorial optimization problems

Training: Often some version of Hebb's rule

Common issues: Convergence, capacity.
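A minimal sketch of a fully interconnected recurrent network used as an associative memory, with Hebbian weights (a Hopfield-style network; the stored pattern and the corrupted probe are illustrative assumptions):

```python
import numpy as np

# Fully interconnected recurrent network as an associative memory.
# Weights come from a Hebbian rule; the pattern below is illustrative.

pattern = np.array([1, -1, 1, -1, 1])       # pattern to memorize (+/-1)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                    # Hebbian weights, no self-loops

state = np.array([1, 1, 1, -1, 1])          # corrupted version of pattern
for _ in range(5):                          # iterate the dynamics
    for i in range(len(state)):             # asynchronous node updates
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)                                # recovers the stored pattern
```

Starting from the corrupted probe, the updates settle back onto the stored pattern, which is the associative-memory behaviour; whether and how fast this settling happens, and how many patterns can be stored, are exactly the convergence and capacity issues noted above.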


Question: Why neural networks?

Why not use statistics or some rule-based expert system?

  • An ANN is a statistical method! (It is not "model free", though, as is sometimes said.)
  • Currently, neural networks outperform other methods for many applications, but they have also been used for a long time for other reasons:
    • Speed (at least if implemented in hardware)
    • Economic reasons: projects, interviewing experts, etc. (Example: NETtalk took three months vs. several years for DECtalk.) Prototyping.