2007 02 01b janecek perceptron

The Simple Perceptron

Artificial Neural Network
● Information processing architecture loosely modelled on the brain
● Consists of a large number of interconnected processing units (neurons)
● Works in parallel to accomplish a global task
● Generally used to model relationships between inputs and outputs, or to find patterns in data

Artificial Neural Network
● Types of layers

Single Processing Unit

Activation Functions
● Function which takes the total input and produces an output for the node, given some threshold

Network Structure
● Two main network structures: the Feed-Forward Network and the Recurrent Network

Learning Paradigms
● Supervised Learning: given training data consisting of pairs of inputs and outputs, find a function which correctly matches them
● Unsupervised Learning: given a data set, the network finds patterns and categorizes the data into groups
● Reinforcement Learning: no data are given; the agent interacts with the environment, calculating the cost of its actions

Simple Perceptron
● The perceptron is a single-layer feed-forward neural network
● Simplest output function
● Used to classify patterns said to be linearly separable
● The weights determine the slope of the decision line; the weight vector is perpendicular to the plane

Perceptron Learning Algorithm
● We want to train the perceptron to classify inputs correctly
● This is accomplished by adjusting the weights
● The perceptron can only properly handle linearly separable sets
● We have a "training set", which is a set of input vectors used to train the perceptron
● During training both the weights wi and the threshold are adjusted

Learning Example
● Learning rate η = 0.2; the threshold is folded in as weight w0 with a constant bias input x0 = 1
● The input (x1, x2) = (−2, 1) is misclassified, so the weights are updated:
  w0 ← w0 + 0.2·1
  w1 ← w1 + 0.2·(−2)
  w2 ← w2 + 0.2·1
● The input (x1, x2) = (1.5, −0.5) is misclassified, so the weights are updated:
  w0 ← w0 + 0.2·1
  w1 ← w1 + 0.2·1.5
  w2 ← w2 + 0.2·(−0.5)
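The updates above are instances of the perceptron learning rule wi ← wi + η·d·xi, applied whenever an input is misclassified. Below is a minimal Python sketch of that rule; it is not code from the slides, and the step activation, the ±1 targets, and the concrete sample values (in particular the initial weights) are illustrative assumptions.

```python
# Minimal perceptron learning-rule sketch (illustrative, not from the slides).
# Assumptions: step activation, threshold folded into w0 via a constant
# bias input x0 = 1, targets d in {+1, -1}, learning rate eta = 0.2.

def predict(w, x):
    # x includes the bias input as x[0] = 1
    total = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if total > 0 else -1

def train(w, samples, eta=0.2, max_epochs=100):
    """samples: list of ((x1, x2), d) pairs; returns the trained weights."""
    w = list(w)
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), d in samples:
            x = (1.0, x1, x2)               # prepend the bias input
            if predict(w, x) != d:          # misclassified -> update
                for i in range(len(w)):
                    w[i] += eta * d * x[i]  # w_i <- w_i + eta * d * x_i
                errors += 1
        if errors == 0:                     # converged: everything classified correctly
            break
    return w

if __name__ == "__main__":
    # Inputs mirroring the slide's example; initial weights are made up.
    samples = [((-2.0, 1.0), 1), ((1.5, -0.5), 1)]
    print(train([0.2, 1.0, 1.0], samples))
```

Folding the threshold into the weight vector as w0, with a constant input x0 = 1, is what lets the w0 update in the example take the same form as the updates for w1 and w2.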
Perceptron Convergence Theorem
● The theorem states that for any data set which is linearly separable, the perceptron learning rule is guaranteed to find a solution in a finite number of iterations
● Idea behind the proof: find upper and lower bounds on the length of the weight vector to show that only a finite number of iterations is possible

Perceptron Convergence Theorem
● Assume that the input variables come from two linearly separable classes C1 and C2
● Let T1 and T2 be the subsets of training vectors which belong to the classes C1 and C2 respectively; then T1 ∪ T2 is the complete training set

Perceptron Convergence Theorem
● As we have seen, the learning algorithm's purpose is to find a weight vector w such that
  w·x > 0  ∀ x ∈ C1
  w·x ≤ 0  ∀ x ∈ C2    (x is an input vector)
● If the kth member of the training set, x(k), is correctly classified by the weight vector w(k) computed at the kth iteration of the algorithm, then we do not adjust the weight vector
● However, if it is incorrectly classified, we use the modifier
  w(k+1) = w(k) + η·d(k)·x(k)

Perceptron Convergence Theorem
● So we get
  w(k+1) = w(k) − η·x(k)  if  w(k)·x(k) > 0  and  x(k) ∈ C2
  w(k+1) = w(k) + η·x(k)  if  w(k)·x(k) ≤ 0  and  x(k) ∈ C1
● We can set η = 1, since any other positive value of η just scales the vectors
● We can also set the initial condition w(0) = 0, as any non-zero value will still converge, only decreasing or increasing the number of iterations

Perceptron Convergence Theorem
● Suppose that w(k)·x(k) < 0 for k = 1, 2, …, where x(k) ∈ T1, so every step is an incorrect classification and we get
  w(k+1) = w(k) + x(k),   x(k) ∈ C1
● Expanding iteratively:
  w(k+1) = x(k) + w(k)
         = x(k) + x(k−1) + w(k−1)
         = x(k) + … + x(1) + w(0)

Perceptron Convergence Theorem
● As we assume linear separability, there exists a solution w* with w*·x(k) > 0 for x(1), …, x(k) ∈ T1
● Multiplying both sides by the solution w* gives
  w*·w(k+1) = w*·x(1) + … + w*·x(k)
● These terms are all > 0, hence all ≥ α, where α = min w*·x(k) over x(k) ∈ T1
● Thus we get the lower bound
  w*·w(k+1) ≥ kα

Perceptron Convergence Theorem
● Now we make use of the Cauchy–Schwarz inequality, which states that for any two vectors A, B
  ‖A‖² ‖B‖² ≥ (A·B)²
● Applying this we get
  ‖w*‖² ‖w(k+1)‖² ≥ (w*·w(k+1))²
● From the previous slide we know w*·w(k+1) ≥ kα, so it follows that
  ‖w(k+1)‖² ≥ k²α² / ‖w*‖²

Perceptron Convergence Theorem
● We continue the proof by going down another route:
  w(j+1) = w(j) + x(j)   for j = 1, …, k with x(j) ∈ T1
● Squaring the Euclidean norm on both sides:
  ‖w(j+1)‖² = ‖w(j) + x(j)‖² = ‖w(j)‖² + ‖x(j)‖² + 2 w(j)·x(j)
● Since x(j) was incorrectly classified, w(j)·x(j) < 0, and thus
  ‖w(j+1)‖² − ‖w(j)‖² ≤ ‖x(j)‖²

Perceptron Convergence Theorem
● Summing both sides over all j:
  ‖w(j+1)‖² − ‖w(j)‖² ≤ ‖x(j)‖²
  ‖w(j)‖² − ‖w(j−1)‖² ≤ ‖x(j−1)‖²
  …
  ‖w(1)‖² − ‖w(0)‖² ≤ ‖x(1)‖²
● The sum telescopes, so with w(0) = 0 we get the upper bound
  ‖w(k+1)‖² ≤ ∑_{j=1}^{k} ‖x(j)‖² ≤ kβ,   where β = max ‖x(j)‖² over x(j) ∈ T1

Perceptron Convergence Theorem
● But now, for sufficiently large values of k, the two bounds conflict:
  ‖w(k+1)‖² ≥ k²α² / ‖w*‖²
  ‖w(k+1)‖² ≤ kβ
● So k cannot be larger than some value kmax for which both are satisfied with equality:
  kmax·β = kmax²·α² / ‖w*‖²   ⇒   kmax = β‖w*‖² / α²

Perceptron Convergence Theorem
● Thus it is proved that for η(k) = 1 ∀ k and w(0) = 0, given that a solution vector w* exists, the perceptron learning rule will terminate after at most kmax iterations

The End
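As a concrete sanity check of the bound kmax = β‖w*‖²/α², here is a short numerical sketch (not part of the deck): it builds a made-up linearly separable 2-D set, computes α, β and kmax from a chosen separating vector w*, then runs the perceptron with η = 1 and w(0) = 0 and confirms that the number of weight updates stays within kmax. Negating the class-C2 vectors so that every training vector should satisfy w*·z > 0 is an assumed convenience that lets the single-class argument of the proof cover both classes.

```python
# Empirically checking kmax = beta * ||w*||^2 / alpha^2 on a toy data set.
# Illustrative only: the data, the reference solution w*, and the trick of
# negating class-C2 vectors (so every z should satisfy w*.z > 0) are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data separated by the line x1 + x2 = 0 with a margin.
n = 50
pts = rng.uniform(-1.0, 1.0, size=(n, 2))
pts = pts[np.abs(pts.sum(axis=1)) > 0.2]          # enforce a margin
labels = np.where(pts.sum(axis=1) > 0, 1, -1)

# Augment with the bias input x0 = 1 and fold the labels in: z = d * x.
X = np.hstack([np.ones((len(pts), 1)), pts])
Z = labels[:, None] * X                            # every z satisfies w*.z > 0

w_star = np.array([0.0, 1.0, 1.0])                 # a known separating vector
alpha = np.min(Z @ w_star)                         # minimum margin w*.z
beta = np.max(np.sum(Z * Z, axis=1))               # maximum squared norm ||z||^2
k_max = beta * np.dot(w_star, w_star) / alpha**2

# Perceptron with eta = 1 and w(0) = 0, counting weight updates (mistakes).
w = np.zeros(3)
updates = 0
changed = True
while changed:
    changed = False
    for z in Z:
        if np.dot(w, z) <= 0:                      # misclassified -> update
            w += z
            updates += 1
            changed = True

print(f"updates = {updates}, bound k_max = {k_max:.1f}")
assert updates <= k_max
```

The observed number of updates is usually far below the bound; the theorem only guarantees that the count is finite, not that kmax is tight.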
