
Ch8: Learning from Observations


Learning from Observations

Outline
• Learning agents
• Inductive learning
• Decision tree learning

Learning?
"Learning is making useful changes in our minds." (Marvin Minsky)
"Learning is constructing or modifying representations of what is being experienced." (Ryszard Michalski)
"Learning denotes changes in a system that enable the system to do the same task more efficiently the next time." (Herbert Simon, 1916-2001)

Learning
• Learning is essential for unknown environments,
– i.e., when the designer lacks omniscience
• Learning is useful as a system construction method,
– i.e., expose the agent to reality rather than trying to write it down
• Learning modifies the agent's decision mechanisms to improve performance

Why machine learning?
• Understand and improve the efficiency of human learning
– e.g., improve methods for teaching and tutoring people, as done in computer-aided instruction (CAI)
• Discover new things or structure that is unknown to humans
– Data mining
• Fill in skeletal or incomplete specifications about a domain
– Large, complex AI systems cannot be completely derived by hand and require dynamic updating to incorporate new information
– Learning new characteristics expands the domain of expertise and lessens the "brittleness" of the system

Components of an Agent (without learning)
• Sensors and effectors connect the agent to its environment
• A model of the world (being updated), plus prior knowledge about the world
• Reasoning and decision making select among a list of possible actions, guided by goals/utility

Learning agents
(figure omitted: the general learning-agent architecture)

Components of a Learning Agent
• Sensors and effectors connect the learning agent to its environment
• Performance element: selects external actions
• Learning element: makes improvements, using feedback from the critic
• Critic: tells the learning element how well the agent is doing
• Problem generator: suggests actions that lead to new, informative experiences

Decision trees
• One possible representation for hypotheses
• E.g., the "true" tree for deciding whether to wait for a table (figure omitted)

Expressiveness
• Decision trees can express any function of the input attributes
• E.g., for Boolean functions, truth table row → path to leaf
• Trivially, there is a consistent decision tree for any training set, with one path to a leaf for each example (unless f is nondeterministic in x), but it probably won't generalize to new examples
• Prefer to find more compact decision trees

Decision tree
• We can always come up with some decision tree for a data set:
– Pick any feature not used yet, branch on its values, and continue
– However, starting with a random feature may lead to a large, unmotivated tree
• In general, we prefer short trees over larger ones
– Why? Intuitively, a simple (consistent) hypothesis is more likely to be true

Hypothesis spaces
• How many distinct decision trees with n Boolean attributes?
= number of Boolean functions
= number of distinct truth tables with 2^n rows
= 2^(2^n)
– E.g., with 6 Boolean attributes, there are 2^64 = 18,446,744,073,709,551,616 trees
• How many purely conjunctive hypotheses (e.g., Hungry ∧ ¬Rain)?
– Each attribute can be in (positive), in (negative), or out ⇒ 3^n distinct conjunctive hypotheses
• A more expressive hypothesis space
– increases the chance that the target function can be expressed
– increases the number of hypotheses consistent with the training set ⇒ may get worse predictions

Decision tree learning
• Aim: find a small tree consistent with the training examples
• Idea: (recursively) choose the "most significant" attribute as the root of each (sub)tree, as sketched in code below

Choosing an attribute
• Idea: a good attribute splits the examples into subsets that are (ideally) "all positive" or "all negative"
• In the restaurant example, Patrons? is a better choice than, e.g., Type?
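To make the recursive idea concrete, here is a minimal Python sketch of the scheme. The data layout (examples as dicts of attribute values with a parallel list of Boolean labels) and the pluggable choose_attribute heuristic are illustrative assumptions, not something specified on the slides.

```python
from collections import Counter

def plurality_value(labels):
    """Most common label among the examples (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def dtl(examples, labels, attributes, choose_attribute):
    """Recursively build a decision tree (minimal sketch).

    examples:  list of dicts mapping attribute name -> value
    labels:    parallel list of Boolean classifications
    attributes: attribute names still available to split on
    choose_attribute: heuristic picking the "most significant" attribute
    """
    if len(set(labels)) == 1:        # all positive or all negative: a leaf
        return labels[0]
    if not attributes:               # no attributes left: take a majority vote
        return plurality_value(labels)
    best = choose_attribute(examples, labels, attributes)
    remaining = [a for a in attributes if a != best]
    tree = {best: {}}
    for value in {e[best] for e in examples}:   # one subtree per observed value
        subset = [(e, l) for e, l in zip(examples, labels) if e[best] == value]
        sub_examples = [e for e, _ in subset]
        sub_labels = [l for _, l in subset]
        tree[best][value] = dtl(sub_examples, sub_labels, remaining, choose_attribute)
    return tree
```

A choose_attribute heuristic based on information gain is exactly what the next sections develop.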
Choosing an attribute
• Finding the smallest decision tree turns out to be intractable
• However, there are simple heuristics that do a good job of finding small trees
• The basic question is: which attribute do we split on next?
• Idea: use information theory
– Define a statistical property, called information gain, to measure how good a feature is at separating the data according to the target

Information theory: entropy
• Information content (entropy):
– Suppose A is a random variable with possible values a1, …, an. Then
  Entropy(A) = I(P(a1), …, P(an)) = Σi −P(ai) log2 P(ai)
  where ai is a possible value of A and P(ai) is the probability that A = ai
• For a training set containing p positive examples and n negative examples:
  Entropy(S) = I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2 (p/(p+n)) − (n/(p+n)) log2 (n/(p+n))
– E.g., for the 12 restaurant examples, with p = 6 (wait) and n = 6 (no wait):
  Entropy(S) = I(6/12, 6/12) = −(6/12) log2 (6/12) − (6/12) log2 (6/12) = 1 bit

Information gain
• A chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values
• Let Ei have pi positive and ni negative examples
⇒ I(pi/(pi+ni), ni/(pi+ni)) bits are needed to classify a new example in branch i
⇒ the expected number of bits per example over all branches is
  remainder(A) = Σi=1..v ((pi+ni)/(p+n)) · I(pi/(pi+ni), ni/(pi+ni))
• Information gain (IG), the reduction in entropy from the attribute test:
  IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
• Choose the attribute with the largest IG

Information gain (example)
• For the training set, p = n = 6, so I(6/12, 6/12) = 1 bit
• Consider the attributes Patrons and Type (and others too):
  IG(Patrons) = 1 − [ (2/12) I(0,1) + (4/12) I(1,0) + (6/12) I(2/6, 4/6) ] ≈ 0.541 bits
  IG(Type) = 1 − [ (2/12) I(1/2,1/2) + (2/12) I(1/2,1/2) + (4/12) I(2/4,2/4) + (4/12) I(2/4,2/4) ] = 0 bits
• Patrons has the highest IG of all attributes and so is chosen by the DTL algorithm as the root

Example contd.
• Decision tree learned from the 12 examples (figure omitted)
• Substantially simpler than the "true" tree: a more complex hypothesis isn't justified by the small amount of data

Performance measurement
• How do we know that h ≈ f?
– Use theorems of computational/statistical learning theory
– Try h on a new test set of examples (drawn from the same distribution over the example space as the training set)
• Learning curve = % correct on the test set as a function of training set size

Summary
• Learning is needed for unknown environments (and lazy designers)
• Learning agent = performance element + learning element
• For supervised learning, the aim is to find a simple hypothesis approximately consistent with the training examples
• Decision tree learning using information gain
• Learning performance = prediction accuracy measured on the test set
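As a check on the numbers above, the entropy and information-gain formulas can be computed directly. This is a small Python sketch; the function names are mine, and the (pi, ni) branch counts for Patrons and Type are written out from the restaurant example.

```python
import math

def I(*probs):
    """Entropy of a discrete distribution, in bits; 0*log(0) is treated as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def remainder(branches, total):
    """branches: list of (p_i, n_i) positive/negative counts per attribute value."""
    return sum((p + n) / total * I(p / (p + n), n / (p + n))
               for p, n in branches)

def info_gain(branches, p, n):
    return I(p / (p + n), n / (p + n)) - remainder(branches, p + n)

# Restaurant data: p = n = 6.
patrons = [(0, 2), (4, 0), (2, 4)]          # None, Some, Full
types   = [(1, 1), (1, 1), (2, 2), (2, 2)]  # French, Italian, Thai, Burger
print(round(info_gain(patrons, 6, 6), 3))   # 0.541
print(round(info_gain(types, 6, 6), 3))     # 0.0
```

Both printed values match the slide: splitting on Type tells us nothing (gain 0), while Patrons removes about half a bit of uncertainty.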

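The learning-curve measurement from the performance section can be sketched in the same spirit. The classifier interface (train_fn, predict_fn) is a placeholder assumption; the point is the shape of the experiment: repeatedly train on m examples and score on held-out data drawn from the same distribution.

```python
import random

def learning_curve(examples, labels, train_fn, predict_fn, sizes, trials=20):
    """% correct on a held-out test set as a function of training-set size.

    sizes: training-set sizes to try; each must be < len(examples) so that
    the remaining examples can serve as the test set.
    """
    curve = []
    data = list(zip(examples, labels))
    for m in sizes:
        accuracies = []
        for _ in range(trials):
            random.shuffle(data)                  # same distribution for both splits
            train, test = data[:m], data[m:]
            h = train_fn([e for e, _ in train], [l for _, l in train])
            correct = sum(predict_fn(h, e) == l for e, l in test)
            accuracies.append(correct / len(test))
        curve.append((m, sum(accuracies) / trials))
    return curve
```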