
TTNT_Chapter9.ppt


Number of pages: 35
File size: 1.71 MB

Contents

Artificial Intelligence (Trí Tuệ Nhân Tạo)
Chapter 9: Machine Learning
Lê Quân Hà

Outline
• Machine learning introduction
  - How does machine learning work?
  - Types of machine learning
• Learning by decision trees
  - Procedure: Buildtree for building good ones
  - Entropy
  - Attribute-based representations
  - Decision tree learning algorithm
• Sub-symbolic learning
  - Perceptrons
  - Linear regression
  - Neural networks
  - NN supervised learning
  - Learning example
  - Backpropagation observations
  - Kohonen learning
  - Problems with NNs

Machine Learning Introduction
• Why is machine learning important?
  - Early AI systems were brittle; learning can improve such a system's capabilities.
  - AI systems require some form of knowledge acquisition, and learning can reduce this effort:
    - research on knowledge-based systems (KBS) clearly shows that producing a KBS is extremely time-consuming: dozens of person-years per system is the norm;
    - in some cases there is too much knowledge for humans to enter by hand (e.g., common-sense reasoning, natural language processing).
  - Some problems are not well understood but can be learned (e.g., speech recognition, visual recognition).
  - AI systems are often placed into real-world problem-solving situations, where the flexibility to learn how to solve new problem instances can be invaluable.
  - A system can improve its problem-solving accuracy (and possibly efficiency) by learning how to do something better.

How Does Machine Learning Work?
• Learning in general breaks down into one of three forms:
  - Learning something new:
    - no prior knowledge of the domain/concept, so no previous representation of that knowledge;
    - in ML, this requires adding new information to the knowledge base.
  - Learning something new about something you already knew:
    - add to the knowledge base or refine it;
    - modify the previous representation: new classes, new features, new connections between them.
  - Learning how to do something better, either more efficiently or with more accuracy:
    - a previous problem-solving instance (a case, a chain of logic) can be "chunked" into a new rule (also called memoizing);
    - previous knowledge can be modified; typically this is a parameter adjustment, such as a weight or probability in a network, indicating that something was more or less important than previously thought.

Types of Machine Learning
• There are many ways to implement ML:
  - Supervised vs. unsupervised vs. reinforcement learning: is there a "teacher" that rewards/punishes right/wrong answers?
  - Symbolic vs. subsymbolic vs. evolutionary: at what level is the representation? ("Subsymbolic" is the fancy name for neural networks; evolutionary learning is actually a subtype of symbolic learning.)
  - Knowledge acquisition vs. learning through problem solving vs. explanation-based learning vs. learning by analogy.
• We can also focus on what is being learned:
  - learning functions;
  - learning rules;
  - parameter adjustment;
  - learning classifications.
  These are not mutually exclusive; for instance, learning a classification is often done by parameter adjustment.

Learning by Decision Trees: Issues
• Constructing a good decision tree is very difficult!
• Difficulty: how can we extract a simplified decision tree?
  - This implies (among other things) establishing a preference order (bias) among alternative decision trees.
  - Finding the smallest tree proves to be VERY hard; improving over the trivial one is okay.
Procedure: Buildtree
If all of the training examples are in the same class, then quit; otherwise:
1. Choose an attribute to split the examples.
2. Create a new child node for each value of the attribute.
3. Redistribute the examples among the children according to the attribute values.
4. Apply Buildtree to each child node.

Is this a good decision tree? Maybe? How do we decide?

Entropy
Entropy measures the (im)purity of a collection S of examples:

Entropy(S) = -[ p+ log2(p+) + p- log2(p-) ]

• p+ is the proportion of positive examples.
• p- is the proportion of negative examples.

N.B. This is not a fully general definition of entropy.

Learning Decision Trees
Problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time in minutes (0-10, 10-30, 30-60, >60)

Attribute-Based Representations
• Examples are described by attribute values (Boolean, discrete, continuous).
• E.g., situations where I will/won't wait for a table.
• The classification of each example is positive (T) or negative (F).
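The Buildtree procedure and the entropy formula above can be sketched together in Python. This is a minimal illustration, not code from the slides: step 1 ("choose an attribute") is left open in the procedure, so here it is filled in with one common choice, picking the attribute whose split leaves the lowest weighted entropy (i.e., the highest information gain); the handling of exhausted attributes (majority vote) is likewise an assumption.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -[ p+ log2(p+) + p- log2(p-) ], generalized to
    any number of classes by summing -p log2(p) over each class."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def buildtree(examples, attributes):
    """Recursive Buildtree sketch.
    examples:   list of (attribute_dict, label) pairs
    attributes: list of attribute names still available for splitting
    Returns a label (leaf) or a (attribute, {value: subtree}) pair."""
    labels = [label for _, label in examples]
    # If all training examples are in the same class, quit (make a leaf).
    if len(set(labels)) == 1:
        return labels[0]
    # No attributes left but classes still mixed: majority vote
    # (an assumption; the slide's procedure leaves this case open).
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # 1. Choose an attribute: minimize the weighted entropy of the split.
    def remainder(attr):
        by_value = {}
        for attrs, label in examples:
            by_value.setdefault(attrs[attr], []).append(label)
        return sum(len(ls) / len(examples) * entropy(ls)
                   for ls in by_value.values())
    best = min(attributes, key=remainder)
    # 2.-3. Create a child per value of the chosen attribute and
    # redistribute the examples among the children.
    by_value = {}
    for attrs, label in examples:
        by_value.setdefault(attrs[best], []).append((attrs, label))
    remaining = [a for a in attributes if a != best]
    # 4. Apply Buildtree to each child node.
    return (best, {value: buildtree(subset, remaining)
                   for value, subset in by_value.items()})
```

For example, with four toy examples over hypothetical Patrons and Hungry attributes in which the label tracks Hungry exactly, `buildtree` splits on Hungry first, since that split leaves zero entropy in every child.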

Posted: 16/07/2014, 05:00
