



Learning from Observations

Chapter 18, Sections 1–3

Outline
- Learning
- Hypothesis spaces
- Learning algorithms
  - k-nearest neighbors
  - Decision trees
  - Naïve Bayes (not described well in the textbook)
- Training and testing

What is Learning?
- Memorizing something
- Learning facts through observation and exploration
- Generalizing a concept from experience

"Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the task or tasks drawn from the same population more efficiently and more effectively the next time." – Herb Simon

Why is it necessary?
Three reasons:
- Unknown environment: we need to deploy an agent in unfamiliar territory
- Save labor: we may not have the resources to encode knowledge by hand
- Can't explicitly encode knowledge: we may lack the ability to articulate the necessary knowledge

Learning agents
[Figure: a general learning agent]

Learning element
- The design of a learning element is affected by
  - which components of the performance element are to be learned
  - what feedback is available to learn these components
  - what representation is used for the components
- Types of feedback:
  - Supervised learning: correct answers for each example
  - Unsupervised learning: correct answers not given
  - Reinforcement learning: occasional rewards

Induction
- Making predictions about the future based on the past.

"If asked why we believe the sun will rise tomorrow, we shall naturally answer, 'Because it has always risen every day.' We have a firm belief that it will rise in the future, because it has risen in the past." – Bertrand Russell

- Is induction sound? Why believe that the future will look similar to the past?

Inductive learning
- Simplest form: learn a function from examples
  - f is the target function
  - An example is a pair (x, f(x))
- Problem: find a hypothesis h such that h ≈ f, given a training set of examples
- This is a highly simplified model of real learning:
  - Ignores prior knowledge
  - Assumes the examples are given

Inductive learning method
- Memorization
- Noise
  - Unreliable function
  - Unreliable sensors

Inductive learning method
- Construct/adjust h to agree with f on the training set
- (h is consistent if it agrees with f on all examples)
- E.g., curve fitting:
  [Figure: a sequence of increasingly complex curves fitted to the same data points]
- Ockham's razor: prefer the simplest hypothesis consistent with the data

An application: Ad blocking
- Width and height of the image
- Binary classification: Ad or ¬Ad
  [Scatter plot: images plotted by width and height, marked + (Ad) or − (¬Ad)]

Nearest Neighbor
- A type of instance-based learning
- Remember all of the past instances
- Use the nearest old data point as the answer
  [Scatter plot: the same width/height data; a new point gets the class of its nearest neighbor]
- Generalize to kNN: take the majority (average) class of the k closest neighbors
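To make the kNN idea above concrete, here is a minimal Python sketch of a k-nearest-neighbor classifier for the ad-blocking setting, with width and height as features. The training points and the query images below are made up for illustration; they are not data from the lecture.

# Minimal k-nearest-neighbor classifier for the ad-blocking example.
# The (width, height) training points below are illustrative only.
from collections import Counter

def knn_classify(query, examples, k=3):
    """examples: list of ((width, height), label); return the majority label of the k closest."""
    by_distance = sorted(
        examples,
        key=lambda ex: (ex[0][0] - query[0]) ** 2 + (ex[0][1] - query[1]) ** 2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical banner-ad data: wide, short images tend to be ads ("+").
training = [
    ((468, 60), "+"), ((728, 90), "+"), ((320, 50), "+"),
    ((200, 200), "-"), ((640, 480), "-"), ((800, 600), "-"),
]
print(knn_classify((700, 80), training, k=3))   # -> "+"
print(knn_classify((500, 400), training, k=3))  # -> "-"

With k = 1 this is plain nearest neighbor; larger k averages over more stored instances, which is the generalization described above.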
Application: Eating out
Problem: decide on a restaurant (whether to wait for a table), based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0–10, 10–30, 30–60, >60 minutes)

Attribute representation
- Examples are described by attribute or feature values (Boolean, discrete, continuous)
- E.g., situations where I will/won't wait for a table:
  [Table: 12 example situations and their attribute values]
- The classification of an example is positive (T) or negative (F)

Bayes Rule
P(A | B) = P(B | A) P(A) / P(B)
which is shorthand for: P(A = a | B = b) = P(B = b | A = a) P(A = a) / P(B = b), for every pair of values a and b

Naïve Bayes Classifier
- Calculate the most probable function value:
  v_MAP = argmax_{v_j} P(v_j | a_1, a_2, …, a_n)
        = argmax_{v_j} P(a_1, a_2, …, a_n | v_j) P(v_j) / P(a_1, a_2, …, a_n)
        = argmax_{v_j} P(a_1, a_2, …, a_n | v_j) P(v_j)
- Naïve assumption (attributes are conditionally independent given the class):
  P(a_1, a_2, …, a_n | v_j) = P(a_1 | v_j) P(a_2 | v_j) ⋯ P(a_n | v_j)

Naïve Bayes Algorithm
NaïveBayesLearn(examples)
  for each target value v_j
    P'(v_j) ← estimate P(v_j)
    for each value a_i of each attribute a
      P'(a_i | v_j) ← estimate P(a_i | v_j)

ClassifyingNewInstance(x)
  v_NB = argmax_{v_j ∈ V} P'(v_j) Π_{a_i ∈ x} P'(a_i | v_j)

An Example (due to MIT's OpenCourseWare slides)
Training data (features f1 … f4, class y):

  f1 f2 f3 f4 | y
   0  1  1  0 | 1
   0  0  1  1 | 1
   1  0  1  0 | 1
   0  0  1  1 | 1
   0  0  0  0 | 1
   1  0  0  1 | 0
   1  1  0  1 | 0
   1  0  0  0 | 0
   1  1  0  1 | 0
   1  0  1  1 | 0

R_i(a, v) is the fraction of examples of class v that have feature i equal to a. For feature 1:
- R1(1,1) = 1/5: fraction of all positive examples that have feature 1 = 1
- R1(0,1) = 4/5: fraction of all positive examples that have feature 1 = 0
- R1(1,0) = 5/5: fraction of all negative examples that have feature 1 = 1
- R1(0,0) = 0/5: fraction of all negative examples that have feature 1 = 0
Continuing the calculation for R2, R3 and R4:

       (1,1)  (0,1)  (1,0)  (0,0)
  R1 =  1/5,   4/5,   5/5,   0/5
  R2 =  1/5,   4/5,   2/5,   3/5
  R3 =  4/5,   1/5,   1/5,   4/5
  R4 =  2/5,   3/5,   4/5,   1/5

New x = (0, 0, 1, 1):
  S(1) = R1(0,1) R2(0,1) R3(1,1) R4(1,1) = 0.205
  S(0) = R1(0,0) R2(0,0) R3(1,0) R4(1,0) = 0
S(1) > S(0), so predict v = 1.
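The following Python sketch mirrors the NaïveBayesLearn / ClassifyingNewInstance pseudocode and the worked example above. The function names are mine, and the probability estimates are plain relative frequencies with no smoothing, as in the slides.

# Minimal naive Bayes for discrete features, mirroring the pseudocode above.
# Probabilities are maximum-likelihood estimates (no smoothing), as in the slides.
from collections import defaultdict

def naive_bayes_learn(examples):
    """examples: list of (feature_tuple, label). Returns (priors, conditionals)."""
    priors = defaultdict(float)
    cond = defaultdict(float)           # cond[(i, value, label)] = P(f_i = value | label)
    counts = defaultdict(int)
    for features, label in examples:
        priors[label] += 1
        for i, value in enumerate(features):
            counts[(i, value, label)] += 1
    for (i, value, label), c in counts.items():
        cond[(i, value, label)] = c / priors[label]   # priors[label] is still a count here
    total = len(examples)
    for label in priors:
        priors[label] /= total
    return priors, cond

def naive_bayes_classify(x, priors, cond):
    """Return the label maximizing P'(v) * product of P'(a_i | v), plus all scores."""
    scores = {}
    for label, prior in priors.items():
        score = prior
        for i, value in enumerate(x):
            score *= cond[(i, value, label)]
        scores[label] = score
    return max(scores, key=scores.get), scores

# The 10 training rows (f1..f4, y) from the worked example above.
data = [((0,1,1,0), 1), ((0,0,1,1), 1), ((1,0,1,0), 1), ((0,0,1,1), 1), ((0,0,0,0), 1),
        ((1,0,0,1), 0), ((1,1,0,1), 0), ((1,0,0,0), 0), ((1,1,0,1), 0), ((1,0,1,1), 0)]
priors, cond = naive_bayes_learn(data)
label, scores = naive_bayes_classify((0, 0, 1, 1), priors, cond)
print(label, scores)   # predicts 1; scores[1] / priors[1] equals S(1) ~ 0.205, scores[0] == 0

Because the training set has 5 positive and 5 negative examples, the priors are equal and the prediction matches the S(1) versus S(0) comparison in the slide.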
Decision trees
- Developed independently in statistics and in AI
- E.g., here is the "true" tree for deciding whether to wait:
  [Figure: the "true" decision tree for the restaurant domain]

Expressiveness
- Decision trees can express any function of the input attributes
- E.g., for Boolean functions: truth table row → path to leaf
- Trivially, there is a consistent decision tree for any training set, with one path to a leaf for each example (unless f is nondeterministic in x), but it probably won't generalize to new examples
- Prefer to find more compact decision trees

Hypothesis spaces
- How many distinct decision trees are there with n Boolean attributes?
  = number of Boolean functions of n inputs
  = number of distinct truth tables with 2^n rows
  = 2^(2^n)
- E.g., with 6 Boolean attributes there are 2^64 = 18,446,744,073,709,551,616 trees
- How many purely conjunctive hypotheses (e.g., Hungry ∧ ¬Rain)?
  - Each attribute can be in (positive), in (negative), or out
  ⇒ 3^n distinct conjunctive hypotheses
- A more expressive hypothesis space
  - increases the chance that the target function can be expressed
  - increases the number of hypotheses consistent with the training set
  ⇒ may give worse predictions

The best hypothesis
- Find the best function that models the given data
- How do we define the best function?
  - Fidelity to the data: the error on the existing data, E(h, D)
  - Simplicity: how complicated the solution is, C(h)
    - One measure: how many possible hypotheses there are for the class
- There is an inevitable tradeoff between the complexity of the hypothesis and the degree of fit to the data
- Minimize α E(h, D) + (1 − α) C(h), where α is a tuning parameter

Decision tree learning
- Aim: find a small tree consistent with the training examples
- Idea: (recursively) choose the "most significant" attribute as the root of each (sub)tree

Choosing an attribute
- Idea: a good attribute splits the training set into subsets that are (ideally) "all positive" or "all negative"
  [Figure: the 12 restaurant examples split on Patrons versus split on Type]
- Patrons is a better choice

Information Content
- Entropy measures the purity of a set of examples
  - Normally denoted H(x)
  - Or viewed as information content: the less you need to know (to determine the class of a new case), the more information you already have
- With two classes (P, N) and t (total) = p + n:
  IC(S) = −(p/t) log2(p/t) − (n/t) log2(n/t)
- E.g., p = 9, n = 5:
  IC(9,5) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940
- Also, IC(14,0) = 0 and IC(7,7) = 1

Entropy curve
- For p/t between 0 and 1, the two-class entropy is
  - 0 when p/(p+n) is 0 or 1
  - 1 when p/(p+n) is 0.5
  - monotonically increasing between 0 and 0.5
  - monotonically decreasing between 0.5 and 1
  [Plot: two-class entropy as a function of p/(p+n)]
- When the data is pure, essentially no bits need to be sent to identify the class

Using information theory
- To implement ChooseAttribute in the DTL algorithm
- Entropy: I(P(v_1), …, P(v_n)) = −Σ_{i=1..n} P(v_i) log2 P(v_i)
- For a training set containing p positive examples and n negative examples:
  I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))

Information gain
- A chosen attribute A with v distinct values divides the training set E into subsets E_1, …, E_v according to their values for A
- Information gain (IG) is the reduction in entropy from the attribute test:
  remainder(A) = Σ_{i=1..v} (p_i + n_i)/(p + n) · I(p_i/(p_i+n_i), n_i/(p_i+n_i))
  IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
- Choose the attribute with the largest IG

Information gain (contd.)
- For the restaurant training set, p = n = 6, so I(6/12, 6/12) = 1 bit
- Consider the attributes Patrons and Type (and the others too)
- Patrons has the highest IG of all the attributes, and so is chosen by the DTL algorithm as the root
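As a check on the entropy and information gain formulas above, here is a small Python sketch. The per-value positive/negative counts used for Patrons and Type are the standard counts for the AIMA restaurant example; they are assumed here, since the full example table is not reproduced in these notes.

# Entropy and information gain for a two-class split, following the formulas above.
from math import log2

def I(p, n):
    """Entropy (information content), in bits, of a set with p positives and n negatives."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:                       # 0 * log2(0) is taken to be 0
            result -= (count / total) * log2(count / total)
    return result

def gain(p, n, subsets):
    """subsets: list of (p_i, n_i) pairs, one per attribute value."""
    remainder = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in subsets)
    return I(p, n) - remainder

# Restaurant training set: p = n = 6.  The per-value counts below are the usual
# figures for this example (assumed here, since the table is not reproduced above).
print(I(6, 6))                                        # 1.0 bit
print(gain(6, 6, [(0, 2), (4, 0), (2, 4)]))           # Patrons: ~0.541 bits
print(gain(6, 6, [(1, 1), (1, 1), (2, 2), (2, 2)]))   # Type: 0.0 bits

Under these counts, Patrons gains about 0.541 bits while Type gains nothing, which is why the DTL algorithm picks Patrons as the root.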
Example (contd.)
- Decision tree learned from the 12 examples:
  [Figure: the learned decision tree]
- Substantially simpler than the "true" tree: a more complex hypothesis isn't justified by the small amount of data

Performance measurement
- How do we know that h ≈ f?
- Try h on a new test set of examples
- Learning curve = % correct on the test set as a function of training set size

Training and testing sets
- Where does the test set come from?
  1. Collect a large set of examples
  2. Divide it into training and testing data
  3. Train on the training data, assess on the testing data
  4. Repeat 1–3 for different splits of the set
- Same distribution: "Learning … enables the system to do the task or tasks drawn from the same population" – Herb Simon
- To think about: why?

Overfitting
- Does better training performance mean better test performance? No. Why?
  1. The hypothesis is too specific
  2. It models noise
  [Plot: precision (%) versus decision tree size; training performance keeps improving while test performance levels off]

Pruning
- Keep the complexity of the hypothesis low
- Stop splitting when:
  1. the IC is below a threshold
  2. there are too few data points in a node

Summary
- Learning is needed for unknown environments and for lazy designers
- Learning agent = performance element + learning element
- For supervised learning, the aim is to find a simple hypothesis approximately consistent with the training examples
- Decision tree learning using information gain
- Learning performance = prediction accuracy measured on a test set
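To make the train-and-test protocol from the "Training and testing sets" slide concrete, here is a minimal Python sketch that repeats random splits of a toy dataset and measures test accuracy with a 1-nearest-neighbor rule. The data and the choice of classifier are placeholders, not part of the lecture.

# Repeated random train/test splits, as in steps 1-4 of "Training and testing sets".
# The dataset and the 1-nearest-neighbor rule are placeholders for any learner.
import random

def one_nn(query, training):
    """Predict the label of the closest training point (1-NN)."""
    closest = min(training, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    return closest[1]

def accuracy(train, test):
    return sum(one_nn(x, train) == y for x, y in test) / len(test)

# Toy 2-D data: train and test are drawn from the same distribution (two noisy clusters).
random.seed(0)
data = [((random.gauss(c, 1.0), random.gauss(c, 1.0)), label)
        for label, c in ((0, 0.0), (1, 3.0)) for _ in range(50)]

scores = []
for split in range(5):                       # repeat for different splits of the set
    random.shuffle(data)
    train, test = data[:70], data[70:]       # 70/30 split
    scores.append(accuracy(train, test))
print([round(s, 2) for s in scores], "mean:", round(sum(scores) / len(scores), 2))

Averaging over several splits gives a more stable estimate of test accuracy than a single split, which is the point of step 4 in the protocol.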