Learning from Observations in Artificial Intelligence

18 LEARNING FROM OBSERVATIONS

In which we describe agents that can improve their behavior through diligent study of their own experiences.

The idea behind learning is that percepts should be used not only for acting, but also for improving the agent's ability to act in the future. Learning takes place as the agent observes its interactions with the world and its own decision-making processes. Learning can range from trivial memorization of experience, as exhibited by the wumpus-world agent in Chapter 10, to the creation of entire scientific theories, as exhibited by Albert Einstein. This chapter describes inductive learning from observations. In particular, we describe how to learn simple theories in propositional logic. We also give a theoretical analysis that explains why inductive learning works.

In Chapter 2, we saw that a learning agent can be thought of as containing a performance element that decides what actions to take and a learning element that modifies the performance element so that it makes better decisions. (See Figure 2.15.) Machine learning researchers have come up with a large variety of learning elements. To understand them, it will help to see how their design is affected by the context in which they will operate. The design of a learning element is affected by three major issues:

1. Which components of the performance element are to be learned.
2. What feedback is available to learn these components.
3. What representation is used for the components.

We now analyze each of these issues in turn. We have seen that there are many ways to build the performance element of an agent. Chapter 2 described several agent designs (Figures 2.9, 2.11, 2.13, and 2.14). The components of these agents include the following:

1. A direct mapping from conditions on the current state to actions.
2. A means to infer relevant properties of the world from the percept sequence.
3.
Information about the way the world evolves and about the results of possible actions the agent can take.
4. Utility information indicating the desirability of world states.
5. Action-value information indicating the desirability of actions.
6. Goals that describe classes of states whose achievement maximizes the agent's utility.

Each of these components can be learned from appropriate feedback. Consider, for example, an agent training to become a taxi driver. Every time the instructor shouts "Brake," the agent can learn a condition-action rule for when to brake (component 1). By seeing many camera images that it is told contain buses, it can learn to recognize them (2). By trying actions and observing the results (for example, braking hard on a wet road), it can learn the effects of its actions (3). Then, when it receives no tip from passengers who have been thoroughly shaken up during the trip, it can learn a useful component of its overall utility function (4).

The type of feedback available for learning is usually the most important factor in determining the nature of the learning problem that the agent faces. The field of machine learning usually distinguishes three cases: supervised, unsupervised, and reinforcement learning.

The problem of supervised learning involves learning a function from examples of its inputs and outputs. Cases (1), (2), and (3) are all instances of supervised learning problems. In (1), the agent learns a condition-action rule for braking; this is a function from states to a Boolean output (to brake or not to brake). In (2), the agent learns a function from images to a Boolean output (whether the image contains a bus). In (3), the theory of braking is a function from states and braking actions to, say, stopping distance in feet. Notice that in cases (1) and (2), a teacher provided the correct output value of the examples; in the third, the output value was available directly from the agent's percepts.
For fully observable environments, it will always be the case that an agent can observe the effects of its actions and hence can use supervised learning methods to learn to predict them. For partially observable environments, the problem is more difficult, because the immediate effects might be invisible.

The problem of unsupervised learning involves learning patterns in the input when no specific output values are supplied. For example, a taxi agent might gradually develop a concept of "good traffic days" and "bad traffic days" without ever being given labelled examples of each. A purely unsupervised learning agent cannot learn what to do, because it has no information as to what constitutes a correct action or a desirable state. We will study unsupervised learning primarily in the context of probabilistic reasoning systems (Chapter 20).

The problem of reinforcement learning, which we cover in Chapter 21, is the most general of the three categories. Rather than being told what to do by a teacher, a reinforcement learning agent must learn from reinforcement. (The term reward as used in Chapter 17 is a synonym for reinforcement.) For example, the lack of a tip at the end of the journey (or a hefty bill for rear-ending the car in front) gives the agent some indication that its behavior is undesirable. Reinforcement learning typically includes the subproblem of learning how the environment works.

The representation of the learned information also plays a very important role in determining how the learning algorithm must work. Any of the components of an agent can be represented using any of the representation schemes in this book. We have seen several examples: linear weighted polynomials for utility functions in game-playing programs; propositional and first-order logical sentences for all of the components in a logical agent; and probabilistic descriptions such as Bayesian networks for the inferential components of a decision-theoretic agent. Effective learning algorithms have been devised for all of these. This chapter will cover methods for propositional logic, Chapter 19 describes methods for first-order logic, and Chapter 20 covers methods for Bayesian networks and for neural networks (which include linear polynomials as a special case).

The last major factor in the design of learning systems is the availability of prior knowledge. The majority of learning research in AI, computer science, and psychology has studied the case in which the agent begins with no knowledge at all about what it is trying to learn. It has access only to the examples presented by its experience. Although this is an important special case, it is by no means the general case. Most human learning takes place in the context of a good deal of background knowledge. Some psychologists and linguists claim that even newborn babies exhibit knowledge of the world. Whatever the truth of this claim, there is no doubt that prior knowledge can help enormously in learning. A physicist examining a stack of bubble-chamber photographs might be able to induce a theory positing the existence of a new particle of a certain mass and charge; but an art critic examining the same stack might learn nothing more than that the "artist" must be some sort of abstract expressionist. Chapter 19 shows several ways in which learning is helped by the use of existing knowledge; it also shows how knowledge can be compiled in order to speed up decision making. Chapter 20 shows how prior knowledge helps in the learning of probabilistic theories.
18.2 INDUCTIVE LEARNING

An algorithm for deterministic supervised learning is given as input the correct value of the unknown function for particular inputs and must try to recover the unknown function or something close to it. More formally, we say that an example is a pair (x, f(x)), where x is the input and f(x) is the output of the function applied to x. The task of pure inductive inference (or induction) is this:

    Given a collection of examples of f, return a function h that approximates f.

The function h is called a hypothesis. The reason that learning is difficult, from a conceptual point of view, is that it is not easy to tell whether any particular h is a good approximation of f. A good hypothesis will generalize well; that is, it will predict unseen examples correctly. This is the fundamental problem of induction. The problem has been studied for centuries; Section 18.5 provides a partial solution.

Figure 18.1 shows a familiar example: fitting a function of a single variable to some data points. The examples are (x, f(x)) pairs, where both x and f(x) are real numbers. We choose the hypothesis space H (the set of hypotheses we will consider) to be the set of polynomials of degree at most k, such as 3x^2 + 2, x^17 - 4x^3, and so on. Figure 18.1(a) shows some data with an exact fit by a straight line (a polynomial of degree 1). The line is called a consistent hypothesis because it agrees with all the data. Figure 18.1(b) shows a high-degree polynomial that is also consistent with the same data.

Figure 18.1 (a) Example (x, f(x)) pairs and a consistent, linear hypothesis. (b) A consistent, degree-7 polynomial hypothesis for the same data set. (c) A different data set that admits an exact degree-6 polynomial fit or an approximate linear fit. (d) A simple, exact sinusoidal fit to the same data set.
This illustrates the first issue in inductive learning: how do we choose from among multiple consistent hypotheses? One answer is Ockham's razor: prefer the simplest hypothesis consistent with the data. Intuitively, this makes sense, because hypotheses that are no simpler than the data themselves are failing to extract any pattern from the data. Defining simplicity is not easy, but it seems reasonable to say that a degree-1 polynomial is simpler than a degree-12 polynomial.

Figure 18.1(c) shows a second data set. There is no consistent straight line for this data set; in fact, it requires a degree-6 polynomial (with 7 parameters) for an exact fit. There are just 7 data points, so the polynomial has as many parameters as there are data points: thus, it does not seem to be finding any pattern in the data and we do not expect it to generalize well. It might be better to fit a simple straight line that is not exactly consistent but might make reasonable predictions. This amounts to accepting the possibility that the true function is not deterministic (or, roughly equivalently, that the true inputs are not fully observed). For nondeterministic functions, there is an inevitable tradeoff between the complexity of the hypothesis and the degree of fit to the data. Chapter 20 explains how to make this tradeoff using probability theory.

One should keep in mind that the possibility or impossibility of finding a simple, consistent hypothesis depends strongly on the hypothesis space chosen. Figure 18.1(d) shows that the data in (c) can be fit exactly by a simple function of the form ax + b + c sin x. This example shows the importance of the choice of hypothesis space. A hypothesis space consisting of polynomials of finite degree cannot represent sinusoidal functions accurately, so a learner using that hypothesis space will not be able to learn from sinusoidal data.
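The tradeoff between a simple, approximately consistent hypothesis and a complex, exactly consistent one can be demonstrated directly. The sketch below, with an invented 7-point data set standing in for Figure 18.1(c), fits both a least-squares straight line and the unique degree-6 interpolating polynomial; the data values and helper names are illustrative, not from the text.

```python
def fit_line(points):
    """Least-squares straight line y = a*x + b (closed form)."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def interpolate(points):
    """Lagrange interpolation: the unique degree-(n-1) polynomial through all n points."""
    def h(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return h

# Seven invented points lying near (but not exactly on) a straight line.
data = [(0, 0.1), (1, 1.3), (2, 1.9), (3, 3.2), (4, 3.8), (5, 5.1), (6, 6.0)]
line = fit_line(data)      # simple hypothesis: approximately consistent
poly = interpolate(data)   # degree-6 hypothesis: exactly consistent
# The degree-6 polynomial matches every training point exactly...
print(all(abs(poly(x) - y) < 1e-6 for x, y in data))   # -> True
# ...but the simpler line extrapolates far more sensibly, e.g. at x = 10:
print(round(line(10), 1), round(poly(10), 1))
```

The degree-6 polynomial "wins" on the training data by construction, yet its extrapolations swing wildly away from the obvious trend, which is exactly the failure to generalize that Ockham's razor guards against.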
We say that a learning problem is realizable if the hypothesis space contains the true function; otherwise, it is unrealizable. Unfortunately, we cannot always tell whether a given learning problem is realizable, because the true function is not known. One way to get around this barrier is to use prior knowledge to derive a hypothesis space in which we know the true function must lie. This topic is covered in Chapter 19.

(Ockham's razor is named after the 14th-century English philosopher William of Ockham. The name is often misspelled as "Occam," perhaps from the French rendering, "Guillaume d'Occam.")

Another approach is to use the largest possible hypothesis space. For example, why not let H be the class of all Turing machines? After all, every computable function can be represented by some Turing machine, and that is the best we can do. The problem with this idea is that it does not take into account the computational complexity of learning. There is a tradeoff between the expressiveness of a hypothesis space and the complexity of finding simple, consistent hypotheses within that space. For example, fitting straight lines to data is very easy; fitting high-degree polynomials is harder; and fitting Turing machines is very hard indeed, because determining whether a given Turing machine is consistent with the data is not even decidable in general. A second reason to prefer simple hypothesis spaces is that the resulting hypotheses may be simpler to use; that is, it is faster to compute h(x) when h is a linear function than when it is an arbitrary Turing machine program.

For these reasons, most work on learning has focused on relatively simple representations. In this chapter, we concentrate on propositional logic and related languages. Chapter 19 looks at learning theories in first-order logic. We will see that the expressiveness-complexity tradeoff is not as simple as it first seems: it is often the case,
as we saw in Chapter 8, that an expressive language makes it possible for a simple theory to fit the data, whereas restricting the expressiveness of the language means that any consistent theory must be very complex. For example, the rules of chess can be written in a page or two of first-order logic, but require thousands of pages when written in propositional logic. In such cases, it should be possible to learn much faster by using the more expressive language.

18.3 LEARNING DECISION TREES

Decision tree induction is one of the simplest, and yet most successful, forms of learning algorithm. It serves as a good introduction to the area of inductive learning, and is easy to implement. We first describe the performance element, and then show how to learn it. Along the way, we will introduce ideas that appear in all areas of inductive learning.

Decision trees as performance elements

A decision tree takes as input an object or situation described by a set of attributes and returns a "decision": the predicted output value for the input. The input attributes can be discrete or continuous. For now, we assume discrete inputs. The output value can also be discrete or continuous; learning a discrete-valued function is called classification learning; learning a continuous function is called regression. We will concentrate on Boolean classification, wherein each example is classified as true (positive) or false (negative).

A decision tree reaches its decision by performing a sequence of tests. Each internal node in the tree corresponds to a test of the value of one of the properties, and the branches from the node are labeled with the possible values of the test. Each leaf node in the tree specifies the value to be returned if that leaf is reached.
The decision tree representation seems to be very natural for humans; indeed, many "How To" manuals (e.g., for car repair) are written entirely as a single decision tree stretching over hundreds of pages.

A somewhat simpler example is provided by the problem of whether to wait for a table at a restaurant. The aim here is to learn a definition for the goal predicate WillWait. In setting this up as a learning problem, we first have to state what attributes are available to describe examples in the domain. In Chapter 19, we will see how to automate this task; for now, let's suppose we decide on the following list of attributes:

1. Alternate: whether there is a suitable alternative restaurant nearby.
2. Bar: whether the restaurant has a comfortable bar area to wait in.
3. Fri/Sat: true on Fridays and Saturdays.
4. Hungry: whether we are hungry.
5. Patrons: how many people are in the restaurant (values are None, Some, and Full).
6. Price: the restaurant's price range ($, $$, $$$).
7. Raining: whether it is raining outside.
8. Reservation: whether we made a reservation.
9. Type: the kind of restaurant (French, Italian, Thai, or burger).
10. WaitEstimate: the wait estimated by the host (0-10 minutes, 10-30, 30-60, or >60).

The decision tree usually used by one of us (SR) for this domain is shown in Figure 18.2. Notice that the tree does not use the Price and Type attributes, in effect considering them to be irrelevant. Examples are processed by the tree starting at the root and following the appropriate branch until a leaf is reached. For instance, an example with Patrons = Full and WaitEstimate = 0-10 will be classified as positive (i.e., yes, we will wait for a table).

Figure 18.2 A decision tree for deciding whether to wait for a table.
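The "sequence of tests" view translates directly into code. The sketch below represents a tree as nested structures and walks one example through it. Only the Patrons = Full, WaitEstimate = 0-10 path follows the text; the other leaf labels are placeholders, not the actual tree of Figure 18.2.

```python
# An internal node is a (attribute, {value: subtree}) pair; a leaf is "Yes" or "No".
# Leaf labels other than the 0-10/Full path are placeholders for illustration.
tree = ("Patrons", {
    "None": "No",
    "Some": "Yes",
    "Full": ("WaitEstimate", {
        ">60":   "No",    # placeholder leaf
        "30-60": "No",    # placeholder leaf
        "10-30": "Yes",   # placeholder leaf
        "0-10":  "Yes",   # per the text: Patrons = Full, WaitEstimate = 0-10 -> wait
    }),
})

def classify(tree, example):
    """Follow attribute tests from the root until a leaf is reached."""
    while isinstance(tree, tuple):
        attribute, branches = tree
        tree = branches[example[attribute]]
    return tree

print(classify(tree, {"Patrons": "Full", "WaitEstimate": "0-10"}))  # -> Yes
```

Note that the example with Patrons = None is classified without ever consulting WaitEstimate: each example only pays for the tests on its own path.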
Expressiveness of decision trees

Logically speaking, any particular decision tree hypothesis for the WillWait goal predicate can be seen as an assertion of the form

    ∀s WillWait(s) ⇔ (P1(s) ∨ P2(s) ∨ ... ∨ Pn(s)),

where each condition Pi(s) is a conjunction of tests corresponding to a path from the root of the tree to a leaf with a positive outcome. Although this looks like a first-order sentence, it is, in a sense, propositional, because it contains just one variable and all the predicates are unary. The decision tree is really describing a relationship between WillWait and some logical combination of attribute values. We cannot use decision trees to represent tests that refer to two or more different objects; for example,

    ∃r2 Nearby(r2, r) ∧ Price(r, p) ∧ Price(r2, p2) ∧ Cheaper(p2, p)

(is there a cheaper restaurant nearby?). Obviously, we could add another Boolean attribute with the name CheaperRestaurantNearby, but it is intractable to add all such attributes. Chapter 19 will delve further into the problem of learning in first-order logic proper.

Decision trees are fully expressive within the class of propositional languages; that is, any Boolean function can be written as a decision tree. This can be done trivially by having each row in the truth table for the function correspond to a path in the tree. This would yield an exponentially large decision tree representation, because the truth table has exponentially many rows. Clearly, decision trees can represent many functions with much smaller trees.

For some kinds of functions, however, this is a real problem. For example, if the function is the parity function, which returns 1 if and only if an even number of inputs are 1, then an exponentially large decision tree will be needed. It is also difficult to use a decision tree to represent a majority function, which returns 1 if more than half of its inputs are 1.
In other words, decision trees are good for some kinds of functions and bad for others. Is there any kind of representation that is efficient for all kinds of functions? Unfortunately, the answer is no. We can show this in a very general way. Consider the set of all Boolean functions on n attributes. How many different functions are in this set? This is just the number of different truth tables that we can write down, because the function is defined by its truth table. The truth table has 2^n rows, because each input case is described by n attributes. We can consider the "answer" column of the table as a 2^n-bit number that defines the function. No matter what representation we use for functions, some of the functions (almost all of them, in fact) are going to require at least that many bits to represent. If it takes 2^n bits to define the function, then there are 2^(2^n) different functions on n attributes. This is a scary number. For example, with just six Boolean attributes, there are 2^(2^6) = 18,446,744,073,709,551,616 different functions to choose from. We will need some ingenious algorithms to find consistent hypotheses in such a large space.

Inducing decision trees from examples

An example for a Boolean decision tree consists of a vector of input attributes, X, and a single Boolean output value y. A set of examples (X1, y1), ..., (X12, y12) is shown in Figure 18.3.
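The counting argument above is a one-liner to verify: a truth table with 2^n rows has a free answer bit per row, giving 2^(2^n) distinct functions.

```python
# Number of distinct Boolean functions on n attributes: one free answer bit
# per truth-table row, and 2**n rows.
def num_boolean_functions(n: int) -> int:
    return 2 ** (2 ** n)

print(num_boolean_functions(6))  # -> 18446744073709551616
```

Even n = 6 already exceeds the number of values a 64-bit unsigned integer can hold by one, which gives some feel for how quickly the hypothesis space outruns exhaustive search.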
Figure 18.3 Examples for the restaurant domain:

Example  Alt  Bar  Fri  Hun  Patrons  Price  Rain  Res  Type     Est    WillWait
X1       Yes  No   No   Yes  Some     $$$    No    Yes  French   0-10   Yes
X2       Yes  No   No   Yes  Full     $      No    No   Thai     30-60  No
X3       No   Yes  No   No   Some     $      No    No   Burger   0-10   Yes
X4       Yes  No   Yes  Yes  Full     $      Yes   No   Thai     10-30  Yes
X5       Yes  No   Yes  No   Full     $$$    No    Yes  French   >60    No
X6       No   Yes  No   Yes  Some     $$     Yes   Yes  Italian  0-10   Yes
X7       No   Yes  No   No   None     $      Yes   No   Burger   0-10   No
X8       No   No   No   Yes  Some     $$     Yes   Yes  Thai     0-10   Yes
X9       No   Yes  Yes  No   Full     $      Yes   No   Burger   >60    No
X10      Yes  Yes  Yes  Yes  Full     $$$    No    Yes  Italian  10-30  No
X11      No   No   No   No   None     $      No    No   Thai     0-10   No
X12      Yes  Yes  Yes  Yes  Full     $      No    No   Burger   30-60  Yes

The positive examples are the ones in which the goal WillWait is true (X1, X3, ...); the negative examples are the ones in which it is false (X2, X5, ...). The complete set of examples is called the training set.

The problem of finding a decision tree that agrees with the training set might seem difficult, but in fact there is a trivial solution. We could simply construct a decision tree that has one path to a leaf for each example, where the path tests each attribute in turn and follows the value for the example, and the leaf has the classification of the example. When given the same example again, the decision tree will come up with the right classification. Unfortunately, it will not have much to say about any other cases!

The problem with this trivial tree is that it just memorizes the observations. It does not extract any pattern from the examples, so we cannot expect it to be able to extrapolate to examples it has not seen. Applying Ockham's razor, we should find instead the smallest decision tree that is consistent with the examples. Unfortunately, for any reasonable definition of "smallest," finding the smallest tree is an intractable problem. With some simple heuristics, however, we can do a good job of finding a "smallish" one.
(By "the same example" we mean the same example or an example with the same description; this distinction is very important, and we will return to it in Chapter 19.)

The basic idea behind the DECISION-TREE-LEARNING algorithm is to test the most important attribute first. By "most important," we mean the one that makes the most difference to the classification of an example. That way, we hope to get to the correct classification with a small number of tests, meaning that all paths in the tree will be short and the tree as a whole will be small.

Figure 18.4 shows how the algorithm gets started. We are given 12 training examples, which we classify into positive and negative sets. We then decide which attribute to use as the first test in the tree. Figure 18.4(a) shows that Type is a poor attribute, because it leaves us with four possible outcomes, each of which has the same number of positive and negative examples. On the other hand, in Figure 18.4(b) we see that Patrons is a fairly important attribute, because if the value is None or Some, then we are left with example sets for which we can answer definitively (No and Yes, respectively). If the value is Full, we are left with a mixed set of examples.

Figure 18.4 Splitting the examples by testing on attributes. (a) Splitting on Type brings us no nearer to distinguishing between positive and negative examples. (b) Splitting on Patrons does a good job of separating positive and negative examples. After splitting on Patrons, Hungry is a fairly good second test.

In general, after the first attribute test splits up the examples, each outcome is a new decision tree learning problem in itself, with fewer examples and one fewer attribute. There are four cases to consider for these recursive problems:

1. If there are some positive and some negative examples, then choose the best attribute to split them. Figure 18.4(b) shows Hungry being used to split the remaining examples.
2.
If all the remaining examples are positive (or all negative), then we are done: we can answer Yes or No. Figure 18.4(b) shows examples of this in the None and Some cases.
3. If there are no examples left, it means that no such example has been observed, and we return a default value calculated from the majority classification at the node's parent.
4. If there are no attributes left, but both positive and negative examples, we have a problem. It means that these examples have exactly the same description, but different classifications. This happens when some of the data are incorrect; we say there is noise in the data. It also happens either when the attributes do not give enough information to describe the situation fully, or when the domain is truly nondeterministic. One simple way out of the problem is to use a majority vote.

The DECISION-TREE-LEARNING algorithm is shown in Figure 18.5. The details of the method for CHOOSE-ATTRIBUTE are given in the next subsection. The final tree produced by the algorithm applied to the 12-example data set is shown in Figure 18.6. The tree is clearly different from the original tree shown in Figure 18.2, despite the fact that the data were actually generated from an agent using the original tree. One might conclude that the learning algorithm is not doing a very good job of learning the correct
function DECISION-TREE-LEARNING(examples, attribs, default) returns a decision tree
    inputs: examples, set of examples
            attribs, set of attributes
            default, default value for the goal predicate

    if examples is empty then return default
    else if all examples have the same classification then return the classification
    else if attribs is empty then return MAJORITY-VALUE(examples)
    else
        best <- CHOOSE-ATTRIBUTE(attribs, examples)
        tree <- a new decision tree with root test best
        m <- MAJORITY-VALUE(examples)
        for each value v_i of best do
            examples_i <- {elements of examples with best = v_i}
            subtree <- DECISION-TREE-LEARNING(examples_i, attribs - best, m)
            add a branch to tree with label v_i and subtree subtree
        return tree

Figure 18.5 The decision tree learning algorithm.

Figure 18.6 The decision tree induced from the 12-example training set.

This would be the wrong conclusion to draw, however. The learning algorithm looks at the examples, not at the correct function, and in fact, its hypothesis (see Figure 18.6) not only agrees with all the examples, but is considerably simpler than the original tree. The learning algorithm has no reason to include tests for Raining and Reservation, because it can classify all the examples without them. It has also detected an interesting and previously unsuspected pattern: the first author will wait for Thai food on weekends. Of course, if we were to gather more examples, we might induce a tree more similar to the original.

The tree in Figure 18.6 is bound to make a mistake; for example, it has never seen a case where the wait is 0-10 minutes but the restaurant is full. For a case where Hungry is false, the tree says not to wait, but I (SR) would certainly wait. This raises an obvious question: if the algorithm induces a consistent, but incorrect, tree from the examples, how incorrect will the tree be?
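The pseudocode of Figure 18.5 can be sketched in a few lines of Python. In this sketch, examples are dicts with a "WillWait" label, trees use the nested-tuple representation from earlier, and CHOOSE-ATTRIBUTE is a deliberate placeholder that just takes the first remaining attribute; the information-gain heuristic described in the next subsection would replace it. The tiny four-example data set is invented for illustration.

```python
from collections import Counter

def majority_value(examples):
    return Counter(e["WillWait"] for e in examples).most_common(1)[0][0]

def choose_attribute(attribs, examples):
    return attribs[0]   # placeholder; a real version picks the highest-gain attribute

def decision_tree_learning(examples, attribs, default):
    if not examples:
        return default                              # case 3: no examples left
    classes = {e["WillWait"] for e in examples}
    if len(classes) == 1:
        return classes.pop()                        # case 2: all same classification
    if not attribs:
        return majority_value(examples)             # case 4: noise; majority vote
    best = choose_attribute(attribs, examples)      # case 1: split on best attribute
    m = majority_value(examples)
    branches = {}
    for v in {e[best] for e in examples}:
        exs_v = [e for e in examples if e[best] == v]
        branches[v] = decision_tree_learning(exs_v, [a for a in attribs if a != best], m)
    return (best, branches)

examples = [
    {"Patrons": "Some", "Hungry": "Yes", "WillWait": "Yes"},
    {"Patrons": "Full", "Hungry": "Yes", "WillWait": "Yes"},
    {"Patrons": "Full", "Hungry": "No",  "WillWait": "No"},
    {"Patrons": "None", "Hungry": "No",  "WillWait": "No"},
]
tree = decision_tree_learning(examples, ["Patrons", "Hungry"], "No")
print(tree)
```

Even with the naive attribute choice, the recursion follows the four cases exactly: the Full branch is the only mixed subset, so only it pays for a second test (on Hungry).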
We will show how to analyze this question experimentally, after we explain the details of the attribute selection step.

Choosing attribute tests

The scheme used in decision tree learning for selecting attributes is designed to minimize the depth of the final tree. The idea is to pick the attribute that goes as far as possible toward providing an exact classification of the examples. A perfect attribute divides the examples into sets that are all positive or all negative. The Patrons attribute is not perfect, but it is fairly good. A really useless attribute, such as Type, leaves the example sets with roughly the same proportion of positive and negative examples as the original set. All we need, then, is a formal measure of "fairly good" and "really useless" and we can implement the CHOOSE-ATTRIBUTE function of Figure 18.5. The measure should have its maximum value when the attribute is perfect and its minimum value when the attribute is of no use at all. One suitable measure is the expected amount of information provided by the attribute, where we use the term in the mathematical sense first defined in Shannon and Weaver (1949).

To understand the notion of information, think about it as providing the answer to a question: for example, whether a coin will come up heads. The amount of information contained in the answer depends on one's prior knowledge. The less you know, the more information is provided. Information theory measures information content in bits. One bit of information is enough to answer a yes/no question about which one has no idea, such as the flip of a fair coin. In general, if the possible answers v_i have probabilities P(v_i), then the information content I of the actual answer is given by

    I(P(v_1), ..., P(v_n)) = Σ_i -P(v_i) log2 P(v_i).

To check this equation, for the tossing of a fair coin we get

    I(1/2, 1/2) = -(1/2) log2 (1/2) - (1/2) log2 (1/2) = 1 bit.

If the coin is loaded to give 99% heads, we get I(1/100, 99/100) = 0.08 bits; as the probability of heads goes to 1, the information of the actual answer goes to 0.
For decision tree learning, the question that needs answering is: for a given example, what is the correct classification? A correct decision tree will answer this question. An estimate of the probabilities of the possible answers before any of the attributes have been tested is given by the proportions of positive and negative examples in the training set. Suppose the training set contains p positive examples and n negative examples. Then an estimate of the information contained in a correct answer is

    I(p/(p+n), n/(p+n)).

The restaurant training set in Figure 18.3 has p = n = 6, so we need 1 bit of information.

Now a test on a single attribute A will not usually tell us this much information, but it will give us some of it. We can measure exactly how much by looking at how much information we still need after the attribute test. Any attribute A divides the training set E into subsets E_1, ..., E_v according to their values for A, where A can have v distinct values. Each subset E_i has p_i positive examples and n_i negative examples, so if we go along that branch, we will need an additional I(p_i/(p_i + n_i), n_i/(p_i + n_i)) bits of information to answer the question. A randomly chosen example from the training set has the ith value for the attribute with probability (p_i + n_i)/(p + n), so on average, after testing attribute A, we will need

    Remainder(A) = Σ_i [(p_i + n_i)/(p + n)] I(p_i/(p_i + n_i), n_i/(p_i + n_i))

bits of information to classify the example. The information gain from the attribute test is the difference between the original information requirement and the new requirement:

    Gain(A) = I(p/(p+n), n/(p+n)) - Remainder(A).

The heuristic used in the CHOOSE-ATTRIBUTE function is just to choose the attribute with the largest gain. Returning to the attributes considered in Figure 18.4, we have

    Gain(Patrons) = 1 - [(2/12) I(0,1) + (4/12) I(1,0) + (6/12) I(2/6, 4/6)] ≈ 0.541 bits,
    Gain(Type) = 1 - [(2/12) I(1/2,1/2) + (2/12) I(1/2,1/2) + (4/12) I(2/4,2/4) + (4/12) I(2/4,2/4)] = 0 bits,

confirming our intuition that Patrons is a better attribute to split on.
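The gain arithmetic above is easy to check mechanically. The sketch below computes I and Gain for the two splits of Figure 18.4, where each split is given as a list of (p_i, n_i) counts per attribute value.

```python
from math import log2

def information(probs):
    """I(...): information content of a distribution, in bits."""
    return sum(-p * log2(p) for p in probs if p > 0)

def gain(split, p, n):
    """Information gain of a split, given as (p_i, n_i) counts per attribute value."""
    before = information([p / (p + n), n / (p + n)])
    remainder = sum((pi + ni) / (p + n)
                    * information([pi / (pi + ni), ni / (pi + ni)])
                    for pi, ni in split)
    return before - remainder

# Restaurant training set: p = n = 6.
# Patrons splits into None (0+, 2-), Some (4+, 0-), Full (2+, 4-):
print(round(gain([(0, 2), (4, 0), (2, 4)], 6, 6), 3))   # -> 0.541
# Type splits into four subsets, each half positive, so it gains nothing:
print(round(gain([(1, 1), (1, 1), (2, 2), (2, 2)], 6, 6), 3))
```

The `if p > 0` guard implements the usual convention that 0 log2 0 = 0, which is what makes I(0, 1) = 0 for the pure None and Some subsets.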
In fact, Patrons has the highest gain of any of the attributes and would be chosen by the decision-tree learning algorithm as the root.

Assessing the performance of the learning algorithm

A learning algorithm is good if it produces hypotheses that do a good job of predicting the classifications of unseen examples. In Section 18.5, we will see how prediction quality can be estimated in advance. For now, we will look at a methodology for assessing prediction quality after the fact.

Obviously, a prediction is good if it turns out to be true, so we can assess the quality of a hypothesis by checking its predictions against the correct classification once we know it. We do this on a set of examples known as the test set. If we train on all our available examples, then we will have to go out and get some more to test on, so often it is more convenient to adopt the following methodology:

1. Collect a large set of examples.
2. Divide it into two disjoint sets: the training set and the test set.
3. Apply the learning algorithm to the training set, generating a hypothesis h.
4. Measure the percentage of examples in the test set that are correctly classified by h.
5. Repeat steps 2 to 4 for different sizes of training sets and different randomly selected training sets of each size.

The result of this procedure is a set of data that can be processed to give the average prediction quality as a function of the size of the training set. This function can be plotted on a graph, giving what is called the learning curve for the algorithm on the particular domain.

Figure 18.7 A learning curve for the decision tree algorithm on 100 randomly generated examples in the restaurant domain. The graph summarizes 20 trials.

The learning curve for DECISION-TREE-LEARNING with the restaurant examples is shown in Figure 18.7. Notice that, as the training set grows, the prediction quality increases.
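The five-step methodology above can be sketched as a short harness. Both the learner (a toy majority-label predictor) and the synthetic one-feature data set are placeholders standing in for the real algorithm and domain.

```python
import random

def learn_majority(training_set):
    """Toy 'learning algorithm': always predict the majority training label."""
    labels = [y for _, y in training_set]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def accuracy(h, test_set):
    return sum(h(x) == y for x, y in test_set) / len(test_set)

random.seed(0)
# Step 1: collect examples (synthetic: label is "Yes" when the feature exceeds 0.3).
data = [(x, "Yes" if x > 0.3 else "No") for x in [random.random() for _ in range(100)]]

scores = []
for trial in range(20):                              # step 5: repeat with fresh splits
    random.shuffle(data)
    training_set, test_set = data[:70], data[70:]    # step 2: disjoint train/test sets
    h = learn_majority(training_set)                 # step 3: induce a hypothesis
    scores.append(accuracy(h, test_set))             # step 4: score on unseen examples
print(round(sum(scores) / len(scores), 2))
```

Varying the training-set size in step 2 and plotting the averaged scores against it is exactly what produces a learning curve like Figure 18.7.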
(Because prediction quality increases with experience, such curves are also called happy graphs.) This is a good sign that there is indeed some pattern in the data and the learning algorithm is picking it up.

Obviously, the learning algorithm must not be allowed to "see" the test data before the learned hypothesis is tested on them. Unfortunately, it is all too easy to fall into the trap of peeking at the test data. Peeking typically happens as follows: A learning algorithm can have various "knobs" that can be twiddled to tune its behavior, for example, various different criteria for choosing the next attribute in decision tree learning. We generate hypotheses for various different settings of the knobs, measure their performance on the test set, and report the prediction performance of the best hypothesis. Alas, peeking has occurred! The reason is that the hypothesis was selected on the basis of its test set performance, so information about the test set has leaked into the learning algorithm. The moral of this tale is that any process that involves comparing the performance of hypotheses on a test set must use a new test set to measure the performance of the hypothesis that is finally selected. In practice, this is too difficult, so people continue to run experiments on tainted sets of examples.

Noise and overfitting

We saw earlier that if there are two or more examples with the same description (in terms of the attributes) but different classifications, then the DECISION-TREE-LEARNING algorithm must fail to find a decision tree consistent with all the examples. The solution we mentioned before is to have each leaf node report either the majority classification for its set of examples, if a deterministic hypothesis is required, or report the estimated probabilities of each classification using the relative frequencies. Unfortunately, this is far from the whole story.
It is quite possible, and in fact likely, that even when vital information is missing, the decision tree learning algorithm will find a decision tree that is consistent with all the examples. This is because the algorithm can use the irrelevant attributes, if any, to make spurious distinctions among the examples.

Consider the problem of trying to predict the roll of a die. Suppose that experiments are carried out during an extended period of time with various dice and that the attributes describing each training example are as follows:

1. Day: the day on which the die was rolled (Mon, Tue, Wed, Thu).
2. Month: the month in which the die was rolled (Jan or Feb).
3. Color: the color of the die (Red or Blue).

As long as no two examples have identical descriptions, DECISION-TREE-LEARNING will find an exact hypothesis. The more attributes there are, the more likely it is that an exact hypothesis will be found. Any such hypothesis will be totally spurious. What we would like is that DECISION-TREE-LEARNING return a single leaf node with probabilities close to 1/6 for each roll, once it has seen enough examples.

Whenever there is a large set of possible hypotheses, one has to be careful not to use the resulting freedom to find meaningless "regularity" in the data. This problem is called overfitting. A very general phenomenon, overfitting occurs even when the target function is not at all random. It afflicts every kind of learning algorithm, not just decision trees. A complete mathematical treatment of overfitting is beyond the scope of this book. Here we present a simple technique called decision tree pruning to deal with the problem. Pruning works by preventing recursive splitting on attributes that are not clearly relevant, even when the data at that node in the tree are not uniformly classified. The question is, how do we detect an irrelevant attribute?
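The die scenario is easy to simulate. The memorizing learner below, our own toy stand-in for a tree that splits until every leaf holds a single example, is consistent with all of its training examples, yet it predicts fresh rolls no better than chance:

```python
import itertools
import random

rng = random.Random(2)
days   = ["Mon", "Tue", "Wed", "Thu"]
months = ["Jan", "Feb"]
colors = ["Red", "Blue"]

# One training example per distinct description; the roll is pure chance.
train = [(desc, rng.randint(1, 6))
         for desc in itertools.product(days, months, colors)]

def memorizing_learner(training_set):
    """Toy stand-in for a tree that splits until every leaf is a single
    example: it simply memorizes the training set."""
    table = dict(training_set)
    return lambda x: table.get(x, 1)   # arbitrary default for unseen inputs

h = memorizing_learner(train)
train_acc = sum(h(x) == y for x, y in train) / len(train)

# Fresh rolls of the same dice: the memorized "pattern" is spurious.
test = [(desc, rng.randint(1, 6))
        for desc in itertools.product(days, months, colors)
        for _ in range(30)]
test_acc = sum(h(x) == y for x, y in test) / len(test)
print(train_acc, round(test_acc, 2))   # train_acc is exactly 1.0
```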
Suppose we split a set of examples using an irrelevant attribute. Generally speaking, we would expect the resulting subsets to have roughly the same proportions of each class as the original set. In this case, the information gain will be close to zero. (In fact, the gain will be positive unless the proportions are all exactly the same; see Exercise 18.10.) Thus, the information gain is a good clue to irrelevance. Now the question is, how large a gain should we require in order to split on a particular attribute?

We can answer this question by using a statistical significance test. Such a test begins by assuming that there is no underlying pattern (the so-called null hypothesis). Then the actual data are analyzed to calculate the extent to which they deviate from a perfect absence of pattern. If the degree of deviation is statistically unlikely (usually taken to mean a 5% probability or less), then that is considered to be good evidence for the presence of a significant pattern in the data. The probabilities are calculated from standard distributions of the amount of deviation one would expect to see in random sampling.

In this case, the null hypothesis is that the attribute is irrelevant and, hence, that the information gain for an infinitely large sample would be zero. We need to calculate the probability that, under the null hypothesis, a sample of size v would exhibit the observed deviation from the expected distribution of positive and negative examples. We can measure the deviation by comparing the actual numbers of positive and negative examples in each subset, pi and ni, with the expected numbers, p̂i and n̂i, assuming true irrelevance:

    p̂i = p × (pi + ni)/(p + n)        n̂i = n × (pi + ni)/(p + n)

A convenient measure of the total deviation is given by

    D = Σ_{i=1..v} [ (pi - p̂i)² / p̂i + (ni - n̂i)² / n̂i ]

Under the null hypothesis, the value of D is distributed according to the χ² (chi-squared) distribution with v - 1 degrees of freedom.
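Here is a sketch of the deviation computation for the Patrons split from earlier; the counts come from the text, while the 5.99 threshold is, to the best of our knowledge, the usual 5% critical value of χ² with 2 degrees of freedom (Patrons has v = 3 values):

```python
def chi_squared_deviation(p, n, subsets):
    """D = sum over values of (pi - p̂i)^2/p̂i + (ni - n̂i)^2/n̂i,
    with expected counts computed as if the attribute were irrelevant."""
    D = 0.0
    for p_i, n_i in subsets:
        size = p_i + n_i
        p_hat = p * size / (p + n)   # expected positives, p̂i
        n_hat = n * size / (p + n)   # expected negatives, n̂i
        D += (p_i - p_hat) ** 2 / p_hat + (n_i - n_hat) ** 2 / n_hat
    return D

# Patrons splits the 12 examples into None (0+, 2-), Some (4+, 0-),
# and Full (2+, 4-), with p = n = 6 overall:
D = chi_squared_deviation(6, 6, [(0, 2), (4, 0), (2, 4)])
# 5.99 is roughly the 5% critical value of chi-squared with v - 1 = 2
# degrees of freedom, so this split would survive the pruning test.
print(round(D, 2), D > 5.99)   # → 6.67 True
```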
The probability that the attribute is really irrelevant can be calculated with the help of standard χ² tables or with statistical software. Exercise 18.11 asks you to make the appropriate changes to DECISION-TREE-LEARNING to implement this form of pruning, which is known as χ² pruning.

With pruning, noise can be tolerated: classification errors give a linear increase in prediction error, whereas errors in the descriptions of examples have an asymptotic effect that gets worse as the tree shrinks down to smaller sets. Trees constructed with pruning perform significantly better than trees constructed without pruning when the data contain a large amount of noise. The pruned trees are often much smaller and hence easier to understand.

Cross-validation is another technique that reduces overfitting. It can be applied to any learning algorithm, not just decision tree learning. The basic idea is to estimate how well each hypothesis will predict unseen data. This is done by setting aside some fraction of the known data and using it to test the prediction performance of a hypothesis induced from the remaining data. K-fold cross-validation means that you run k experiments, each time setting aside a different 1/k of the data to test on, and average the results. Popular values for k are 5 and 10. The extreme is k = n, also known as leave-one-out cross-validation. Cross-validation can be used in conjunction with any tree-construction method (including pruning) in order to select a tree with good prediction performance. To avoid peeking, we must then measure this performance with a new test set.

Broadening the applicability of decision trees

In order to extend decision tree induction to a wider variety of problems, a number of issues must be addressed.
We will briefly mention each, suggesting that a full understanding is best obtained by doing the associated exercises:

• Missing data: In many domains, not all the attribute values will be known for every example. The values might have gone unrecorded, or they might be too expensive to obtain. This gives rise to two problems: First, given a complete decision tree, how should one classify an object that is missing one of the test attributes? Second, how should one modify the information gain formula when some examples have unknown values for the attribute? These questions are addressed in Exercise 18.12.

• Multivalued attributes: When an attribute has many possible values, the information gain measure gives an inappropriate indication of the attribute's usefulness. In the extreme case, we could use an attribute, such as RestaurantName, that has a different value for every example. Then each subset of examples would be a singleton with a unique classification, so the information gain measure would have its highest value for this attribute. Nonetheless, the attribute could be irrelevant or useless. One solution is to use the gain ratio (Exercise 18.13).

• Continuous and integer-valued input attributes: Continuous or integer-valued attributes, such as Height and Weight, have an infinite set of possible values. Rather than generate infinitely many branches, decision-tree learning algorithms typically find the split point that gives the highest information gain. For example, at a given node in the tree, it might be the case that testing on Weight > 160 gives the most information. Efficient dynamic programming methods exist for finding good split points, but it is still by far the most expensive part of real-world decision tree learning applications.
• Continuous-valued output attributes: If we are trying to predict a numerical value, such as the price of a work of art, rather than a discrete classification, then we need a regression tree. Such a tree has at each leaf a linear function of some subset of numerical attributes, rather than a single value. For example, the branch for hand-colored engravings might end with a linear function of area, age, and number of colors. The learning algorithm must decide when to stop splitting and begin applying linear regression using the remaining attributes (or some subset thereof).

A decision-tree learning system for real-world applications must be able to handle all of these problems. Handling continuous-valued variables is especially important, because both physical and financial processes provide numerical data. Several commercial packages have been built that meet these criteria, and they have been used to develop several hundred fielded systems. In many areas of industry and commerce, decision trees are usually the first method tried when a classification method is to be extracted from a data set. One important property of decision trees is that it is possible for a human to understand the output of the learning algorithm. (Indeed, this is a legal requirement for financial decisions that are subject to anti-discrimination laws.) This is a property not shared by neural networks (see Chapter 20).

18.4 Ensemble Learning

So far we have looked at learning methods in which a single hypothesis, chosen from a hypothesis space, is used to make predictions. The idea of ensemble learning methods is to select a whole collection, or ensemble, of hypotheses from the hypothesis space and combine their predictions. For example, we might generate a hundred different decision trees from the same training set and have them vote on the best classification for a new example. The motivation for ensemble learning is simple.
Consider an ensemble of M = 5 hypotheses and suppose that we combine their predictions using simple majority voting. For the ensemble to misclassify a new example, at least three of the five hypotheses have to misclassify it. The hope is that this is much less likely than a misclassification by a single hypothesis. Suppose we assume that each hypothesis hi in the ensemble has an error of p, that is, the probability that a randomly chosen example is misclassified by hi is p. Furthermore, suppose we assume that the errors made by each hypothesis are independent. In that case, if p is small, then the probability of a large number of misclassifications occurring is minuscule. For example, a simple calculation (Exercise 18.14) shows that using an ensemble of five hypotheses reduces an error rate of 1 in 10 down to an error rate of less than 1 in 100.

Figure 18.8 Illustration of the increased expressive power obtained by ensemble learning. We take three linear threshold hypotheses, each of which classifies positively on the non-shaded side, and classify as positive any example classified positively by all three. The resulting triangular region is a hypothesis not expressible in the original hypothesis space.

Now, obviously, the assumption of independence is unreasonable, because hypotheses are likely to be misled in the same way by any misleading aspects of the training data. But if the hypotheses are at least a little bit different, thereby reducing the correlation between their errors, then ensemble learning can be very useful.

Another way to think about the ensemble idea is as a generic way of enlarging the hypothesis space. That is, think of the ensemble itself as a hypothesis and the new hypothesis space as the set of all possible ensembles constructible from hypotheses in the original space. Figure 18.8 shows how this can result in a more expressive hypothesis space.
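The 1-in-10 to 1-in-100 claim is a short binomial computation; a sketch, assuming independent errors as in the text:

```python
from math import comb

def majority_error(M, p):
    """P(a majority of M independent hypotheses err), each erring w.p. p."""
    return sum(comb(M, k) * p**k * (1 - p)**(M - k)
               for k in range(M // 2 + 1, M + 1))

# Five hypotheses, each with a 1-in-10 error rate:
print(majority_error(5, 0.1))   # about 0.0086, i.e., less than 1 in 100
```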
If the original hypothesis space allows for a simple and efficient learning algorithm, then the ensemble method provides a way to learn a much more expressive class of hypotheses without incurring much additional computational or algorithmic complexity.

The most widely used ensemble method is called boosting. To understand how it works, we need first to explain the idea of a weighted training set. In such a training set, each example has an associated weight wj ≥ 0. The higher the weight of an example, the higher is the importance attached to it during the learning of a hypothesis. It is straightforward to modify the learning algorithms we have seen so far to operate with weighted training sets. (For learning algorithms in which this is not possible, one can instead create a replicated training set where the ith example appears wi times, using randomization to handle fractional weights.)

Boosting starts with wj = 1 for all the examples (i.e., a normal training set). From this set, it generates the first hypothesis, h1. This hypothesis will classify some of the training examples correctly and some incorrectly. We would like the next hypothesis to do better on the misclassified examples, so we increase their weights while decreasing the weights of the correctly classified examples. From this new weighted training set, we generate hypothesis h2. The process continues in this way until we have generated M hypotheses, where M is an input to the boosting algorithm.

Figure 18.9 How the boosting algorithm works. Each shaded rectangle corresponds to an example; the height of the rectangle corresponds to the weight. The ticks and crosses indicate whether the example was classified correctly by the current hypothesis. The size of the decision tree indicates the weight of that hypothesis in the final ensemble.
The final ensemble hypothesis is a weighted-majority combination of all the M hypotheses, each weighted according to how well it performed on the training set. Figure 18.9 shows how the algorithm works conceptually. There are many variants of the basic boosting idea, with different ways of adjusting the weights and combining the hypotheses. One specific algorithm, called ADABOOST, is shown in Figure 18.10. While the details of the weight adjustments are not so important, ADABOOST does have a very important property: if the input learning algorithm L is a weak learning algorithm, which means that L always returns a hypothesis with weighted error on the training set that is slightly better than random guessing (i.e., 50% for Boolean classification), then ADABOOST will return a hypothesis that classifies the training data perfectly for large enough M. Thus, the algorithm boosts the accuracy of the original learning algorithm on the training data. This result holds no matter how inexpressive the original hypothesis space and no matter how complex the function being learned.

Let us see how well boosting does on the restaurant data. We will choose as our original hypothesis space the class of decision stumps, which are decision trees with just one test at the root. The lower curve in Figure 18.11(a) shows that unboosted decision stumps are not very effective for this data set, reaching a prediction performance of only 81% on 100 training examples. When boosting is applied (with M = 5), the performance is better, reaching 93% after 100 examples.

An interesting thing happens as the ensemble size M increases. Figure 18.11(b) shows the training set performance (on 100 examples) as a function of M. Notice that the error reaches zero (as the boosting theorem tells us) when M is 20; that is, a weighted-majority
combination of 20 decision stumps suffices to fit the 100 examples exactly. As more stumps are added to the ensemble, the error remains at zero. The graph also shows that the test set performance continues to increase long after the training set error has reached zero.

function ADABOOST(examples, L, M) returns a weighted-majority hypothesis
  inputs: examples, set of N labelled examples (x1, y1), ..., (xN, yN)
          L, a learning algorithm
          M, the number of hypotheses in the ensemble
  local variables: w, a vector of N example weights, initially 1/N
                   h, a vector of M hypotheses
                   z, a vector of M hypothesis weights

  for m = 1 to M do
      h[m] ← L(examples, w)
      error ← 0
      for j = 1 to N do
          if h[m](xj) ≠ yj then error ← error + w[j]
      for j = 1 to N do
          if h[m](xj) = yj then w[j] ← w[j] · error/(1 - error)
      w ← NORMALIZE(w)
      z[m] ← log((1 - error)/error)
  return WEIGHTED-MAJORITY(h, z)

Figure 18.10 The ADABOOST variant of the boosting method for ensemble learning. The algorithm generates hypotheses by successively reweighting the training examples. The function WEIGHTED-MAJORITY generates a hypothesis that returns the output value with the highest vote from the hypotheses in h, with votes weighted by z.

Figure 18.11 (a) Graph showing the performance of boosted decision stumps with M = 5 versus decision stumps on the restaurant data. (b) The proportion correct on the training set and the test set as a function of M, the number of hypotheses in the ensemble. Notice that the test set accuracy improves slightly even after the training accuracy reaches 1, i.e., after the ensemble fits the data exactly.
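The pseudocode of Figure 18.10 translates almost line for line into Python. The sketch below adds two small practicalities the figure omits (a guard against a zero error denominator, and a concrete weak learner); the decision-stump learner and the toy majority-of-three data set are our own illustration, not the book's restaurant experiment:

```python
import math
from itertools import product

def stump_learner(examples, w):
    """Weak learner: the single-attribute test with least weighted error."""
    n_attrs = len(examples[0][0])
    best_err, best_h = None, None
    for a in range(n_attrs):
        for pol in (True, False):
            h = lambda x, a=a, pol=pol: x[a] == pol
            err = sum(wj for (x, y), wj in zip(examples, w) if h(x) != y)
            if best_err is None or err < best_err:
                best_err, best_h = err, h
    return best_h

def adaboost(examples, L, M):
    """The ADABOOST scheme of Figure 18.10."""
    N = len(examples)
    w = [1.0 / N] * N
    h, z = [], []
    for _ in range(M):
        hm = L(examples, w)
        error = sum(wj for (x, y), wj in zip(examples, w) if hm(x) != y)
        error = min(max(error, 1e-10), 1 - 1e-10)   # avoid division by zero
        for j, (x, y) in enumerate(examples):
            if hm(x) == y:
                w[j] *= error / (1 - error)
        total = sum(w)
        w = [wj / total for wj in w]                # NORMALIZE
        h.append(hm)
        z.append(math.log((1 - error) / error))
    def weighted_majority(x):
        vote = sum(zm if hm(x) else -zm for hm, zm in zip(h, z))
        return vote > 0
    return weighted_majority

# Toy target: the majority of three Boolean attributes. No single stump
# fits it, but a boosted committee of stumps fits the training set exactly.
data = [((a, b, c), a + b + c >= 2)
        for a, b, c in product((0, 1), repeat=3)] * 3
H = adaboost(data, stump_learner, M=200)
train_acc = sum(H(x) == y for x, y in data) / len(data)
print(train_acc)
```

The returned `weighted_majority` closure is the ensemble hypothesis; each weak hypothesis votes with weight z, exactly as the figure's WEIGHTED-MAJORITY describes.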
At M = 20, the test performance is 0.95 (or 0.05 error), and the performance increases to 0.98 as late as M = 137, before gradually dropping to 0.95.

This finding, which is quite robust across data sets and hypothesis spaces, came as quite a surprise when it was first noticed. Ockham's razor tells us not to make hypotheses more complex than necessary, but the graph tells us that the predictions improve as the ensemble hypothesis gets more complex! Various explanations have been proposed for this. One view is that boosting approximates Bayesian learning (see Chapter 20), which can be shown to be an optimal learning algorithm, and the approximation improves as more hypotheses are added. Another possible explanation is that the addition of further hypotheses enables the ensemble to be more definite in its distinction between positive and negative examples, which helps it when it comes to classifying new examples.

18.5 Why Learning Works: Computational Learning Theory

The main unanswered question posed in Section 18.2 was this: how can one be sure that one's learning algorithm has produced a theory that will correctly predict the future? In formal terms, how do we know that the hypothesis h is close to the target function f if we don't know what f is? These questions have been pondered for several centuries. Until we find answers, machine learning will, at best, be puzzled by its own success.

The approach taken in this section is based on computational learning theory, a field at the intersection of AI, statistics, and theoretical computer science. The underlying principle is the following: any hypothesis that is seriously wrong will almost certainly be "found out" with high probability after a small number of examples, because it will make an incorrect prediction. Thus, any hypothesis that is consistent with a sufficiently large set of training examples is unlikely to be seriously wrong: that is, it must be probably approximately correct.
Any learning algorithm that returns hypotheses that are probably approximately correct is called a PAC-learning algorithm. There are some subtleties in the preceding argument. The main question is the connection between the training and the test examples; after all, we want the hypothesis to be approximately correct on the test set, not just on the training set. The key assumption is that the training and test sets are drawn randomly and independently from the same population of examples with the same probability distribution. This is called the stationarity assumption. Without the stationarity assumption, the theory can make no claims at all about the future, because there would be no necessary connection between future and past. The stationarity assumption amounts to supposing that the process that selects examples is not malevolent. Obviously, if the training set consists only of weird examples (two-headed dogs, for instance), then the learning algorithm cannot help but make unsuccessful generalizations about how to recognize dogs.
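The principle that a seriously wrong hypothesis is quickly "found out" can be illustrated with a toy simulation, entirely our own construction: the hypothesis space is the set of integer thresholds on [0, 100], the target is x ≥ 50, and we track the worst true error of any hypothesis still consistent with a growing training set:

```python
import random

rng = random.Random(0)
target = lambda x: x >= 50                # the unknown function f
xs = [rng.uniform(0, 100) for _ in range(200)]

def consistent_thresholds(examples):
    """Hypotheses h_t(x) = (x >= t) that agree with every training example."""
    return [t for t in range(101)
            if all((x >= t) == y for x, y in examples)]

def worst_true_error(ts):
    """Largest true error of a surviving hypothesis: |t - 50| / 100 under a
    uniform distribution of examples on [0, 100]."""
    return max(abs(t - 50) for t in ts) / 100

# Seriously wrong thresholds are eliminated quickly as examples accumulate.
for n in (5, 20, 100):
    train = [(x, target(x)) for x in xs[:n]]
    print(n, round(worst_true_error(consistent_thresholds(train)), 3))
```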
