Lists, Decisions and Graphs
With an Introduction to Probability

Unit DT: Decision Trees and Recursion

Edward A. Bender
S. Gill Williamson

© Edward A. Bender & S. Gill Williamson 2010. All rights reserved.

Preface

The material in this unit of study was, over several years, presented by the authors to lower division undergraduates in the Department of Mathematics and the Department of Computer Science and Engineering at the University of California, San Diego (UCSD). All material has been classroom tested by the authors and other faculty members at UCSD. The first course of a two quarter sequence was chosen from six units of study: Boolean Functions (Unit BF), Logic (Unit Lo), Number Theory and Cryptography (Unit NT), Sets and Functions (Unit SF), and Equivalence and Order (Unit EO). The second course of the sequence was chosen from four units of study: Counting and Listing (Unit CL), Functions (Unit Fn), Decision Trees and Recursion (Unit DT), and Basic Concepts in Graph Theory (Unit GT). The order of presentation of units within the first six, as well as those within the second four, can be varied for students with a good high school background in mathematics.

Discrete mathematics has become an essential tool in computer science, economics, biology, mathematics, chemistry, and engineering. Each area introduces its own special terms for shared concepts in discrete mathematics. The only way to keep from reinventing the wheel from area to area is to know the precise mathematical ideas behind the concepts being applied by these various fields. Our course material is dedicated to this task.

At the end of each unit is a section of multiple choice questions: Multiple Choice Questions for Review. These questions should be read before reading the corresponding unit, and they should be referred to frequently as the units are read. We encouraged our students to be able to work these multiple choice questions and variations on them with ease and understanding.
At the end of each section of the units are exercises that are suitable for written homework, exams, or class discussion.

Table of Contents

Unit DT: Decision Trees and Recursion

Section 1: Basic Concepts of Decision Trees ............ DT-1
decision trees, vertices, root, edges, degree of vertex, down degree, child, parent, leaves, internal vertex, height of leaf, path to vertex, traversals of decision tree, depth first vertices, depth first edges, breadth first, preorder, postorder, length-first lex order, dictionary order, permutations in lex order, partial permutation, rank of leaf, direct insertion order for permutations, backtracking, Latin squares, domino coverings, strictly decreasing functions, unlabeled balls into boxes, isomorph rejection

Section 2: Recursive Algorithms ............ DT-15
recursive algorithm, simplest case reduction, recursive algorithm for 0-1 sequences, sorting by recursive merging, recursive approach, recursive solutions, local description for permutations in lex order, recursive description of Towers of Hanoi, decision tree for Towers of Hanoi, recursion and stacks, configuration analysis of Towers of Hanoi, abandoned leaves and RANK, characteristic functions and subsets, Gray code for subsets, decision tree for Gray code for subsets, local description of Gray code, Towers of Hanoi with four poles

Section 3: Decision Trees and Conditional Probability ............ DT-27
conditional probability, independent events, Venn diagrams, probabilities of leaves, probabilities of edges, probabilistic decision trees, decision trees and Bayesian methods, Bayes' theorem, multiplication theorem for conditional probabilities, sequential sampling, Monty Hall problem, the SAT problem, first moment method, tournaments, gambler's ruin problem (S(n,k)), straight (card hand), Bell numbers B_n

Section 4: Inductive Proofs and Recursive Equations ............ DT-42
induction, recursive equations, induction hypothesis, inductive step, base case, prime factorization, sum of first n integers, local description, recurrence relation, binomial coefficients C(n,k), Stirling numbers S(n,k), guessing solutions to recurrences, linear two term recurrence, constant coefficients, characteristic equation, two real roots, one real root, complex roots, recursion for derangements, Fibonacci recurrence relation, recurrence relation for derangements

Multiple Choice Questions for Review ............ DT-53
Notation Index ............ DT-Index 1
Subject Index ............ DT-Index 3

Starred sections (*) indicate more difficult and/or specialized material.

Unit DT: Decision Trees and Recursion

In many situations one needs to make a series of decisions. This leads naturally to a structure called a "decision tree." Decision trees provide a geometrical framework for organizing the decisions. The important aspect is the decisions that are made. Everything we do in this unit could be rewritten to avoid the use of trees; however, trees

• give us a powerful intuitive basis for viewing the problems of this chapter,
• provide a language for discussing the material,
• allow us to view the collection of all decisions in an organized manner.

We'll begin with elementary examples of decision trees. We then show how decision trees can be used to study recursive algorithms. Next we shall look at decision trees and "Bayesian methods" in probability theory. Finally we relate decision trees to induction and recursive equations.

Section 1: Basic Concepts of Decision Trees

One area of application for decision trees is systematically listing a variety of functions. The simplest general class of functions to list is the entire set n^k of functions from {1,...,k} to {1,...,n}.
We can create a typical element in the list by choosing an element of n and writing it down, choosing another element (possibly the same as before) of n and writing it down next, and so on until we have made k decisions. This generates a function in one-line form sequentially: first f(1) is chosen, then f(2) is chosen, and so on. We can represent all possible decisions pictorially by writing down the decisions made so far and then some downward "edges" indicating the possible choices for the next decision. We begin this section by discussing the picture of a decision tree, illustrating this with a variety of examples. Then we study how a tree is traversed, which is a way computers deal with the trees.

What is a Decision Tree?

Example 1 (Decision tree for 2^3) Here is an example of a decision tree for the functions 2^3. We've omitted the commas; for example, 121 stands for the function 1,2,1 in one-line form.

[Figure: the decision tree for 2^3. The root R has children 1 and 2; their children are 11, 12 and 21, 22; the leaves, left to right, are 111, 112, 121, 122, 211, 212, 221, 222. Each edge is labeled 1 or 2 according to the function value chosen.]

The set V = {R, 1, 2, 11, 12, 21, 22, 111, 112, 121, 122, 211, 212, 221, 222} is called the set of vertices of the decision tree. The vertex set for a decision tree can be any set, but it must be specified in describing the tree. You can see from the picture of the decision tree that the places where the straight line segments (called edges) of the tree end are where the vertices appear in the picture. Each vertex should appear exactly once in the picture. The symbol R stands for the root of the decision tree. Various choices other than R can be used as the symbol for the root. The edges of a decision tree such as this one are specified by giving the pair of vertices at the two ends of the edge, top vertex first, as follows: (R,1), (21,212), etc. The vertices at the ends of an edge are said to be "incident" on that edge. The complete set of edges of this decision tree is the set E = {(R,1), (R,2), (1,11), (1,12), (2,21), (2,22), (11,111), (11,112), (12,121), (12,122), (21,211), (21,212), (22,221), (22,222)}.
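The sequential choice process just described can be sketched in code. The following is our own illustration, not part of the text: a depth-first recursion in which each partial one-line form is a vertex and each allowed next value is a downward edge.

```python
# A minimal sketch (ours, not the text's): list the functions in n^k by
# making k sequential decisions, each a choice of a value in {1,...,n}.
# The recursion mirrors the decision tree: a partial one-line form is a
# vertex, and each possible next value is a downward edge.

def list_functions(n, k):
    """Return all functions from {1,...,k} to {1,...,n} in one-line
    form, generated in lex order by depth-first descent."""
    leaves = []

    def descend(partial):
        if len(partial) == k:          # at a leaf: a finished function
            leaves.append(partial)
            return
        for value in range(1, n + 1):  # edges labeled 1,...,n, left to right
            descend(partial + (value,))

    descend(())
    return leaves

functions = list_functions(2, 3)
print([''.join(map(str, f)) for f in functions])
# The 8 leaves, left to right: 111, 112, 121, 122, 211, 212, 221, 222
```

Because the edges out of each vertex are tried in increasing order, the leaves come out in lexicographic order, exactly as in the picture.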
In addition to the edges, there are "labels," either a "1" or a "2," shown on the line segments representing the edges in the picture. This labeling of the edges can be thought of as a function from the set of edges, E, to the set {1,2}. If e = (v,w) is an edge, the vertex w is called a child of v, and v is the parent of w. The children of 22 are 221 and 222. The parent of 22 is 2. The degree of a vertex v is the number of edges incident on that vertex. The down degree of v is the number of edges e = (v,w) incident on that vertex (and below it); in other words, it is the number of children of v. The degree of 22 is 3 (counting edges (2,22), (22,221), (22,222)). The down degree of 22 is 2 (counting edges (22,221), (22,222)). Vertices with down degree 0 are called leaves. All other vertices are called internal vertices.

For any vertex v in a decision tree there is a unique list of edges (x_1,x_2), (x_2,x_3), ..., (x_k,x_{k+1}) from the root x_1 to v = x_{k+1}. This sequence is called the path to a vertex in the decision tree. The number of edges, k, is the length of this path and is called the height or distance of the vertex v from the root. The path to 22 is (R,2), (2,22). The height of 22 is 2.

The decision tree of the previous example illustrates the various ways of generating a function in 2^3 sequentially. It's called a decision tree for generating the functions in 2^3. Each edge in the decision tree is labeled with the choice of function value to which it corresponds. Note that the labeling does not completely describe the corresponding decision; we should have used something like "Choose 1 for the value of f(2)" instead of simply "1" on the line from 1 to 11. In this terminology, a vertex v represents the partial function constructed so far, when the vertex v has been reached by starting at the root, following the edges to v, and making the decisions that label the edges on that unique path from the root to v.
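All of this terminology can be recomputed mechanically from the edge set E of Example 1. The short sketch below is our own illustration (the helper names are ours): it derives children, parents, degrees, leaves, and the path and height of a vertex from nothing but the list of edges.

```python
# A sketch (ours) recomputing the terminology of Example 1 directly from
# the edge set E: children, parents, degree, down degree, leaves, paths.

E = [('R', '1'), ('R', '2'), ('1', '11'), ('1', '12'), ('2', '21'), ('2', '22'),
     ('11', '111'), ('11', '112'), ('12', '121'), ('12', '122'),
     ('21', '211'), ('21', '212'), ('22', '221'), ('22', '222')]

children = {}
parent = {}
for v, w in E:                      # each edge (v, w) lists the top vertex first
    children.setdefault(v, []).append(w)
    parent[w] = v

def down_degree(v):
    return len(children.get(v, []))

def degree(v):
    # edges incident on v: the edge up to its parent (if any) plus the
    # edges down to its children
    return down_degree(v) + (1 if v in parent else 0)

def path_to(v):
    """The unique list of edges from the root to v."""
    edges = []
    while v in parent:
        edges.append((parent[v], v))
        v = parent[v]
    return list(reversed(edges))

leaves = [w for _, w in E if down_degree(w) == 0]

print(children['22'])      # ['221', '222']
print(degree('22'))        # 3
print(down_degree('22'))   # 2
print(path_to('22'))       # [('R', '2'), ('2', '22')]
print(len(path_to('22')))  # 2, the height of 22
```

The printed values match the worked facts in the text: 22 has degree 3, down degree 2, and height 2, and the eight vertices of down degree 0 are the leaves.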
The edges leading out of a vertex are labeled with all possible decisions that can be made next, given the partial function at the vertex. We labeled the edges so that the labels on edges out of each vertex are in order, 1, 2, when read left to right. The leaves are the finished functions. Notice that the leaves are in lexicographic order. In general, if we agree to label the edges from each vertex in order, then any set of functions generated sequentially by specifying f(i) at the ith step will be in lex order.

To create a single function we start at the root and choose downward edges (i.e., make decisions) until we reach a leaf. This creates a path from the root to a leaf. We may describe a path in any of the following ways:

• the sequence of vertices v_0, v_1, ..., v_m on the path from the root v_0 to the leaf v_m;
• the sequence of edges e_1, e_2, ..., e_m, where e_i = (v_{i-1}, v_i), i = 1, ..., m;
• the sequence of decisions D_1, D_2, ..., D_m, where e_i is labeled with decision D_i.

We illustrate with three descriptions of the path from the root R to the leaf 212 in Example 1:

• the vertex sequence is R, 2, 21, 212;
• the edge sequence is (R,2), (2,21), (21,212);
• the decision sequence is 2, 1, 2.

Decision trees are a part of a more general subject in discrete mathematics called "graph theory," which is studied in another unit. It is now time to look at some more challenging examples so that we can put decision trees to work for us. The next example involves counting words where the decisions are based on patterns of consonants and vowels.

Example 2 (Counting words) Using the 26 letters of the alphabet and considering the letters AEIOUY to be vowels, how many five letter "words" (i.e., lists of five letters) are there, subject to the following rules?

(a) No vowels are ever adjacent.
(b) There are never three consonants adjacent.
(c) Adjacent consonants are always different.
To start with, it would be useful to have a list of all the possible patterns of consonants and vowels; e.g., CCVCV (with C for consonant and V for vowel) is possible, but CVVCV and CCCVC violate conditions (a) and (b) respectively and so are not possible. We'll use a decision tree to generate these patterns in lex order. Of course, a pattern CVCCV can be thought of as a function f where f(1) = C, f(2) = V, ..., f(5) = V.

We could simply try to list the patterns (functions) directly without using a decision tree. The decision tree approach is preferable because we are less likely to overlook something. The resulting tree can be pictured as follows:

[Figure: the decision tree of patterns, rooted at the empty word "–". The vertices at successive levels are C, V; CC, CV, VC; CCV, CVC, VCC, VCV; CCVC, CVCC, CVCV, VCCV, VCVC; and the leaves, left to right, are CCVCC, CCVCV, CVCCV, CVCVC, VCCVC, VCVCC, VCVCV.]

At each vertex there are potentially two choices, but at some vertices only one is possible because of rules (a) and (b) above. Thus there are one or two decisions at each vertex. You should verify that this tree lists all possibilities systematically. We have used the dash "–" as the symbol for the root. This stands for the empty word on the letters C and V. The set of labels for the vertices of this decision tree T is a set of words of length 0 through 5. The vertex set is determined by the rules (or "syntax") associated with the problem (rules (a), (b), and (c) above).

Using the rules of construction, (a), (b), and (c), we can now easily count the number of words associated with each leaf. The total number of words is the sum of these individual counts. For CCVCC we obtain (20×19)^2 × 6; for CCVCV, CVCCV, VCCVC, and VCVCC we obtain (20×19)×20×6^2; for CVCVC we obtain 20^3 × 6^2; for VCVCV we obtain 20^2 × 6^3.

Definition 1 (Rank of an element of a list) The rank of an element in a list is the number of elements that appear before it in the list. The rank of a leaf of a decision tree is the number of leaves that are to the left of it in the picture of the tree. The rank is denoted by the function RANK. In Example 1, RANK(212) = 5 and RANK(111) = 0.
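Returning to Example 2, the pattern tree and the leaf counts can be checked with a short backtracking script. This is our own sketch, not the text's: it grows patterns one letter at a time, abandoning any partial pattern that violates rules (a) or (b), and then counts words per leaf using rule (c).

```python
# A sketch (ours) checking Example 2: backtrack over C/V patterns of
# length 5, enforcing (a) no adjacent vowels and (b) no three adjacent
# consonants; then count the words for each pattern using rule (c):
# 6 choices for a vowel, 20 for a consonant, but only 19 for a
# consonant immediately following another consonant.

def patterns(length):
    found = []

    def extend(p):
        if len(p) == length:
            found.append(p)
            return
        for letter in 'CV':               # lex order: C before V
            q = p + letter
            if 'VV' in q or 'CCC' in q:   # rules (a) and (b): abandon
                continue
            extend(q)

    extend('')
    return found

def word_count(pattern):
    count, prev = 1, ''
    for letter in pattern:
        if letter == 'V':
            count *= 6
        else:
            count *= 19 if prev == 'C' else 20
        prev = letter
    return count

leaves = patterns(5)
print(leaves)
# ['CCVCC', 'CCVCV', 'CVCCV', 'CVCVC', 'VCCVC', 'VCVCC', 'VCVCV']
print(leaves.index('VCVCV'))               # its RANK in the list is 6
print(sum(word_count(p) for p in leaves))  # total number of words
```

The seven leaves agree with the tree in the picture, and the index of a leaf in the generated list is exactly its RANK in the sense of Definition 1.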
Rank can be used to store data in a computer. For example, suppose we want to store information for each of the 10! ≈ 3.6×10^6 permutations of 10. The naive way to do this is to have a 10×10×···×10 array and store information about the permutation f in location f(1),...,f(10). This requires storage of size 10^10. If we store information about permutations in a one dimensional array with information about f stored at RANK(f), we only need 10! storage locations, which is much less. We'll discuss ranking permutations soon.

The inverse of the rank function is also useful. Suppose we want to generate objects at random from a set of n objects. Let RANK be a rank function for them. Generate a number k between 0 and n−1 inclusive at random. Then RANK^{-1}(k) is a random object.

Example 3 (Permutations in lexicographic order) Recall that we can think of a permutation on 3 as a bijection f: 3 → 3. Its one-line form is f(1), f(2), f(3). Here is an example of a decision tree for this situation (omitting commas):

[Figure: the decision tree of permutations, rooted at the empty word "–". The root has children 1, 2, 3; their children are 12, 13; 21, 23; 31, 32; and the leaves, left to right, are 123, 132, 213, 231, 312, 321. Each edge is labeled with the function value chosen.]

Because we first chose f(1), listing its values in increasing order, then did the same with f(2) and finally with f(3), the leaves are listed lexicographically, that is, in "alphabetical" order like a dictionary, only with numbers instead of letters.

We could have abbreviated this decision tree a bit by shrinking the edges coming from vertices with only one decision and omitting labels on nonleaf vertices. As you can see, there is no "correct" way to label a decision tree. The intermediate labels are simply a tool to help you correctly list the desired objects (functions in this case) at the leaves. Sometimes one may even omit the function at the leaf and simply read it off the tree by looking at the labels on the edges or vertices associated with the decisions that lead from the root to the leaf.
In this tree, the labels on an edge going down from vertex v tell us what values to add to the end of v to get a "partial permutation" that is one longer than v.

Permutations

We now look at two ways of ranking permutations. The method for generating random permutations in Example 23 of Unit Fn can provide another method for ranking permutations. If you'd like a challenge, you can think about how to do that. We won't discuss it.

Example 4 (Permutations in direct insertion order) Another way to create a permutation is by direct insertion. (Often this is simply called "insertion.") Suppose that we have an ordered list of k items into which we want to insert a new item. It can be placed in any of k+1 places; namely, at the end of the list or immediately before the ith item where 1 ≤ i ≤ k. By starting out with 1, choosing one of the two places to insert 2 in this list, choosing one of the three places to insert 3 in the new list and, finally, choosing one of the four places to insert 4 in the newest list, we will have produced a permutation of 4. To do this, we need to have some convention as to how the places for insertion are numbered when the list is written from left to right. The obvious choice is from left to right; however, right to left is often preferred. We'll use right to left. One reason for this choice is that the leftmost leaf is 12...n as it is for lex order. If there are k+1 possible positions to insert something, we number the positions 0, 1, ..., k, starting with the rightmost position as number 0. We'll use the notation ( )_i to stand for position i so that we can keep track of the positions when we write a list into which something is to be inserted. Here's the derivation of the permutation of 4 associated with the insertions 1, 1 and 2.

• Start with ( )_1 1 ( )_0.
• Choose the insertion of the symbol 2 into position 1 (designated by ( )_1) to get 21. With positions of possible insertions indicated, this is ( )_2 2 ( )_1 1 ( )_0.
• Now insert symbol 3 into position 1 (designated by ( )_1) to get 231 or, with possible insertions indicated, ( )_3 2 ( )_2 3 ( )_1 1 ( )_0.
• Finally, insert symbol 4 into position 2 to get 2431.

Here is the decision tree for permutations of 4 in direct insertion order. We've turned the vertices sideways so that rightmost becomes topmost in the insertion positions.

[Figure: the direct insertion decision tree. The root 1 has children 12 and 21 (insertion positions 0 and 1 for the symbol 2); each of these has three children (positions 0, 1, 2 for the symbol 3); each of those has four children (positions 0, 1, 2, 3 for the symbol 4). The leaves, left to right, are 1234, 1243, 1423, 4123, 1324, 1342, 1432, 4132, 3124, 3142, 3412, 4312, 2134, 2143, 2413, 4213, 2314, 2341, 2431, 4231, 3214, 3241, 3421, 4321.]

The labels on the vertices are, of course, the partial permutations, with the full permutations appearing on the leaves. The decision labels on the edges are the positions in which to insert the next number. Notice that the labels on the leaves are no longer in lex order because we constructed the permutations differently. Had we labeled the vertices with the positions used for insertion, the leaves would then be labeled in lex order. For example, 2413 becomes 1,0,2, which is gotten by reading edge labels on the path from the root to the leaf labeled 2413. Similarly, 4213 becomes 1,0,3.

Like the method of lex order generation, the method of direct insertion generation can be used for other things besides permutations. However, direct insertion cannot be applied as widely as lex order. Lex order generation works with anything that can be thought of as an (ordered) list, but direct insertion requires more structure.

Note that RANK(3412) = 10 and RANK(4321) = 23. What would RANK(35412) be for the permutations on 5 in direct insertion order?

Traversing Decision Trees

We conclude this section with an important class of search algorithms called backtracking algorithms. In many computer algorithms it is necessary either to systematically inspect all the vertices of a decision tree or to find the leaves of the tree. An algorithm that systematically inspects all the vertices (and so also finds the leaves) is called a traversal of the tree. How can we create such an algorithm?
To understand how to do this, we first look at how a tree can be "traversed."

Example 5 (Traversals of a decision tree) Here is a sample decision tree T with edges labeled A through J and vertices labeled 1 through 11.

[Figure: the tree T. The root 1 has children 2 and 3 (edges A and B); vertex 2 has children 4 and 5 (edges C and D); vertex 3 has children 6 and 7 (edges E and F); vertex 6 has children 8, 9, 10 and 11 (edges G, H, I and J). Arrows trace a walk around the tree.]

The arrows are not part of the decision tree, but will be helpful to us in describing certain ideas about linear orderings of vertices and edges that are commonly associated with decision trees. Imagine going around ("traversing") the decision tree following arrows. Start at the root, 1, go down edge A to vertex 2, etc. Here is the sequence of vertices as encountered in this process: 1, 2, 4, 2, 5, 2, 1, 3, 6, 8, 6, 9, 6, 10, 6, 11, 6, 3, 7, 3, 1. This sequence of vertices is called the depth first vertex sequence, DFV(T), of the decision tree T. The number of times each vertex appears in DFV(T) is one plus the down degree of that vertex. For edges, the corresponding sequence is A, C, C, D, D, A, B, E, G, G, H, H, I, I, J, J, E, F, F, B. This sequence is the depth first edge sequence, DFE(T), of the tree. Every edge appears exactly twice in DFE(T). If the vertices of the tree are read left to right, top to bottom, we obtain the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. This is called the breadth first vertex sequence, BFV(T). Similarly, the breadth first edge sequence, BFE(T), is A, B, C, D, E, F, G, H, I, J.

The sequences BFV(T) and BFE(T) are linear orderings of the vertices and edges of the tree T (i.e., each vertex or edge appears exactly once in the sequence). We also associate two linear orderings with DFV(T):

• PREV(T), called the preorder sequence of vertices of T, is the sequence of first occurrences of the vertices of T in DFV(T).
• POSV(T), called the postorder sequence of vertices of T, is the sequence of last occurrences of the vertices of T in DFV(T).

For the present tree, PREV(T) = 1,2,4,5,3,6,8,9,10,11,7 and POSV(T) = 4,5,2,8,9,10,11,6,7,3,1. With a little practice, you can quickly construct PREV(T) and POSV(T) directly from the picture of the tree. For PREV(T), follow the arrows and list each vertex the first time you encounter it (and only then). For POSV(T), follow the arrows and list each vertex the last time you encounter it (and only then). Notice that the order in which the leaves of T appear, 4, 5, 8, 9, 10, 11, is the same in both PREV(T) and POSV(T).

We now return to the problem of creating a traversal algorithm. One way to imagine doing this is to generate the depth first sequence of vertices and/or edges of the tree as done in the preceding example. We can describe our traversal more precisely by giving an algorithm. Here is one which traverses a tree whose leaves are associated with functions and lists the functions in the order of PREV(T).

Theorem 1 (Systematic Traversal Algorithm) The following procedure systematically visits the vertices of a tree T in depth-first order, DFV(T), listing the leaves as they occur in the list DFV(T).

1. Start: Mark all edges as unused and position yourself at the root.
2. Leaf: If you are at a leaf, list the function.
3. Decide case: If there are no unused edges leading out from the vertex, go to Step 4; otherwise, go to Step 5.
4. Backtrack: If you are at the root, STOP; otherwise, return to the vertex just above this one and go to Step 3.
5. Decision: Select the leftmost unused edge out of this vertex, mark it used, follow it to a new vertex and go to Step 2.

Step 4 is labeled Backtrack. What does this mean? If you follow the arrows in the tree pictured in Example 5, backtracking corresponds to going toward the root on an edge that has already been traversed in the opposite direction.
In other words, backtracking refers to the process of moving along an edge back toward the root of the tree. Thinking in terms of the decision sequence, backtracking corresponds to undoing (i.e., backtracking on) a decision previously made. Notice that the algorithm only needs to keep track of the decisions from the root to the present vertex; when it backtracks, it can "forget" the decision it is backtracking from. You should take the time now to apply the algorithm to Example 1, noting which decisions you need to remember at each time.

So far in our use of decision trees, it has always been clear what decisions are reasonable; i.e., are on a path to a solution. (In this case every leaf is a solution.) This is because we've looked only at simple problems such as listing all permutations of n or listing all functions in n^k. We now look at a situation where that is not the case.

Example 6 (Restricted permutations) Consider the following problem. List all permutations f of n such that |f(i) − f(i+1)| ≤ 3 for 1 ≤ i < n. It's not at all obvious what decisions are reasonable in this case. For instance, when n = 9, the partially specified one-line function 124586 cannot be completed to a permutation. There is a simple cure for this problem: We will allow ourselves to make decisions which lead to "dead ends," situations where we cannot continue on to a solution. With this expanded notion of a decision tree, there are often many possible decision trees that appear reasonable for doing something. Suppose that we're generating things in lex order and we've reached the vertex 12458. What do we do now? We'll simply continue to generate more of the permutation, making sure that |f(i) − f(i+1)| ≤ 3 is satisfied for that portion of the permutation we have generated so far. The resulting portion of the tree that starts with the partial permutation 12458 is represented by the following decision tree.
Because the names of the vertices are so long, we've omitted all of the name, except for the function value just added, for all but the root. Thus the rightmost circled vertex is 124589763.

[Figure: the subtree rooted at 12458. The children of the root are labeled 6, 7 and 9. The two circled leaves end the decision sequences 7,9,6,3 and 9,7,6,3; the other leaves are dead ends, and below each is shown the set of values still available there.]

Each vertex is labeled with an additional symbol of the permutation after the symbols 12458. The circled leaves represent solutions. Each solution is obtained by starting with 12458 and adding all symbols on vertices of the path from the root, 12458, to the circled leaf. Thus the two solutions are 124587963 and 124589763. The leaves that are not circled are places where there is no way to extend the partial permutation corresponding to that leaf such that |f(i) − f(i+1)| ≤ 3 is satisfied. Below each leaf that is not a solution, you can see the set of values available for completing the partial permutation at that leaf. It is clear in those cases that no completion satisfying |f(i) − f(i+1)| ≤ 3 is possible. Had we been smarter, we might have come up with a simple test that would have told us that 124586 could not be extended to a solution. This would have led to a different decision tree in which the vertex corresponding to the partial permutation 124586 would have been a leaf.

You should note here that the numbers 3, 6, 7, 9 are labels on the vertices of this tree. A vertex of this tree is the partial permutation gotten by concatenating (attaching) the labels on the path from the root to that vertex to the permutation 12458. Thus, 124586 is a vertex with label 6. The path from the root to that vertex is 12458, 124586 (corresponding to the edge (12458,124586)). The vertices are not explicitly shown, but can be figured out from the rules just mentioned. The labels on the vertices correspond to the decisions to be made (how to extend the partial permutation created thus far).
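The search in Example 6 can be sketched as a backtracking program. This is our own illustration, not the text's: it extends a partial permutation one decision at a time, abandoning any dead end, and we use it to check the subtree rooted at 12458 for n = 9.

```python
# A backtracking sketch (ours) for Example 6: list the permutations f of
# {1,...,n} satisfying |f(i) - f(i+1)| <= 3, extending a partial
# permutation one decision at a time and abandoning dead ends.

def restricted_perms(n, prefix=()):
    solutions = []

    def extend(partial, unused):
        if not unused:
            solutions.append(partial)       # a circled leaf: a solution
            return
        for v in sorted(unused):            # decisions in lex order
            if not partial or abs(partial[-1] - v) <= 3:
                extend(partial + (v,), unused - {v})
        # if no decision was possible, this vertex is an abandoned leaf

    start = tuple(prefix)
    extend(start, set(range(1, n + 1)) - set(start))
    return solutions

# The subtree rooted at the partial permutation 12458 has exactly two
# circled leaves:
sols = restricted_perms(9, prefix=(1, 2, 4, 5, 8))
print([''.join(map(str, s)) for s in sols])   # ['124587963', '124589763']

# And 124586 really is a dead end, as claimed:
print(restricted_perms(9, prefix=(1, 2, 4, 5, 8, 6)))  # []
```

The two solutions found are exactly the circled leaves 124587963 and 124589763 of the tree, and the empty result for the prefix 124586 confirms that it cannot be completed.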
Our tree traversal algorithm, Theorem 1, requires a slight modification to cover this type of extended decision tree concept where a leaf need not be a solution. Change Step 2 to

2′. Leaf: If you are at a leaf, take appropriate action.

For the tree rooted at 12458 in this example, seven of the leaves are not solutions and two are. For the two that are, the "appropriate action" is to print them out, jump and shout, or, in some way, proclaim success. For the leaves that are not solutions, the appropriate action is to note failure in some way (or just remain silent) and Backtrack (Step 4 of Theorem 1). This backtracking on leaves that are not solutions is where this type of algorithm gets its name: "backtracking algorithm."

How can there be more than one decision tree for generating solutions in a specified order? Suppose someone who was not very clever wanted to generate all permutations of n in lex order. He might program a computer to generate all functions in n^n in lex order and to then discard those functions which are not permutations. This leads to a much bigger tree because n^n is much bigger than n!, even when n is as small as 3. A somewhat cleverer friend might suggest that he have the program check to see that f(k) ≠ f(k−1) for each k > 1. This won't slow down the program very much and will lead to only n(n−1)^(n−1) functions. Thus the program should run faster. Someone else might suggest that the programmer check at each step to see that the function produced so far is an injection. If this is done, nothing but permutations will be produced, but the program may be much slower.

The lesson to be learned from the previous paragraph is that there is often a trade off between the size of the decision tree and the time that must be spent at each vertex determining what decisions to allow. Because of this, different people may develop different decision trees for the same problem.
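The three strategies above can be compared concretely by counting leaves. The quick check below is our own sketch: for n = 5 it computes n^n (generate and discard), verifies n(n−1)^(n−1) by brute force (check f(k) ≠ f(k−1)), and compares both with n! (check injectivity at every step).

```python
# A quick check (ours) of the tree sizes discussed above, for n = 5:
# generate-and-discard has n^n leaves, checking f(k) != f(k-1) prunes
# this to n*(n-1)^(n-1) leaves, and checking injectivity at every step
# leaves exactly n! leaves, the permutations themselves.

from itertools import product
from math import factorial

n = 5
naive = n ** n
adjacent_ok = sum(1 for f in product(range(1, n + 1), repeat=n)
                  if all(f[k] != f[k - 1] for k in range(1, n)))
injections = factorial(n)
print(naive, adjacent_ok, injections)  # 3125 1280 120
```

Even at n = 5 the gap is large: 3125 leaves versus 1280 versus 120, and the brute-force count confirms the formula n(n−1)^(n−1).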
The differences between computer run times for different decision trees can be truly enormous. By carefully defining the criteria that allow one to decide that a vertex is a leaf, people have changed problems that were too long to run on a supercomputer into problems that could be easily run on a personal computer. We'll conclude this section with two examples of backtracking of the type just discussed.

Example 7 (Domino coverings) We are going to consider the problem of covering an m by n board (for example, m = n = 8 gives a chess board) with 1 by 2 rectangles (called "dominoes"). A domino can be placed either horizontally or vertically so that it covers two squares and does not overlap another domino. Here is a picture of the situation for m = 3, n = 4. (The sequences of h's and v's under the eleven covered boards will be explained below.)

[Figure: the eleven coverings of the 3 by 4 board, with h = horizontal domino and v = vertical domino. The sequences under the boards are hhhhhh, hhhvvh, hhvhvh, hhvvhh, hhvvvv, hvvhhh, hvvvvh, vhvhhh, vvhhhh, vvhvvh, vvvvhh.]

If the squares of the board are numbered systematically, left to right, top to bottom, from 1 to 12, we can describe any placement of dominoes by a sequence of 6 h's and v's. Each of the domino placements in the above picture has such a description just below it. Take as an example hhvhvh (the third domino covering in the picture). We begin with no dominoes on the board. None of the squares, numbered 1 to 12, are covered. The list of "unoccupied squares" is as follows:

1 2 3 4 5 6 7 8 9 10 11 12

Thus, the smallest unoccupied square is 1. The first symbol in hhvhvh is h. That means that we take a horizontal domino and cover the square 1 with it. That forces us to cover square 2 also. The list of unoccupied squares is as follows:

3 4 5 6 7 8 9 10 11 12

Now the smallest unoccupied square is 3. The second symbol in hhvhvh is also an h. Cover square 3 with a horizontal domino, forcing us to cover square 4 also.
The list of unoccupied squares is as follows:

5 6 7 8 9 10 11 12

At this point, the first row of the board is covered with two horizontal dominoes (check the picture). Now the smallest unoccupied square is 5 (the first square in the second row). The third symbol in hhvhvh is v. Thus we cover square 5 with a vertical domino, forcing us to cover square 9 also. The list of unoccupied squares is as follows:

6 7 8 10 11 12

We leave it to you to continue this process to the bitter end and obtain the domino covering shown in the picture.

Here is the general description of the process. Place dominoes sequentially as follows. If the first unused element in the sequence is h, place a horizontal domino on the first unoccupied square and the square to its right. If the first unused element in the sequence is v, place a vertical domino on the first unoccupied square and the square just below it. Not all sequences correspond to legal placements of dominoes (try hhhhhv). For a 2×2 board, the only legal sequences are hh and vv. For a 2×3 board, the legal sequences are hvh, vhh and vvv. For a 3×4 board, there are eleven legal sequences, as shown in the picture at the start of this example.

To find these sequences in lex order we used a decision tree for generating sequences of h's and v's in lex order. Each decision is required to lead to a domino that lies entirely on the board and does not overlap another domino. Here is our decision tree:

[Figure: the decision tree for the 3 by 4 board. Each vertex carries the decision, h or v, that led to it; the leaves, read left to right, are the eleven legal sequences together with abandoned leaves such as vhvvv.]

Note that in this tree, the decision (label) that led to a vertex is placed at the vertex rather than on the edge. The actual vertices, not explicitly labeled, are the sequences of choices from the root to that vertex (e.g., the vertex hvv has label v). The leaf vhvvv associated with the path v,h,v,v,v does not correspond to a covering.
It has been abandoned (i.e., declared a leaf but not a solution) because there is no way to place a domino on the lower left square of the board, which is the first free square. Draw a picture of the board to see what is happening. Our criterion for deciding whether a vertex is a leaf is to check whether that vertex corresponds to a solution or to a placement that does not permit another domino to be placed on the board. It is not hard to come up with a criterion that produces a smaller decision tree. For example, vhvv leaves the lower left corner of the board isolated. That means vhvv cannot be extended to a solution, even though more dominoes can be placed on the board. But checking this more restrictive criterion is more time consuming.

Exercises for Section 1

1.1. List the nonroot vertices of the decision tree in Example 2 in PREV, POSV and BFV orders.

1.2. Let RANK_L denote the rank in lex order and let RANK_I denote the rank in insertion order on permutations of n. Answer the following questions and give reasons for your answers:
(a) For n = 3 and n = 4, which permutations σ have RANK_L(σ) = RANK_I(σ)?
(b) What is RANK_L(2314)? RANK_L(45321)?
(c) What is RANK_I(2314)? RANK_I(45321)?
(d) What permutation σ of 4 has RANK_L(σ) = 15?
(e) What permutation σ of 4 has RANK_I(σ) = 15?
(f) What permutation σ of 5 has RANK_L(σ) = 15?

1.3. Draw the decision tree to list all sequences of length six of A's and B's that satisfy the following conditions:
• There are no two adjacent A's.
• There are never three B's adjacent.
• If each leaf is thought of as a word, the leaves are in alphabetical order.

1.4. Draw a decision tree for D(6^4), the strictly decreasing functions from 4 to 6. You should choose a decision tree so that the leaves are in lex order when read from left to right.
(a) What is the rank of 5431? of 6531?
(b) What function has rank 0? rank 7?
(c) Your decision tree should contain the decision tree for D(5^4).
Indicate it and use it to list those functions in lex order.
(d) Indicate how all of the parts of this exercise can be interpreted in terms of subsets of a set.

1.5. Modify Theorem 1 to list all vertices in PREV order. Do the same for POSV order.

1.6. The president of Hardy Hibachi Corporation decided to design a series of different grills for his square-topped hibachis. They were to be collectibles: he hoped his customers would want one of each different design (and spend big bucks to get them). Having studied combinatorics in college, his undergrad summer intern suggested that these grills be modeled after the patterns associated with domino arrangements on 4 by 4 boards. Their favorite grill was the design with code vvhvvhhh. The student, looking at some old class notes, suggested seven other designs: vvhhvhvh, hhvvhvvh, vhvhhvvh, hvvhvhvh, hvvvvhhh, vhvhvvhh, hhhvvvh. These eight grills were fabricated out of sturdy steel rods, put in a box, and shipped to the boss. When he opened up the box, much to his disgust, he found that all of the grills were the same. What went wrong? How should the collection of different grills be designed? (This is called an isomorph rejection problem.)

The favorite grill: vvhvvhhh

[Picture of the favorite grill.]
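Exercise 1.6 turns on the fact that a physical grill can be rotated and flipped, so two codes describe the same grill whenever one pattern maps to the other under a symmetry of the square. A hedged Python sketch of one way to test this follows; the helpers decode and canonical are our own illustration, not from the text, and decode assumes its input is a legal code:

```python
def decode(code, n):
    """Decode a legal h/v code (as in Example 7) into the set of dominoes
    on an n x n board; each domino is a frozenset of two (row, col) cells."""
    grid = [[False] * n for _ in range(n)]
    dominoes = set()
    for move in code:
        # First unoccupied square, scanning left to right, top to bottom.
        r, c = next((r, c) for r in range(n) for c in range(n)
                    if not grid[r][c])
        dr, dc = (0, 1) if move == 'h' else (1, 0)
        grid[r][c] = grid[r + dr][c + dc] = True
        dominoes.add(frozenset({(r, c), (r + dr, c + dc)}))
    return dominoes

def canonical(dominoes, n):
    """Smallest image of a pattern under the 8 symmetries of the square
    (4 rotations, each with or without a reflection)."""
    syms = [lambda r, c: (r, c),
            lambda r, c: (c, n - 1 - r),
            lambda r, c: (n - 1 - r, n - 1 - c),
            lambda r, c: (n - 1 - c, r),
            lambda r, c: (c, r),
            lambda r, c: (r, n - 1 - c),
            lambda r, c: (n - 1 - c, n - 1 - r),
            lambda r, c: (n - 1 - r, c)]
    images = [frozenset(frozenset(f(r, c) for (r, c) in d) for d in dominoes)
              for f in syms]
    return min(images, key=lambda s: sorted(map(sorted, s)))
```

Two codes give the same grill exactly when their canonical forms agree; keeping only one code per canonical form is a simple instance of isomorph rejection.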
