3 SOLVING PROBLEMS BY SEARCHING

In which we see how an agent can find a sequence of actions that achieves its goals, when no single action will do.

The simplest agents discussed in Chapter 2 were the reflex agents, which base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn. Goal-based agents, on the other hand, can succeed by considering future actions and the desirability of their outcomes.

This chapter describes one kind of goal-based agent called a problem-solving agent. Problem-solving agents decide what to do by finding sequences of actions that lead to desirable states. We start by defining precisely the elements that constitute a "problem" and its "solution," and give several examples to illustrate these definitions. We then describe several general-purpose search algorithms that can be used to solve these problems and compare the advantages of each algorithm. The algorithms are uninformed, in the sense that they are given no information about the problem other than its definition. Chapter 4 deals with informed search algorithms, ones that have some idea of where to look for solutions. This chapter uses concepts from the analysis of algorithms. Readers unfamiliar with the concepts of asymptotic complexity (that is, O() notation) and NP-completeness should consult Appendix A.

3.1 PROBLEM-SOLVING AGENTS

Intelligent agents are supposed to maximize their performance measure. As we mentioned in Chapter 2, achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it. Let us first look at why and how an agent might do this.

Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. The agent's performance measure contains many factors: it wants to improve its suntan, improve its Romanian, take in the sights, enjoy the nightlife (such as it is), avoid hangovers, and so on.
The decision problem is a complex one involving many tradeoffs and careful reading of guidebooks. Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. Courses of action that don't reach Bucharest on time can be rejected without further consideration, and the agent's decision problem is greatly simplified. Goals help organize behavior by limiting the objectives that the agent is trying to achieve. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.

We will consider a goal to be a set of world states: exactly those states in which the goal is satisfied. The agent's task is to find out which sequence of actions will get it to a goal state. Before it can do this, it needs to decide what sorts of actions and states to consider. If it were to try to consider actions at the level of "move the left foot forward an inch" or "turn the steering wheel one degree left," the agent would probably never find its way out of the parking lot, let alone to Bucharest, because at that level of detail there is too much uncertainty in the world and there would be too many steps in a solution. Problem formulation is the process of deciding what actions and states to consider, given a goal. We will discuss this process in more detail later. For now, let us assume that the agent will consider actions at the level of driving from one major town to another. The states it will consider therefore correspond to being in a particular town.

Our agent has now adopted the goal of driving to Bucharest, and is considering where to go from Arad. There are three roads out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind.
None of these achieves the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to follow. In other words, the agent will not know which of its possible actions is best, because it does not know enough about the state that results from taking each action. If the agent has no additional knowledge, then it is stuck. The best it can do is choose one of the actions at random.

But suppose the agent has a map of Romania, either on paper or in its memory. The point of a map is to provide the agent with information about the states it might get itself into, and the actions it can take. The agent can use this information to consider subsequent stages of a hypothetical journey via each of the three towns, trying to find a journey that eventually gets to Bucharest. Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey. In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.

This process of looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple "formulate, search, execute" design for the agent, as shown in Figure 3.1. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as

(Footnote: Notice that each of these "states" actually corresponds to a large set of world states, because a real world state specifies every aspect of reality.)
(It is important to keep in mind the distinction between states in problem solving and world states.)

(Footnote: We are assuming that most readers are in the same position and can easily imagine themselves to be as clueless as our agent. We apologize to Romanian readers who are unable to take advantage of this pedagogical device.)

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    inputs: percept, a percept
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation

    state ← UPDATE-STATE(state, percept)
    if seq is empty then do
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
    action ← FIRST(seq)
    seq ← REST(seq)
    return action

Figure 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over. Note that when it is executing the sequence it ignores its percepts: it assumes that the solution it has found will always work.

the next thing to do, typically the first action of the sequence, and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.

We first describe the process of problem formulation, and then devote the bulk of the chapter to various algorithms for the SEARCH function. We will not discuss the workings of the UPDATE-STATE and FORMULATE-GOAL functions further in this chapter. Before plunging into the details, let us pause briefly to see where problem-solving agents fit into the discussion of agents and environments in Chapter 2. The agent design in Figure 3.1 assumes that the environment is static, because formulating and solving the problem is done without paying attention to any changes that might be occurring in the environment.
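The control flow of Figure 3.1 can be sketched in Python. This is a minimal illustration, not the book's code: the helper functions (update_state, formulate_goal, formulate_problem, search) are hypothetical stubs, filled in just enough to show the "formulate, search, execute" loop with a canned plan.

```python
def update_state(state, percept):
    return percept            # assume a fully observable environment

def formulate_goal(state):
    return "Bucharest"        # hypothetical fixed goal for this sketch

def formulate_problem(state, goal):
    return (state, goal)

def search(problem):
    # stand-in for a real search algorithm: a precomputed action sequence
    return ["Go(Sibiu)", "Go(RimnicuVilcea)", "Go(Pitesti)", "Go(Bucharest)"]

def make_agent():
    seq, state = [], None     # the "static" variables of Figure 3.1
    def agent(percept):
        nonlocal seq, state
        state = update_state(state, percept)
        if not seq:           # no current plan: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem)
        action, seq = seq[0], seq[1:]   # FIRST(seq) / REST(seq)
        return action
    return agent

agent = make_agent()
print(agent("Arad"))   # first action of the freshly computed plan
```

Note that, as the figure's caption says, once the plan exists the agent ignores its percepts: each call simply pops the next action until the sequence is exhausted.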
The agent design also assumes that the initial state is known; knowing it is easiest if the environment is observable. The idea of enumerating "alternative courses of action" assumes that the environment can be viewed as discrete. Finally, and most importantly, the agent design assumes that the environment is deterministic. Solutions to problems are single sequences of actions, so they cannot handle any unexpected events; moreover, solutions are executed without paying attention to the percepts. An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. (Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.) All these assumptions mean that we are dealing with the easiest kinds of environments, which is one reason this chapter comes early on in the book. Section 3.6 takes a brief look at what happens when we relax the assumptions of observability and determinism. Chapters 12 and 17 go into much greater depth.

Well-defined problems and solutions

A problem can be defined formally by four components:

- The initial state that the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).
- A description of the possible actions available to the agent. The most common formulation uses a successor function. Given a particular state x, SUCCESSOR-FN(x) returns a set of (action, successor) ordered pairs, where each action is one of the legal actions in state x and each successor is a state that can be reached from x by applying the action.
  For example, from the state In(Arad), the successor function for the Romania problem would return
  {(Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind))}
  Together, the initial state and successor function implicitly define the state space of the problem: the set of all states reachable from the initial state. The state space forms a graph in which the nodes are states and the arcs between nodes are actions. (The map of Romania shown in Figure 3.2 can be interpreted as a state-space graph if we view each road as standing for two driving actions, one in each direction.) A path in the state space is a sequence of states connected by a sequence of actions.
- The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. The agent's goal in Romania is the singleton set {In(Bucharest)}. Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called "checkmate," where the opponent's king is under attack and can't escape.
- A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometers. In this chapter, we assume that the cost of a path can be described as the sum of the costs of the individual actions along the path. The step cost of taking action a to go from state x to state y is denoted by c(x, a, y). The step costs for Romania are shown in Figure 3.2 as route distances. We will assume that step costs are nonnegative.
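The four components can be bundled into a single data structure, as the text says. Here is one possible Python sketch for a fragment of the Romania problem; the class and method names (RomaniaProblem, successor_fn, and so on) are illustrative choices, not anything prescribed by the book, and only two cities' road links are filled in.

```python
# Step costs as route distances in kilometers (fragment of Figure 3.2 only).
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "RimnicuVilcea": 80, "Oradea": 151},
}

class RomaniaProblem:
    def __init__(self, initial, goal):
        self.initial_state = initial        # e.g. "Arad", i.e. In(Arad)
        self.goal = goal

    def successor_fn(self, state):
        # SUCCESSOR-FN(x): a set of (action, successor) ordered pairs
        return {(f"Go({city})", city) for city in ROADS.get(state, {})}

    def goal_test(self, state):
        # explicit singleton goal set {In(Bucharest)}
        return state == self.goal

    def step_cost(self, x, action, y):      # c(x, a, y), nonnegative
        return ROADS[x][y]

problem = RomaniaProblem("Arad", "Bucharest")
print(sorted(problem.successor_fn("Arad")))
```

A search algorithm would receive an object like this as input and interrogate it only through these four components.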
The preceding elements define a problem and can be gathered together into a single data structure that is given as input to a problem-solving algorithm. A solution to a problem is a path from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.

(Footnote: An alternative formulation uses a set of operators that can be applied to a state to generate successors.)

(Footnote: The implications of negative costs are explored in Exercise 3.17.)

Figure 3.2 A simplified road map of part of Romania.

Formulating problems

In the preceding section we proposed a formulation of the problem of getting to Bucharest in terms of the initial state, successor function, goal test, and path cost. This formulation seems reasonable, yet it omits a great many aspects of the real world. Compare the simple state description we have chosen, In(Arad), to an actual cross-country trip, where the state of the world includes so many things: the traveling companions, what is on the radio, the scenery out of the window, whether there are any law enforcement officers nearby, how far it is to the next rest stop, the condition of the road, the weather, and so on. All these considerations are left out of our state descriptions because they are irrelevant to the problem of finding a route to Bucharest. The process of removing detail from a representation is called abstraction.

In addition to abstracting the state description, we must abstract the actions themselves. A driving action has many effects. Besides changing the location of the vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes the agent (as they say, travel is broadening). In our formulation, we take into account only the change in location.
Also, there are many actions that we will omit altogether: turning on the radio, looking out of the window, slowing down for law enforcement officers, and so on. And of course, we don't specify actions at the level of "turn steering wheel to the left by three degrees."

Can we be more precise about defining the appropriate level of abstraction? Think of the abstract states and actions we have chosen as corresponding to large sets of detailed world states and detailed action sequences. Now consider a solution to the abstract problem: for example, the path from Arad to Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large number of more detailed paths. For example, we could drive with the radio on between Sibiu and Rimnicu Vilcea, and then switch it off for the rest of the trip. The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world; a sufficient condition is that for every detailed state that is "in Arad," there is a detailed path to some state that is "in Sibiu," and so on. The abstraction is useful if carrying out each of the actions in the solution is easier than the original problem; in this case they are easy enough that they can be carried out without further search or planning by an average driving agent. The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful abstractions, intelligent agents would be completely swamped by the real world.

3.2 EXAMPLE PROBLEMS

The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems. A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description.
This means that it can be used easily by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about. They tend not to have a single agreed-upon description, but we will attempt to give the general flavor of their formulations.

Toy problems

The first example we will examine is the vacuum world first introduced in Chapter 2. (See Figure 2.2.) This can be formulated as a problem as follows:

- States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8 possible world states.
- Initial state: Any state can be designated as the initial state.
- Successor function: This generates the legal states that result from trying the three actions (Left, Right, and Suck). The complete state space is shown in Figure 3.3.
- Goal test: This checks whether all the squares are clean.
- Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets messed up once cleaned. (In Section 3.6, we will relax these assumptions.) One important thing to note is that the state is determined by both the agent location and the dirt locations. A larger environment with n locations has n · 2^n states.

The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3 × 3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach a specified goal state, such as the one shown on the right of the figure. The standard formulation is as follows:

- States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
- Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states (Exercise 3.4).
Figure 3.3 The state space for the vacuum world. Arcs denote actions: L = Left, R = Right, S = Suck.

- Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
- Goal test: This checks whether the state matches the goal configuration shown in Figure 3.4. (Other goal configurations are possible.)
- Path cost: Each step costs 1, so the path cost is the number of steps in the path.

What abstractions have we included here? The actions are abstracted to their beginning and final states, ignoring the intermediate locations where the block is sliding. We've abstracted away actions such as shaking the board when pieces get stuck, or extracting the pieces with a knife and putting them back again. We're left with a description of the rules of the puzzle, avoiding all the details of physical manipulations.

Figure 3.4 A typical instance of the 8-puzzle (start state and goal state).

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete, so one does not expect to find methods significantly better in the worst case than the search algorithms described in this chapter and the next. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (on a 4 × 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5 × 5 board) has around 10^25 states, and random instances are still quite difficult to solve optimally with current machines and algorithms.

The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)
Figure 3.5 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.

Figure 3.5 Almost a solution to the 8-queens problem. (Solution is left as an exercise.)

Although efficient special-purpose algorithms exist for this problem and the whole n-queens family, it remains an interesting test problem for search algorithms. There are two main kinds of formulation. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:

- States: Any arrangement of 0 to 8 queens on the board is a state.
- Initial state: No queens on the board.
- Successor function: Add a queen to any empty square.
- Goal test: 8 queens are on the board, none attacked.

In this formulation, we have 64 · 63 · · · 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:

- States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
- Successor function: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions are easy to find. On the other hand, for 100 queens the initial formulation has roughly 10^400 states, whereas the improved formulation has about 10^52 states (Exercise 3.5). This is a huge reduction, but the improved state space is still too big for the algorithms in this chapter to handle.
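The 2,057-state figure for the improved formulation is small enough to check by brute force. The sketch below (function names are our own) implements the improved successor function, with a state represented as a tuple of row indices, one per filled column, and enumerates the entire reachable state space.

```python
def attacks(rows, col, row):
    # Does any already-placed queen attack square (col, row)?
    # Queens attack along rows and diagonals; columns are distinct by construction.
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(rows))

def successors(rows):
    # Add a queen to the leftmost empty column, on any unattacked square.
    col = len(rows)
    return [rows + (row,) for row in range(8) if not attacks(rows, col, row)]

# Enumerate the whole state space of the improved incremental formulation.
states, frontier = [], [()]          # start from the empty board
while frontier:
    state = frontier.pop()
    states.append(state)
    frontier.extend(successors(state))

solutions = [s for s in states if len(s) == 8]
print(len(states), len(solutions))   # 2057 reachable states, 92 solutions
```

Running it confirms the text's count of 2,057 reachable states (including the empty board), of which 92 are complete non-attacking placements.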
Chapter 4 describes the complete-state formulation, and Chapter 5 gives a simple algorithm that makes even the million-queens problem easy to solve.

Real-world problems

We have already seen how the route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding algorithms are used in a variety of applications, such as routing in computer networks, military operations planning, and airline travel planning systems. These problems are typically complex to specify. Consider a simplified example of an airline travel problem specified as follows:

- States: Each is represented by a location (e.g., an airport) and the current time.
- Initial state: This is specified by the problem.
- Successor function: This returns the states resulting from taking any scheduled flight (perhaps further specified by seat class and location), leaving later than the current time plus the within-airport transit time, from the current airport to another.
- Goal test: Are we at the destination by some prespecified time?
- Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

Commercial travel advice systems use a problem formulation of this kind, with many additional complications to handle the byzantine fare structures that airlines impose. Any seasoned traveller knows, however, that not all air travel goes according to plan. A really good system should include contingency plans, such as backup reservations on alternate flights, to the extent that these are justified by the cost and likelihood of failure of the original plan.

Touring problems are closely related to route-finding problems, but with an important difference. Consider, for example, the problem "Visit every city in Figure 3.2 at least once, starting and ending in Bucharest."
As with route finding, the actions correspond to trips between adjacent cities. The state space, however, is quite different. Each state must include not just the current location but also the set of cities the agent has visited. So the initial state would be "In Bucharest; visited {Bucharest}," a typical intermediate state would be "In Vaslui; visited {Bucharest, Urziceni, Vaslui}," and the goal test would check whether the agent is in Bucharest and all 20 cities have been visited.

The traveling salesperson problem (TSP) is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to improve the capabilities of TSP algorithms. In addition to planning trips for traveling salespersons, these algorithms have been used for tasks such as planning movements of automatic circuit-board drills and of stocking machines on shop floors.

A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. The layout problem comes after the logical design phase, and is usually split into two parts: cell layout and channel routing. In cell layout, the primitive components of the circuit are grouped into cells, each of which performs some recognized function. Each cell has a fixed footprint (size and shape) and requires a certain number of connections to each of the other cells. The aim is to place the cells on the chip so that they do not overlap and so that there is room for the connecting wires to be placed between the cells. Channel routing finds a specific route for each wire through the gaps between the cells. These search problems are extremely complex, but definitely worth solving.
In Chapter 4, we will see some algorithms capable of solving them.

Robot navigation is a generalization of the route-finding problem described earlier. Rather than a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states. For a circular robot moving on a flat surface, the space is essentially two-dimensional. When the robot has arms and legs or wheels that must also be controlled, the search space becomes many-dimensional. Advanced techniques are required just to make the search space finite. We examine some of these methods in Chapter 25. In addition to the complexity of the problem, real robots must also deal with errors in their sensor readings and motor controls.

Automatic assembly sequencing of complex objects by a robot was first demonstrated by FREDDY (Michie, 1972). Progress since then has been slow but sure, to the point where the assembly of intricate objects such as electric motors is economically feasible. In assembly problems, the aim is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in the sequence without undoing some of the work already done. Checking a step in the sequence for feasibility is a difficult geometrical search problem closely related to robot navigation. Thus, the generation of legal successors is the expensive part of assembly sequencing. Any practical algorithm must avoid exploring all but a tiny fraction of the state space. Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.
In recent years there has been increased demand for software robots that perform Internet searching, looking for answers to questions, for related information, or for shopping deals. This is a good application for search techniques, because it is easy to conceptualize the Internet as a graph of nodes (pages) connected by links. A full description of Internet search is deferred until Chapter 10.

3.3 SEARCHING FOR SOLUTIONS

Having formulated some problems, we now need to solve them. This is done by a search through the state space. This chapter deals with search techniques that use an explicit search tree that is generated by the initial state and the successor function that together define the state space. In general, we may have a search graph rather than a search tree, when the same state can be reached from multiple paths. We defer consideration of this important complication until Section 3.5.

Figure 3.6 shows some of the expansions in the search tree for finding a route from Arad to Bucharest. The root of the search tree is a search node corresponding to the initial state, In(Arad). The first step is to test whether this is a goal state. Clearly it is not, but it is important to check so that we can solve trick problems like "starting in Arad, get to Arad." Because this is not a goal state, we need to consider some other states. This is done by expanding the current state; that is, applying the successor function to the current state, thereby generating a new set of states. In this case, we get three new states: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of these three possibilities to consider further. This is the essence of search: following up one option now and putting the others aside for later, in case the first choice does not lead to a solution. Suppose we choose Sibiu first.
We check to see whether it is a goal state (it is not) and then expand it to get In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea). We can then choose any of these four, or go back and choose Timisoara or Zerind. We continue choosing, testing, and expanding until either a solution is found or there are no more states to be expanded. The choice of which state to expand is determined by the search strategy. The general tree-search algorithm is described informally in Figure 3.7.

It is important to distinguish between the state space and the search tree. For the route-finding problem, there are only 20 states in the state space, one for each city. But there are an infinite number of paths in this state space, so the search tree has an infinite number of nodes. For example, the three paths Arad-Sibiu, Arad-Sibiu-Arad, and Arad-Sibiu-Arad-Sibiu are the first three of an infinite sequence of paths. (Obviously, a good search algorithm avoids following such repeated paths; Section 3.5 shows how.)

There are many ways to represent nodes, but we will assume that a node is a data structure with five components:

- STATE: the state in the state space to which the node corresponds;
- PARENT-NODE: the node in the search tree that generated this node;
- ACTION: the action that was applied to the parent to generate the node;
- PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and
- DEPTH: the number of steps along the path from the initial state.

It is important to remember the distinction between nodes and states. A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world. Thus, nodes are on particular paths, as defined by PARENT-NODE pointers, whereas states are not.
Furthermore, two different nodes can contain the same world state, if that state is generated via two different search paths. The node data structure is depicted in Figure 3.8.

We also need to represent the collection of nodes that have been generated but not yet expanded; this collection is called the fringe. Each element of the fringe is a leaf node, that is, a node with no successors in the tree.

Figure 3.6 Partial search trees for finding a route from Arad to Bucharest (the final panel shows the tree after expanding Sibiu). Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.

function TREE-SEARCH(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree

Figure 3.7 An informal description of the general tree-search algorithm.

Figure 3.8 Nodes are the data structures from which the search tree is constructed. Each has a parent, a state, and various bookkeeping fields. Arrows point from child to parent.

In Figure 3.6, the fringe of each tree consists of those nodes with bold outlines. The simplest representation of the fringe would be a set of nodes. The search strategy then would be a function that selects the next node to be expanded from this set. Although this is conceptually straightforward, it could be computationally expensive, because the strategy function might have to look at every element of the set to choose the best one. Therefore, we will assume that the collection of nodes is implemented as a queue.
The operations on a queue are as follows:

MAKE-QUEUE(element, ...) creates a queue with the given element(s).
EMPTY?(queue) returns true only if there are no more elements in the queue.
FIRST(queue) returns the first element of the queue.
REMOVE-FIRST(queue) returns FIRST(queue) and removes it from the queue.
INSERT(element, queue) inserts an element into the queue and returns the resulting queue. (We will see that different types of queues insert elements in different orders.)
INSERT-ALL(elements, queue) inserts a set of elements into the queue and returns the resulting queue.

With these definitions, we can write the more formal version of the general tree-search algorithm shown in Figure 3.9.

Measuring problem-solving performance

The output of a problem-solving algorithm is either failure or a solution. (Some algorithms might get stuck in an infinite loop and never return an output.) We will evaluate an algorithm's performance in four ways:

Completeness: Is the algorithm guaranteed to find a solution when there is one?
Optimality: Does the strategy find the optimal solution, as defined on page 62?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?
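In Python, these operations map naturally onto a deque; a minimal sketch, with class and method names of my choosing mirroring the pseudocode operations:

```python
from collections import deque

class FIFOQueue:
    """First-in-first-out queue: INSERT at the back, REMOVE-FIRST at the front."""
    def __init__(self, *elements):      # MAKE-QUEUE(element, ...)
        self._q = deque(elements)
    def empty(self):                    # EMPTY?(queue)
        return not self._q
    def first(self):                    # FIRST(queue)
        return self._q[0]
    def remove_first(self):             # REMOVE-FIRST(queue)
        return self._q.popleft()
    def insert(self, element):          # INSERT(element, queue)
        self._q.append(element)
        return self
    def insert_all(self, elements):     # INSERT-ALL(elements, queue)
        for e in elements:
            self.insert(e)
        return self

class LIFOQueue(FIFOQueue):
    """Last-in-first-out queue (a stack): INSERT at the front instead."""
    def insert(self, element):
        self._q.appendleft(element)
        return self
```

The two subclasses differ only in where `insert` places new elements, which is exactly how one tree-search algorithm yields different search strategies.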
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

function EXPAND(node, problem) returns a set of nodes
  successors ← the empty set
  for each (action, result) in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result
    PARENT-NODE[s] ← node
    ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(STATE[node], action, result)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors

Figure 3.9 The general tree-search algorithm. (Note that the fringe argument must be an empty queue, and the type of the queue will affect the order of the search.) The SOLUTION function returns the sequence of actions obtained by following parent pointers back to the root.

Time and space complexity are always considered with respect to some measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the state space graph, because the graph is viewed as an explicit data structure that is input to the search program. (The map of Romania is an example of this.) In AI, where the graph is represented implicitly by the initial state and successor function and is frequently infinite, complexity is expressed in terms of three quantities: b, the branching factor or maximum number of successors of any node; d, the depth of the shallowest goal node; and m, the maximum length of any path in the state space. Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory.
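Figure 3.9 translates almost line for line into Python. Below is a minimal sketch, assuming a problem object with `initial_state`, `successor_fn`, `goal_test`, and `step_cost`; the interface names and the toy problem are mine, not the book's:

```python
class Node:
    """Search-tree node whose fields mirror the pseudocode's five components."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0, depth=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost, self.depth = path_cost, depth

def expand(node, problem):
    """EXPAND from Figure 3.9: one child node per (action, result) pair."""
    children = []
    for action, result in problem.successor_fn(node.state):
        step = problem.step_cost(node.state, action, result)
        children.append(Node(result, node, action,
                             node.path_cost + step, node.depth + 1))
    return children

def solution(node):
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]

def tree_search(problem):
    """TREE-SEARCH with a plain list used as a FIFO queue, so this particular
    sketch happens to search breadth-first; a different queue discipline
    would give a different strategy."""
    fringe = [Node(problem.initial_state)]
    while fringe:
        node = fringe.pop(0)                   # REMOVE-FIRST
        if problem.goal_test(node.state):      # goal test on expansion
            return solution(node)
        fringe.extend(expand(node, problem))   # INSERT-ALL
    return None                                # failure

# A toy problem on a small explicit graph:
class ToyProblem:
    initial_state = "A"
    edges = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
    def successor_fn(self, s):
        return [("go(%s)" % t, t) for t in self.edges.get(s, [])]
    def goal_test(self, s):
        return s == "D"
    def step_cost(self, s, action, t):
        return 1
```

Running `tree_search(ToyProblem())` returns the action sequence from A to the goal D.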
To assess the effectiveness of a search algorithm, we can consider just the search cost, which typically depends on the time complexity but can also include a term for memory usage, or we can use the total cost, which combines the search cost and the path cost of the solution found. (Some texts measure time in terms of the number of node expansions instead. The two measures differ by at most a factor of b. It seems to us that the execution time of a node expansion increases with the number of nodes generated in that expansion.) For the problem of finding a route from Arad to Bucharest, the search cost is the amount of time taken by the search and the solution cost is the total length of the path in kilometers. Thus, to compute the total cost, we have to add kilometers and milliseconds. There is no "official exchange rate" between the two, but it might be reasonable in this case to convert kilometers into milliseconds by using an estimate of the car's average speed (because time is what the agent cares about). This enables the agent to find an optimal tradeoff point at which further computation to find a shorter path becomes counterproductive. The more general problem of tradeoffs between different goods will be taken up in Chapter 16.

3.4 Uninformed Search Strategies

This section covers five search strategies that come under the heading of uninformed search (also called blind search). The term means that they have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a nongoal state. Strategies that know whether one nongoal state is "more promising" than another are called informed search or heuristic search strategies; they will be covered in Chapter 4. All search strategies are distinguished by the order in which nodes are expanded.
Breadth-first search

Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.

Breadth-first search can be implemented by calling TREE-SEARCH with an empty fringe that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will be expanded first. In other words, calling TREE-SEARCH(problem, FIFO-QUEUE()) results in a breadth-first search. The FIFO queue puts all newly generated successors at the end of the queue, which means that shallow nodes are expanded before deeper nodes. Figure 3.10 shows the progress of the search on a simple binary tree.

We will evaluate breadth-first search using the four criteria from the previous section. We can easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after expanding all shallower nodes (provided the branching factor b is finite). The shallowest goal node is not necessarily the optimal one; technically, breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the node. (For example, when all actions have the same cost.)

So far, the news about breadth-first search has been good. To see why it is not always the strategy of choice, we have to consider the amount of time and memory it takes to complete a search. To do this, we consider a hypothetical state space where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d.
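The FIFO discipline can be seen in a few lines of Python; the helper function and the state names in the toy tree below are mine (the tree mimics the binary tree of Figure 3.10):

```python
from collections import deque

def breadth_first_order(children, root):
    """Return the order in which FIFO-queued tree search expands states.
    `children` maps each state of a toy explicit tree to its child states."""
    fringe, order = deque([root]), []
    while fringe:
        state = fringe.popleft()                # shallowest node first
        order.append(state)
        fringe.extend(children.get(state, []))  # successors go to the back
    return order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
```

On this tree the expansion order is level by level: A, then B and C, then D through G.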
In the worst case, we would expand all but the last node at level d (since the goal itself is not expanded), generating b^(d+1) − b nodes at level d + 1. Then the total number of nodes generated is

b + b^2 + b^3 + ... + b^d + (b^(d+1) − b) = O(b^(d+1)) .

Every node that is generated must remain in memory, because it is either part of the fringe or is an ancestor of a fringe node. The space complexity is, therefore, the same as the time complexity (plus one node for the root).

Those who do complexity analysis are worried (or excited, if they like a challenge) by exponential complexity bounds such as O(b^(d+1)). Figure 3.11 shows why. It lists the time and memory required for a breadth-first search with branching factor b = 10, for various values of the solution depth d. The table assumes that 10,000 nodes can be generated per second and that a node requires 1000 bytes of storage. Many search problems fit roughly within these assumptions (give or take a factor of 100) when run on a modern personal computer.

There are two lessons to be learned from Figure 3.11. First, the memory requirements are a bigger problem for breadth-first search than is the execution time. 31 hours would not be too long to wait for the solution to an important problem of depth 8, but few computers have the terabyte of main memory it would take. Fortunately, there are other search strategies that require less memory. The second lesson is that the time requirements are still a major factor. If your problem has a solution at depth 12, then (given our assumptions) it will take 35 years for breadth-first search (or indeed any uninformed search) to find it. In general, exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
Depth   Nodes     Time          Memory
2       1,100     .11 seconds   1 megabyte
4       111,100   11 seconds    106 megabytes
6       10^7      19 minutes    10 gigabytes
8       10^9      31 hours      1 terabyte
10      10^11     129 days      101 terabytes
12      10^13     35 years      10 petabytes
14      10^15     3,523 years   1 exabyte

Figure 3.11 Time and memory requirements for breadth-first search. The numbers shown assume branching factor b = 10; 10,000 nodes/second; 1000 bytes/node.

Uniform-cost search

Breadth-first search is optimal when all step costs are equal, because it always expands the shallowest unexpanded node. By a simple extension, we can find an algorithm that is optimal with any step cost function. Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost. Note that if all step costs are equal, this is identical to breadth-first search.

Uniform-cost search does not care about the number of steps a path has, but only about their total cost. Therefore, it will get stuck in an infinite loop if it ever expands a node that has a zero-cost action leading back to the same state (for example, a NoOp action). We can guarantee completeness provided the cost of every step is greater than or equal to some small positive constant ε. This condition is also sufficient to ensure optimality. It means that the cost of a path always increases as we go along the path. From this property, it is easy to see that the algorithm expands nodes in order of increasing path cost. Therefore, the first goal node selected for expansion is the optimal solution. (Remember that TREE-SEARCH applies the goal test only to the nodes that are selected for expansion.) We recommend trying the algorithm out to find the shortest path to Bucharest.

Uniform-cost search is guided by path costs rather than depths, so its complexity cannot easily be characterized in terms of b and d.
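The cost-ordered expansion just described can be sketched with a priority queue; this toy version works on an explicit graph rather than a problem object, and the function and its interface are mine:

```python
import heapq

def uniform_cost_search(start, goal, step_costs):
    """Expand the fringe node with the lowest path cost g(n), using a binary
    heap as the priority queue. `step_costs` maps a state to a list of
    (cost, successor) pairs. The goal test is applied only when a node is
    selected for expansion, as TREE-SEARCH requires, so the first goal
    popped from the heap is reached by an optimal path."""
    fringe = [(0.0, start, [start])]             # (g, state, path so far)
    while fringe:
        g, state, path = heapq.heappop(fringe)   # cheapest node first
        if state == goal:
            return g, path
        for cost, successor in step_costs.get(state, []):
            heapq.heappush(fringe, (g + cost, successor, path + [successor]))
    return None  # failure: the fringe emptied without reaching the goal

# Optimality in action: the two-step path A-B-C (cost 2) beats A-C (cost 5).
graph = {"A": [(1, "B"), (5, "C")], "B": [(1, "C")]}
```

Note that this sketch inherits the infinite-loop hazard mentioned above: a cycle of zero-cost actions would keep it pushing nodes forever, which is why the step-cost bound ε matters.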
Instead, let C* be the cost of the optimal solution, and assume that every action costs at least ε. Then the algorithm's worst-case time and space complexity is O(b^(1 + ⌊C*/ε⌋)), which can be much greater than b^d. This is because uniform-cost search can, and often does, explore large trees of small steps before exploring paths involving large and perhaps useful steps. When all step costs are equal, of course, b^(1 + ⌊C*/ε⌋) is just b^d.

Depth-first search

Depth-first search always expands the deepest node in the current fringe of the search tree. The progress of the search is illustrated in Figure 3.12. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the fringe, so then the search "backs up" to the next shallowest node that still has unexplored successors.

This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack. As an alternative to the TREE-SEARCH implementation, it is common to implement depth-first search with a recursive function that calls itself on each of its children in turn. (A recursive depth-first algorithm incorporating a depth limit is shown in Figure 3.13.)

Depth-first search has very modest memory requirements. It needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored. (See Figure 3.12.) For a state space with branching factor b and maximum depth m, depth-first search requires storage of only bm + 1 nodes. Using the same assumptions as Figure 3.11, and assuming that nodes at the same depth as the goal node have no successors, we find that depth-first search would require 118 kilobytes instead of 10 petabytes at depth d = 12, a factor of 10 billion times less space.
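The LIFO discipline, and the way depth-first search commits to one subtree before touching its siblings, can be seen in a short sketch; the helper function and the toy tree are mine:

```python
def depth_first_order(children, root):
    """Return the order in which stack-based (LIFO) tree search expands states.
    `children` maps each state of a toy explicit tree to its child states."""
    fringe, order = [root], []
    while fringe:
        state = fringe.pop()                    # deepest node first
        order.append(state)
        # Reverse so the leftmost child ends up on top of the stack:
        fringe.extend(reversed(children.get(state, [])))
    return order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
# The entire left subtree (B, D, E) is explored before C is ever reached.
```

Contrast this with the level-by-level order a FIFO queue produces on the same tree.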
Figure 3.12 Depth-first search on a binary tree. Nodes that have been expanded and have no descendants in the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are assumed to have no successors and M is the only goal node.

A variant of depth-first search called backtracking search uses still less memory. In backtracking, only one successor is generated at a time rather than all successors; each partially expanded node remembers which successor to generate next. In this way, only O(m) memory is needed rather than O(bm). Backtracking search facilitates yet another memory-saving (and time-saving) trick: the idea of generating a successor by modifying the current state description directly rather than copying it first. This reduces the memory requirements to just one state description and O(m) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor. For problems with large state descriptions, such as robotic assembly, these techniques are critical to success.

The drawback of depth-first search is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, in Figure 3.12, depth-first search will explore the entire left subtree even if node C is a goal node. If node J were also a goal node, then depth-first search would return it as a solution; hence, depth-first search is not optimal. If the left subtree were of unbounded depth but contained no solutions, depth-first search would never terminate; hence, it is not complete. In the worst case, depth-first search will generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node. Note that m can be much larger than d (the depth of the shallowest solution), and is infinite if the tree is unbounded.

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
  return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
  cutoff-occurred? ← false
  if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
  else if DEPTH[node] = limit then return cutoff
  else for each successor in EXPAND(node, problem) do
    result ← RECURSIVE-DLS(successor, problem, limit)
    if result = cutoff then cutoff-occurred? ← true
    else if result ≠ failure then return result
  if cutoff-occurred? then return cutoff else return failure

Figure 3.13 A recursive implementation of depth-limited search.

Depth-limited search

The problem of unbounded trees can be alleviated by supplying depth-first search with a predetermined depth limit ℓ. That is, nodes at depth ℓ are treated as if they have no successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem. Unfortunately, it also introduces an additional source of incompleteness if we choose ℓ < d, that is, the shallowest goal is beyond the depth limit. (This is not unlikely when d is unknown.) Depth-limited search will also be nonoptimal if we choose ℓ > d. Its time complexity is O(b^ℓ) and its space complexity is O(bℓ). Depth-first search can be viewed as a special case of depth-limited search with ℓ = ∞.

Sometimes, depth limits can be based on knowledge of the problem. For example, on the map of Romania there are 20 cities. Therefore, we know that if there is a solution, it must be of length 19 at the longest, so ℓ = 19 is a possible choice. But in fact if we studied the map carefully, we would discover that any city can be reached from any other city in at most 9 steps.
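The recursive pseudocode of Figure 3.13 can be sketched in Python as follows. The `problem` interface (`initial_state`, `goal_test`, `successors`) and the sentinel objects are mine, and this version returns the goal state itself rather than an action sequence:

```python
# Sentinels distinguishing "no solution at all" from "hit the depth limit".
CUTOFF, FAILURE = object(), object()

def depth_limited_search(problem, limit):
    """Recursive depth-limited search, after Figure 3.13."""
    def recursive_dls(state, depth):
        if problem.goal_test(state):
            return state               # found a goal; return it as the solution
        if depth == limit:
            return CUTOFF              # nodes at the limit get no successors
        cutoff_occurred = False
        for successor in problem.successors(state):
            result = recursive_dls(successor, depth + 1)
            if result is CUTOFF:
                cutoff_occurred = True
            elif result is not FAILURE:
                return result
        return CUTOFF if cutoff_occurred else FAILURE
    return recursive_dls(problem.initial_state, 0)

# A chain 0 -> 1 -> 2 -> 3 with the goal at depth 3:
class Chain:
    initial_state = 0
    def goal_test(self, s):
        return s == 3
    def successors(self, s):
        return [s + 1] if s < 3 else []
```

With limit 2 the goal lies beyond the limit, so the search reports cutoff; with limit 3 it finds the goal, illustrating the two distinct kinds of failure described in the text.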
This number, known as the diameter of the state space, gives us a better depth limit, which leads to a more efficient depth-limited search. For most problems, however, we will not know a good depth limit until we have solved the problem.

Depth-limited search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first search algorithm. We show the pseudocode for recursive depth-limited search in Figure 3.13. Notice that depth-limited search can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.

Iterative deepening depth-first search

Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first search, that finds the best depth limit. It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found. This will occur when the depth limit reaches d, the depth of the shallowest goal node. The algorithm is shown in Figure 3.14. Iterative deepening combines the benefits of depth-first and breadth-first search. Like depth-first search, its memory requirements are very modest: O(bd) to be precise. Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node. Figure 3.15 shows four iterations of ITERATIVE-DEEPENING-SEARCH on a binary search tree, where the solution is found on the fourth iteration.

Iterative deepening search may seem wasteful, because states are generated multiple times. It turns out this is not very costly. The reason is that in a search tree with the same (or nearly the same) branching factor at each level, most of the nodes are in the bottom level, so it does not matter much that the upper levels are generated multiple times.
In an iterative deepening search, the nodes on the bottom level (depth d) are generated once, those on the next-to-bottom level are generated twice, and so on, up to the children of the root, which are generated d times. So the total number of nodes generated is

N(IDS) = (d)b + (d − 1)b^2 + ... + (1)b^d ,

which gives a time complexity of O(b^d). We can compare this to the nodes generated by a breadth-first search:

N(BFS) = b + b^2 + ... + b^d + (b^(d+1) − b) .

Notice that breadth-first search generates some nodes at depth d + 1, whereas iterative deepening does not. The result is that iterative deepening is actually faster than breadth-first search, despite the repeated generation of states. For example, if b = 10 and d = 5, the numbers are

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100 .

In general, iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.

function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
  inputs: problem, a problem
  for depth ← 0 to ∞ do
    result ← DEPTH-LIMITED-SEARCH(problem, depth)
    if result ≠ cutoff then return result

Figure 3.14 The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or if the depth-limited search returns failure, meaning that no solution exists.
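The two sums can be checked directly; a small sketch (the function names are mine):

```python
def n_ids(b, d):
    """Nodes generated by iterative deepening to a goal at depth d:
    (d)b + (d-1)b^2 + ... + (1)b^d."""
    return sum((d - i + 1) * b**i for i in range(1, d + 1))

def n_bfs(b, d):
    """Worst-case nodes generated by breadth-first search:
    b + b^2 + ... + b^d + (b^(d+1) - b)."""
    return sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)
```

For b = 10 and d = 5 these reproduce the totals above: iterative deepening generates 123,450 nodes against breadth-first's 1,111,100, roughly a factor of 9 in breadth-first's disfavor.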
