Logical Agents in Artificial Intelligence Notes

7 LOGICAL AGENTS

In which we design agents that can form representations of the world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do.

This chapter introduces knowledge-based agents. The concepts that we discuss (the representation of knowledge and the reasoning processes that bring knowledge to life) are central to the entire field of artificial intelligence.

Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would be very hard to achieve otherwise. We have seen that knowledge of action outcomes enables problem-solving agents to perform well in complex environments. A reflex agent could only find its way from Arad to Bucharest by dumb luck. The knowledge of problem-solving agents is, however, very specific and inflexible. A chess program can calculate the legal moves of its king, but does not know in any useful sense that no piece can be on two different squares at the same time. Knowledge-based agents can benefit from knowledge expressed in very general forms, combining and recombining information to suit myriad purposes. Often, this process can be quite far removed from the needs of the moment, as when a mathematician proves a theorem or an astronomer calculates the earth's life expectancy.

Knowledge and reasoning also play a crucial role in dealing with partially observable environments. A knowledge-based agent can combine general knowledge with current percepts to infer hidden aspects of the current state prior to selecting actions. For example, a physician diagnoses a patient (that is, infers a disease state that is not directly observable) prior to choosing a treatment. Some of the knowledge that the physician uses is in the form of rules learned from textbooks and teachers, and some is in the form of patterns of association that the physician may not be able to consciously describe. If it's inside the physician's head, it counts as knowledge.

Understanding natural language also requires inferring hidden state, namely, the intention of the speaker. When we hear, "John saw the diamond through the window and coveted it," we know "it" refers to the diamond and not the window; we reason, perhaps unconsciously, with our knowledge of relative value. Similarly, when we hear, "John threw the brick through the window and broke it," we know "it" refers to the window. Reasoning allows us to cope with the virtually infinite variety of utterances using a finite store of commonsense knowledge. Problem-solving agents have difficulty with this kind of ambiguity because their representation of contingency problems is inherently exponential.

Our final reason for studying knowledge-based agents is their flexibility. They are able to accept new tasks in the form of explicitly described goals, they can achieve competence quickly by being told or learning new knowledge about the environment, and they can adapt to changes in the environment by updating the relevant knowledge.

We begin in Section 7.1 with the overall agent design. Section 7.2 introduces a simple new environment, the wumpus world, and illustrates the operation of a knowledge-based agent without going into any technical detail. Then, in Section 7.3, we explain the general principles of logic. Logic will be the primary vehicle for representing knowledge throughout Part III of the book.
The knowledge of logical agents is always definite: each proposition is either true or false in the world, although the agent may be agnostic about some propositions. Logic has the pedagogical advantage of being a simple example of a representation for knowledge-based agents, but logic has some severe limitations. Clearly, a large portion of the reasoning carried out by humans and other agents in partially observable environments depends on handling knowledge that is uncertain. Logic cannot represent this uncertainty well, so in Part V we cover probability, which can. In Part VI and Part VII we cover many representations, including some based on continuous mathematics such as mixtures of Gaussians, neural networks, and other representations.

Section 7.4 of this chapter defines a simple logic called propositional logic. While much less expressive than first-order logic (Chapter 8), propositional logic serves to illustrate all the basic concepts of logic. There is also a well-developed technology for reasoning in propositional logic, which we describe in Sections 7.5 and 7.6. Finally, Section 7.7 combines the concept of logical agents with the technology of propositional logic to build some simple agents for the wumpus world. Certain shortcomings in propositional logic are identified, motivating the development of more powerful logics in subsequent chapters.

7.1 KNOWLEDGE-BASED AGENTS

The central component of a knowledge-based agent is its knowledge base, or KB. Informally, a knowledge base is a set of sentences. (Here "sentence" is used as a technical term. It is related but is not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.

There must be a way to add new sentences to the knowledge base and a way to query what is known. The standard names for these tasks are TELL and ASK, respectively. Both tasks may involve inference, that is, deriving new sentences from old. In logical agents, which are the main subject of study in this chapter, inference must obey the fundamental requirement that when one ASKs a question of the knowledge base, the answer should follow from what has been TELLed to the knowledge base previously. Later in the chapter, we will be more precise about the crucial word "follow." For now, take it to mean that the inference process should not just make things up as it goes along.

function KB-AGENT(percept) returns an action
  static: KB, a knowledge base
          t, a counter, initially 0, indicating time

  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action

Figure 7.1 A generic knowledge-based agent.

Figure 7.1 shows the outline of a knowledge-based agent program. Like all our agents, it takes a percept as input and returns an action. The agent maintains a knowledge base, KB, which may initially contain some background knowledge. Each time the agent program is called, it does three things. First, it TELLs the knowledge base what it perceives. Second, it ASKs the knowledge base what action it should perform. In the process of answering this query, extensive reasoning may be done about the current state of the world, about the outcomes of possible action sequences, and so on. Third, the agent records its choice with TELL and executes the action.
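To make the control loop concrete, here is a minimal, runnable Python sketch of the agent in Figure 7.1. It is not the book's code: the class and helper names are invented for the example, the knowledge base is just a list of sentence strings, and ask is a stub where a real inference procedure would go.

def make_percept_sentence(percept, t):
    return f"Percept({percept!r}, t={t})"

def make_action_query(t):
    return f"BestAction(t={t})?"

def make_action_sentence(action, t):
    return f"Action({action!r}, t={t})"

class KnowledgeBasedAgent:
    """Skeleton of the generic knowledge-based agent of Figure 7.1."""

    def __init__(self, background_knowledge=()):
        self.kb = list(background_knowledge)  # may hold background knowledge
        self.t = 0                            # time counter, initially 0

    def tell(self, sentence):
        self.kb.append(sentence)

    def ask(self, query):
        # Stub: a real agent would run logical inference over self.kb here.
        return "NoOp"

    def __call__(self, percept):
        self.tell(make_percept_sentence(percept, self.t))  # 1. report percept
        action = self.ask(make_action_query(self.t))       # 2. choose action
        self.tell(make_action_sentence(action, self.t))    # 3. record choice
        self.t += 1
        return action

agent = KnowledgeBasedAgent(["rules of the wumpus world"])
print(agent(("Stench", None, None, None, None)))  # -> "NoOp"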
The second TELL is necessary to let the knowledge base know that the hypothetical action has actually been executed.

The details of the representation language are hidden inside three functions that implement the interface between the sensors and actuators and the core representation and reasoning system. MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept at the given time. MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at the current time. Finally, MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action was executed. The details of the inference mechanisms are hidden inside TELL and ASK. Later sections will reveal these details.

The agent in Figure 7.1 appears quite similar to the agents with internal state described in Chapter 2. Because of the definitions of TELL and ASK, however, the knowledge-based agent is not an arbitrary program for calculating actions. It is amenable to a description at the knowledge level, where we need specify only what the agent knows and what its goals are, in order to fix its behavior. For example, an automated taxi might have the goal of delivering a passenger to Marin County and might know that it is in San Francisco and that the Golden Gate Bridge is the only link between the two locations. Then we can expect it to cross the Golden Gate Bridge because it knows that that will achieve its goal. Notice that this analysis is independent of how the taxi works at the implementation level. It doesn't matter whether its geographical knowledge is implemented as linked lists or pixel maps, or whether it reasons by manipulating strings of symbols stored in registers or by propagating noisy signals in a network of neurons.

As we mentioned in the introduction to the chapter, one can build a knowledge-based agent simply by TELLing it what it needs to know. The agent's initial program, before it starts to receive percepts, is built by adding one by one the sentences that represent the designer's knowledge of the environment. Designing the representation language to make it easy to express this knowledge in the form of sentences simplifies the construction problem enormously. This is called the declarative approach to system building. In contrast, the procedural approach encodes desired behaviors directly as program code; minimizing the role of explicit representation and reasoning can result in a much more efficient system. We will see agents of both kinds in Section 7.7. In the 1970s and 1980s, advocates of the two approaches engaged in heated debates. We now understand that a successful agent must combine both declarative and procedural elements in its design.

In addition to TELLing it what it needs to know, we can provide a knowledge-based agent with mechanisms that allow it to learn for itself. These mechanisms, which are discussed in Chapter 18, create general knowledge about the environment out of a series of percepts. This knowledge can be incorporated into the agent's knowledge base and used for decision making. In this way, the agent can be fully autonomous.

All these capabilities (representation, reasoning, and learning) rest on the centuries-long development of the theory and technology of logic. Before explaining that theory and technology, however, we will create a simple world with which to illustrate them.
7.2 THE WUMPUS WORLD

The wumpus world is a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the wumpus, a beast that eats anyone who enters its room. The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms contain bottomless pits that will trap anyone who wanders into these rooms (except for the wumpus, which is too big to fall in). The only mitigating feature of living in this environment is the possibility of finding a heap of gold. Although the wumpus world is rather tame by modern computer game standards, it makes an excellent testbed environment for intelligent agents. Michael Genesereth was the first to suggest this.

A sample wumpus world is shown in Figure 7.2. The precise definition of the task environment is given, as suggested in Chapter 2, by the PEAS description:

- Performance measure: +1000 for picking up the gold, -1000 for falling into a pit or being eaten by the wumpus, -1 for each action taken, and -10 for using up the arrow.

- Environment: A 4 x 4 grid of rooms. The agent always starts in the square labeled [1,1], facing to the right. The locations of the gold and the wumpus are chosen randomly, with a uniform distribution, from the squares other than the start square. In addition, each square other than the start can be a pit, with probability 0.2.

- Actuators: The agent can move forward, turn left by 90°, or turn right by 90°. The agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.) Moving forward has no effect if there is a wall in front of the agent. The action Grab can be used to pick up an object that is in the same square as the agent. The action Shoot can be used to fire an arrow in a straight line in the direction the agent is facing. The arrow continues until it either hits (and hence kills) the wumpus or hits a wall. The agent only has one arrow, so only the first Shoot action has any effect.

- Sensors: The agent has five sensors, each of which gives a single bit of information:
  - In the square containing the wumpus and in the directly (not diagonally) adjacent squares the agent will perceive a stench.
  - In the squares directly adjacent to a pit, the agent will perceive a breeze.
  - In the square where the gold is, the agent will perceive a glitter.
  - When an agent walks into a wall, it will perceive a bump.
  - When the wumpus is killed, it emits a woeful scream that can be perceived anywhere in the cave.

The percepts will be given to the agent in the form of a list of five symbols; for example, if there is a stench and a breeze, but no glitter, bump, or scream, the agent will receive the percept [Stench, Breeze, None, None, None]. Exercise 7.1 asks you to define the wumpus environment along the various dimensions given in Chapter 2; a code sketch of the environment follows below.

The principal difficulty for the agent is its initial ignorance of the configuration of the environment; overcoming this ignorance seems to require logical reasoning. In most instances of the wumpus world, it is possible for the agent to retrieve the gold safely. Occasionally, the agent must choose between going home empty-handed and risking death to find the gold. About 21% of the environments are utterly unfair, because the gold is in a pit or surrounded by pits.
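For concreteness, here is a small Python sketch of this task environment. The names and structure are my own, not the book's; it generates a random world per the Environment description above and computes the location-dependent part of a percept.

import random

# Constants from the PEAS description (names are mine, not the book's).
PIT_PROBABILITY = 0.2
GOLD_REWARD, DEATH_PENALTY, STEP_COST, ARROW_COST = 1000, -1000, -1, -10

def adjacent(square, size=4):
    """Squares directly (not diagonally) adjacent to a square."""
    x, y = square
    return {(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= size and 1 <= y + dy <= size}

def random_wumpus_world(size=4, seed=None):
    """Pick pit, wumpus, and gold locations for a size x size grid.
    [1,1] is the start square and never contains a pit or the wumpus."""
    rng = random.Random(seed)
    squares = [(x, y) for x in range(1, size + 1)
               for y in range(1, size + 1) if (x, y) != (1, 1)]
    pits = {sq for sq in squares if rng.random() < PIT_PROBABILITY}
    return pits, rng.choice(squares), rng.choice(squares)  # pits, wumpus, gold

def percept(square, pits, wumpus, gold):
    """The location-dependent percept bits: (stench, breeze, glitter).
    Bump and scream depend on events, not location, so they are omitted."""
    return (wumpus == square or wumpus in adjacent(square),
            any(p in adjacent(square) for p in pits),
            gold == square)

pits, wumpus, gold = random_wumpus_world(seed=7)
print(percept((1, 1), pits, wumpus, gold))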
Let us watch a knowledge-based wumpus agent exploring the environment shown in Figure 7.2.

Figure 7.2 A typical wumpus world. The agent is in the bottom left corner.

The agent's initial knowledge base contains the rules of the environment, as listed previously; in particular, it knows that it is in [1,1] and that [1,1] is a safe square. We will see how its knowledge evolves as new percepts arrive and actions are taken.

The first percept is [None, None, None, None, None], from which the agent can conclude that its neighboring squares are safe. Figure 7.3(a) shows the agent's state of knowledge at this point. We list (some of) the sentences in the knowledge base using letters such as B (breezy) and OK (safe, neither pit nor wumpus) marked in the appropriate squares. Figure 7.2, on the other hand, depicts the world itself.

Figure 7.3 The first step taken by the agent in the wumpus world. (a) The initial situation, after percept [None, None, None, None, None]. (b) After one move, with percept [None, Breeze, None, None, None].

From the fact that there was no stench or breeze in [1,1], the agent can infer that [1,2] and [2,1] are free of dangers. They are marked with an OK to indicate this. A cautious agent will move only into a square that it knows is OK. Let us suppose the agent decides to move forward to [2,1], giving the scene in Figure 7.3(b).

The agent detects a breeze in [2,1], so there must be a pit in a neighboring square. The pit cannot be in [1,1], by the rules of the game, so there must be a pit in [2,2] or [3,1] or both. The notation P? in Figure 7.3(b) indicates a possible pit in those squares. At this point, there is only one known square that is OK and has not been visited yet. So the prudent agent will turn around, go back to [1,1], and then proceed to [1,2].

The new percept in [1,2] is [Stench, None, None, None, None], resulting in the state of knowledge shown in Figure 7.4(a). The stench in [1,2] means that there must be a wumpus nearby. But the wumpus cannot be in [1,1], by the rules of the game, and it cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). Therefore, the agent can infer that the wumpus is in [1,3]. The notation W indicates this. Moreover, the lack of a Breeze in [1,2] implies that there is no pit in [2,2]. Yet we already inferred that there must be a pit in either [2,2] or [3,1], so this means it must be in [3,1]. This is a fairly difficult inference, because it combines knowledge gained at different times in different places and relies on the lack of a percept to make one crucial step. The inference is beyond the abilities of most animals, but it is typical of the kind of reasoning that a logical agent does.

Figure 7.4 Two later stages in the progress of the agent. (a) After the third move, with percept [Stench, None, None, None, None]. (b) After the fifth move, with percept [Stench, Breeze, Glitter, None, None].

The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK to move there. We will not show the agent's state of knowledge at [2,2]; we just assume that the agent turns and moves to [2,3], giving us Figure 7.4(b). In [2,3], the agent detects a glitter, so it should grab the gold and thereby end the game.
In each case where the agent draws a conclusion from the available information, that conclusion is guaranteed to be correct if the available information is correct. This is a fundamental property of logical reasoning. In the rest of this chapter, we describe how to build logical agents that can represent the necessary information and draw the conclusions that were described in the preceding paragraphs.

7.3 LOGIC

This section provides an overview of all the fundamental concepts of logical representation and reasoning. We postpone the technical details of any particular form of logic until the next section. We will instead use informal examples from the wumpus world and from the familiar realm of arithmetic. We adopt this rather unusual approach because the ideas of logic are far more general and beautiful than is commonly supposed.

In Section 7.1, we said that knowledge bases consist of sentences. These sentences are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed. The notion of syntax is clear enough in ordinary arithmetic: "x + y = 4" is a well-formed sentence, whereas "x2y+ =" is not. The syntax of logical languages (and of arithmetic, for that matter) is usually designed for writing papers and books. There are literally dozens of different syntaxes, some with lots of Greek letters and exotic mathematical symbols, and some with rather visually appealing diagrams with arrows and bubbles. In all cases, however, sentences in an agent's knowledge base are real physical configurations of (parts of) the agent. Reasoning will involve generating and manipulating those configurations.

A logic must also define the semantics of the language. Loosely speaking, semantics has to do with the "meaning" of sentences. In logic, the definition is more precise. The semantics of the language defines the truth of each sentence with respect to each possible world. For example, the usual semantics adopted for arithmetic specifies that the sentence "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1. In standard logics, every sentence must be either true or false in each possible world; there is no "in between." (Fuzzy logic, discussed in Chapter 14, allows for degrees of truth.)

When we need to be precise, we will use the term model in place of "possible world." (We will also use the phrase "m is a model of α" to mean that sentence α is true in model m.) Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence. Informally, we may think of x and y as the number of men and women sitting at a table playing bridge, for example, and the sentence x + y = 4 is true when there are four in total; formally, the possible models are just all possible assignments of numbers to the variables x and y. Each such assignment fixes the truth of any sentence of arithmetic whose variables are x and y.

Now that we have a notion of truth, we are ready to talk about logical reasoning. This involves the relation of logical entailment between sentences: the idea that a sentence follows logically from another sentence. In mathematical notation, we write

α ⊨ β

to mean that the sentence α entails the sentence β. The formal definition of entailment is this: α ⊨ β if and only if, in every model in which α is true, β is also true.
Another way to say this is that if α is true, then β must also be true. Informally, the truth of β is "contained" in the truth of α. The relation of entailment is familiar from arithmetic; we are happy with the idea that the sentence x + y = 4 entails the sentence 4 = x + y. Obviously, in any model where x + y = 4 is true (such as the model in which x is 2 and y is 2), it is the case that 4 = x + y is true. We will see shortly that a knowledge base can be considered a statement, and we often talk of a knowledge base entailing a sentence.

We can apply the same kind of analysis to the wumpus-world reasoning example given in the preceding section. Consider the situation in Figure 7.3(b): the agent has detected nothing in [1,1] and a breeze in [2,1]. These percepts, combined with the agent's knowledge of the rules of the wumpus world (the PEAS description on page 197), constitute the KB. The agent is interested (among other things) in whether the adjacent squares [1,2], [2,2], and [3,1] contain pits. Each of the three squares might or might not contain a pit, so (for the purposes of this example) there are 2³ = 8 possible models. These are shown in Figure 7.5. (The reader will no doubt have noticed the similarity between the notion of truth of sentences and the notion of satisfaction of constraints in Chapter 5. This is no accident: constraint languages are indeed logics, and constraint solving is a form of logical reasoning. Note also that although the figure shows the models as partial wumpus worlds, they are really nothing more than assignments of true and false to the sentences "there is a pit in [1,2]," etc. Models, in the mathematical sense, do not need to have 'orrible 'airy wumpuses in them.)

Figure 7.5 Possible models for the presence of pits in squares [1,2], [2,2], and [3,1], given observations of nothing in [1,1] and a breeze in [2,1]. (a) Models of the knowledge base and α₁ (no pit in [1,2]). (b) Models of the knowledge base and α₂ (no pit in [2,2]).

The KB is false in models that contradict what the agent knows; for example, the KB is false in any model in which [1,2] contains a pit, because there is no breeze in [1,1]. There are in fact just three models in which the KB is true, and these are shown as a subset of the models in Figure 7.5. Now let us consider two possible conclusions:

α₁ = "There is no pit in [1,2]."
α₂ = "There is no pit in [2,2]."

We have marked the models of α₁ and α₂ in Figures 7.5(a) and 7.5(b) respectively. By inspection, we see the following: in every model in which KB is true, α₁ is also true. Hence, KB ⊨ α₁: there is no pit in [1,2]. We can also see that in some models in which KB is true, α₂ is false. Hence, KB ⊭ α₂: the agent cannot conclude that there is no pit in [2,2]. (Nor can it conclude that there is a pit in [2,2]. The agent can, however, calculate the probability that there is a pit in [2,2]; Chapter 13 shows how.)

The preceding example not only illustrates entailment, but also shows how the definition of entailment can be applied to derive conclusions, that is, to carry out logical inference. The inference algorithm illustrated in Figure 7.5 is called model checking, because it enumerates all possible models to check that α is true in all models in which KB is true.
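The reasoning in Figure 7.5 is easy to mechanize. A minimal Python sketch (my own encoding, not anything from the book): enumerate the 2³ = 8 assignments to the three pit propositions, keep the ones where the KB holds, and check the candidate conclusions in each.

from itertools import product

def kb(p12, p22, p31):
    """KB from Figure 7.5: no breeze in [1,1], breeze in [2,1].
    No breeze in [1,1] rules out a pit in [1,2]; the breeze in [2,1]
    requires a pit in [2,2] or [3,1]."""
    return (not p12) and (p22 or p31)

def entails(kb, alpha, n_vars=3):
    """alpha holds in every model in which kb holds (model checking)."""
    return all(alpha(*m) for m in product([False, True], repeat=n_vars)
               if kb(*m))

alpha1 = lambda p12, p22, p31: not p12   # "no pit in [1,2]"
alpha2 = lambda p12, p22, p31: not p22   # "no pit in [2,2]"

print(entails(kb, alpha1))  # True:  KB entails alpha1
print(entails(kb, alpha2))  # False: KB does not entail alpha2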
In understanding entailment and inference, it might help to think of the set of all consequences of KB as a haystack and of α as a needle. Entailment is like the needle being in the haystack; inference is like finding it. This distinction is embodied in some formal notation: if an inference algorithm i can derive α from KB, we write

KB ⊢ᵢ α ,

which is pronounced "α is derived from KB by i" or "i derives α from KB."

An inference algorithm that derives only entailed sentences is called sound or truth-preserving. Soundness is a highly desirable property. An unsound inference procedure essentially makes things up as it goes along: it announces the discovery of nonexistent needles. It is easy to see that model checking, when it is applicable, is a sound procedure. (Model checking works if the space of models is finite, for example, in wumpus worlds of fixed size. For arithmetic, on the other hand, the space of models is infinite: even if we restrict ourselves to the integers, there are infinitely many pairs of values for x and y in the sentence x + y = 4.)

The property of completeness is also desirable: an inference algorithm is complete if it can derive any sentence that is entailed. For real haystacks, which are finite in extent, it seems obvious that a systematic examination can always decide whether the needle is in the haystack. For many knowledge bases, however, the haystack of consequences is infinite, and completeness becomes an important issue. (Compare with the case of infinite search spaces in Chapter 3, where depth-first search is not complete.) Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases.

We have described a reasoning process whose conclusions are guaranteed to be true in any world in which the premises are true; in particular, if KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on "syntax" (internal physical configurations such as bits in registers or patterns of electrical blips in brains) the process corresponds to the real-world relationship whereby some aspect of the real world is the case by virtue of other aspects of the real world being the case. (As Wittgenstein (1922) put it in his famous Tractatus: "The world is everything that is the case.") This correspondence between world and representation is illustrated in Figure 7.6.

Figure 7.6 Sentences are physical configurations of the agent, and reasoning is a process of constructing new physical configurations from old ones. Logical reasoning should ensure that the new configurations represent aspects of the world that actually follow from the aspects that the old configurations represent.

The final issue that must be addressed by an account of logical agents is that of grounding: the connection, if any, between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that KB is true in the real world? (After all, KB is just "syntax" inside the agent's head.) This is a philosophical question about which many, many books have been written. (See Chapter 26.) A simple answer is that the agent's sensors create the connection. For example, our wumpus-world agent has a smell sensor. The agent program creates a suitable sentence whenever there is a smell. Then, whenever that sentence is in the knowledge base, it is true in the real world.
Thus, the meaning and truth of percept sentences are defined by the processes of sensing and sentence construction that produce them. What about the rest of the agent's knowledge, such as its belief that wumpuses cause smells in adjacent squares? This is not a direct representation of a single percept, but a general rule, derived, perhaps, from perceptual experience but not identical to a statement of that experience. General rules like this are produced by a sentence construction process called learning, which is the subject of Part VI. Learning is fallible. It could be the case that wumpuses cause smells except on February 29 in leap years, which is when they take their baths. Thus, KB may not be true in the real world, but with good learning procedures there is reason for optimism.

7.4 PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC

We now present a very simple logic called propositional logic. (Propositional logic is also called Boolean logic, after the logician George Boole (1815-1864).) We cover the syntax of propositional logic and its semantics: the way in which the truth of sentences is determined. Then we look at entailment, the relation between a sentence and another sentence that follows from it, and see how this leads to a simple algorithm for logical inference. Everything takes place, of course, in the wumpus world.

Syntax

The syntax of propositional logic defines the allowable sentences. The atomic sentences, the indivisible syntactic elements, consist of a single proposition symbol. Each such symbol stands for a proposition that can be true or false. We will use uppercase names for symbols: P, Q, R, and so on. The names are arbitrary but are often chosen to have some mnemonic value to the reader. For example, we might use W1,3 to stand for the proposition that the wumpus is in [1,3]. (Remember that symbols such as W1,3 are atomic, i.e., W, 1, and 3 are not meaningful parts of the symbol.) There are two proposition symbols with fixed meanings: True is the always-true proposition and False is the always-false proposition. Complex sentences are constructed from simpler sentences using logical connectives. There are five connectives in common use:

¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3. A literal is either an atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).

∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a conjunction; its parts are the conjuncts. (The ∧ looks like an "A" for "And.")

∨ (or). A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the disjuncts (W1,3 ∧ P3,1) and W2,2. (Historically, the ∨ comes from the Latin "vel," which means "or." For most people, it is easier to remember as an upside-down ∧.)

⇒ (implies). A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an implication (or conditional). Its premise or antecedent is (W1,3 ∧ P3,1), and its conclusion or consequent is ¬W2,2. Implications are also known as rules or if-then statements. The implication symbol is sometimes written in other books as ⊃ or →.

⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is a biconditional.

Figure 7.7 gives a formal grammar of propositional logic; see page 984 if you are not familiar with the BNF notation.
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | Symbol
Symbol → P | Q | R | ...
ComplexSentence → ¬ Sentence
                | ( Sentence ∧ Sentence )
                | ( Sentence ∨ Sentence )
                | ( Sentence ⇒ Sentence )
                | ( Sentence ⇔ Sentence )

Figure 7.7 A BNF (Backus-Naur Form) grammar of sentences in propositional logic.

Notice that the grammar is very strict about parentheses: every sentence constructed with binary connectives must be enclosed in parentheses. This ensures that the syntax is completely unambiguous. It also means that we have to write ((A ∧ B) ⇒ C) instead of A ∧ B ⇒ C, for example. To improve readability, we will often omit parentheses, relying instead on an order of precedence for the connectives. This is similar to the precedence used in arithmetic; for example, ab + c is read as ((ab) + c) rather than a(b + c) because multiplication has higher precedence than addition. The order of precedence in propositional logic is (from highest to lowest): ¬, ∧, ∨, ⇒, and ⇔. Hence, the sentence

¬P ∨ Q ∧ R ⇒ S

is equivalent to the sentence

((¬P) ∨ (Q ∧ R)) ⇒ S .

Precedence does not resolve ambiguity in sentences such as A ∧ B ∧ C, which could be read as ((A ∧ B) ∧ C) or as (A ∧ (B ∧ C)). Because these two readings mean the same thing according to the semantics given in the next section, sentences such as A ∧ B ∧ C are allowed. We also allow A ∨ B ∨ C and A ⇔ B ⇔ C. Sentences such as A ⇒ B ⇒ C are not allowed because the two readings have different meanings; we insist on parentheses in this case. Finally, we will sometimes use square brackets instead of parentheses when it makes the sentence clearer.

Semantics

Having specified the syntax of propositional logic, we now specify its semantics. The semantics defines the rules for determining the truth of a sentence with respect to a particular model. In propositional logic, a model simply fixes the truth value, true or false, for every proposition symbol. For example, if the sentences in the knowledge base make use of the proposition symbols P1,2, P2,2, and P3,1, then one possible model is

m1 = {P1,2 = false, P2,2 = false, P3,1 = true} .

With three proposition symbols, there are 2³ = 8 possible models, exactly those depicted in Figure 7.5. Notice, however, that because we have pinned down the syntax, the models become purely mathematical objects with no necessary connection to wumpus worlds. P1,2 is just a symbol; it might mean "there is a pit in [1,2]" or "I'm in Paris today and tomorrow."

The semantics for propositional logic must specify how to compute the truth value of any sentence, given a model. This is done recursively. All sentences are constructed from atomic sentences and the five connectives; therefore, we need to specify how to compute the truth of atomic sentences and how to compute the truth of sentences formed with each of the five connectives. Atomic sentences are easy: True is true in every model and False is false in every model. The truth value of every other proposition symbol must be specified directly in the model. For example, in the model m1 given earlier, P1,2 is false. For complex sentences, we have rules such as

For any sentence s and any model m, the sentence ¬s is true in m if and only if s is false in m.

Such rules reduce the truth of a complex sentence to the truth of simpler sentences. The rules for each connective can be summarized in a truth table that specifies the truth value of a complex sentence for each possible assignment of truth values to its components. Truth tables for the five logical connectives are given in Figure 7.8.

P      Q      ¬P     P ∧ Q   P ∨ Q   P ⇒ Q   P ⇔ Q
false  false  true   false   false   true    true
false  true   true   false   true    true    false
true   false  false  false   true    false   false
true   true   false  true    true    true    true

Figure 7.8 Truth tables for the five logical connectives. To use the table to compute, for example, the value of P ∨ Q when P is true and Q is false, first look on the left for the row where P is true and Q is false (the third row). Then look in that row under the P ∨ Q column to see the result: true. Another way to look at this is to think of each row as a model, and the entries under each column for that row as saying whether the corresponding sentence is true in that model.
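Before turning to evaluation, here is one way this syntax could be encoded in code. This is an illustrative sketch only; the class names and the dictionary representation of a model are my own, not the book's.

from dataclasses import dataclass

class Sentence:
    """Base class for propositional sentences (Figure 7.7)."""

@dataclass(frozen=True)
class Symbol(Sentence):      # AtomicSentence: P, Q, R, ...
    name: str

@dataclass(frozen=True)
class Not(Sentence):         # ¬ Sentence
    operand: Sentence

@dataclass(frozen=True)
class BinOp(Sentence):       # ( Sentence connective Sentence )
    op: str                  # 'and' | 'or' | 'implies' | 'iff'
    left: Sentence
    right: Sentence

# The model m1 from the text: {P1,2 = false, P2,2 = false, P3,1 = true}
m1 = {"P12": False, "P22": False, "P31": True}

# ¬P1,2 ∧ (P2,2 ∨ P3,1), fully parenthesized as the grammar requires:
s = BinOp("and", Not(Symbol("P12")), BinOp("or", Symbol("P22"), Symbol("P31")))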
Using these tables, the truth value of any sentence s can be computed with respect to any model m by a simple process of recursive evaluation. For example, the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1), evaluated in m1, gives true ∧ (false ∨ true) = true ∧ true = true. Exercise 7.3 asks you to write the algorithm PL-TRUE?(s, m), which computes the truth value of a propositional logic sentence s in a model m; a sketch of one possible solution appears at the end of this subsection.

Previously we said that a knowledge base consists of a set of sentences. We can now see that a logical knowledge base is a conjunction of those sentences. That is, if we start with an empty KB and do TELL(KB, S1) ... TELL(KB, Sn), then we have KB = S1 ∧ ... ∧ Sn. This means that we can treat knowledge bases and sentences interchangeably.

The truth tables for "and," "or," and "not" are in close accord with our intuitions about the English words. The main point of possible confusion is that P ∨ Q is true when P is true or Q is true or both. There is a different connective called "exclusive or" ("xor" for short) that yields false when both disjuncts are true. (Latin has a separate word, aut, for exclusive or.) There is no consensus on the symbol for exclusive or; two choices are ∨̇ and ⊕.

The truth table for ⇒ may seem puzzling at first, because it might not quite fit one's intuitive understanding of "P implies Q" or "if P then Q." For one thing, propositional logic does not require any relation of causation or relevance between P and Q. The sentence "5 is odd implies Tokyo is the capital of Japan" is a true sentence of propositional logic (under the normal interpretation), even though it is a decidedly odd sentence of English. Another point of confusion is that any implication is true whenever its antecedent is false. For example, "5 is even implies Sam is smart" is true, regardless of whether Sam is smart. This seems bizarre, but it makes sense if you think of "P ⇒ Q" as saying, "If P is true, then I am claiming that Q is true. Otherwise I am making no claim." The only way for this sentence to be false is if P is true but Q is false.

The truth table for a biconditional, P ⇔ Q, shows that it is true whenever both P ⇒ Q and Q ⇒ P are true. In English, this is often written as "P if and only if Q" or "P iff Q." The rules of the wumpus world are best written using ⇔. For example, a square is breezy if a neighboring square has a pit, and a square is breezy only if a neighboring square has a pit. So we need biconditionals such as

B1,1 ⇔ (P1,2 ∨ P2,1) ,

where B1,1 means that there is a breeze in [1,1]. Notice that the one-way implication

B1,1 ⇒ (P1,2 ∨ P2,1)

is true in the wumpus world, but incomplete. It does not rule out models in which B1,1 is false and P1,2 is true, which would violate the rules of the wumpus world. Another way of putting it is that the implication requires the presence of pits if there is a breeze, whereas the biconditional also requires the absence of pits if there is no breeze.
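Here is one possible solution sketch for the PL-TRUE? of Exercise 7.3 (my code, not the book's). To keep it self-contained it uses a lighter encoding than the classes above: a proposition symbol is a string, and a complex sentence is a tuple ('not', s) or (op, s1, s2) with op one of 'and', 'or', 'implies', 'iff'. The later sketches in this section reuse this encoding and this function.

def pl_true(s, model):
    """Recursive truth evaluation of a tuple-encoded sentence.
    A model is a dict mapping each symbol name to True or False."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == 'not':
        return not pl_true(args[0], model)
    a, b = (pl_true(arg, model) for arg in args)
    if op == 'and':     return a and b
    if op == 'or':      return a or b
    if op == 'implies': return (not a) or b
    if op == 'iff':     return a == b
    raise ValueError(f"unknown connective: {op}")

# ¬P1,2 ∧ (P2,2 ∨ P3,1) in the model m1 from the text:
m1 = {'P12': False, 'P22': False, 'P31': True}
print(pl_true(('and', ('not', 'P12'), ('or', 'P22', 'P31')), m1))  # True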
A simple knowledge base

Now that we have defined the semantics for propositional logic, we can construct a knowledge base for the wumpus world. For simplicity, we will deal only with pits; the wumpus itself is left as an exercise. We will provide enough knowledge to carry out the inference that was done informally in Section 7.3.

First, we need to choose our vocabulary of proposition symbols. For each i, j:

Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if there is a breeze in [i, j].

The knowledge base includes the following sentences, each one labeled for convenience. There is no pit in [1,1]:

R1: ¬P1,1 .

A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square; for now, we include just the relevant squares:

R2: B1,1 ⇔ (P1,2 ∨ P2,1) .
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) .

The preceding sentences are true in all wumpus worlds. Now we include the breeze percepts for the first two squares visited in the specific world the agent is in, leading up to the situation in Figure 7.3(b):

R4: ¬B1,1 .
R5: B2,1 .

The knowledge base, then, consists of sentences R1 through R5. It can also be considered as a single sentence, the conjunction R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5, because it asserts that all the individual sentences are true.

Inference

Recall that the aim of logical inference is to decide whether KB ⊨ α for some sentence α. For example, is P2,2 entailed? Our first algorithm for inference will be a direct implementation of the definition of entailment: enumerate the models, and check that α is true in every model in which KB is true. For propositional logic, models are assignments of true or false to every proposition symbol. Returning to our wumpus-world example, the relevant proposition symbols are B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, and P3,1. With seven symbols, there are 2⁷ = 128 possible models; in three of these, KB is true (Figure 7.9). In those three models, ¬P1,2 is true, hence there is no pit in [1,2]. On the other hand, P2,2 is true in two of the three models and false in one, so we cannot yet tell whether there is a pit in [2,2].

Figure 7.9 A truth table constructed for the knowledge base given in the text. KB is true if R1 through R5 are true, which occurs in just 3 of the 128 rows. In all 3 rows, P1,2 is false, so there is no pit in [1,2]. On the other hand, there might (or might not) be a pit in [2,2].

Figure 7.9 reproduces in a more precise form the reasoning illustrated in Figure 7.5. A general algorithm for deciding entailment in propositional logic is shown in Figure 7.10.

function TT-ENTAILS?(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic

  symbols ← a list of the proposition symbols in KB and α
  return TT-CHECK-ALL(KB, α, symbols, { })

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if EMPTY?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true
  else do
    P ← FIRST(symbols); rest ← REST(symbols)
    return TT-CHECK-ALL(KB, α, rest, EXTEND(P, true, model)) and
           TT-CHECK-ALL(KB, α, rest, EXTEND(P, false, model))

Figure 7.10 A truth-table enumeration algorithm for deciding propositional entailment. TT stands for truth table. PL-TRUE? returns true if a sentence holds within a model.
The variable model represents a partial model, an assignment to only some of the variables. The function call EXTEND(P, true, model) returns a new partial model in which P has the value true.

Like the BACKTRACKING-SEARCH algorithm on page 76, TT-ENTAILS? performs a recursive enumeration of a finite space of assignments to variables. The algorithm is sound, because it implements directly the definition of entailment, and complete, because it works for any KB and α and always terminates; there are only finitely many models to examine.

Of course, "finitely many" is not always the same as "few." If KB and α contain n symbols in all, then there are 2ⁿ models. Thus, the time complexity of the algorithm is O(2ⁿ). (The space complexity is only O(n) because the enumeration is depth-first.) Later in this chapter, we will see algorithms that are much more efficient in practice. Unfortunately, every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input. We do not expect to do better than this because propositional entailment is co-NP-complete. (See Appendix A.)

Equivalence, validity, and satisfiability

Before we plunge into the details of logical inference algorithms, we will need some additional concepts related to entailment. Like entailment, these concepts apply to all forms of logic, but they are best illustrated for a particular logic, such as propositional logic.

The first concept is logical equivalence: two sentences α and β are logically equivalent if they are true in the same set of models. We write this as α ≡ β. For example, we can easily show (using truth tables) that P ∧ Q and Q ∧ P are logically equivalent; other equivalences are shown in Figure 7.11. They play much the same role in logic as arithmetic identities do in ordinary mathematics.

(α ∧ β) ≡ (β ∧ α)    commutativity of ∧
(α ∨ β) ≡ (β ∨ α)    commutativity of ∨
((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ))    associativity of ∧
((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ))    associativity of ∨
¬(¬α) ≡ α    double-negation elimination
(α ⇒ β) ≡ (¬β ⇒ ¬α)    contraposition
(α ⇒ β) ≡ (¬α ∨ β)    implication elimination
(α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α))    biconditional elimination
¬(α ∧ β) ≡ (¬α ∨ ¬β)    De Morgan
¬(α ∨ β) ≡ (¬α ∧ ¬β)    De Morgan
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ))    distributivity of ∧ over ∨
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ))    distributivity of ∨ over ∧

Figure 7.11 Standard logical equivalences. The symbols α, β, and γ stand for arbitrary sentences of propositional logic.

An alternative definition of equivalence is as follows: for any two sentences α and β,

α ≡ β if and only if α ⊨ β and β ⊨ α .

(Recall that ⊨ means entailment.)

The second concept we will need is validity. A sentence is valid if it is true in all models. For example, the sentence P ∨ ¬P is valid. Valid sentences are also known as tautologies; they are necessarily true and hence vacuous. Because the sentence True is true in all models, every valid sentence is logically equivalent to True.

What good are valid sentences? From our definition of entailment, we can derive the deduction theorem, which was known to the ancient Greeks:

For any sentences α and β, α ⊨ β if and only if the sentence (α ⇒ β) is valid.

(Exercise 7.4 asks for a proof.) We can think of the inference algorithm in Figure 7.10 as checking the validity of (KB ⇒ α). Conversely, every valid implication sentence describes a legitimate inference.
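A direct Python transcription of Figure 7.10 (a sketch, not the book's code; it reuses the tuple encoding and the pl_true function from the earlier sketch):

from itertools import chain

def symbols_in(s):
    """All proposition symbols in a tuple-encoded sentence."""
    if isinstance(s, str):
        return {s}
    return set(chain.from_iterable(symbols_in(arg) for arg in s[1:]))

def tt_entails(kb, alpha):
    """TT-ENTAILS? of Figure 7.10: does kb entail alpha?"""
    symbols = sorted(symbols_in(kb) | symbols_in(alpha))
    return tt_check_all(kb, alpha, symbols, {})

def tt_check_all(kb, alpha, symbols, model):
    if not symbols:
        # alpha must be true in every model in which kb is true.
        return pl_true(alpha, model) if pl_true(kb, model) else True
    p, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, p: True}) and
            tt_check_all(kb, alpha, rest, {**model, p: False}))

# The simple knowledge base R1 .. R5 (seven symbols, so 2^7 = 128 models):
R1 = ('not', 'P11')
R2 = ('iff', 'B11', ('or', 'P12', 'P21'))
R3 = ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31')))
R4 = ('not', 'B11')
R5 = 'B21'
KB = ('and', R1, ('and', R2, ('and', R3, ('and', R4, R5))))

print(tt_entails(KB, ('not', 'P12')))  # True: no pit in [1,2]
print(tt_entails(KB, ('not', 'P22')))  # False: [2,2] is still unknown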
The final concept we will need is satisfiability. A sentence is satisfiable if it is true in some model. For example, the knowledge base given earlier, (R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5), is satisfiable because there are three models in which it is true, as shown in Figure 7.9. If a sentence α is true in a model m, then we say that m satisfies α, or that m is a model of α. Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence. Determining the satisfiability of sentences in propositional logic was the first problem proved to be NP-complete. Many problems in computer science are really satisfiability problems. For example, all the constraint satisfaction problems in Chapter 5 are essentially asking whether the constraints are satisfiable by some assignment. With appropriate transformations, search problems can also be solved by checking satisfiability.

Validity and satisfiability are of course connected: α is valid iff ¬α is unsatisfiable; contrapositively, α is satisfiable iff ¬α is not valid. We also have the following useful result:

α ⊨ β if and only if the sentence (α ∧ ¬β) is unsatisfiable.

Proving β from α by checking the unsatisfiability of (α ∧ ¬β) corresponds exactly to the standard mathematical proof technique of reductio ad absurdum (literally, "reduction to an absurd thing"). It is also called proof by refutation or proof by contradiction. One assumes a sentence β to be false and shows that this leads to a contradiction with known axioms α. This contradiction is exactly what is meant by saying that the sentence (α ∧ ¬β) is unsatisfiable.

7.5 REASONING PATTERNS IN PROPOSITIONAL LOGIC

This section covers standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal. These patterns of inference are called inference rules. The best-known rule is called Modus Ponens and is written as follows:

  α ⇒ β,   α
  -----------
       β

The notation means that, whenever any sentences of the form α ⇒ β and α are given, then the sentence β can be inferred. For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.

Another useful inference rule is And-Elimination, which says that, from a conjunction, any of the conjuncts can be inferred:

  α ∧ β
  -----
    α

For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred. By considering the possible truth values of α and β, one can show easily that Modus Ponens and And-Elimination are sound once and for all. These rules can then be used in any particular instances where they apply, generating sound inferences without the need for enumerating models.
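Inference rules are purely syntactic: they match the form of sentences rather than evaluating truth values. A toy Python sketch of these two rules over the tuple encoding used above (the function names are mine):

def modus_ponens(implication, premise):
    """From alpha => beta and alpha, infer beta (or None if no match)."""
    if (isinstance(implication, tuple) and implication[0] == 'implies'
            and implication[1] == premise):
        return implication[2]
    return None

def and_elimination(conjunction):
    """From alpha and beta, infer each conjunct."""
    if isinstance(conjunction, tuple) and conjunction[0] == 'and':
        return [conjunction[1], conjunction[2]]
    return []

rule = ('implies', ('and', 'WumpusAhead', 'WumpusAlive'), 'Shoot')
fact = ('and', 'WumpusAhead', 'WumpusAlive')
print(modus_ponens(rule, fact))  # 'Shoot'
print(and_elimination(fact))     # ['WumpusAhead', 'WumpusAlive']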
First, we apply biconditional elimination to R2 to obtain Then we apply And-Elimination to R6 to obtain Logical equivalence for contrapositives gives Now we can apply Modus Ponens with Rs and the percept R4 (i.e., lB1,1), to obtain Finally, we apply De Morgan's rule, giving the conclusion That is, neither 1,2 nor 2,1 contains a pit. The preceding derivation-a sequence of applications of inference rules-is called a PROOF proof. Finding proofs is exactly like finding solutions to search problems. In fact, if the successor function is defined to generate all possible applications of inference rules, then all of the search algorithms in Chapters 3 and 4 can be applied to find proofs. Thus, searching for proofs is an alternative to enumerating models. The search can go forward from the initial knowledge base, applying inference rules to derive the goal sentence, or it can go backward from the goal sentence, trying to find a chain of inference rules leading from the initial knowledge base. Later in this section, we will see two families of algorithms that use these techniques. The fact that inference in propositional logic is NP-complete suggests that, in the worst case, searching for proofs is going to be no more efficient than enumerating models. In many practical cases, however, finding a proof can be highly efficient simply because it can ignore For example, the proof given irrelevant propositions, no matter how many of them there are. earlier leading to A 1P2,1 does not mention the propositions B2,1, P2,2, or P3,1. They can be ignored because the goal proposition, appears only in sentence R4; the other propositions in R4 appear only in R4 and R2; so R1, R3, and R5 have no bearing on the proof. The same would hold even if we added a million more sentences to the knowledge base; the simple truth-table algorithm, on the other hand, would be overwhelmed by the exponential explosion of models. This property of logical systems actually follows from a much more fundamental prop- MONOTONICITY erty called monotonicity. Monotonicity says that the set of entailed sentences can only in- Section 7.5. Reasoning Patterns in Propositional Logic 213 crease as information is added to the knowledge base.'' For any sentences a and P, if KBka then KB4?ka. For example, suppose the knowledge base contains the additional assertion P stating that there are exactly eight pits in the world. This knowledge might help the agent draw additional con- clusions, but it cannot invalidate any conclusion a already inferred-such as the conclusion that there is no pit in 1,2. Monotonicity means that inference rules can be applied whenever suitable premises are found in the knowledge base- the conclusion of the rule must follow regardless of what else is in the knowledge base. Resolution We have argued that the inference rules covered so far are sound, but we have not discussed the question of completeness for the inference algorithms that use them. Search algorithms such as iterative deepening search (page 78) are complete in the sense that they will find any reachable goal, but if the available inference rules are inadequate, then the goal is not reachable-no proof exists that uses only those infereince rules. For example, if we removed the biconditional elimination rule, the proof in the preceding section would not go through. The current section introduces a single inference rule, resolution, that yields a complete inference algorithm when coupled with any complete search algorithm. 
We begin by using a simple version of the resolution rule in the wumpus world. Let us consider the steps leading up to Figure 7.4(a): the agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives a stench, but no breeze. We add the following facts to the knowledge base:

R11: ¬B1,2 .
R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3) .

By the same process that led to R10 earlier, we can now derive the absence of pits in [2,2] and [1,3] (remember that [1,1] is already known to be pitless):

R13: ¬P2,2 .
R14: ¬P1,3 .

We can also apply biconditional elimination to R3, followed by Modus Ponens with R5, to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:

R15: P1,1 ∨ P2,2 ∨ P3,1 .

Now comes the first application of the resolution rule: the literal ¬P2,2 in R13 resolves with the literal P2,2 in R15 to give

R16: P1,1 ∨ P3,1 .

In English: if there's a pit in one of [1,1], [2,2], and [3,1], and it's not in [2,2], then it's in [1,1] or [3,1]. Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to give

R17: P3,1 .
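In clausal form, this rule is easy to implement. Here is a sketch (the representation is my own: a clause is a frozenset of literal strings, with a '-' prefix marking negation) that reproduces the two resolution steps above:

def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of
    literals; a literal is a symbol name, prefixed with '-' if negated."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith('-') else '-' + lit
        if complement in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement}))
    return resolvents

R15 = frozenset({'P11', 'P22', 'P31'})  # pit in [1,1], [2,2], or [3,1]
R13 = frozenset({'-P22'})               # no pit in [2,2]
R16, = resolve(R13, R15)
print(sorted(R16))                      # ['P11', 'P31']
R1 = frozenset({'-P11'})                # no pit in [1,1]
R17, = resolve(R1, R16)
print(sorted(R17))                      # ['P31']: the pit is in [3,1]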
