Knowledge Representation in Artificial Intelligence: Notes

Published Date:02-08-2017
10 KNOWLEDGE REPRESENTATION

In which we show how to use first-order logic to represent the most important aspects of the real world, such as action, space, time, mental events, and shopping.

The last three chapters described the technology for knowledge-based agents: the syntax, semantics, and proof theory of propositional and first-order logic, and the implementation of agents that use these logics. In this chapter we address the question of what content to put into such an agent's knowledge base: how to represent facts about the world.

Section 10.1 introduces the idea of a general ontology, which organizes everything in the world into a hierarchy of categories. Section 10.2 covers the basic categories of objects, substances, and measures. Section 10.3 discusses representations for actions, which are central to the construction of knowledge-based agents, and also explains the more general notion of events, or space-time chunks. Section 10.4 discusses knowledge about beliefs, and Section 10.5 brings all the knowledge together in the context of an Internet shopping environment. Sections 10.6 and 10.7 cover specialized reasoning systems for representing uncertain and changing knowledge.

10.1 ONTOLOGICAL ENGINEERING

In "toy" domains, the choice of representation is not that important; it is easy to come up with a consistent vocabulary. On the other hand, complex domains such as shopping on the Internet or controlling a robot in a changing physical environment require more general and flexible representations. This chapter shows how to create these representations, concentrating on general concepts, such as Actions, Time, Physical Objects, and Beliefs, that occur in many different domains. Representing these abstract concepts is sometimes called ontological engineering; it is related to the knowledge engineering process described in Section 8.4, but operates on a grander scale. The prospect of representing everything in the world is daunting.
Of course, we won't actually write a complete description of everything; that would be far too much for even a 1000-page textbook. Instead, we will leave placeholders where new knowledge for any domain can fit in. For example, we will define what it means to be a physical object, and the details of different types of objects (robots, televisions, books, or whatever) can be filled in later.

[Figure 10.1 (diagram): The upper ontology of the world, showing the topics to be covered later in the chapter. Each arc indicates that the lower concept is a specialization of the upper one. Among the categories shown are Sets, Numbers, RepresentationalObjects, Places, PhysicalObjects, Processes, Categories, Sentences, Measurements, Moments, Things, Times, Weights, Animals, Agents, Solid, Liquid, Gas, and Humans.]

The general framework of concepts is called an upper ontology, because of the convention of drawing graphs with the general concepts at the top and the more specific concepts below them, as in Figure 10.1.

Before considering the ontology further, we should state one important caveat. We have elected to use first-order logic to discuss the content and organization of knowledge. Certain aspects of the real world are hard to capture in FOL. The principal difficulty is that almost all generalizations have exceptions, or hold only to a degree. For example, although "tomatoes are red" is a useful rule, some tomatoes are green, yellow, or orange. Similar exceptions can be found to almost all the general statements in this chapter. The ability to handle exceptions and uncertainty is extremely important, but is orthogonal to the task of understanding the general ontology. For this reason, we will delay the discussion of exceptions until Section 10.6, and the more general topic of uncertain information until Chapter 13.

Of what use is an upper ontology? Consider again the ontology for circuits in Section 8.4. It makes a large number of simplifying assumptions.
For example, time is omitted completely. Signals are fixed and do not propagate. The structure of the circuit remains constant. If we wanted to make this more general, we would consider signals at particular times, and include the wire lengths and propagation delays. This would allow us to simulate the timing properties of the circuit, and indeed such simulations are often carried out by circuit designers. We could also introduce more interesting classes of gates, for example by describing the technology (TTL, MOS, CMOS, and so on) as well as the input/output specification. If we wanted to discuss reliability or diagnosis, we would include the possibility that the structure of the circuit or the properties of the gates might change spontaneously. To account for stray capacitances, we would need to move from a purely topological representation of connectivity to a more realistic description of geometric properties.

If we look at the wumpus world, similar considerations apply. Although we do include time, it has a very simple structure: nothing happens except when the agent acts, and all changes are instantaneous. A more general ontology, better suited for the real world, would allow for simultaneous changes extended over time. We also used a Pit predicate to say which squares have pits. We could have allowed for different kinds of pits by having several individuals belonging to the class of pits, each having different properties. Similarly, we might want to allow for other animals besides wumpuses. It might not be possible to pin down the exact species from the available percepts, so we would need to build up a wumpus-world biological taxonomy to help the agent predict behavior from scanty clues.

For any special-purpose ontology, it is possible to make changes like these to move toward greater generality. An obvious question then arises: do all these ontologies converge on a general-purpose ontology?
After centuries of philosophical and computational investigation, the answer is "Possibly." In this section, we will present one version, representing a synthesis of ideas from those centuries. There are two major characteristics of general-purpose ontologies that distinguish them from collections of special-purpose ontologies:

A general-purpose ontology should be applicable in more or less any special-purpose domain (with the addition of domain-specific axioms). This means that, as far as possible, no representational issue can be finessed or brushed under the carpet.

In any sufficiently demanding domain, different areas of knowledge must be unified, because reasoning and problem solving could involve several areas simultaneously. A robot circuit-repair system, for instance, needs to reason about circuits in terms of electrical connectivity and physical layout, and about time, both for circuit timing analysis and for estimating labor costs. The sentences describing time therefore must be capable of being combined with those describing spatial layout, and must work equally well for nanoseconds and minutes and for angstroms and meters.

After we present the general ontology, we use it to describe the Internet shopping domain. This domain is more than adequate to exercise our ontology, and leaves plenty of scope for the reader to do some creative knowledge representation of his or her own. Consider, for example, that the Internet shopping agent must know about myriad subjects and authors to buy books, about all sorts of foods to buy groceries, and about everything one might find at a garage sale to hunt for bargains.

10.2 CATEGORIES AND OBJECTS

The organization of objects into categories is a vital part of knowledge representation. Although interaction with the world takes place at the level of individual objects, much reasoning takes place at the level of categories.
For example, a shopper might have the goal of buying a basketball, rather than a particular basketball such as BB9. Categories also serve to make predictions about objects once they are classified. One infers the presence of certain objects from perceptual input, infers category membership from the perceived properties of the objects, and then uses category information to make predictions about the objects. For example, from its green, mottled skin, large size, and ovoid shape, one can infer that an object is a watermelon; from this, one infers that it would be useful for fruit salad. (We apologize if, due to circumstances beyond our control, some of the online stores mentioned in this chapter are no longer functioning by the time you read this.)

There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs. We could then say Member(b, Basketballs) (which we will abbreviate as b ∈ Basketballs) to say that b is a member of the category of basketballs. We say Subset(Basketballs, Balls) (abbreviated as Basketballs ⊂ Balls) to say that Basketballs is a subcategory, or subset, of Balls. So you can think of a category as being the set of its members, or you can think of it as a more complex object that just happens to have the Member and Subset relations defined for it.

Categories serve to organize and simplify the knowledge base through inheritance. If we say that all instances of the category Food are edible, and if we assert that Fruit is a subclass of Food and Apples is a subclass of Fruit, then we know that every apple is edible. We say that the individual apples inherit the property of edibility, in this case from their membership in the Food category.

Subclass relations organize categories into a taxonomy, or taxonomic hierarchy.
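The reified view of categories, with Member and Subset relations and inheritance through subclass links, can be sketched with ordinary Python sets and a small subclass table. The category names (Basketballs, Balls, Apples, Fruit, Food) follow the text; the helper functions and the property strings are illustrative inventions, not the book's notation.

```python
# A minimal sketch of reified categories: each category is a named set of
# members, plus a table recording direct Subset (subclass) links.
subclass = {"Basketballs": "Balls", "Apples": "Fruit", "Fruit": "Food"}
members = {"Basketballs": {"BB9"}, "Apples": {"Apple1"}}
properties = {"Food": {"edible"}, "Balls": {"round"}}

def supercategories(cat):
    """Follow Subset links upward from a category."""
    chain = []
    while cat in subclass:
        cat = subclass[cat]
        chain.append(cat)
    return chain

def inherited_properties(obj):
    """Collect properties from every category the object belongs to,
    directly or by inheritance up the taxonomy."""
    props = set()
    for cat, objs in members.items():
        if obj in objs:
            for c in [cat] + supercategories(cat):
                props |= properties.get(c, set())
    return props
```

Here `inherited_properties("Apple1")` yields `{"edible"}`: the individual apple inherits edibility from its membership, via Apples and Fruit, in the Food category.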
Taxonomies have been used explicitly for centuries in technical fields. For example, systematic biology aims to provide a taxonomy of all living and extinct species; library science has developed a taxonomy of all fields of knowledge, encoded as the Dewey Decimal system; and tax authorities and other government departments have developed extensive taxonomies of occupations and commercial products. Taxonomies are also an important aspect of general commonsense knowledge.

First-order logic makes it easy to state facts about categories, either by relating objects to categories or by quantifying over their members:

An object is a member of a category. For example: BB9 ∈ Basketballs

A category is a subclass of another category. For example: Basketballs ⊂ Balls

All members of a category have some properties. For example: x ∈ Basketballs ⇒ Round(x)

Members of a category can be recognized by some properties. For example: Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5" ∧ x ∈ Balls ⇒ x ∈ Basketballs

A category as a whole has some properties. For example: Dogs ∈ DomesticatedSpecies

Notice that because Dogs is a category and is a member of DomesticatedSpecies, the latter must be a category of categories. One can even have categories of categories of categories, but they are not much use.

Although subclass and member relations are the most important ones for categories, we also want to be able to state relations between categories that are not subclasses of each other. For example, if we just say that Males and Females are subclasses of Animals, then we have not said that a male cannot be a female. We say that two or more categories are disjoint if they have no members in common. And even if we know that males and females are disjoint, we will not know that an animal that is not a male must be a female, unless we say that males and females constitute an exhaustive decomposition of the animals.
A disjoint exhaustive decomposition is known as a partition. The following examples illustrate these three concepts:

Disjoint({Animals, Vegetables})
ExhaustiveDecomposition({Americans, Canadians, Mexicans}, NorthAmericans)
Partition({Males, Females}, Animals)

(Note that the ExhaustiveDecomposition of NorthAmericans is not a Partition, because some people have dual citizenship.) The three predicates are defined as follows:

Disjoint(s) ⇔ (∀ c1, c2   c1 ∈ s ∧ c2 ∈ s ∧ c1 ≠ c2 ⇒ Intersection(c1, c2) = { })
ExhaustiveDecomposition(s, c) ⇔ (∀ i   i ∈ c ⇔ ∃ c2   c2 ∈ s ∧ i ∈ c2)
Partition(s, c) ⇔ Disjoint(s) ∧ ExhaustiveDecomposition(s, c)

Categories can also be defined by providing necessary and sufficient conditions for membership. For example, a bachelor is an unmarried adult male:

x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males

As we discuss in the sidebar on natural kinds, strict logical definitions for categories are neither always possible nor always necessary.

Physical composition

The idea that one object can be part of another is a familiar one. One's nose is part of one's head, Romania is part of Europe, and this chapter is part of this book. We use the general PartOf relation to say that one thing is part of another. Objects can be grouped into PartOf hierarchies, reminiscent of the Subset hierarchy:

PartOf(Bucharest, Romania)
PartOf(Romania, EasternEurope)
PartOf(EasternEurope, Europe)
PartOf(Europe, Earth)

The PartOf relation is transitive and reflexive; that is,

PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z)
PartOf(x, x)

Therefore, we can conclude PartOf(Bucharest, Earth). Categories of composite objects are often characterized by structural relations among parts.
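Treating categories as plain sets of members, the three definitions above translate directly into checks on Python sets. This is a sketch under that set-based reading; the function names and the tiny Males/Females example are illustrative.

```python
from itertools import combinations

def disjoint(s):
    """Disjoint(s): no two distinct categories in s share a member."""
    return all(c1.isdisjoint(c2) for c1, c2 in combinations(s, 2))

def exhaustive_decomposition(s, c):
    """ExhaustiveDecomposition(s, c): every member of c belongs to some
    category in s, and every member of a category in s belongs to c,
    i.e. the union of s equals c exactly."""
    return set().union(*s) == c

def partition(s, c):
    """Partition(s, c): a disjoint exhaustive decomposition."""
    return disjoint(s) and exhaustive_decomposition(s, c)

males, females = {"m1", "m2"}, {"f1"}
animals = {"m1", "m2", "f1"}
# partition([males, females], animals) holds for this toy knowledge base.
```

A dual citizen added to both Americans and Canadians would keep `exhaustive_decomposition` true while making `disjoint`, and hence `partition`, false, mirroring the NorthAmericans remark in the text.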
For example, a biped has two legs attached to a body:

Biped(a) ⇒ ∃ l1, l2, b   Leg(l1) ∧ Leg(l2) ∧ Body(b) ∧
    PartOf(l1, a) ∧ PartOf(l2, a) ∧ PartOf(b, a) ∧
    Attached(l1, b) ∧ Attached(l2, b) ∧ l1 ≠ l2 ∧
    ∀ l3   Leg(l3) ∧ PartOf(l3, a) ⇒ (l3 = l1 ∨ l3 = l2)

The notation for "exactly two" is a little awkward; we are forced to say that there are two legs, that they are not the same, and that if anyone proposes a third leg, it must be the same as one of the other two. In Section 10.6, we will see how a formalism called description logic makes it easier to represent constraints like "exactly two."

We can define a PartPartition relation analogous to the Partition relation for categories. (See Exercise 10.6.) An object is composed of the parts in its PartPartition and can be viewed as deriving some properties from those parts. For example, the mass of a composite object is the sum of the masses of the parts. Notice that this is not the case with categories, which have no mass, even though their elements might.

It is also useful to define composite objects with definite parts but no particular structure. For example, we might want to say, "The apples in this bag weigh two pounds." The temptation would be to ascribe this weight to the set of apples in the bag, but this would be a mistake because the set is an abstract mathematical concept that has elements but does not have weight. Instead, we need a new concept, which we will call a bunch. For example, if the apples are Apple1, Apple2, and Apple3, then

BunchOf({Apple1, Apple2, Apple3})

denotes the composite object with the three apples as parts (not elements). We can then use the bunch as a normal, albeit unstructured, object. Notice that BunchOf({x}) = x. Furthermore, BunchOf(Apples) is the composite object consisting of all apples, not to be confused with Apples, the category or set of all apples. We can define BunchOf in terms of the PartOf relation.
Obviously, each element of s is part of BunchOf(s):

∀ x   x ∈ s ⇒ PartOf(x, BunchOf(s))

Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:

∀ y   [∀ x   x ∈ s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)

These axioms are an example of a general technique called logical minimization, which means defining an object as the smallest one satisfying certain conditions.

Measurements

In both scientific and commonsense theories of the world, objects have height, mass, cost, and so on. The values that we assign for these properties are called measures. Ordinary quantitative measures are quite easy to represent. We imagine that the universe includes abstract "measure objects," such as the length of a particular line segment. We can call this length 1.5 inches or 3.81 centimeters. Thus, the same length has different names in our language. Logically, this can be done by combining a units function with a number. (An alternative scheme is explored in Exercise 10.8.) If the line segment is called L1, we can write

Length(L1) = Inches(1.5) = Centimeters(3.81)

Conversion between units is done by equating multiples of one unit to another:

Centimeters(2.54 × d) = Inches(d)

Natural kinds

Some categories have strict definitions: an object is a triangle if and only if it is a polygon with three sides. On the other hand, most categories in the real world have no clear-cut definition; these are called natural kind categories. For example, tomatoes tend to be a dull scarlet; roughly spherical; with an indentation at the top where the stem was; about two to four inches in diameter; with a thin but tough skin; and with flesh, seeds, and juice inside.
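The units-function idea, in which Inches(1.5) and Centimeters(3.81) are two names for one abstract measure object, can be mimicked by normalizing every measure to a single base unit. The conversion constant 2.54 centimeters per inch is from the text; the Length class itself is an illustrative construction, not the book's notation.

```python
class Length:
    """An abstract 'measure object': stored internally in centimeters,
    but nameable in either unit, so Inches(1.5) == Centimeters(3.81)."""
    def __init__(self, cm):
        self.cm = cm

    def __eq__(self, other):
        # Compare in the shared base unit, with a tolerance for
        # floating-point rounding.
        return abs(self.cm - other.cm) < 1e-9

def Inches(d):
    return Length(2.54 * d)   # Centimeters(2.54 * d) = Inches(d)

def Centimeters(d):
    return Length(d)
```

Note that, as the text observes for zero measures, Inches(0) equals Centimeters(0) here because both name the same zero length; a zero duration would be a different kind of measure object and should live in a different class.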
There is, however, variation: some tomatoes are orange, unripe tomatoes are green, some are smaller or larger than average, and cherry tomatoes are uniformly small. Rather than having a complete definition of tomatoes, we have a set of features that serves to identify objects that are clearly typical tomatoes, but might not be able to decide for other objects. (Could there be a tomato that is furry, like a peach?)

This poses a problem for a logical agent. The agent cannot be sure that an object it has perceived is a tomato, and even if it were sure, it could not be certain which of the properties of typical tomatoes this one has. This problem is an inevitable consequence of operating in partially observable environments.

One useful approach is to separate what is true of all instances of a category from what is true only of typical instances. So in addition to the category Tomatoes, we will also have the category Typical(Tomatoes). Here, the Typical function maps a category to the subclass that contains only typical instances:

Typical(c) ⊆ c

Most knowledge about natural kinds will actually be about their typical instances:

x ∈ Typical(Tomatoes) ⇒ Red(x) ∧ Round(x)

Thus, we can write down useful facts about categories without exact definitions. The difficulty of providing exact definitions for most natural categories was explained in depth by Wittgenstein (1953), in his book Philosophical Investigations. He used the example of games to show that members of a category shared "family resemblances" rather than necessary and sufficient characteristics.

The utility of the notion of strict definition was also challenged by Quine (1953). He pointed out that even the definition of "bachelor" as an unmarried adult male is suspect; one might, for example, question a statement such as "the Pope is a bachelor." While not strictly false, this usage is certainly infelicitous because it induces unintended inferences on the part of the listener.
The tension could perhaps be resolved by distinguishing between logical definitions suitable for internal knowledge representation and the more nuanced criteria for felicitous linguistic usage. The latter may be achieved by "filtering" the assertions derived from the former. It is also possible that failures of linguistic usage serve as feedback for modifying internal definitions, so that filtering becomes unnecessary.

Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents. Measures can be used to describe objects as follows:

Diameter(Basketball1) = Inches(9.5)
ListPrice(Basketball1) = $(19)
d ∈ Days ⇒ Duration(d) = Hours(24)

Note that $(1) is not a dollar bill! One can have two dollar bills, but there is only one object named $(1). Note also that, while Inches(0) and Centimeters(0) refer to the same zero length, they are not identical to other zero measures, such as Seconds(0).

Simple, quantitative measures are easy to represent. Other measures present more of a problem, because they have no agreed scale of values. Exercises have difficulty, desserts have deliciousness, and poems have beauty, yet numbers cannot be assigned to these qualities. One might, in a moment of pure accountancy, dismiss such properties as useless for the purpose of logical reasoning; or, still worse, attempt to impose a numerical scale on beauty. This would be a grave mistake, because it is unnecessary. The most important aspect of measures is not the particular numerical values, but the fact that measures can be ordered.

Although measures are not numbers, we can still compare them using an ordering symbol such as >. For example, we might well believe that Norvig's exercises are tougher than Russell's, and that one scores less on tougher exercises:

e1 ∈ Exercises ∧ e2 ∈ Exercises ∧ Wrote(Norvig, e1) ∧ Wrote(Russell, e2) ⇒ Difficulty(e1) > Difficulty(e2)
e1 ∈ Exercises ∧ e2 ∈ Exercises ∧ Difficulty(e1) > Difficulty(e2) ⇒ ExpectedScore(e1) < ExpectedScore(e2)

This is enough to allow one to decide which exercises to do, even though no numerical values for difficulty were ever used. (One does, however, have to discover who wrote which exercises.) These sorts of monotonic relationships among measures form the basis for the field of qualitative physics, a subfield of AI that investigates how to reason about physical systems without plunging into detailed equations and numerical simulations. Qualitative physics is discussed in the historical notes section.

Substances and objects

The real world can be seen as consisting of primitive objects (particles) and composite objects built from them. By reasoning at the level of large objects such as apples and cars, we can overcome the complexity involved in dealing with vast numbers of primitive objects individually. There is, however, a significant portion of reality that seems to defy any obvious individuation, that is, division into distinct objects. We give this portion the generic name stuff. For example, suppose I have some butter and an aardvark in front of me. I can say there is one aardvark, but there is no obvious number of "butter-objects," because any part of a butter-object is also a butter-object, at least until we get to very small parts indeed. This is the major distinction between stuff and things. If we cut an aardvark in half, we do not get two aardvarks (unfortunately).

The English language distinguishes clearly between stuff and things. We say "an aardvark," but, except in pretentious California restaurants, one cannot say "a butter." Linguists distinguish between count nouns, such as aardvarks, holes, and theorems, and mass nouns, such as butter, water, and energy. Several competing ontologies claim to handle this distinction.
We will describe just one; the others are covered in the historical notes section.

To represent stuff properly, we begin with the obvious. We will need to have as objects in our ontology at least the gross "lumps" of stuff we interact with. For example, we might recognize a lump of butter as the same butter that was left on the table the night before; we might pick it up, weigh it, sell it, or whatever. In these senses, it is an object just like the aardvark. Let us call it Butter3. We will also define the category Butter. Informally, its elements will be all those things of which one might say "It's butter," including Butter3. With some caveats about very small parts that we will omit for now, any part of a butter-object is also a butter-object:

x ∈ Butter ∧ PartOf(y, x) ⇒ y ∈ Butter

We can now say that butter melts at around 30 degrees centigrade:

x ∈ Butter ⇒ MeltingPoint(x, Centigrade(30))

We could go on to say that butter is yellow, is less dense than water, is soft at room temperature, has a high fat content, and so on. On the other hand, butter has no particular size, shape, or weight. We can define more specialized categories of butter such as UnsaltedButter, which is also a kind of stuff. On the other hand, the category PoundOfButter, which includes as members all butter-objects weighing one pound, is not a substance. If we cut a pound of butter in half, we do not, alas, get two pounds of butter.

What is actually going on is this: some properties are intrinsic; they belong to the very substance of the object, rather than to the object as a whole. When you cut a substance in half, the two pieces retain the same set of intrinsic properties, things like density, boiling point, flavor, color, ownership, and so on. On the other hand, extrinsic properties are the opposite: properties such as weight, length, shape, function, and so on are not retained under subdivision.
A class of objects that includes in its definition only intrinsic properties is then a substance, or mass noun; a class that includes any extrinsic properties in its definition is a count noun. The category Stuff is the most general substance category, specifying no intrinsic properties. The category Thing is the most general discrete object category, specifying no extrinsic properties. All physical objects belong to both categories, so the categories are coextensive; they refer to the same entities.

10.3 ACTIONS, SITUATIONS, AND EVENTS

Reasoning about the results of actions is central to the operation of a knowledge-based agent. Chapter 7 gave examples of propositional sentences describing how actions affect the wumpus world; for example, Equation (7.3) on page 227 states how the agent's location is changed by forward motion. One drawback of propositional logic is the need to have a different copy of the action description for each time at which the action might be executed. This section describes a representation method that uses first-order logic to avoid that problem.

The ontology of situation calculus

One obvious way to avoid multiple copies of axioms is simply to quantify over time, to say, "∀t, such-and-such is the result at t + 1 of doing the action at t." Instead of dealing with explicit times like t + 1, we will concentrate in this section on situations, which denote the states resulting from executing actions. This approach is called situation calculus and involves the following ontology:

As in Chapter 8, actions are logical terms such as Forward and Turn(Right). For now, we will assume that the environment contains only one agent. (If there is more than one, an additional argument can be inserted to say which agent is doing the action.)
Situations are logical terms consisting of the initial situation (usually called S0) and all situations that are generated by applying an action to a situation. The function Result(a, s) (sometimes called Do) names the situation that results when action a is executed in situation s. Figure 10.2 illustrates this idea.

Fluents are functions and predicates that vary from one situation to the next, such as the location of the agent or the aliveness of the wumpus. The dictionary says a fluent is something that flows, like a liquid. In this use, it means flowing or changing across situations. By convention, the situation is always the last argument of a fluent. For example, ¬Holding(G1, S0) says that the agent is not holding the gold G1 in the initial situation S0. Age(Wumpus, S0) refers to the wumpus's age in S0.

Atemporal or eternal predicates and functions are also allowed. Examples include the predicate Gold(G1) and the function LeftLegOf(Wumpus).

[Figure 10.2 (diagram): In situation calculus, each situation (except S0) is the result of an action, e.g., Result(Forward, S0).]

In addition to single actions, it is also helpful to reason about action sequences. We can define the results of sequences in terms of the results of individual actions. First, we say that executing an empty sequence leaves the situation unchanged:

Result([], s) = s

Executing a nonempty sequence is the same as executing the first action and then executing the rest in the resulting situation:

Result([a | seq], s) = Result(seq, Result(a, s))

A situation calculus agent should be able to deduce the outcome of a given sequence of actions; this is the projection task. With a suitable constructive inference algorithm, it should also be able to find a sequence that achieves a desired effect; this is the planning task.
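The two equations for action sequences amount to folding Result over the list of actions. A sketch, with the single-action Result left abstract as a function argument so the same fold works for any concrete effect model (the names here are illustrative):

```python
def result_seq(result, seq, s):
    """Result([], s) = s
       Result([a | rest], s) = Result(rest, Result(a, s))"""
    if not seq:
        return s
    return result_seq(result, seq[1:], result(seq[0], s))

# With a purely symbolic Result that just builds nested terms, the fold
# reproduces the situation-calculus term structure:
result = lambda a, s: ("Result", a, s)
# result_seq(result, ["Go", "Grab"], "S0")
#   -> ("Result", "Grab", ("Result", "Go", "S0"))
```

Projection then means evaluating this fold for a given sequence, while planning means searching for a `seq` whose folded result satisfies the goal.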
We will use an example from a modified version of the wumpus world where we do not worry about the agent's orientation and where the agent can Go from one location to an adjacent one. Suppose the agent is at [1,1] and the gold is at [1,2]. The aim is to have the gold in [1,1]. The fluent predicates are At(o, x, s) and Holding(o, s). Then the initial knowledge base might include the following description:

At(Agent, [1,1], S0) ∧ At(G1, [1,2], S0)

This is not quite enough, however, because it doesn't say what isn't true in S0. (See page 355 for further discussion of this point.) The complete description is as follows:

At(o, x, S0) ⇔ [(o = Agent ∧ x = [1,1]) ∨ (o = G1 ∧ x = [1,2])]
¬Holding(o, S0)

We also need to state that G1 is gold and that [1,1] and [1,2] are adjacent:

Gold(G1) ∧ Adjacent([1,1], [1,2]) ∧ Adjacent([1,2], [1,1])

One would like to be able to prove that the agent achieves its aim by going to [1,2], grabbing the gold, and returning to [1,1]. That is,

At(G1, [1,1], Result([Go([1,1], [1,2]), Grab(G1), Go([1,2], [1,1])], S0))

More interesting is the possibility of constructing a plan to get the gold, which is achieved by answering the query "what sequence of actions results in the gold being at [1,1]?"

∃ seq   At(G1, [1,1], Result(seq, S0))

Let us see what has to go into the knowledge base for these queries to be answered.

Describing actions in situation calculus

In the simplest version of situation calculus, each action is described by two axioms: a possibility axiom that says when it is possible to execute the action, and an effect axiom that says what happens when a possible action is executed. We will use Poss(a, s) to mean that it is possible to execute action a in situation s. The axioms have the following form:

POSSIBILITY AXIOM: Preconditions ⇒ Poss(a, s)
EFFECT AXIOM: Poss(a, s) ⇒ Changes that result from taking action

We will present these axioms for the modified wumpus world. To shorten our sentences, we will omit universal quantifiers whose scope is the entire sentence.
We assume that the variable s ranges over situations, a ranges over actions, o over objects (including agents), g over gold, and x and y over locations.

The possibility axioms for this world state that an agent can go between adjacent locations, grab a piece of gold in the current location, and release some gold that it is holding:

At(Agent, x, s) ∧ Adjacent(x, y) ⇒ Poss(Go(x, y), s)
Gold(g) ∧ At(Agent, x, s) ∧ At(g, x, s) ⇒ Poss(Grab(g), s)
Holding(g, s) ⇒ Poss(Release(g), s)

The effect axioms state that, if an action is possible, then certain properties (fluents) will hold in the situation that results from executing the action. Going from x to y results in being at y, grabbing the gold results in holding the gold, and releasing the gold results in not holding it:

Poss(Go(x, y), s) ⇒ At(Agent, y, Result(Go(x, y), s))
Poss(Grab(g), s) ⇒ Holding(g, Result(Grab(g), s))
Poss(Release(g), s) ⇒ ¬Holding(g, Result(Release(g), s))

Having stated these axioms, can we prove that our little plan achieves the goal? Unfortunately not. At first, everything works fine; Go([1,1], [1,2]) is indeed possible in S0, and the effect axiom for Go allows us to conclude that the agent reaches [1,2]:

At(Agent, [1,2], Result(Go([1,1], [1,2]), S0))

Now we consider the Grab(G1) action. We have to show that it is possible in the new situation, that is,

At(G1, [1,2], Result(Go([1,1], [1,2]), S0))

Alas, nothing in the knowledge base justifies such a conclusion. Intuitively, we understand that the agent's Go action should have no effect on the gold's location, so it should still be at [1,2], where it was in S0. The problem is that the effect axioms say what changes, but don't say what stays the same. Representing all the things that stay the same is called the frame problem. We must find an efficient solution to the frame problem because, in the real world, almost everything stays the same almost all the time. Each action affects only a tiny fraction of all fluents.
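The possibility and effect axioms can be read procedurally as a small simulator. The state representation (dicts of fluents) and function names below are illustrative, not the book's notation, and note one honest cheat: copying the situation before applying effects silently does the work of the frame axioms, which is exactly what the bare logical effect axioms do not license.

```python
# Fluents for the modified wumpus world: a situation is a dict with each
# object's location and the set of objects the agent is holding.
ADJACENT = {((1, 1), (1, 2)), ((1, 2), (1, 1))}
GOLD = {"G1"}

def poss(action, s):
    """Possibility axioms for Go, Grab, and Release."""
    kind = action[0]
    if kind == "Go":
        _, x, y = action
        return s["at"]["Agent"] == x and (x, y) in ADJACENT
    if kind == "Grab":
        g = action[1]
        return g in GOLD and s["at"].get(g) == s["at"]["Agent"]
    if kind == "Release":
        return action[1] in s["holding"]
    return False

def result(action, s):
    """Effect axioms, applied to a copy of the situation (the copy is
    what keeps everything else the same, i.e. it hides the frame problem)."""
    assert poss(action, s)
    s2 = {"at": dict(s["at"]), "holding": set(s["holding"])}
    kind = action[0]
    if kind == "Go":
        s2["at"]["Agent"] = action[2]
    elif kind == "Grab":
        s2["holding"].add(action[1])
    elif kind == "Release":
        s2["holding"].discard(action[1])
    return s2

s0 = {"at": {"Agent": (1, 1), "G1": (1, 2)}, "holding": set()}
```

In this simulator the gold does stay at [1,2] after the Go action, so Grab(G1) is possible in the resulting situation; a theorem prover working only from the effect axioms has no such copy operation, which is precisely the frame problem the text describes.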
One approach is to write explicit frame axioms that do say what stays the same. For example, the agent's movements leave other objects stationary unless they are held:

   At(o, x, s) ∧ (o ≠ Agent) ∧ ¬Holding(o, s) ⇒ At(o, x, Result(Go(y, z), s)) .

If there are F fluent predicates and A actions, then we will need O(AF) frame axioms. On the other hand, if each action has at most E effects, where E is typically much less than F, then we should be able to represent what happens with a much smaller knowledge base of size O(AE). This is the representational frame problem. The closely related inferential frame problem is to project the results of a t-step sequence of actions in time O(Et), rather than time O(Ft) or O(AEt). We will address each problem in turn. Even then, another problem remains: that of ensuring that all necessary conditions for an action's success have been specified. For example, Go fails if the agent dies en route. This is the qualification problem, for which there is no complete solution.

Solving the representational frame problem

The solution to the representational frame problem involves just a slight change in viewpoint on how to write the axioms. Instead of writing out the effects of each action, we consider how each fluent predicate evolves over time. The axioms we use are called successor-state axioms. They have the following form:

   SUCCESSOR-STATE AXIOM:
   Action is possible ⇒
      (Fluent is true in result state ⇔ Action's effect made it true
                                        ∨ It was true before and action left it alone) .

After the qualification that we are not considering impossible actions, notice that this definition uses ⇔, not ⇒. This means that the axiom says that the fluent will be true if and only if the right-hand side holds.
Put another way, we are specifying the truth value of each fluent in the next state as a function of the action and the truth value in the current state. This means that the next state is completely specified from the current state and hence that no additional frame axioms are needed.

The successor-state axiom for the agent's location says that the agent is at y after executing an action either if the action is possible and consists of moving to y, or if the agent was already at y and the action is not a move to somewhere else:

   Poss(a, s) ⇒
      (At(Agent, y, Result(a, s)) ⇔ a = Go(x, y)
                                     ∨ (At(Agent, y, s) ∧ a ≠ Go(y, z))) .

The axiom for Holding says that the agent is holding g after executing an action if the action was a grab of g and the grab is possible, or if the agent was already holding g and the action is not releasing it:

   Poss(a, s) ⇒
      (Holding(g, Result(a, s)) ⇔ a = Grab(g)
                                   ∨ (Holding(g, s) ∧ a ≠ Release(g))) .

Successor-state axioms solve the representational frame problem because the total size of the axioms is O(AE) literals: each of the E effects of each of the A actions is mentioned exactly once. The literals are spread over F different axioms, so the axioms have average size AE/F.

The astute reader will have noticed that these axioms handle the At fluent for the agent, but not for the gold; thus, we still cannot prove that the three-step plan achieves the goal of having the gold in [1,1]. We need to say that an implicit effect of an agent moving from x to y is that any gold it is carrying will move too (as will any ants on the gold, any bacteria on the ants, etc.). Dealing with implicit effects is called the ramification problem. We will discuss the problem in general later, but for this specific domain, it can be solved by writing a more general successor-state axiom for At.
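The two successor-state axioms can be transcribed as a function that computes each fluent's truth value in the result state from the action and the current state, exactly as described. The tuple encoding of fluents and actions below is our own illustrative choice, and the action is assumed to be possible:

```python
# Successor-state computation: a fluent holds in Result(a, s) iff the action
# made it true, or it already held and the action left it alone.

def holds(fluent, action, fluents):
    """fluents: the set of fluent tuples true in the current situation s."""
    if fluent[0] == "AtAgent":           # At(Agent, y, Result(a, s))
        y = fluent[1]
        made_true = action[0] == "Go" and action[2] == y            # a = Go(x, y)
        left_alone = fluent in fluents and not (action[0] == "Go"   # a != Go(y, z)
                                                and action[1] == y)
        return made_true or left_alone
    if fluent[0] == "Holding":           # Holding(g, Result(a, s))
        g = fluent[1]
        made_true = action == ("Grab", g)                           # a = Grab(g)
        left_alone = fluent in fluents and action != ("Release", g) # a != Release(g)
        return made_true or left_alone
    raise ValueError(f"no successor-state axiom for {fluent}")

def successor(action, fluents, universe):
    """The set of fluents (drawn from a finite universe) true after the action."""
    return {f for f in universe if holds(f, action, fluents)}
```

Because each fluent's next value is a function of the current state and the action, no separate frame axioms are needed: unchanged fluents simply satisfy the "left alone" disjunct.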
The new axiom, which subsumes the previous version, says that an object o is at y if the agent went to y and o is the agent or something the agent was holding, or if o was already at y and the agent didn't go elsewhere, with o being the agent or something the agent was holding:

   Poss(a, s) ⇒
      (At(o, y, Result(a, s)) ⇔ (a = Go(x, y) ∧ (o = Agent ∨ Holding(o, s)))
                                 ∨ (At(o, y, s)
                                    ∧ ¬(∃ z  y ≠ z ∧ a = Go(y, z)
                                         ∧ (o = Agent ∨ Holding(o, s))))) .

(This is essentially the approach we took in building the Boolean circuit-based agent in Chapter 7. Indeed, axioms such as Equation (7.4) and Equation (7.5) can be viewed as successor-state axioms.)

There is one more technicality: an inference process that uses these axioms must be able to prove nonidentities. The simplest kind of nonidentity is between constants, for example, Agent ≠ G1. The general semantics of first-order logic allows distinct constants to refer to the same object, so the knowledge base must include an axiom to prevent this. The unique names axiom states a disequality for every pair of constants in the knowledge base. When this is assumed by the theorem prover, rather than written down in the knowledge base, it is called a unique names assumption. We also need to state disequalities between action terms: Go([1,1], [1,2]) is a different action from Go([1,2], [1,1]) or Grab(G1). First, we say that each type of action is distinct: no Go action is a Grab action. For each pair of action names A and B, we would have

   A(x1, ..., xm) ≠ B(y1, ..., yn) .

Next, we say that two action terms with the same action name refer to the same action only if they involve all the same objects:

   A(x1, ..., xn) = A(y1, ..., yn) ⇔ x1 = y1 ∧ ... ∧ xn = yn .

These are called, collectively, the unique action axioms. The combination of initial state description, successor-state axioms, unique names axiom, and unique action axioms suffices to prove that the proposed plan achieves the goal.
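Under the generalized rule for At (carried objects move with the agent), the three-step plan can be projected mechanically to confirm that it achieves the goal. A minimal sketch, with our own state encoding; `project` plays the role of computing Result(plan, S0):

```python
# Project a plan under the generalized successor-state rule for At:
# after Go(x, y), the agent and everything it holds are at y;
# every other object stays where it was.

def step(at, holding, action):
    at, holding = dict(at), set(holding)   # each situation is a fresh copy
    kind, *args = action
    if kind == "Go":
        _, y = args
        at["Agent"] = y
        for o in holding:                  # implicit effect: carried objects move too
            at[o] = y
    elif kind == "Grab":
        holding.add(args[0])
    elif kind == "Release":
        holding.discard(args[0])
    return at, holding

def project(at, holding, plan):
    for action in plan:
        at, holding = step(at, holding, action)
    return at, holding
```

Projecting the plan [Go([1,1], [1,2]), Grab(G1), Go([1,2], [1,1])] from the initial situation leaves G1 at [1,1], which is the goal.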
Solving the inferential frame problem

Successor-state axioms solve the representational frame problem, but not the inferential frame problem. Consider a t-step plan p such that St = Result(p, S0). To decide which fluents are true in St, we need to consider each of the F frame axioms on each of the t time steps. Because the axioms have average size AE/F, this gives us O(AEt) inferential work. Most of the work involves copying fluents unchanged from one situation to the next.

To solve the inferential frame problem, we have two possibilities. First, we could discard situation calculus and invent a new formalism for writing axioms. This has been done with formalisms such as the fluent calculus. Second, we could alter the inference mechanism to handle frame axioms more efficiently. A hint that this should be possible is that the simple approach is O(AEt); why should it depend on the number of actions, A, when we know exactly which one action is executed at each time step? To see how to improve matters, we first look at the format of the frame axioms:

   Poss(a, s) ⇒
      (Fi(Result(a, s)) ⇔ (a = A1 ∨ a = A2 ∨ ...)
                           ∨ (Fi(s) ∧ a ≠ A3 ∧ a ≠ A4 ∧ ...)) .

That is, each axiom mentions several actions that can make the fluent true and several actions that can make it false. We can formalize this by introducing the predicate PosEffect(a, Fi), meaning that action a causes Fi to become true, and NegEffect(a, Fi), meaning that a causes Fi to become false. Then we can rewrite the foregoing axiom schema as:

   Poss(a, s) ⇒ (Fi(Result(a, s)) ⇔ PosEffect(a, Fi) ∨ (Fi(s) ∧ ¬NegEffect(a, Fi)))
   PosEffect(A1, Fi)      PosEffect(A2, Fi)
   NegEffect(A3, Fi)      NegEffect(A4, Fi) .

Whether this can be done automatically depends on the exact format of the frame axioms. To make an efficient inference procedure using axioms like this, we need to do three things:
1. Index the PosEffect and NegEffect predicates by their first argument, so that when we are given an action that occurs at time t, we can find its effects in O(1) time.

2. Index the axioms so that, once you know that Fi is an effect of an action, you can find the axiom for Fi in O(1) time. Then you need not even consider the axioms for fluents that are not an effect of the action.

3. Represent each situation as a previous situation plus a delta. Thus, if nothing changes from one step to the next, we need do no work at all. In the old approach, we would need to do O(F) work in generating an assertion for each fluent Fi(Result(a, s)) from the preceding Fi(s) assertions.

Thus, at each time step, we look at the current action, fetch its effects, and update the set of true fluents. Each time step will have an average of E of these updates, for a total complexity of O(Et). This constitutes a solution to the inferential frame problem.

Time and event calculus

Situation calculus works well when there is a single agent performing instantaneous, discrete actions. When actions have duration and can overlap with each other, situation calculus becomes somewhat awkward. Therefore, we will cover those topics with an alternative formalism known as event calculus, which is based on points in time rather than on situations. (The terms "event" and "action" may be used interchangeably. Informally, "event" connotes a wider class of actions, including ones with no explicit agent. These are easier to handle in event calculus than in situation calculus.) In event calculus, fluents hold at points in time rather than at situations, and the calculus is designed to allow reasoning over intervals of time. The event calculus axiom says that a fluent is true at a point in time if the fluent was initiated by an event at some time in the past and was not terminated by an intervening event.
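This axiom can be transcribed almost literally over a finite record of event occurrences. In the sketch below, the event names and the Initiates/Terminates tables are hypothetical examples of ours, not from the text:

```python
# Event calculus sketch: a fluent f holds at time t2 if some event initiated
# f at a time t < t2 and no terminating event for f happened in between.

INITIATES = {("TurnOn", "LightOn")}     # Initiates(e, f, t): pairs (event kind, fluent)
TERMINATES = {("TurnOff", "LightOn")}   # Terminates(e, f, t): pairs (event kind, fluent)

def clipped(f, t1, t2, happenings):
    """Clipped(f, t1, t2): f is terminated by some event between t1 and t2."""
    return any((e, f) in TERMINATES and t1 < t < t2 for e, t in happenings)

def holds_at(f, t2, happenings):
    """happenings: the Happens(e, t) facts, as a list of (event, time) pairs."""
    return any((e, f) in INITIATES and t < t2 and not clipped(f, t, t2, happenings)
               for e, t in happenings)
```

With happenings = [("TurnOn", 0.5), ("TurnOff", 1.0)], the light is on at time 0.75 but not at time 1.25, because the TurnOff event clips the fluent.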
The Initiates and Terminates relations play a role similar to the Result relation in situation calculus; Initiates(e, f, t) means that the occurrence of event e at time t causes fluent f to become true, while Terminates(e, f, t) means that f ceases to be true. We use Happens(e, t) to mean that event e happens at time t, and we use Clipped(f, t, t2) to mean that f is terminated by some event sometime between t and t2. Formally, the axiom is:

   T(f, t2) ⇐ ∃ e, t  Happens(e, t) ∧ Initiates(e, f, t) ∧ (t < t2) ∧ ¬Clipped(f, t, t2) .

This gives us functionality that is similar to situation calculus, but with the ability to talk about time points and intervals, so we can say Happens(TurnOff(LightSwitch1), 1:00) to say that a light switch was turned off at exactly 1:00.

Many extensions to event calculus have been made to address problems of indirect effects, events with duration, concurrent events, continuously changing events, nondeterministic effects, causal constraints, and other complications. We will revisit some of these issues in the next subsection. It is fair to say that, at present, completely satisfactory solutions are not yet available for most of them, but no insuperable obstacles have been encountered.

Generalized events

So far, we have looked at two main concepts: actions and objects. Now it is time to see how they fit into an encompassing ontology in which both actions and objects can be thought of as aspects of a physical universe. We think of a particular universe as having both a spatial and a temporal dimension. The wumpus world has its spatial component laid out in a two-dimensional grid and has discrete time; our world has three spatial dimensions and one temporal dimension, all continuous. A generalized event is composed from aspects of some "space-time chunk", a piece of this multidimensional space-time universe. This abstraction generalizes most of the concepts we have seen so far, including actions, locations, times, fluents, and physical objects.
Figure 10.3 gives the general idea. From now on, we will use the simple term "event" to refer to generalized events. For example, World War II is an event that took place at various points in space-time, as indicated by the irregularly shaped grey patch in the figure. We can break it down into subevents:

   SubEvent(BattleOfBritain, WorldWarII) .

Similarly, World War II is a subevent of the 20th century:

   SubEvent(WorldWarII, TwentiethCentury) .

The 20th century is an interval of time. Intervals are chunks of space-time that include all of space between two time points. The function Period(e) denotes the smallest interval enclosing the event e. Duration(i) is the length of time occupied by an interval, so we can say Duration(Period(WorldWarII)) = Years(6).

(Some physicists studying string theory argue for 10 dimensions or more, and some argue for a discrete world, but 4-D continuous space-time is an adequate representation for commonsense reasoning purposes. Note that SubEvent is a special case of the PartOf relation and is also transitive and reflexive.)

Figure 10.3  Generalized events. A universe has spatial and temporal dimensions; in this figure we show only a single spatial dimension. All events are PartOf the universe. An event, such as WorldWarII, occurs in a portion of space-time with somewhat arbitrary and time-varying borders. An Interval, such as the TwentiethCentury, has a fixed, limited temporal extent and maximal spatial extent, and a Place, such as Australia, has a roughly fixed spatial extent and maximal temporal extent.

Australia is a place; a chunk with some fixed spatial borders. The borders can vary over time, due to geological or political changes. We use the predicate In to denote the subevent relation that holds when one event's spatial projection is PartOf another's:

   In(Sydney, Australia) .
The function Location(e) denotes the smallest place that encloses the event e.

Like any other sort of object, events can be grouped into categories. For example, WorldWarII belongs to the category Wars. To say that a civil war occurred in England in the 1640s, we would say

   ∃ w  w ∈ CivilWars ∧ SubEvent(w, 1640s) ∧ In(Location(w), England) .

The notion of a category of events answers a question that we avoided when we described the effects of actions in Section 10.3: what exactly do logical terms such as Go([1,1], [1,2]) refer to? Are they events? The answer, perhaps surprisingly, is no. We can see this by considering a plan with two "identical" actions, such as

   [Go([1,1], [1,2]), Go([1,2], [1,1]), Go([1,1], [1,2])] .

In this plan, Go([1,1], [1,2]) cannot be the name of an event, because there are two different events occurring at different times. Instead, Go([1,1], [1,2]) is the name of a category of events: all those events where the agent goes from [1,1] to [1,2]. The three-step plan says that instances of these three event categories will occur.

Notice that this is the first time we have seen categories named by complex terms rather than just constant symbols. This presents no new difficulties; in fact, we can use the argument structure to our advantage. Eliminating arguments creates a more general category:

   Go(x, y) ⊆ GoTo(y) .
   Go(x, y) ⊆ GoFrom(x) .

Similarly, we can add arguments to create more specific categories. For example, to describe actions by other agents, we can add an agent argument. Thus, to say that Shankar flew from New York to New Delhi yesterday, we would write:

   ∃ e  e ∈ Fly(Shankar, NewYork, NewDelhi) ∧ SubEvent(e, Yesterday) .

The form of this formula is so common that we will create an abbreviation for it: E(c, i) will mean that an element of the category of events c is a subevent of the event or interval i:

   E(c, i) ⇔ ∃ e  e ∈ c ∧ SubEvent(e, i) .
Thus, we have:

   E(Fly(Shankar, NewYork, NewDelhi), Yesterday) .

Processes

The events we have seen so far are what we call discrete events; they have a definite structure. Shankar's trip has a beginning, middle, and end. If interrupted halfway, the event would be different; it would not be a trip from New York to New Delhi, but instead a trip from New York to somewhere over Europe. On the other hand, the category of events denoted by Flying(Shankar) has a different quality. If we take a small interval of Shankar's flight, say, the third 20-minute segment (while he waits anxiously for a second bag of peanuts), that event is still a member of Flying(Shankar). In fact, this is true for any subinterval.

Categories of events with this property are called process categories or liquid event categories. Any subinterval of a process is also a member of the same process category. We can employ the same notation used for discrete events to say that, for example, Shankar was flying at some time yesterday:

   E(Flying(Shankar), Yesterday) .

We often want to say that some process was going on throughout some interval, rather than just in some subinterval of it. To do this, we use the predicate T:

   T(Working(Stuart), TodayLunchHour) .

T(c, i) means that some event of type c occurred over exactly the interval i; that is, the event begins and ends at the same time as the interval.

The distinction between liquid and nonliquid events is exactly analogous to the difference between substances, or stuff, and individual objects. In fact, some have called liquid event types temporal substances, whereas things like butter are spatial substances.

As well as describing processes of continuous change, liquid events can describe processes of continuous non-change. These are often called states. For example, "Shankar being in New York" is a category of states that we denote by In(Shankar, NewYork).
To say he was in New York all day, we would write

   T(In(Shankar, NewYork), Today) .

We can form more complex states and events by combining primitive ones. This approach is called fluent calculus. Fluent calculus reifies combinations of fluents, not just individual fluents. We have already seen a way of representing the event of two things happening at once, namely, the function Both(e1, e2). In fluent calculus, this is usually abbreviated with the infix notation e1 ∘ e2. For example, to say that someone walked and chewed gum at the same time, we can write

   ∃ p, i  (p ∈ People) ∧ T(Walk(p) ∘ ChewGum(p), i) .

The "∘" function is commutative and associative, just like logical conjunction. We can also define analogs of disjunction and negation, but we have to be more careful; there are two reasonable ways of interpreting disjunction. When we say "the agent was either walking or chewing gum for the last two minutes," we might mean that the agent was doing one of the actions for the whole interval, or we might mean that the agent was alternating between the two actions. We will use OneOf and Either to indicate these two possibilities. Figure 10.4 diagrams the complex events.

Figure 10.4  A depiction of complex events. (a) T(Both(p, q), i), also denoted as T(p ∘ q, i). (b) T(OneOf(p, q), i). (c) T(Either(p, q), i).

Intervals

Time is important to any agent that takes action, and there has been much work on the representation of time intervals. We will consider two kinds: moments and extended intervals. The distinction is that only moments have zero duration:

   Partition({Moments, ExtendedIntervals}, Intervals) .
   i ∈ Moments ⇔ Duration(i) = Seconds(0) .

Next we invent a time scale and associate points on that scale with moments, giving us absolute times. The time scale is arbitrary; we will measure it in seconds and say that the moment at midnight (GMT) on January 1, 1900, has time 0.
The functions Start and End pick out the earliest and latest moments in an interval, and the function Time delivers the point on the time scale for a moment. The function Duration gives the difference between the end time and the start time.

   Interval(i) ⇒ Duration(i) = (Time(End(i)) − Time(Start(i))) .
   Time(Start(AD1900)) = Seconds(0) .
   Time(Start(AD2001)) = Seconds(3187324800) .
   Time(End(AD2001)) = Seconds(3218860800) .
   Duration(AD2001) = Seconds(31536000) .

To make these numbers easier to read, we also introduce a function Date, which takes six arguments (hours, minutes, seconds, day, month, and year) and returns a time point:

   Time(Start(AD2001)) = Date(0, 0, 0, 1, Jan, 2001) .
   Date(0, 20, 21, 24, 1, 1995) = Seconds(3000000000) .

Two intervals Meet if the end time of the first equals the start time of the second. It is possible to define predicates such as Before, After, During, and Overlap solely in terms of Meet, but it is more intuitive to define them in terms of points on the time scale. (See Figure 10.5 for a graphical representation.)

   Meet(i, j) ⇔ Time(End(i)) = Time(Start(j)) .
   Before(i, j) ⇔ Time(End(i)) < Time(Start(j)) .
   After(j, i) ⇔ Before(i, j) .
   During(i, j) ⇔ Time(Start(j)) ≤ Time(Start(i)) ∧ Time(End(i)) ≤ Time(End(j)) .
   Overlap(i, j) ⇔ ∃ k  During(k, i) ∧ During(k, j) .

Figure 10.5  Predicates on time intervals.

For example, to say that the reign of Elizabeth II followed that of George VI, and the reign of Elvis overlapped with the 1950s, we can write the following:

   After(ReignOf(ElizabethII), ReignOf(GeorgeVI)) .
   Overlap(Fifties, ReignOf(Elvis)) .
   Start(Fifties) = Start(AD1950) .
   End(Fifties) = End(AD1959) .

Fluents and objects

We mentioned that physical objects can be viewed as generalized events, in the sense that a physical object is a chunk of space-time. For example, USA can be thought of as an event that began in, say, 1776 as a union of 13 states and is still in progress today as a union of 50.
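The interval predicates defined above translate directly into code once an interval is represented by its start and end points on the time scale. The (start, end) pair representation is our own illustrative choice:

```python
# Intervals as (start, end) pairs of time-scale points, with start <= end.
# Moments are the degenerate case start == end (zero duration).

def meet(i, j):    # Time(End(i)) = Time(Start(j))
    return i[1] == j[0]

def before(i, j):  # Time(End(i)) < Time(Start(j))
    return i[1] < j[0]

def after(j, i):   # After(j, i) iff Before(i, j)
    return before(i, j)

def during(i, j):  # Start(j) <= Start(i) and End(i) <= End(j)
    return j[0] <= i[0] and i[1] <= j[1]

def overlap(i, j): # exists k with During(k, i) and During(k, j):
    return max(i[0], j[0]) <= min(i[1], j[1])  # the common stretch is nonempty
```

For instance, with abstract time points, (1, 2) is before (3, 4) and meets (2, 3), while (1, 3) and (2, 4) overlap.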
