Section 7.2. The Wumpus World 199

previously; in particular, it knows that it is in [1,1] and that [1,1] is a safe square. We will see how its knowledge evolves as new percepts arrive and actions are taken.

The first percept is [None, None, None, None, None], from which the agent can conclude that its neighboring squares are safe. Figure 7.3(a) shows the agent’s state of knowledge at this point. We list (some of) the sentences in the knowledge base using letters such as B (breezy) and OK (safe, neither pit nor wumpus) marked in the appropriate squares. Figure 7.2, on the other hand, depicts the world itself.

[Figure 7.3: The first step taken by the agent in the wumpus world. (a) The initial situation, after percept [None, None, None, None, None]. (b) After one move, with percept [None, Breeze, None, None, None]. Legend: A = Agent, B = Breeze, G = Glitter/Gold, P = Pit, S = Stench, W = Wumpus, OK = safe square, V = visited.]

From the fact that there was no stench or breeze in [1,1], the agent can infer that [1,2] and [2,1] are free of dangers. They are marked with an OK to indicate this. A cautious agent will move only into a square that it knows is OK. Let us suppose the agent decides to move forward to [2,1], giving the scene in Figure 7.3(b).

The agent detects a breeze in [2,1], so there must be a pit in a neighboring square. The pit cannot be in [1,1], by the rules of the game, so there must be a pit in [2,2] or [3,1] or both. The notation P? in Figure 7.3(b) indicates a possible pit in those squares. At this point, there is only one known square that is OK and has not been visited yet. So the prudent agent will turn around, go back to [1,1], and then proceed to [1,2].

The new percept in [1,2] is [Stench, None, None, None, None], resulting in the state of knowledge shown in Figure 7.4(a).
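The "mark the neighbors OK" step just described can be sketched in code. This is our own minimal illustration, not the book's agent program; the function names and the dictionary encoding of percepts are assumptions made for the sketch (the five-element percept order [Stench, Breeze, Glitter, Bump, Scream] follows the book's description of the wumpus world).

```python
# Minimal sketch: if a visited square reports neither Stench nor Breeze,
# every adjacent square can contain neither a wumpus nor a pit, so it is OK.

def neighbors(square, size=4):
    """Adjacent squares of (x, y) on a size x size grid."""
    x, y = square
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for (i, j) in candidates if 1 <= i <= size and 1 <= j <= size]

def safe_squares(percepts):
    """percepts: dict mapping each visited square to its percept list
    [Stench, Breeze, Glitter, Bump, Scream]. Returns the squares known
    to be OK: every visited square, plus all neighbors of squares that
    reported neither stench nor breeze."""
    ok = set(percepts)                      # visited squares are safe
    for square, percept in percepts.items():
        stench, breeze = percept[0], percept[1]
        if stench is None and breeze is None:
            ok.update(neighbors(square))    # no danger can be adjacent
    return ok

# After the first percept in [1,1]:
print(sorted(safe_squares({(1, 1): [None, None, None, None, None]})))
# [(1, 1), (1, 2), (2, 1)]
```

This reproduces the first inference above: from a blank percept in [1,1], the agent marks [1,2] and [2,1] as OK.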
The stench in [1,2] means that there must be a wumpus nearby. But the wumpus cannot be in [1,1], by the rules of the game, and it cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). Therefore, the agent can infer that the wumpus is in [1,3]. The notation W! indicates this. Moreover, the lack of a Breeze in [1,2] implies that there is no pit in [2,2]. Yet we already inferred that there must be a pit in either [2,2] or [3,1], so this means it must be in [3,1]. This is a fairly difficult inference, because it combines knowledge gained at different times in different places and
200 Chapter 7. Logical Agents

[Figure 7.4: Two later stages in the progress of the agent. (a) After the third move, with percept [Stench, None, None, None, None]. (b) After the fifth move, with percept [Stench, Breeze, Glitter, None, None]. Legend: A = Agent, B = Breeze, G = Glitter/Gold, P = Pit, S = Stench, W = Wumpus, OK = safe square, V = visited.]

relies on the lack of a percept to make one crucial step. The inference is beyond the abilities of most animals, but it is typical of the kind of reasoning that a logical agent does.

The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK to move there. We will not show the agent’s state of knowledge at [2,2]; we just assume that the agent turns and moves to [2,3], giving us Figure 7.4(b). In [2,3], the agent detects a glitter, so it should grab the gold and thereby end the game.

In each case where the agent draws a conclusion from the available information, that conclusion is guaranteed to be correct if the available information is correct. This is a fundamental property of logical reasoning. In the rest of this chapter, we describe how to build logical agents that can represent the necessary information and draw the conclusions that were described in the preceding paragraphs.

7.3 LOGIC

This section provides an overview of all the fundamental concepts of logical representation and reasoning. We postpone the technical details of any particular form of logic until the next section. We will instead use informal examples from the wumpus world and from the familiar realm of arithmetic. We adopt this rather unusual approach because the ideas of logic are far more general and beautiful than is commonly supposed.

In Section 7.1, we said that knowledge bases consist of sentences.
SYNTAX These sentences are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed. The notion of syntax is clear enough in ordinary arithmetic: “x + y = 4” is a well-formed sentence, whereas “x2y+ =” is not. The syntax of logical
Section 7.3. Logic 201

languages (and of arithmetic, for that matter) is usually designed for writing papers and books. There are literally dozens of different syntaxes, some with lots of Greek letters and exotic mathematical symbols, and some with rather visually appealing diagrams with arrows and bubbles. In all cases, however, sentences in an agent’s knowledge base are real physical configurations of (parts of) the agent. Reasoning will involve generating and manipulating those configurations.

SEMANTICS TRUTH POSSIBLE WORLD A logic must also define the semantics of the language. Loosely speaking, semantics has to do with the “meaning” of sentences. In logic, the definition is more precise. The semantics of the language defines the truth of each sentence with respect to each possible world. For example, the usual semantics adopted for arithmetic specifies that the sentence “x + y = 4” is true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1.[1] In standard logics, every sentence must be either true or false in each possible world—there is no “in between.”[2]

MODEL When we need to be precise, we will use the term model in place of “possible world.” (We will also use the phrase “m is a model of α” to mean that sentence α is true in model m.) Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence. Informally, we may think of x and y as the number of men and women sitting at a table playing bridge, for example, and the sentence x + y = 4 is true when there are four in total; formally, the possible models are just all possible assignments of numbers to the variables x and y. Each such assignment fixes the truth of any sentence of arithmetic whose variables are x and y.

Now that we have a notion of truth, we are ready to talk about logical reasoning.
ENTAILMENT This involves the relation of logical entailment between sentences—the idea that a sentence follows logically from another sentence. In mathematical notation, we write

α |= β

to mean that the sentence α entails the sentence β. The formal definition of entailment is this: α |= β if and only if, in every model in which α is true, β is also true. Another way to say this is that if α is true, then β must also be true. Informally, the truth of β is “contained” in the truth of α. The relation of entailment is familiar from arithmetic; we are happy with the idea that the sentence x + y = 4 entails the sentence 4 = x + y. Obviously, in any model where x + y = 4—such as the model in which x is 2 and y is 2—it is the case that 4 = x + y. We will see shortly that a knowledge base can be considered a statement, and we often talk of a knowledge base entailing a sentence.

We can apply the same kind of analysis to the wumpus-world reasoning example given in the preceding section. Consider the situation in Figure 7.3(b): the agent has detected nothing in [1,1] and a breeze in [2,1]. These percepts, combined with the agent’s knowledge of the rules of the wumpus world (the PEAS description on page 197), constitute the KB. The

[1] The reader will no doubt have noticed the similarity between the notion of truth of sentences and the notion of satisfaction of constraints in Chapter 5. This is no accident—constraint languages are indeed logics, and constraint solving is a form of logical reasoning.

[2] Fuzzy logic, discussed in Chapter 14, allows for degrees of truth.
202 Chapter 7. Logical Agents

[Figure 7.5: Possible models for the presence of pits in squares [1,2], [2,2], and [3,1], given observations of nothing in [1,1] and a breeze in [2,1]. (a) Models of the knowledge base and α1 (no pit in [1,2]). (b) Models of the knowledge base and α2 (no pit in [2,2]).]

agent is interested (among other things) in whether the adjacent squares [1,2], [2,2], and [3,1] contain pits. Each of the three squares might or might not contain a pit, so (for the purposes of this example) there are 2³ = 8 possible models. These are shown in Figure 7.5.[3]

The KB is false in models that contradict what the agent knows—for example, the KB is false in any model in which [1,2] contains a pit, because there is no breeze in [1,1]. There are in fact just three models in which the KB is true, and these are shown as a subset of the models in Figure 7.5. Now let us consider two possible conclusions:

α1 = “There is no pit in [1,2].”
α2 = “There is no pit in [2,2].”

We have marked the models of α1 and α2 in Figures 7.5(a) and 7.5(b) respectively. By inspection, we see the following:

in every model in which KB is true, α1 is also true.

Hence, KB |= α1: there is no pit in [1,2]. We can also see that

in some models in which KB is true, α2 is false.

Hence, KB ⊭ α2: the agent cannot conclude that there is no pit in [2,2]. (Nor can it conclude that there is a pit in [2,2].)[4]

LOGICAL INFERENCE The preceding example not only illustrates entailment, but also shows how the definition of entailment can be applied to derive conclusions—that is, to carry out logical inference.
MODEL CHECKING The inference algorithm illustrated in Figure 7.5 is called model checking, because it enumerates all possible models to check that α is true in all models in which KB is true.

[3] Although the figure shows the models as partial wumpus worlds, they are really nothing more than assignments of true and false to the sentences “there is a pit in [1,2]” etc. Models, in the mathematical sense, do not need to have ’orrible ’airy wumpuses in them.

[4] The agent can calculate the probability that there is a pit in [2,2]; Chapter 13 shows how.
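Model checking for this tiny example can be sketched directly: enumerate all 2³ = 8 assignments of pit/no-pit to [1,2], [2,2], and [3,1], keep those consistent with the percepts, and test whether a query holds in every surviving model. The encoding below is our own sketch of the idea, not the book's algorithm; `kb_holds`, `entails`, and the square ordering are names we chose for illustration.

```python
# Sketch of model checking over the 8 models of Figure 7.5.
from itertools import product

SQUARES = [(1, 2), (2, 2), (3, 1)]  # the frontier squares that may hide a pit

def kb_holds(model):
    """model: dict square -> True (pit) / False (no pit).
    The KB from the percepts: no breeze in [1,1] rules out a pit in [1,2]
    (and [2,1] is already known pit-free, having been visited); the breeze
    in [2,1] requires a pit in [2,2] or [3,1] or both."""
    no_breeze_in_11 = not model[(1, 2)]
    breeze_in_21 = model[(2, 2)] or model[(3, 1)]
    return no_breeze_in_11 and breeze_in_21

def entails(alpha):
    """KB |= alpha iff alpha is true in every model in which KB is true."""
    models = [dict(zip(SQUARES, bits))
              for bits in product([False, True], repeat=len(SQUARES))]
    return all(alpha(m) for m in models if kb_holds(m))

alpha1 = lambda m: not m[(1, 2)]   # "There is no pit in [1,2]."
alpha2 = lambda m: not m[(2, 2)]   # "There is no pit in [2,2]."
print(entails(alpha1))  # True  -- KB |= alpha1
print(entails(alpha2))  # False -- KB does not entail alpha2
```

Exactly three of the eight models satisfy `kb_holds`, matching the three KB-models in Figure 7.5, and the two queries come out as the text argues: α1 is entailed, α2 is not.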
Section 7.3. Logic 203

In understanding entailment and inference, it might help to think of the set of all consequences of KB as a haystack and of α as a needle. Entailment is like the needle being in the haystack; inference is like finding it. This distinction is embodied in some formal notation: if an inference algorithm i can derive α from KB, we write

KB ⊢i α ,

which is pronounced “α is derived from KB by i” or “i derives α from KB.”

SOUND TRUTH-PRESERVING An inference algorithm that derives only entailed sentences is called sound or truth-preserving. Soundness is a highly desirable property. An unsound inference procedure essentially makes things up as it goes along—it announces the discovery of nonexistent needles. It is easy to see that model checking, when it is applicable,[5] is a sound procedure.

COMPLETENESS The property of completeness is also desirable: an inference algorithm is complete if it can derive any sentence that is entailed. For real haystacks, which are finite in extent, it seems obvious that a systematic examination can always decide whether the needle is in the haystack. For many knowledge bases, however, the haystack of consequences is infinite, and completeness becomes an important issue.[6] Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases.

We have described a reasoning process whose conclusions are guaranteed to be true in any world in which the premises are true; in particular, if KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on “syntax”—internal physical configurations such as bits in registers or patterns of electrical blips in brains—the process corresponds to the real-world relationship whereby some aspect of the real world is the case[7] by virtue of other aspects of the real world being the case.
This correspondence between world and representation is illustrated in Figure 7.6.

[Figure 7.6: Sentences are physical configurations of the agent, and reasoning is a process of constructing new physical configurations from old ones. Logical reasoning should ensure that the new configurations represent aspects of the world that actually follow from the aspects that the old configurations represent. The diagram relates the representation (sentences, linked by “entails”) to the world (aspects of the real world, linked by “follows”), with semantics connecting each sentence to the aspect of the real world it represents.]

[5] Model checking works if the space of models is finite—for example, in wumpus worlds of fixed size. For arithmetic, on the other hand, the space of models is infinite: even if we restrict ourselves to the integers, there are infinitely many pairs of values for x and y in the sentence x + y = 4.

[6] Compare with the case of infinite search spaces in Chapter 3, where depth-first search is not complete.

[7] As Wittgenstein (1922) put it in his famous Tractatus: “The world is everything that is the case.”
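Footnote 5 observes that model checking fails for arithmetic because its model space is infinite. We can still illustrate the definition of entailment for the running example x + y = 4 |= 4 = x + y by restricting x and y to a finite domain and enumerating; the sketch below checks entailment only over that restricted model space, and the particular domain is our own arbitrary choice, not something the text prescribes.

```python
# Sketch: entailment by enumeration over a finite restriction of arithmetic.
# Each model assigns values to x and y; premise |= conclusion iff the
# conclusion holds in every model in which the premise holds.

def entails_over(domain, premise, conclusion):
    """True iff conclusion is true in every (x, y) model where premise is true."""
    return all(conclusion(x, y)
               for x in domain for y in domain
               if premise(x, y))

domain = range(-10, 11)  # arbitrary finite restriction of the integers

# x + y = 4 entails 4 = x + y (trivially, the same models satisfy both) ...
print(entails_over(domain, lambda x, y: x + y == 4, lambda x, y: 4 == x + y))  # True
# ... but it does not entail, say, "x = 2": that fails in the model x=1, y=3.
print(entails_over(domain, lambda x, y: x + y == 4, lambda x, y: x == 2))      # False
```

Over all of arithmetic no such enumeration terminates, which is exactly why complete inference procedures that do not enumerate models matter.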