7 LOGICAL AGENTS

In which we design agents that can form representations of the world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do.

This chapter introduces knowledge-based agents. The concepts that we discuss—the representation of knowledge and the reasoning processes that bring knowledge to life—are central to the entire field of artificial intelligence.

Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would be very hard to achieve otherwise. We have seen that knowledge of action outcomes enables problem-solving agents to perform well in complex environments. A reflex agent could only find its way from Arad to Bucharest by dumb luck. The knowledge of problem-solving agents is, however, very specific and inflexible. A chess program can calculate the legal moves of its king, but does not know in any useful sense that no piece can be on two different squares at the same time. Knowledge-based agents can benefit from knowledge expressed in very general forms, combining and recombining information to suit myriad purposes. Often, this process can be quite far removed from the needs of the moment—as when a mathematician proves a theorem or an astronomer calculates the earth's life expectancy.

Knowledge and reasoning also play a crucial role in dealing with partially observable environments. A knowledge-based agent can combine general knowledge with current percepts to infer hidden aspects of the current state prior to selecting actions. For example, a physician diagnoses a patient—that is, infers a disease state that is not directly observable—prior to choosing a treatment. Some of the knowledge that the physician uses is in the form of rules learned from textbooks and teachers, and some is in the form of patterns of association that the physician may not be able to consciously describe. If it's inside the physician's head, it counts as knowledge.

Understanding natural language also requires inferring hidden state, namely, the intention of the speaker. When we hear, "John saw the diamond through the window and coveted it," we know "it" refers to the diamond and not the window—we reason, perhaps unconsciously, with our knowledge of relative value. Similarly, when we hear, "John threw the brick through the window and broke it," we know "it" refers to the window. Reasoning allows
us to cope with the virtually infinite variety of utterances using a finite store of commonsense knowledge. Problem-solving agents have difficulty with this kind of ambiguity because their representation of contingency problems is inherently exponential.

Our final reason for studying knowledge-based agents is their flexibility. They are able to accept new tasks in the form of explicitly described goals, they can achieve competence quickly by being told or learning new knowledge about the environment, and they can adapt to changes in the environment by updating the relevant knowledge.

We begin in Section 7.1 with the overall agent design. Section 7.2 introduces a simple new environment, the wumpus world, and illustrates the operation of a knowledge-based agent without going into any technical detail. Then, in Section 7.3, we explain the general principles of logic. Logic will be the primary vehicle for representing knowledge throughout Part III of the book. The knowledge of logical agents is always definite—each proposition is either true or false in the world, although the agent may be agnostic about some propositions. Logic has the pedagogical advantage of being a simple example of a representation for knowledge-based agents, but logic has some severe limitations. Clearly, a large portion of the reasoning carried out by humans and other agents in partially observable environments depends on handling knowledge that is uncertain. Logic cannot represent this uncertainty well, so in Part V we cover probability, which can. In Part VI and Part VII we cover many representations, including some based on continuous mathematics such as mixtures of Gaussians, neural networks, and other representations.

Section 7.4 of this chapter defines a simple logic called propositional logic. While much less expressive than first-order logic (Chapter 8), propositional logic serves to illustrate all the basic concepts of logic. There is also a well-developed technology for reasoning in propositional logic, which we describe in Sections 7.5 and 7.6. Finally, Section 7.7 combines the concept of logical agents with the technology of propositional logic to build some simple agents for the wumpus world. Certain shortcomings in propositional logic are identified, motivating the development of more powerful logics in subsequent chapters.

7.1 KNOWLEDGE-BASED AGENTS

The central component of a knowledge-based agent is its knowledge base, or KB. Informally, a knowledge base is a set of sentences. (Here "sentence" is used as a technical term. It is related but is not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.

There must be a way to add new sentences to the knowledge base and a way to query what is known. The standard names for these tasks are TELL and ASK, respectively. Both tasks may involve inference—that is, deriving new sentences from old. In logical agents, which are the main subject of study in this chapter, inference must obey the fundamental requirement that when one ASKs a question of the knowledge base, the answer should follow from what has been told (or rather, TELLed) to the knowledge base previously.
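As a first concrete picture of this interface, the sketch below renders TELL and ASK in Python. Everything in it is an assumption of the sketch rather than a definition from the text: sentences are plain strings, and ask merely looks a query up instead of performing inference, which Sections 7.5 and 7.6 develop properly.

    # A minimal sketch of the TELL/ASK interface. Sentences are plain
    # strings here; a real knowledge representation language comes later.
    class KB:
        def __init__(self):
            self.sentences = set()   # a knowledge base is a set of sentences

        def tell(self, sentence):
            # TELL: add a new sentence to the knowledge base.
            self.sentences.add(sentence)

        def ask(self, query):
            # ASK: return an answer that follows from the knowledge base.
            # As a stand-in for inference, this only looks the query up
            # literally; real ASK may derive new sentences from old ones.
            return query if query in self.sentences else None

    kb = KB()
    kb.tell("There is a pit in [3,1]")
    print(kb.ask("There is a pit in [3,1]"))   # the sentence itself
    print(kb.ask("There is a pit in [2,2]"))   # None: nothing follows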
Later in the chapter, we will be more precise about the crucial word "follow." For now, take it to mean that the inference process should not just make things up as it goes along.

    function KB-AGENT(percept) returns an action
        static: KB, a knowledge base
                t, a counter, initially 0, indicating time

        TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
        action ← ASK(KB, MAKE-ACTION-QUERY(t))
        TELL(KB, MAKE-ACTION-SENTENCE(action, t))
        t ← t + 1
        return action

Figure 7.1 A generic knowledge-based agent.

Figure 7.1 shows the outline of a knowledge-based agent program. Like all our agents, it takes a percept as input and returns an action. The agent maintains a knowledge base, KB, which may initially contain some background knowledge. Each time the agent program is called, it does two things. First, it TELLs the knowledge base what it perceives. Second, it ASKs the knowledge base what action it should perform. In the process of answering this query, extensive reasoning may be done about the current state of the world, about the outcomes of possible action sequences, and so on. Once the action is chosen, the agent records its choice with TELL and executes the action. The second TELL is necessary to let the knowledge base know that the hypothetical action has actually been executed.

The details of the representation language are hidden inside two functions that implement the interface between the sensors and actuators and the core representation and reasoning system. MAKE-PERCEPT-SENTENCE takes a percept and a time and returns a sentence asserting that the agent perceived the percept at the given time. MAKE-ACTION-QUERY takes a time as input and returns a sentence that asks what action should be performed at that time. The details of the inference mechanisms are hidden inside TELL and ASK. Later sections will reveal these details.

The agent in Figure 7.1 appears quite similar to the agents with internal state described in Chapter 2. Because of the definitions of TELL and ASK, however, the knowledge-based agent is not an arbitrary program for calculating actions. It is amenable to a description at the knowledge level, where we need specify only what the agent knows and what its goals are, in order to fix its behavior. For example, an automated taxi might have the goal of delivering a passenger to Marin County and might know that it is in San Francisco and that the Golden Gate Bridge is the only link between the two locations. Then we can expect it to cross the Golden Gate Bridge because it knows that that will achieve its goal. Notice that this analysis is independent of how the taxi works at the implementation level. It doesn't matter whether its geographical knowledge is implemented as linked lists or pixel maps, or whether it reasons by manipulating strings of symbols stored in registers or by propagating noisy signals in a network of neurons.
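For readers who want something runnable, the pseudocode of Figure 7.1 translates almost line for line into Python. The rendering below is a sketch under stated assumptions: the kb object follows the toy tell/ask interface sketched above, and the three sentence-building helpers are hypothetical stand-ins that simply format strings, since no representation language has been fixed yet. With the lookup-based ask, the returned action will usually be None; the point is the shape of the perceive, ask, record cycle, not the inference.

    def make_percept_sentence(percept, t):
        # A sentence asserting that the agent perceived `percept` at time t.
        return f"Percept({percept}, {t})"

    def make_action_query(t):
        # A sentence asking what action should be performed at time t.
        return f"ActionQuery({t})"

    def make_action_sentence(action, t):
        # A sentence recording that the chosen action was executed at time t.
        return f"Executed({action}, {t})"

    class KBAgent:
        # A direct rendering of Figure 7.1, assuming kb provides tell/ask.
        def __init__(self, kb):
            self.kb = kb   # may initially contain some background knowledge
            self.t = 0     # a counter, initially 0, indicating time

        def __call__(self, percept):
            self.kb.tell(make_percept_sentence(percept, self.t))
            action = self.kb.ask(make_action_query(self.t))
            self.kb.tell(make_action_sentence(action, self.t))
            self.t += 1
            return action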
As we mentioned in the introduction to the chapter, one can build a knowledge-based agent simply by TELLing it what it needs to know. The agent's initial program, before it starts to receive percepts, is built by adding one by one the sentences that represent the designer's knowledge of the environment. Designing the representation language to make it easy to express this knowledge in the form of sentences simplifies the construction problem enormously. This is called the declarative approach to system building. In contrast, the procedural approach encodes desired behaviors directly as program code; minimizing the role of explicit representation and reasoning can result in a much more efficient system. We will see agents of both kinds in Section 7.7. In the 1970s and 1980s, advocates of the two approaches engaged in heated debates. We now understand that a successful agent must combine both declarative and procedural elements in its design.

In addition to TELLing it what it needs to know, we can provide a knowledge-based agent with mechanisms that allow it to learn for itself. These mechanisms, which are discussed in Chapter 18, create general knowledge about the environment out of a series of percepts. This knowledge can be incorporated into the agent's knowledge base and used for decision making. In this way, the agent can be fully autonomous.

All these capabilities—representation, reasoning, and learning—rest on the centuries-long development of the theory and technology of logic. Before explaining that theory and technology, however, we will create a simple world with which to illustrate them.

7.2 THE WUMPUS WORLD

The wumpus world is a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the wumpus, a beast that eats anyone who enters its room. The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms contain bottomless pits that will trap anyone who wanders into these rooms (except for the wumpus, which is too big to fall in). The only mitigating feature of living in this environment is the possibility of finding a heap of gold. Although the wumpus world is rather tame by modern computer game standards, it makes an excellent testbed environment for intelligent agents. Michael Genesereth was the first to suggest this.

A sample wumpus world is shown in Figure 7.2. The precise definition of the task environment is given, as suggested in Chapter 2, by the PEAS description:

♦ Performance measure: +1000 for picking up the gold, –1000 for falling into a pit or being eaten by the wumpus, –1 for each action taken and –10 for using up the arrow.

♦ Environment: A 4 × 4 grid of rooms. The agent always starts in the square labeled [1,1], facing to the right. The locations of the gold and the wumpus are chosen randomly, with a uniform distribution, from the squares other than the start square. In addition, each square other than the start can be a pit, with probability 0.2.

♦ Actuators: The agent can move forward, turn left by 90°, or turn right by 90°. The agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.) Moving forward has no
effect if there is a wall in front of the agent. The action Grab can be used to pick up an object that is in the same square as the agent. The action Shoot can be used to fire an arrow in a straight line in the direction the agent is facing. The arrow continues until it either hits (and hence kills) the wumpus or hits a wall. The agent only has one arrow, so only the first Shoot action has any effect.

♦ Sensors: The agent has five sensors, each of which gives a single bit of information:

– In the square containing the wumpus and in the directly (not diagonally) adjacent squares the agent will perceive a stench.
– In the squares directly adjacent to a pit, the agent will perceive a breeze.
– In the square where the gold is, the agent will perceive a glitter.
– When an agent walks into a wall, it will perceive a bump.
– When the wumpus is killed, it emits a woeful scream that can be perceived anywhere in the cave.

The percepts will be given to the agent in the form of a list of five symbols; for example, if there is a stench and a breeze, but no glitter, bump, or scream, the agent will receive the percept [Stench, Breeze, None, None, None].

Exercise 7.1 asks you to define the wumpus environment along the various dimensions given in Chapter 2. The principal difficulty for the agent is its initial ignorance of the configuration of the environment; overcoming this ignorance seems to require logical reasoning. In most instances of the wumpus world, it is possible for the agent to retrieve the gold safely. Occasionally, the agent must choose between going home empty-handed and risking death to find the gold. About 21% of the environments are utterly unfair, because the gold is in a pit or surrounded by pits.

Let us watch a knowledge-based wumpus agent exploring the environment shown in Figure 7.2. The agent's initial knowledge base contains the rules of the environment, as listed

Figure 7.2 A typical wumpus world. The agent is in the bottom left corner.
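The PEAS description above maps directly onto a small simulator. The following sketch is a Python illustration, not code from the text; names such as WumpusWorld, adjacent, and percept are assumptions of this sketch. It generates a random 4 × 4 world as described under Environment and computes the five-symbol percept for a given square as described under Sensors.

    import random

    SIZE = 4
    START = (1, 1)   # squares are numbered [column, row] from 1 to 4

    def adjacent(square):
        # Directly (not diagonally) adjacent squares, clipped to the grid.
        x, y = square
        return [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 1 <= x + dx <= SIZE and 1 <= y + dy <= SIZE]

    class WumpusWorld:
        def __init__(self, pit_prob=0.2, rng=random):
            squares = [(x, y) for x in range(1, SIZE + 1)
                              for y in range(1, SIZE + 1)]
            non_start = [s for s in squares if s != START]
            # Gold and wumpus locations are chosen uniformly at random
            # from the squares other than the start square.
            self.wumpus = rng.choice(non_start)
            self.gold = rng.choice(non_start)
            # Each square other than the start is a pit with probability 0.2.
            self.pits = {s for s in non_start if rng.random() < pit_prob}

        def percept(self, square, bumped=False, scream=False):
            # The five-symbol percept: [Stench, Breeze, Glitter, Bump, Scream].
            stench = square == self.wumpus or self.wumpus in adjacent(square)
            breeze = any(p in adjacent(square) for p in self.pits)
            glitter = square == self.gold
            return ["Stench" if stench else "None",
                    "Breeze" if breeze else "None",
                    "Glitter" if glitter else "None",
                    "Bump" if bumped else "None",
                    "Scream" if scream else "None"]

    world = WumpusWorld()
    print(world.percept(START))   # e.g. ['None', 'Breeze', 'None', 'None', 'None']

Note that the gold and wumpus placements are independent of the pits, so the gold can land in a pit; this is one source of the roughly 21% of unfair environments mentioned above.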