COGNITIVE SCIENCE 14, 179-211 (1990)

Finding Structure in Time

JEFFREY L. ELMAN
University of California, San Diego

Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (a temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent, while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.

I would like to thank Jay McClelland, Mike Jordan, Mary Hare, Dave Rumelhart, Mike Mozer, Steve Poteet, David Zipser, and Mark Dolson for many stimulating discussions. I thank McClelland, Jordan, and two anonymous reviewers for helpful critical comments on an earlier draft of this article. This work was supported by contract N00014-85-K-0076 from the Office of Naval Research and contract DAAB-07-87-C-H027 from Army Avionics, Ft. Monmouth. Correspondence and requests for reprints should be sent to Jeffrey L. Elman, Center for Research in Language, C-008, University of California, La Jolla, CA 92093.

INTRODUCTION

Time is clearly important in cognition. It is inextricably bound up with many behaviors (such as language) which express themselves as temporal sequences. Indeed, it is difficult to know how one might deal with such basic problems as goal-directed behavior, planning, or causation without some way of representing time.

The question of how to represent time might seem to arise as a special problem unique to parallel-processing models, if only because the parallel nature of computation appears to be at odds with the serial nature of temporal events.
However, even within traditional (serial) frameworks, the representation of serial order and the interaction of a serial input or output with higher levels of representation present challenges. For example, in models of motor activity, an important issue is whether the action plan is a literal specification of the output sequence, or whether the plan represents serial order in a more abstract manner (e.g., Fowler, 1977, 1980; Jordan & Rosenbaum, 1988; Kelso, Saltzman, & Tuller, 1986; Lashley, 1951; MacNeilage, 1970; Saltzman & Kelso, 1987). Linguistic theoreticians have perhaps tended to be less concerned with the representation and processing of the temporal aspects of utterances (assuming, for instance, that all the information in an utterance is somehow made available simultaneously in a syntactic tree); but the research in natural language parsing suggests that the problem is not trivially solved (e.g., Frazier & Fodor, 1978; Marcus, 1980). Thus, what is one of the most elementary facts about much of human activity (that it has temporal extent) is sometimes ignored and is often problematic.

In parallel distributed processing models, the processing of sequential inputs has been accomplished in several ways. The most common solution is to attempt to "parallelize time" by giving it a spatial representation. However, there are problems with this approach, and it is ultimately not a good solution. A better approach would be to represent time implicitly rather than explicitly. That is, we represent time by the effect it has on processing and not as an additional dimension of the input.

This article describes the results of pursuing this approach, with particular emphasis on problems that are relevant to natural language processing. The approach taken is rather simple, but the results are sometimes complex and unexpected. Indeed, it seems that the solution to the problem of time may interact with other problems for connectionist architectures, including the problem of symbolic representation and how connectionist representations encode structure. The current approach supports the notion outlined by Van Gelder (in press; see also Elman, 1989; Smolensky, 1987, 1988) that connectionist representations may have a functional compositionality without being syntactically compositional.

The first section briefly describes some of the problems that arise when time is represented externally as a spatial dimension. The second section describes the approach used in this work. The major portion of this article presents the results of applying this new architecture to a diverse set of problems. These problems range in complexity from a temporal version of the Exclusive-OR function to the discovery of syntactic/semantic categories in natural language data.

THE PROBLEM WITH TIME

One obvious way of dealing with patterns that have a temporal extent is to represent time explicitly by associating the serial order of the pattern with
the dimensionality of the pattern vector. The first temporal event is represented by the first element in the pattern vector, the second temporal event is represented by the second position in the pattern vector, and so on. The entire pattern vector is processed in parallel by the model. This approach has been used in a variety of models (e.g., Cottrell, Munro, & Zipser, 1987; Elman & Zipser, 1988; Hanson & Kegl, 1987).

There are several drawbacks to this approach, which basically uses a spatial metaphor for time. First, it requires that there be some interface with the world, which buffers the input, so that it can be presented all at once. It is not clear that biological systems make use of such shift registers. There are also logical problems: How should a system know when a buffer's contents should be examined?

Second, the shift register imposes a rigid limit on the duration of patterns (since the input layer must provide for the longest possible pattern), and furthermore, suggests that all input vectors be the same length. These problems are particularly troublesome in domains such as language, where one would like comparable representations for patterns that are of variable length. This is as true of the basic units of speech (phonetic segments) as it is of sentences.

Finally, and most seriously, such an approach does not easily distinguish relative temporal position from absolute temporal position. For example, consider the following two vectors.

[0 1 1 1 0 0 0 0 0]
[0 0 0 1 1 1 0 0 0]

These two vectors appear to be instances of the same basic pattern, but displaced in space (or time, if these are given a temporal interpretation). However, as the geometric interpretation of these vectors makes clear, the two patterns are in fact quite dissimilar and spatially distant.¹ PDP models can, of course, be trained to treat these two patterns as similar. But the similarity is a consequence of an external teacher and not of the similarity structure of the patterns themselves, and the desired similarity does not generalize to novel patterns. This shortcoming is serious if one is interested in patterns in which the relative temporal structure is preserved in the face of absolute temporal displacements.

What one would like is a representation of time that is richer and does not have these problems. In what follows here, a simple architecture is described, which has a number of desirable temporal properties, and has yielded interesting results.

¹ The reader may more easily be convinced of this by comparing the locations of the vectors [1 0 0], [0 1 0], and [0 0 1] in 3-space. Although these patterns might be considered "temporally displaced" versions of the same basic pattern, the vectors are very different.
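To make the geometric point concrete, the following minimal sketch (not from the original article) computes the Euclidean distance and cosine similarity between the two displaced vectors shown above. Although they are shifted copies of a single pattern, they overlap in only one position:

```python
# Minimal sketch: under a spatial (shift-register) encoding of time,
# temporally displaced copies of one pattern are geometrically dissimilar.
import numpy as np

a = np.array([0, 1, 1, 1, 0, 0, 0, 0, 0], dtype=float)  # pattern beginning at position 2
b = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)  # same pattern, shifted two steps

dist = np.linalg.norm(a - b)                             # Euclidean distance
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))    # cosine similarity
print(f"distance = {dist:.3f}, cosine = {cos:.3f}")      # distance = 2.000, cosine = 0.333
```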
NETWORKS WITH MEMORY

The spatial representation of time described above treats time as an explicit part of the input. There is another, very different possibility: Allow time to be represented by the effect it has on processing. This means giving the processing system dynamic properties that are responsive to temporal sequences. In short, the network must be given memory.

There are many ways in which this can be accomplished, and a number of interesting proposals have appeared in the literature (e.g., Jordan, 1986; Pineda, 1988; Stornetta, Hogg, & Huberman, 1987; Tank & Hopfield, 1987; Waibel, Hanazawa, Hinton, Shikano, & Lang, 1987; Watrous & Shastri, 1987; Williams & Zipser, 1988). One of the most promising was suggested by Jordan (1986). Jordan described a network (shown in Figure 1) containing recurrent connections that were used to associate a static pattern (a "Plan") with a serially ordered output pattern (a sequence of "Actions"). The recurrent connections allow the network's hidden units to see its own previous output, so that subsequent behavior can be shaped by previous responses. These recurrent connections are what give the network memory.

[Figure 1. Architecture used by Jordan (1986). Connections from output to state units are one-for-one, with a fixed weight of 1.0. Not all connections are shown.]

This approach can be modified in the following way. Suppose a network (shown in Figure 2) is augmented at the input level by additional units; call these Context Units. These units are also "hidden" in the sense that they interact exclusively with other nodes internal to the network, and not with the outside world.

Imagine that there is a sequential input to be processed, and some clock which regulates presentation of the input to the network. Processing would then consist of the following sequence of events. At time t, the input units receive the first input in the sequence. Each input might be a single scalar value or a vector, depending upon the nature of the problem. The context units are initially set to 0.5.² Both the input units and context units activate the hidden units; the hidden units then feed forward to activate the output units. The hidden units also feed back to activate the context units. This constitutes the forward activation. Depending upon the task, there may or may not be a learning phase in this time cycle. If so, the output is compared with a teacher input, and back propagation of error (Rumelhart, Hinton, & Williams, 1986) is used to adjust connection strengths incrementally. Recurrent connections are fixed at 1.0 and are not subject to adjustment.³ At the next time step, t+1, the above sequence is repeated. This time the context units contain values which are exactly the hidden unit values at time t. These context units thus provide the network with memory.

² The activation function used here bounds values between 0.0 and 1.0.

³ A little more detail is in order about the connections between the context units and hidden units. In the networks used here, there were one-for-one connections between each hidden unit and each context unit. This implies that there are an equal number of context and hidden units. The upward connections between the context units and the hidden units were fully distributed, such that each context unit activates all the hidden units.
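To make the processing sequence concrete, here is a minimal sketch (not from the original article; the layer sizes, weight initialization, and names such as W_ih are illustrative assumptions, and bias terms are omitted) of the forward activation in the architecture just described. The context units start at 0.5, feed the hidden layer through fully distributed trainable connections, and after each step receive an exact copy of the hidden state via the fixed one-for-one links:

```python
# Minimal sketch of the forward pass of the context-unit architecture.
# Training by back propagation of error is omitted; only the forward
# activation at each clock tick is shown.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 3, 4                 # sizes are arbitrary, for illustration
W_ih = rng.normal(0.0, 0.5, (n_hid, n_in))   # input -> hidden (trainable)
W_ch = rng.normal(0.0, 0.5, (n_hid, n_hid))  # context -> hidden, fully distributed (trainable)
W_ho = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output (trainable)

def sigmoid(x):
    # Activation function bounded between 0.0 and 1.0 (see footnote 2).
    return 1.0 / (1.0 + np.exp(-x))

def run_sequence(inputs):
    context = np.full(n_hid, 0.5)            # context units initially set to 0.5
    outputs = []
    for x in inputs:                         # one input per clock tick
        hidden = sigmoid(W_ih @ x + W_ch @ context)
        outputs.append(sigmoid(W_ho @ hidden))
        # Hidden -> context copy-back: one-for-one connections fixed at 1.0,
        # so at time t+1 the context holds exactly the hidden state at time t.
        context = hidden.copy()
    return outputs

seq = [rng.integers(0, 2, n_in).astype(float) for _ in range(5)]
outs = run_sequence(seq)
```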
Internal Representation of Time. In feed forward networks employing hidden units and a learning algorithm, the hidden units develop internal representations for the input patterns that recode those patterns in a way which enables the network to produce the correct output for a given input. In the present architecture, the context units remember the previous internal state. Thus, the hidden units have the task of mapping both an external