ENCAPSULATION AND INACCESSIBILITY. Informational encapsulation and limited central accessibility are two sides of the same coin. Both features pertain to the character of information flow across computational mechanisms, albeit in opposite directions. Encapsulation involves restriction on the flow of information into a mechanism, whereas inaccessibility involves restriction on the flow of information out of it. A cognitive system is informationally encapsulated to the extent that in the course of processing a given set of inputs it cannot access information stored elsewhere; all it has to go on is the information contained in those inputs plus whatever information might be stored within the system itself.
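To make the computational claim concrete, here is a minimal sketch in Python (purely illustrative; the classes PerceptualModule and CentralSystem, and everything stored in them, are invented for this example rather than drawn from Fodor). Encapsulation shows up as the module's processing routine having no access to central beliefs; inaccessibility shows up as the central system seeing only the module's final output.

```python
class PerceptualModule:
    """Encapsulated: processing sees only its inputs plus the module's
    own proprietary store, never the central system's beliefs."""

    def __init__(self):
        # Information stored within the system itself (e.g., built-in
        # assumptions about the visual world).
        self._proprietary_store = {"light_comes_from_above": True}

    def process(self, sensory_input):
        # Restriction on information flowing IN: no central beliefs are
        # consulted here, only the input and the proprietary store.
        return f"percept({sensory_input})"


class CentralSystem:
    """Has unrestricted access to background beliefs, but only limited
    access to the module: it sees outputs, not intermediate workings."""

    def __init__(self):
        self.beliefs = {"the_two_lines_are_equal": True}

    def receive(self, module_output):
        # Restriction on information flowing OUT: only the module's final
        # output crosses the boundary into central cognition.
        return f"judgment formed from {module_output!r} plus background beliefs"


module = PerceptualModule()
central = CentralSystem()
print(central.receive(module.process("line drawing with arrowheads")))
```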
AN EXAMPLE. In the case of perception—understood as a kind of non-demonstrative (i.e., defeasible, or non-monotonic) inference from sensory ‘premises’ to perceptual ‘conclusions’—the claim that perceptual systems are informationally encapsulated is equivalent to the claim that “the data that can bear on the confirmation of perceptual hypotheses includes, in the general case, considerably less than the organism may know” (Fodor, 1983, p. 69). The classic illustration of this property comes from the study of visual illusions, which typically persist even after the viewer is explicitly informed about the character of the stimulus. In the Müller-Lyer illusion, for example, the two lines continue to look as if they were of unequal length even after one has convinced oneself otherwise, e.g., by measuring them with a ruler.
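The persistence of the illusion can be rendered in the same toy idiom (again hypothetical, not Fodor's own formalism): revising the central system's beliefs after measuring with a ruler leaves the module's perceptual ‘conclusion’ unchanged, because that belief never flows into the module.

```python
class MullerLyerModule:
    """The module's fixed, proprietary inference: arrowheads-in versus
    arrowheads-out yields unequal apparent lengths, whatever one believes."""

    def process(self, stimulus):
        return "the lines look unequal"


central_beliefs = {"lines_are_equal": False}
module = MullerLyerModule()

print(module.process("muller_lyer_figure"))  # the lines look unequal

# The viewer measures the lines with a ruler; central beliefs are revised...
central_beliefs["lines_are_equal"] = True

# ...but the percept persists: the revised belief cannot reach the module.
print(module.process("muller_lyer_figure"))  # still: the lines look unequal
```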
MANDATORINESS, SPEED, AND SUPERFICIALITY. The operation of a cognitive system is mandatory just in case it is automatic, that is, not under conscious control (cf. Bargh & Chartrand, 1999). This means that, like it or not, the system's operations are switched on by presentation of the relevant stimuli, and those operations run to completion. For example, native speakers of English cannot hear the sounds of English being spoken as mere noise: if they hear those sounds at all, they hear them as English. Likewise, it's impossible to see a 3D array of objects in space as 2D patches of color, however hard one may try (despite claims to the contrary by painters and other visual artists influenced by Impressionism).
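Mandatoriness can be sketched the same way (all names here are hypothetical): the module is switched on by any stimulus in its domain, runs to completion, and simply ignores top-down requests to suppress it.

```python
class SpeechPerceptionModule:
    """Mandatory operation: triggered by relevant stimuli, immune to
    conscious suppression, and running to completion once switched on."""

    RELEVANT_STIMULI = {"english_speech"}

    def on_stimulus(self, stimulus, try_to_hear_as_noise=False):
        if stimulus not in self.RELEVANT_STIMULI:
            return None  # outside the module's domain: nothing is triggered
        # The flag is deliberately ignored: the operation is not under
        # conscious control, so once triggered the parse runs to completion.
        for _stage in ("acoustic analysis", "phoneme parsing", "word recognition"):
            pass  # each stage runs regardless of the suppression request
        return "heard as English words"


module = SpeechPerceptionModule()
print(module.on_stimulus("english_speech", try_to_hear_as_noise=True))
# -> 'heard as English words': trying to hear mere noise changes nothing
print(module.on_stimulus("birdsong"))
# -> None: irrelevant stimuli never switch the module on
```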
‘SHALLOW’ OUTPUTS. A further feature of modular systems is that their outputs are relatively ‘shallow’. Exactly what this means is unclear. But the depth of an output seems to be a function of at least two properties: first, how much computation is required to produce it (i.e., shallow means computationally cheap); second, how constrained or specific its informational content is (i.e., shallow means informationally general) (Fodor, 1983, p. 87). These two properties are correlated, in that outputs with more specific content are typically more expensive for a system to produce, and vice versa. Some writers have interpreted shallowness to require non-conceptual character (e.g., Carruthers, 2006, p. 4). But this conflicts with Fodor's own gloss on the term, in which he suggests that the output of a plausibly modular system such as visual object recognition might be encoded at the level of ‘basic-level’ concepts, like DOG and CHAIR (Rosch et al., 1976). What's ruled out here is not concepts, then, but highly theoretical concepts like PROTON, which are too specific and too expensive to meet the shallowness criterion.
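The two properties said to determine depth can likewise be put in toy form (the numbers below are invented solely to illustrate the claimed correlation and the criterion; they are not measurements of any kind):

```python
# Hypothetical (cost, specificity) scores for candidate output levels.
# The two dimensions rise together, reflecting the claimed correlation:
# more specific content is typically more expensive to compute.
candidate_outputs = {
    "2D colour patches":          (1, 1),  # cheap and very general
    "basic-level concept DOG":    (3, 3),  # still shallow on Fodor's gloss
    "theoretical concept PROTON": (9, 9),  # too costly and too specific
}

def is_shallow(cost, specificity, cost_max=5, specificity_max=5):
    # Shallow = computationally cheap AND informationally general.
    return cost <= cost_max and specificity <= specificity_max

for label, (cost, spec) in candidate_outputs.items():
    print(f"{label}: {'shallow' if is_shallow(cost, spec) else 'deep'}")
```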