Similar Documents
1.
Spatial representations in the visual system were probed in 4 experiments involving A. H., a woman with a developmental deficit in localizing visual stimuli. Previous research (M. McCloskey et al., 1995) has shown that A. H.'s localization errors take the form of reflections across a central vertical or horizontal axis (e.g., a stimulus 30 degrees to her left localized to a position 30 degrees to her right). The present experiments demonstrate that A. H.'s errors vary systematically as a function of where her attention is focused, independent of how her eyes, head, or body are oriented, or what potential reference points are present in the visual field. These results suggest that the normal visual system constructs attention-referenced spatial representations, in which the focus of attention defines the origin of a spatial coordinate system. A more general implication is that some of the brain's spatial representations take the form of coordinate systems.
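
A minimal sketch of the error pattern just described, assuming the focus of attention supplies the origin of the coordinate frame (the function name and axis parameter are illustrative, not the authors' notation):

    def mislocalize(stimulus, attention_origin, axis="vertical"):
        # Reflect the stimulus across an axis through the attended origin,
        # reproducing A. H.'s pattern: 30 degrees left of the origin is
        # reported 30 degrees to its right.
        x, y = stimulus
        ox, oy = attention_origin
        if axis == "vertical":
            return (2 * ox - x, y)
        return (x, 2 * oy - y)

    print(mislocalize((-30, 0), (0, 0)))  # (30, 0): left-right reflection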

2.
The purpose of the present study was to examine Kosslyn's (1987) claim that the left hemisphere (LH) is specialized for the computation of categorical spatial representations and that the right hemisphere (RH) is specialized for the computation of coordinate spatial representations. Categorical representations involve making judgements about the relative position of the components of a visual stimulus (e.g., whether one component is above/below another). Coordinate representations involve calibrating absolute distances between the components of a visual stimulus (e.g., whether one component is within 5 mm of another). Thirty-two male and 32 female undergraduates were administered two versions of a categorical or a coordinate task over three blocks of 36 trials. Within each block, items were presented to the right visual field-left hemisphere (RVF-LH), the left visual field-right hemisphere (LVF-RH), or a centralized position. Overall, results were more supportive of Kosslyn's assertions concerning the role played by the RH in the computation of spatial representations. Specifically, subjects displayed an LVF-RH advantage when performing both versions of the coordinate task. The LVF-RH advantage on the coordinate task, however, was confined to the first block of trials. Finally, it was found that males were more likely than females to display faster reaction times (RTs) on coordinate tasks, slower RTs on categorical tasks, and an LVF-RH advantage in computing coordinate tasks.
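
The categorical/coordinate contrast used in this task is compact enough to state directly. A sketch of the two judgement types, using the abstract's own examples (the one-dimensional geometry and names are illustrative; the 5 mm criterion is from the text):

    def categorical_judgement(dot_y, bar_y):
        # Relative position: assigns the stimulus to an equivalence class.
        return "above" if dot_y > bar_y else "below"

    def coordinate_judgement(dot_y, bar_y, criterion_mm=5.0):
        # Absolute distance: is the dot within the criterion of the bar?
        return abs(dot_y - bar_y) <= criterion_mm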

3.
The purpose of this investigation was to determine if the relations among the primitives used in face identification and in basic-level object recognition are represented using coordinate or categorical relations. In 2 experiments the authors used photographs of famous people's faces as stimuli in which each face had been altered to have either 1 of its eyes moved up from its normal position or both of its eyes moved up. Participants performed either a face identification task or a basic-level object recognition task with these stimuli. In the face identification task, 1-eye-moved faces were easier to recognize than 2-eyes-moved faces, whereas the basic-level object recognition task showed the opposite pattern of results. Results suggest that face identification involves a coordinate shape representation in which the precise locations of visual primitives are specified, whereas basic-level object recognition uses categorically coded relations.

4.
The problem of representing the spatial structure of images, which arises in visual object processing, is commonly described using terminology borrowed from propositional theories of cognition, notably, the concept of compositionality. The classical propositional stance mandates representations composed of symbols, which stand for atomic or composite entities and enter into arbitrarily nested relationships. We argue that the main desiderata of a representational system—productivity and systematicity—can (indeed, for a number of reasons, should) be achieved without recourse to the classical, proposition-like compositionality. We show how this can be done, by describing a systematic and productive model of the representation of visual structure, which relies on static rather than dynamic binding and uses coarsely coded rather than atomic shape primitives.
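
Coarse coding, the alternative to atomic primitives invoked above, has a standard formalization: a value is represented by graded activity over overlapping, broadly tuned units. A sketch assuming Gaussian tuning (the tuning function and width are illustrative choices, not the authors' model):

    import numpy as np

    def coarse_code(x, centers, width=1.0):
        # Graded activation of broadly tuned units with overlapping
        # receptive fields; x is recoverable from the population pattern
        # even though no single unit represents it exactly.
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

    print(coarse_code(2.5, np.array([0.0, 2.0, 4.0])))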

5.
Open-bigram and spatial-coding schemes provide different accounts of how letter position is encoded by the brain during visual word recognition. Open-bigram coding involves an explicit representation of order based on letter pairs, while spatial coding involves a comparison function operating over representations of individual letters. We identify a set of priming conditions (subset primes and reversed interior primes) for which the two types of coding schemes give opposing predictions, hence providing the opportunity for strong scientific inference. Experimental results are consistent with the open-bigram account, and inconsistent with the spatial-coding scheme.
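
A sketch of the open-bigram side of the contrast, under common assumptions (the contiguity window and the normalization are illustrative parameter choices, not the paper's):

    from itertools import combinations

    def open_bigrams(word, max_gap=2):
        # Ordered letter pairs with at most max_gap intervening letters;
        # order is carried by the pairs themselves, not by position codes.
        return {word[i] + word[j]
                for i, j in combinations(range(len(word)), 2)
                if j - i <= max_gap + 1}

    def prime_overlap(prime, target):
        # Prime-target similarity as shared bigrams over the target's set.
        p, t = open_bigrams(prime), open_bigrams(target)
        return len(p & t) / len(t)

    # A subset prime preserves relative order, so most of its bigrams
    # survive in the target; reversing interior letters destroys them.
    print(prime_overlap("grdn", "garden"))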

6.
The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.
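
The contrast the paradigm tests can be written out directly: a correct second saccade must subtract the displacement produced by the first. A sketch with assumed 2-D vectors (names are illustrative):

    def second_saccade_plan(target_retinal, first_saccade, egocentric=True):
        # The target is flashed before the first saccade, so an egocentric
        # plan compensates for the intervening eye displacement, while a
        # purely retinocentric plan, as implied for 4-month-olds, aims at
        # the stale retinal vector.
        if egocentric:
            return tuple(t - e for t, e in zip(target_retinal, first_saccade))
        return tuple(target_retinal)

    print(second_saccade_plan((10, 0), (4, 0)))         # (6, 0): correct
    print(second_saccade_plan((10, 0), (4, 0), False))  # (10, 0): error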

7.
Kosslyn (1987) theorized that the visual system uses two types of spatial relations. Categorical spatial relations represent a range of locations as an equivalence class, whereas coordinate spatial relations represent the precise distance between two objects. Data indicate a left hemisphere (LH) advantage for processing categorical spatial relations and a right hemisphere (RH) advantage for processing coordinate spatial relations. Although generally assumed to be independent processes, this article proposes a possible connection between categorical and coordinate spatial relations. Specifically, categorical spatial relations may be an initial stage in the formation of coordinate spatial relations. Three experiments tested the hypothesis that categorical information would benefit tasks that required coordinate judgments. Experiments 1 and 2 presented categorical information before participants made coordinate judgments and coordinate information before participants made categorical judgments. Categorical information sped the processing of a coordinate task under a range of experimental variables; however, coordinate information did not benefit categorical judgments. Experiment 3 used this priming paradigm to present stimuli in the left or right visual field. Although visual field differences were present in the third experiment, categorical information did not speed the processing of a coordinate task. The lack of priming effects in Experiment 3 may have been due to methodological changes. In general, support is provided that categorical spatial relations may act as an initial step in the formation of more precise distance representations, i.e., coordinate spatial relations.

8.
Knowing a word affects the fundamental perception of the sounds within it
Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.
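
The two theory families differ in a single connection: whether word-level activation feeds back to phoneme-level units. A toy interactive-activation update illustrating the top-down account the results favor (the blending rule and weight are illustrative assumptions, not the authors' model):

    def phoneme_activation(acoustic_evidence, word_activation, feedback=0.3):
        # With feedback = 0 the flow is strictly bottom-up and the word
        # level cannot alter phoneme perception; with feedback > 0 an
        # active word node adds support to the phonemes it contains.
        return acoustic_evidence + feedback * word_activation

    print(phoneme_activation(0.4, 0.8))       # lexically supported phoneme
    print(phoneme_activation(0.4, 0.8, 0.0))  # strictly bottom-up baseline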

9.
The fast-generation model for the matching of mixed-case letter pairs (e.g., Aa, Ab) states that one or both members of a pair activate visual representations in memory of the opposite case, supporting "same" or "different" responses through crossmatching to representations of the pair members themselves. Here the reaction time and error results of three experiments using simultaneous matches support a specific variant of the model in which generation proceeds from the uppercase letter. Furthermore, a manipulation of stimulus onset asynchrony in a fourth experiment using near-simultaneous matches indicates that fast generation produces a visual representation that occurs within 67 msec of initiation and that decays within 200 msec. A fifth experiment contrasts simultaneous and successive matches and in the case of successive matches finds evidence in support of a regeneration process acting after an initial decay. Models of mixed-case matching that are based on the phonetic representation of letter names, or on abstract-letter identities, completely fail to account for the results. Fast generation is distinguishable from slow generation in that it shows fast (vs. slow) dynamics, rapid decay (vs. maintainability), no imagery (vs. imagery), and (probably) automatic (vs. controlled) processing.
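
A sketch of the supported variant, in which generation proceeds from the uppercase member and the generated code is usable only inside a brief window (the timing constants are from the abstract; the matching logic is an illustrative assumption):

    def fast_generation_match(pair, elapsed_ms):
        # The uppercase member generates a visual token of its lowercase
        # form; "same"/"different" follows from crossmatching that token
        # against the other member. The generated code arises within
        # about 67 ms and decays within about 200 ms.
        upper = next(ch for ch in pair if ch.isupper())
        other = next(ch for ch in pair if not ch.isupper())
        if not 67 <= elapsed_ms <= 200:
            return None  # generated representation unavailable
        return upper.lower() == other

    print(fast_generation_match("Aa", 100))  # True: "same"
    print(fast_generation_match("Ab", 100))  # False: "different"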

10.
The ability to read requires processing the letter identities in the word and their order, but it is by no means obvious that our long-term memory representations of words' spellings consist of only these dimensions of information. The current investigation focuses on whether we process information about another dimension—letter doubling (i.e., that there is a double letter in WEED)—independently of the identity of the letter being doubled. Two experiments that use the illusory word paradigm are reported to test this question. In both experiments, participants are more likely to misperceive a target word with only singleton letters (e.g., WED) as a word with a double letter (e.g., WEED) when the target is presented with a distractor that contains a different double letter (e.g., WOOD) than when the distractor does not contain a double letter (e.g., WORD). This pattern of results is not predicted by existing computational models of word reading but is consistent with the hypothesis that written language separately represents letter identity and letter doubling information, as previously shown in written language production. These results support a view that the orthographic representations that underlie our ability to read are internally complex and suggest that reading and writing rely on a common level of orthographic representation.
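
One way to formalize the two-dimensional representation these results support: a letter-identity sequence plus a separate doubling flag per position (the decomposition format is illustrative, not the paper's notation):

    def decompose(spelling):
        # Separate WHICH letter is doubled from THE FACT that a doubling
        # occurs: identities carry letter content, flags carry doubling.
        identities, doubled = [], []
        i = 0
        while i < len(spelling):
            twin = i + 1 < len(spelling) and spelling[i + 1] == spelling[i]
            identities.append(spelling[i])
            doubled.append(twin)
            i += 2 if twin else 1
        return identities, doubled

    print(decompose("weed"))  # (['w', 'e', 'd'], [False, True, False])
    print(decompose("wood"))  # (['w', 'o', 'd'], [False, True, False])
    # On this coding, WOOD's doubling flag can migrate to WED independently
    # of letter identity, yielding the WEED misperception reported above.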

11.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.

12.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
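
The scanning cycle the model describes (compete, attend, learn, reset) can be caricatured as a control loop. A deliberately minimal sketch of that cycle alone, with no surface filling-in, shroud dynamics, or learning equations; all names and the salience rule are illustrative:

    def artscan_cycle(objects):
        # objects: {name: {"salience": float, "views": [view codes]}}
        invariant_categories, inhibited = {}, set()
        while len(inhibited) < len(objects):
            # Surface representations compete for spatial attention;
            # the winner forms the attentional shroud.
            winner = max((o for o in objects if o not in inhibited),
                         key=lambda o: objects[o]["salience"])
            # While the shroud persists, view-specific categories are
            # learned and linked to one view-invariant category.
            invariant_categories[winner] = list(objects[winner]["views"])
            # Shroud collapse resets the active invariant category and,
            # with inhibition of return, search moves to a new object.
            inhibited.add(winner)
        return invariant_categories

    scene = {"cup": {"salience": 0.9, "views": ["cup_left", "cup_right"]},
             "pen": {"salience": 0.6, "views": ["pen_top"]}}
    print(artscan_cycle(scene))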

13.
We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.

14.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited more when processing spatial information in peripersonal (within arm's reach) than in extrapersonal (beyond arm's reach) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted of memorizing triads of objects and then verbally judging which object was: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.
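
The four judgement types form a 2 x 2 design (reference frame x relation type) and can be expressed as one function over object coordinates. A sketch assuming 2-D coordinates with the participant at the origin (all names and the geometry are illustrative):

    def judgement(objects, frame, relation, target=None):
        # objects: {name: (x, y)}; participant at (0, 0), +x to the right.
        origin = (0.0, 0.0) if frame == "egocentric" else objects[target]
        if relation == "coordinate":
            # Metric judgement: which object is closest to the reference?
            return min((o for o in objects if o != target),
                       key=lambda o: ((objects[o][0] - origin[0]) ** 2 +
                                      (objects[o][1] - origin[1]) ** 2))
        # Categorical judgement: which side of the reference is each on?
        return {o: "right" if objects[o][0] > origin[0] else "left"
                for o in objects if o != target}

    triad = {"cup": (-2.0, 3.0), "pen": (1.0, 2.0), "key": (3.0, 4.0)}
    print(judgement(triad, "egocentric", "coordinate"))          # 'pen'
    print(judgement(triad, "allocentric", "categorical", "pen"))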

15.
The representations that underlie our ability to read must encode not only the identities of the letters in a word, but also their relative positions. In recent years, many new proposals have been advanced concerning the representation of letter position in reading, but the available data do not distinguish among the competing proposals; multiple theories, each positing a different letter position representation scheme, are compatible with the evidence. In this article, we report two experiments that used the illusory word paradigm (e.g., Davis & Bowers, Journal of Experimental Psychology: Human Perception and Performance, 30: 923–941, 2004) to distinguish among alternative schemes for representing letter position in reading. The results support a scheme that uses both the beginning and the end of a word as anchoring points. This both-edges scheme has been implicated in letter position representation in spelling (Fischer-Baum, McCloskey, & Rapp, Cognition, 115: 466–490, 2010), as well as in position representation in verbal working memory (Henson, Memory & Cognition, 27: 915–927, 1999), suggesting that it may be a domain-general scheme for representing position in a sequence.
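
A sketch of the both-edges scheme the results support: each letter is coded by its offset from the word's beginning and from its end, and a letter counts as position-matched if either anchor agrees (the all-or-none match rule is an illustrative simplification):

    def both_edges(word):
        # (letter, offset from start, offset from end) for each position.
        n = len(word)
        return [(ch, i, n - 1 - i) for i, ch in enumerate(word)]

    def shared_letters(prime, target):
        starts = {(ch, s) for ch, s, _ in both_edges(target)}
        ends = {(ch, e) for ch, _, e in both_edges(target)}
        return sum((ch, s) in starts or (ch, e) in ends
                   for ch, s, e in both_edges(prime))

    # In "grdn" vs. "garden", g matches at the start anchor and n at the
    # end anchor, even though their single-edge offsets disagree elsewhere.
    print(shared_letters("grdn", "garden"))  # 2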

16.
Three experiments on visual field differences in motion perception are reported. Experiment 1 employed circular stimuli that grew or shrank either quickly or slowly. Experiments 2 and 3 employed circles that moved upward or downward either quickly or slowly. Judgments based on categorical equivalence classes (i.e., grow/shrink, upward/downward) generally yielded small and nonsignificant right visual field advantages. Judgments based on the precise coordinates of motion (i.e., quickly/slowly) yielded significant left visual field advantages across all three experiments. Results are interpreted in light of Kosslyn’s (1987) model of hemispheric differences in the processing of categorical versus coordinate spatial relations.

17.
A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation processes; and (c) orientation and size congruency effects indicate that recognition involves the adjustment of a reference frame. All 3 classes of findings can be explained by a transformational framework of recognition: Recognition is achieved by an analogue transformation of a perceptual coordinate system that aligns memory and input representations. Coordinate transformations can be implemented neurocomputationally by gain (amplitude) modulation and may be regarded as a general processing principle of the visual cortex.
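
The core claim, recognition by analogue transformation of a perceptual coordinate system, reduces geometrically to aligning the input frame with the memory frame. A plain 2-D sketch of such an alignment (the gain-modulation implementation mentioned above is not modeled; names are illustrative):

    import numpy as np

    def align(points, angle, scale, shift):
        # Rotate, scale, and translate input coordinates into the memory
        # frame. In an analogue system, larger transformations take longer,
        # one reading of the sequentially additive latencies noted above.
        c, s = np.cos(angle), np.sin(angle)
        rotation = np.array([[c, -s], [s, c]])
        return scale * points @ rotation.T + np.asarray(shift)

    shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    print(align(shape, np.pi / 2, 1.5, (2.0, 0.0)))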

18.
Detailed analyses of reading and nonlexical tasks by three patients with unilateral spatial neglect (USN) secondary to stroke indicate that the USN in each of these patients affects the left side (contralateral to brain damage) of the viewer, with respect to the viewer's head, mid-sagittal plane of the body, or line of sight. In one case, the neglect was further specified as concerning the left side of the viewer's line of sight (the left half of her residual visual field). Thus, the frame of reference of USN in these three cases appears to have viewer-centered (in at least one case, specifically retinotopic) coordinates. The performance of these patients is contrasted to that of other patients in the literature whose USN appears to have a frame of reference with stimulus-centered or object-centered coordinates. These results are interpreted within a model of visual processing (adapted from Marr, 1980 and others) with at least three coordinate frames. It is argued that USN can affect one or more of these coordinate frames independently.

19.
Copying text may seem trivial, but the task itself is psychologically complex. It involves a series of sequential visual and cognitive processes, which must be co-ordinated; these include visual encoding, mental representation and written production. To investigate the time course of word processing during copying, we recorded eye movements of adults and children as they hand-copied isolated words presented on a classroom board. Longer and lower frequency words extended adults' encoding durations, suggesting whole word encoding. Only children's encoding of short words was extended by lower frequency. Though children spent more time encoding long words compared to short words, gaze durations for long words were extended similarly for high- and low-frequency words. This suggested that for long words children used partial word representations and encoded multiple sublexical units rather than single whole words. Piecemeal word representation underpinned copying longer words in children, but reliance on partial word representations was not shown in adult readers.
