Related Articles
1.
An important task of perceptual processing is to parse incoming information into distinct units and to keep track of those units over time as the same, persisting representations. Within the study of visual perception, maintaining such persisting object representations is helped by “object files”—episodic representations that store (and update) information about objects' properties and track objects over time and motion via spatiotemporal information. Although object files are typically discussed as visual, here we demonstrate that object–file correspondence can be computed across sensory modalities. An object file can be initially formed with visual input and later accessed with corresponding auditory information, suggesting that object files may be able to operate at a multimodal level of perceptual processing.

2.
The correspondence problem is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established solely on the basis of an object’s spatiotemporal properties and that an object’s surface feature properties (such as color or shape) are not consulted in correspondence operations. In five experiments, we tested the relative contributions of spatiotemporal and surface feature properties to establishing object correspondence across brief occlusion. Correspondence operations were strongly influenced both by the consistency of an object’s spatiotemporal properties across occlusion and by the consistency of an object’s surface feature properties across occlusion. These data argue against the claim that spatiotemporal cues dominate the computation of object correspondence. Instead, the visual system consults multiple sources of relevant information to establish continuity across perceptual disruption.

3.
Seeing, hearing and touching are phenomenally different, even if we are detecting the same spatial properties with each sense. This presents a prima facie problem for intentionalism, the theory that phenomenal character supervenes on representational content. The paper reviews some attempts to resolve this problem, and then looks in detail at Peter Carruthers' recent proposal that the senses can be individuated by the way in which they represent spatial properties and incorporate time. This proposal is shown to be ineffective in distinguishing auditory from either visual or tactual perception, and substantial classes of visual and tactual perceptions are found that the posited spatial and temporal features fail to individuate.

4.
5.
The perception of distance in open fields has been widely studied with static observers. However, we and the world around us are in continuous relative motion, and our perceptual experience is shaped by the complex interactions between our senses and the perception of our self-motion. This poses interesting questions about how our nervous system integrates this multisensory information to resolve specific tasks of daily life, for example, distance estimation. This study provides new evidence about how visual and motor self-motion information affects our perception of distance, and a hypothesis about how these two sources of information can be integrated to calibrate the estimation of distance. This model accounts for the biases found when visual and proprioceptive information is inconsistent.

6.
Perception, production, and understanding of sequences is fundamental to human behavior and depends, in large part, on the ability to detect serial order. Despite the importance of this issue across many domains of human functioning, the development of serial order skills has been neglected in developmental studies. The current article reviews evidence that the basic temporal and spatiotemporal skills that are necessary for the development of serial order skills emerge early in human development. The article then presents recent evidence from the authors' laboratory showing that serial order perceptual skills emerge at the same time and improve rapidly. Consistent with a multisensory redundancy view of perception, when serial order perceptual abilities first emerge in infancy, they depend critically on the redundant specification of sequences in both the auditory and visual modalities. The findings suggest that infants' ability to perceive the surface serial order characteristics of sequentially organized events provides the necessary antecedents to the development of more complex serial order skills that ultimately enable us to extract meanings from sequentially organized events and perform complex sequential actions.
Edited by: Marie-Hélène Giard and Mark Wallace

7.
The ability to determine how many objects are involved in physical events is fundamental for reasoning about the world that surrounds us. Previous studies suggest that infants can fail to individuate objects in ambiguous occlusion events until their first birthday and that learning words for the objects may play a crucial role in the development of this ability. The present eye-tracking study tested whether the classical object individuation experiments underestimate young infants' ability to individuate objects and the role word learning plays in this process. Three groups of 6-month-old infants (N = 72) saw two opaque boxes side by side on the eye-tracker screen so that the content of the boxes was not visible. During a familiarization phase, two visually identical objects emerged sequentially from one box and two visually different objects from the other box. For one group of infants the familiarization was silent (Visual Only condition). For a second group of infants the objects were accompanied by nonsense words, so that the objects' shapes and linguistic labels indicated the same number of objects in the two boxes (Visual & Language condition). For the third group of infants, the objects' shapes and linguistic labels were in conflict (Visual vs. Language condition). Following the familiarization, it was revealed that both boxes contained the same number of objects (e.g., one or two). In the Visual Only condition, infants looked longer at the box with the incorrect number of objects at test, showing that they could individuate objects using visual cues alone. In the Visual & Language condition infants showed the same looking pattern. However, in the Visual vs. Language condition infants looked longer at the box with the incorrect number of objects according to the linguistic labels.
The results show that infants can individuate objects in a complex object individuation paradigm considerably earlier than previously thought and that linguistic cues can override visual cues in object individuation. The results are consistent with the idea that when language and visual information are in conflict, language can exert an influence on how young infants reason about the visual world.

8.
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.

9.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit for positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass) that were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

10.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in such a way that it leads to superior performances in, for example, categorization tasks; (b) its performances suggest that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts "missing" sensory representations in modalities when the input to those modalities is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.
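The integration benefit in point (a) can be illustrated with the standard normative account of cue combination. The sketch below is a simplified, hypothetical illustration (precision-weighted averaging of two unisensory estimates); it is not the Bayesian nonparametric feature model the abstract describes, and the numeric values are made up for demonstration.

```python
# Hypothetical sketch: precision-weighted fusion of two unisensory estimates
# of the same quantity. Illustrates why a bimodal estimate outperforms either
# sense alone; this is NOT the nonparametric model from the abstract.

def fuse(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates, weighting each by its reliability."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)  # auditory reliability weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_v               # fused estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)            # fused (reduced) variance
    return mu, var

mu, var = fuse(mu_a=2.0, var_a=4.0, mu_v=1.0, var_v=1.0)
# The fused variance is lower than either unisensory variance,
# so the combined estimate is more reliable than either sense alone.
assert var < min(4.0, 1.0)
```

Here the less reliable auditory estimate receives the smaller weight, so the fused estimate sits closer to the visual one while its variance drops below both unisensory variances.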

11.
Two experiments that explore the internal feature advantage (IFA) in familiar face processing are reported. The IFA involves more efficient processing of internal features for familiar faces over unfamiliar ones. Experiment 1 examined the possibility of a configural basis for this effect through use of a matching task for familiar and unfamiliar faces presented both upright and upside-down. Results revealed the predicted IFA for familiar faces when stimuli were upright, but this was removed when stimuli were inverted. Experiment 2 examined the degree of training required before the IFA was demonstrated. Latency results revealed that whilst 90–180 s of exposure was sufficient to generate an IFA of intermediate magnitude, 180–270 s of exposure was required before the IFA was equivalent to that demonstrated for a familiar face. Taken together, these results offer three conclusions: First, the IFA is reaffirmed as an objective indicator of familiarity; second, the IFA is seen to rest on configural processing; and finally, the development of the IFA with familiarity indicates a development of configural processing with familiarity. As such, insight is gained as to the type of processing changes that occur as familiarity is gradually acquired.

12.
How does perceptual learning take place early in life? Traditionally, researchers have focused on how infants make use of information within displays to organize it, but recently, increasing attention has been paid to the question of how infants perceive objects differently depending upon their recent interactions with the objects. This experiment investigates 10-month-old infants' use of brief prior experiences with objects to visually organize a display consisting of multiple geometrically shaped three-dimensional blocks created for this study. After a brief exposure to a multipart portion of the display, each infant was shown two test events, one of which preserved the unit the infant had seen and the other of which broke that unit. Overall, infants looked longer at the event that broke the unit they had seen prior to testing than the event that preserved that unit, suggesting that infants made use of the brief prior experience to (a) form a cohesive unit of the multipart portion of the display they saw prior to test and (b) segregate this unit from the rest of the test display. This suggests that infants made inferences about novel parts of the test display based on limited exposure to a subset of the test display. Like adults, infants learn features of the three-dimensional world through their experiences in it.

13.
Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized responses. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect, but still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion.

14.
The Fuld Object Memory Evaluation (FOME) has considerable utility for cognitive assessment in older adults, but there are few normative data, particularly for the oldest old. In this study, 80 octogenarians and 244 centenarians from the Georgia Centenarian Study completed the FOME. Total and trial-to-trial performance on the storage, retrieval, repeated retrieval, and ineffective reminder indices was assessed. Additional data stratified by age group, education, and cognitive impairment are provided in the Supplemental data. Octogenarians performed significantly better than centenarians on all FOME measures. Neither age group benefitted from additional learning trials beyond Trial 3 for storage and Trial 2 for retention and retrieval. Ineffective reminders showed no change across learning trials for octogenarians, while centenarians improved only between Trials 1 and 2. This minimal improvement past Trial 2 indicates that older adults might benefit from a truncated version of the test that omits Trials 3 through 5, with the added benefit of reducing testing burden in this population.

15.
The embodied, embedded, enactive, and extended approaches to cognition explicate many important details for a phenomenology of perception, and are consistent with some of the traditional phenomenological analyses. Theorists working in these areas, however, often fail to provide an account of how intersubjectivity might relate to perception. This paper suggests some ways in which intersubjectivity is important for an adequate account of perception.
Shaun Gallagher

16.
Object parts are signaled by concave discontinuities in shape contours. In seven experiments, we examined whether 5- and 6 1/2-month-olds are sensitive to concavities as special aspects of contours. Infants of both ages detected discrepant concave elements amid convex distractors but failed to discriminate convex elements among concave distractors. This discrimination asymmetry is analogous to the finding that concave targets among convex distractors pop out for adults, whereas convex targets among concave distractors do not. Thus, during infancy, as during adulthood, concavities appear to be salient regions of shape contours. The current study also found that infants' detection of concavity is impaired if the contours that define concavity and convexity are not part of closed shapes. Thus, for infants, as for adults, concavities and convexities are defined more readily in the contours of closed shapes. Taken together, the results suggest that some basic aspects of part perception from shape contours are available by at least 5 months of age.

17.
Observational learning was studied in 8-, 10-, 12-, 15- and 18-month-old infants. Using object-retrieval tasks of relatively comparable difficulty for each age group, we showed that between 10 and 12 months there is a change in the capacity to learn a new skill by observation.

18.
Phillips-Silver and Trainor (Phillips-Silver, J., & Trainor, L. J. (2005). Feeling the beat: Movement influences infants' rhythm perception. Science, 308, 1430) demonstrated an early cross-modal interaction between body movement and auditory encoding of musical rhythm in infants. Here we show that the way adults move their bodies to music influences their auditory perception of the rhythm structure. We trained adults, while they listened to an ambiguous rhythm with no accented beats, to bounce by bending their knees so as to interpret the rhythm either as a march or as a waltz. At test, adults identified as similar an auditory version of the rhythm pattern with accented strong beats that matched their previous bouncing experience, in comparison with a version whose accents did not match. In subsequent experiments we showed that this effect does not depend on visual information, but that movement of the body is critical. Parallel results from adults and infants suggest that the movement-sound interaction develops early and is fundamental to music processing throughout life.

19.
In the current study we look at whether subjective and proprioceptive aspects of self-representation are separable components subserved by distinct systems of multisensory integration. We used the rubber hand illusion (RHI) to draw the location of the ‘self’ away from the body, towards extracorporeal space (Out Condition), thereby violating top-down information about the body location. This was compared with the traditional RHI, which drew the position of the ‘self’ towards the body (In Condition). We were successfully able to draw the proprioceptive position of the limbs in and out from the body, suggesting that body perception is a purely bottom-up process, resistant to top-down effects. Conversely, we found that subjective self-representation was altered by the violation of top-down body information – as the strong association of subjective and proprioceptive factors found in the In Condition became non-significant in the Out Condition. Interestingly, we also found evidence that subjective embodiment can modulate tactile perception.

20.
Landau B, Hoffman JE, Kurz N. Cognition, 2006, 100(3): 483-510
Williams syndrome (WS) is a rare genetic disorder that results in severe visual-spatial cognitive deficits coupled with relative sparing in language, face recognition, and certain aspects of motion processing. Here, we look for evidence of sparing or impairment in another cognitive system: object recognition. Children with WS, normal mental-age-matched (MA) and chronological age-matched (CA) children, and normal adults viewed pictures of a large range of objects briefly presented under various conditions of degradation, including canonical and unusual orientations and clear or blurred contours. Objects were shown as either full-color views (Experiment 1) or line drawings (Experiment 2). Across both experiments, WS and MA children performed similarly in all conditions, while CA children performed better than both the WS and MA groups with unusual views. This advantage, however, was eliminated when images were also blurred. The error types and relative difficulty of different objects were similar across all participant groups. The results indicate selective sparing of basic mechanisms of object recognition in WS, together with developmental delay or arrest in recognition of objects from unusual viewpoints. These findings are consistent with the growing literature on brain abnormalities in WS, which points to selective impairment in the parietal areas of the brain. As a whole, the results lend further support to the growing literature on the functional separability of object recognition mechanisms from other spatial functions, and raise intriguing questions about the link between genetic deficits and cognition.
