Similar Articles
20 similar articles found (search time: 0 ms)
1.
Right-handed subjects were presented with a dichotic tonal sequence whose basic pattern consisted of three 800-Hz tones followed by two 400-Hz tones on one channel and, simultaneously, three 400-Hz tones followed by two 800-Hz tones on the other. All tones were 250 msec in duration and separated by 250-msec pauses. On any given stimulus presentation, most subjects reported the sequence of pitches delivered to one ear and ignored the other. They further showed a significant tendency to report the sequence delivered to the right ear rather than to the left. However, each tone appeared to be localized in the ear receiving the higher frequency, regardless of which ear was followed for pitch and regardless of whether the higher or lower frequency was in fact perceived.
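The stimulus specification above is concrete enough to sketch in code. The following is a minimal illustration, assuming a sample rate, plain sine tones, and no amplitude envelope, none of which are stated in the abstract:

```python
import numpy as np

FS = 44100                      # sample rate in Hz; assumed, not stated
TONE_S, GAP_S = 0.250, 0.250    # 250-ms tones separated by 250-ms pauses

def tone(freq_hz, dur_s=TONE_S, fs=FS):
    """A plain sine tone; the original tone envelope is unknown."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def channel(freqs, fs=FS):
    """Concatenate tones separated by silent gaps."""
    gap = np.zeros(int(GAP_S * fs))
    parts = []
    for f in freqs:
        parts += [tone(f), gap]
    return np.concatenate(parts[:-1])  # drop the trailing gap

# Basic pattern: three 800-Hz then two 400-Hz tones on one channel,
# simultaneously three 400-Hz then two 800-Hz tones on the other.
left = channel([800, 800, 800, 400, 400])
right = channel([400, 400, 400, 800, 800])
stereo = np.stack([left, right], axis=1)  # one dichotic presentation
```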

2.
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visual and auditory) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing of unattended words against word recognition levels obtained after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attentional resources are to a certain extent shared across sensory modalities.

3.
McNorgan C, Reid J, McRae K. Cognition, 2011, (2): 211-233.
Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes, differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished either through direct connectivity or through a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models, using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.
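The contrast between the two model classes can be made concrete with a toy graph. This is my own construction for illustration, not the authors' model: in a deep hierarchy, two features from the same modality converge at an early modality-specific zone, whereas cross-modal features meet only at a later multimodal zone, so the within-modal path is shorter; a shallow hub model gives every feature pair the same path length.

```python
import networkx as nx

# Deep hierarchy: modality-specific convergence zones feed a later hub.
deep = nx.Graph([
    ("vis_feat_1", "vis_zone"), ("vis_feat_2", "vis_zone"),
    ("aud_feat_1", "aud_zone"), ("aud_feat_2", "aud_zone"),
    ("vis_zone", "hub"), ("aud_zone", "hub"),
])
# Shallow hub model: every feature connects directly to one semantic hub.
shallow = nx.Graph([(f, "hub") for f in
                    ["vis_feat_1", "vis_feat_2", "aud_feat_1", "aud_feat_2"]])

for name, g in [("deep", deep), ("shallow", shallow)]:
    within = nx.shortest_path_length(g, "vis_feat_1", "vis_feat_2")
    cross = nx.shortest_path_length(g, "vis_feat_1", "aud_feat_1")
    print(f"{name}: within-modal path = {within}, cross-modal path = {cross}")
# deep:    within-modal path = 2, cross-modal path = 4
# shallow: within-modal path = 2, cross-modal path = 2
```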

4.
Manipulating inattentional blindness within and across sensory modalities (total citations: 1; self-citations: 0; citations by others: 1)
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visual and auditory) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing of unattended words against word recognition levels obtained after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attentional resources are to a certain extent shared across sensory modalities.

5.
Lexical decision latencies to word targets presented either visually or auditorily were faster when directly preceded by a briefly presented (53-ms) pattern-masked visual prime that was the same word as the target (repetition primes), compared with different word primes. Primes that were pseudohomophones of target words did not significantly influence target processing compared with unrelated primes (Experiments 1-2) but did produce robust priming effects with slightly longer prime exposures (67 ms) in Experiment 3. Like repetition priming, these pseudohomophone priming effects did not interact with target modality. Experiments 4 and 5 replicated this general pattern of effects while introducing a different measure of prime visibility and an orthographic priming condition. Results are interpreted within the framework of a bimodal interactive activation model.
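As a concrete reading of the paradigm, the sketch below lays out one trial. The prime durations come from the abstract; the mask durations, field names, and example items are placeholders of my own, since the abstract does not specify them:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    prime: str            # repetition, pseudohomophone, orthographic, or unrelated
    prime_ms: int         # 53 ms (Experiments 1-2) or 67 ms (Experiment 3)
    target: str
    target_modality: str  # "visual" or "auditory"

def describe(trial: Trial) -> None:
    """Print the event sequence of one masked-priming trial."""
    events = [
        ("forward pattern mask", "500 ms (assumed)"),
        (f"visual prime '{trial.prime}'", f"{trial.prime_ms} ms"),
        ("backward pattern mask", "until target onset (assumed)"),
        (f"{trial.target_modality} target '{trial.target}'",
         "lexical decision: word or nonword?"),
    ]
    for event, timing in events:
        print(f"{event:<40} {timing}")

# A pseudohomophone prime ("brane") before an auditory target ("brain").
describe(Trial(prime="brane", prime_ms=67, target="brain",
               target_modality="auditory"))
```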

6.
Twenty-four kindergarten and fourth-grade children were asked to locate a display card that had been presented visually or verbally. A probe, which identified the card to be located, was presented verbally and visually equally often. The children's ability to recall the location of an item did not differ as a function of the modality in which the material was presented. Nor was recall significantly affected when the presentation modality differed from the probe modality, suggesting that children as young as 5 can cross these sensory modalities to retrieve material with no loss in accuracy. Serial position curves suggest that the verbal and visual material is not stored in a common intersensory store. The primacy effect was stronger with visually presented material, and the recency effect was stronger with auditorily presented material. Probe modality did not influence the serial position curves.

7.
This special issue on temporal processing within and across the senses was the outcome of a two-day workshop that took place in Tübingen, Germany. The aim of the workshop, and of this special issue, was to advance our knowledge of timing and the senses and to bring together two lines of research that have not yet interacted: those on synchrony perception and duration perception.

8.
The hippocampus and memory for "what," "where," and "when" (total citations: 2; self-citations: 0; citations by others: 2)
Previous studies have indicated that nonhuman animals might have a capacity for episodic-like recall reflected in memory for "what" events that happened "where" and "when". These studies did not identify the brain structures that are critical to this capacity. Here we trained rats to remember single training episodes, each composed of a series of odors presented in different places on an open field. Additional assessments examined the individual contributions of odor and spatial cues to judgments about the order of events. The results indicated that normal rats used a combination of spatial ("where") and olfactory ("what") cues to distinguish "when" events occurred. Rats with lesions of the hippocampus failed in using combinations of spatial and olfactory cues, even as evidence from probe tests and initial sampling behavior indicated spared capacities for perception of spatial and odor cues, as well as some form of memory for those individual cues. These findings indicate that rats integrate "what," "where," and "when" information in memory for single experiences, and that the hippocampus is critical to this capacity.

9.
Mareschal D, Johnson MH. Cognition, 2003, 88(3): 259-276.
Four-month-olds' memory for surface feature and location information was tested following brief occlusions. When the target objects were images of female faces or monochromatic asterisks, infants showed increased looking times following a change in identity or color but not following a change in location or combinations of feature and location information. When the target objects were images of manipulable toys, the infants showed increased looking times following a change in location but not identity or the binding of location and identity information. This evidence is consistent with the idea that young infants are unable to maintain the information processed separately in both the dorsal and ventral visual streams during occlusions. Our results suggest that it is the target's affordance for action that determines whether the dorsal or ventral information is selectively maintained during occlusion.

10.
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Penney, Gibbon, & Meck, 2000). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention, indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that (1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and (2) different factors drive individual differences when testing across modalities.
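The switch-and-accumulator account lends itself to a small simulation. This is a toy sketch of the mechanism described above, with illustrative (not fitted) parameter values; the only assumption it encodes is that auditory stimuli keep the attentional switch closed more reliably than visual ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def judged_duration(true_ms, p_switch_closed, pulse_rate=0.05):
    """Pacemaker-accumulator sketch (after Penney, Gibbon, & Meck, 2000):
    the pacemaker emits pulses at pulse_rate per ms, and a pulse is
    counted only if the attentional switch is closed at that moment."""
    pulses = rng.random(true_ms) < pulse_rate       # pacemaker output
    gate = rng.random(true_ms) < p_switch_closed    # switch state per ms
    return int((pulses & gate).sum())

# Auditory stimuli are assumed to hold the switch closed more reliably.
aud = [judged_duration(1000, p_switch_closed=0.95) for _ in range(2000)]
vis = [judged_duration(1000, p_switch_closed=0.75) for _ in range(2000)]

for label, counts in [("auditory", aud), ("visual", vis)]:
    mean, cv = np.mean(counts), np.std(counts) / np.mean(counts)
    print(f"{label}: mean pulses = {mean:.1f}, CV = {cv:.3f}")
# More pulses accumulate for sounds (judged longer), and their relative
# variability (CV) is lower (more precise timing), mirroring the effects.
```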

11.
Some studies have reported a low rate of false recognition (FR) in individuals with autism spectrum disorder (ASD) relative to non-autistic comparison participants (CPs). This finding, however, has not always been replicated, and the source of the discrepancy remains unknown. We hypothesised that poor episodic memory functions may account for this finding. We used an adapted version of the Deese–Roediger–McDermott paradigm, which presents lists of words, pictures, or word–picture pairs, to obtain measures of performance that reflect episodic [hits and false alarms (FAs)] and semantic (FR) memory functions. Results showed a decreased rate of FR in ASD individuals with lists of words, which rose above the rate seen in non-autistic CPs with lists of word–picture pairs. This increased rate of FR in ASD was accompanied by a parallel increase in hits and a decrease in FAs, which reached a similar level in the two groups. Poor episodic memory functions may prevent individuals with ASD from acquiring item information, which in turn precludes the formation of semantic links between items. This could render them less prone to FR.

12.
Four experiments examined the manner in which item identity and relative position are recovered from visual input. A successive same/different matching paradigm was designed in which each trial contained a prime and a target display. Each display contained a reference object (i.e., a “+”) and a located object (i.e., a letter, which fell to either the right or the left of the “+”). In Experiment 1, subjects carried out identity judgments on the letters. Experiment 2 examined relative position judgments; in Experiment 3, subjects had to judge both item identity and relative position information. Overall, these initial data suggested that identity and positional information are recovered via independent mechanisms that operate concurrently. This suggestion was supported by the results of Experiment 4, which in turn disconfirmed an alternative response-based account of performance.

13.
Several recent reports suggest that the behavioral and cortical specificity of face processing may be influenced by experience. To test this hypothesis, behavioral and electrophysiological data were recorded from adults in response to human and monkey faces differing in familiarity and orientation. An analysis of event-related potential and behavioral data revealed differentiation across species, familiarity, and orientation. Behavioral measures were correlated with amplitude and latency measures for each factor of interest. These analyses revealed that accuracy was positively related to the amplitude of the vertex positive potential in the human face task but not in the monkey face task. These findings suggest that previous experience with different categories of faces modulates the link between behavioral and electrophysiological measures of face processing.

14.
Training people on temporal discrimination can substantially improve performance not only in the trained modality but also in untrained modalities. A pretest–training–posttest design was used to investigate whether consolidation plays a crucial role in training effects within the trained modality and in their transfer to another modality. In the pretest, both auditory and visual discrimination performance was assessed. In the training phase, participants performed only the auditory task. After a consolidation interval of either 5 min or 24 h, participants were again tested on both the auditory and visual tasks. Irrespective of the consolidation interval, performance improved from the pretest to the posttest in both modalities. Most importantly, the training effect for the trained auditory modality was independent of the consolidation interval, whereas the transfer effect to the visual modality was larger after 24 h than after 5 min. This finding shows that transfer effects benefit from extended consolidation.

15.
16.
Does a behavioral and anatomical division exist between spatial and object working memory? In this article, we explore this question by testing human participants in simple visual working memory tasks. We compared a condition in which there was no location change with conditions in which absolute location change and absolute plus relative location change were manipulated. The results showed that object memory was influenced by memory for relative but not for absolute location information. Furthermore, we demonstrated that relative space can be specified by a salient surrounding box or by distractor objects with no touching surfaces. Verbal memory was not influenced by any type of spatial information. Taken together, these results indicate that memory for "where" influences memory for "what." We propose that there is an asymmetry in memory according to which object memory always contains location information.

17.
This study aimed to provide evidence for a Global Precedence Effect (GPE) in both the visual and auditory modalities. To parallel Navon's paradigm, a novel auditory task was designed in which hierarchical auditory stimuli were used to engage local and global processing. Participants were asked to process auditory and visual hierarchical patterns at the local or the global level. In both modalities, a global-over-local advantage and a global interference on local processing were found. A further compelling result is a significant correlation between these effects across modalities. Evidence that the same participants exhibit a similar processing style across modalities strongly supports the idea of a cognitive style for processing information and a common processing principle in perception.
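For readers unfamiliar with Navon's paradigm, a hierarchical visual stimulus is a large (global) letter built out of small (local) letters; the sketch below constructs one such figure. The layout is my own illustration, since the study's actual stimuli are not described in detail; an auditory analogue would embed local sound patterns within a longer global pattern.

```python
# A global "H" whose local elements are the letter "S": responding at the
# global level ("H") versus the local level ("S") is the core of the task.
GLOBAL_H = [
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
]

def navon_figure(template, local_letter):
    """Render a hierarchical (Navon) stimulus as text."""
    return "\n".join(row.replace("X", local_letter).replace(".", " ")
                     for row in template)

print(navon_figure(GLOBAL_H, "S"))
```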

18.
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally, as two distinct pathways in the brain, and functionally, for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

19.
The effect of semantic priming upon lexical decisions made for words in isolation (Experiment 1) and during sentence comprehension (Experiment 2) was investigated using a cross-modal lexical decision task. In Experiment 1, subjects made lexical decisions to both auditory and visual stimuli. Processing auditorily presented words facilitated subsequent lexical decisions on semantically related visual words. In Experiment 2, subjects comprehended auditorily presented sentences while simultaneously making lexical decisions for visually presented stimuli. Lexical decisions were facilitated when a visual word appeared immediately following a related word in the sentential material. Lexical decisions were also facilitated when the visual word appeared three syllables following closure of the clause containing the related material. Arguments are made for the autonomy of semantic priming during sentence comprehension.

20.
Three experiments examined nonspatial extinction in G.K., a patient with bilateral parietal damage. Experiment 1 demonstrated nonspatial extinction (poor detection of a weak relative to a stronger perceptual group), even when the stronger group was less complex than the weaker group. Experiment 2 showed improved report of a letter falling at the location of the stronger group, but explicit judgments of the location of the letter were at chance. Experiment 3 replicated the object-cuing benefit, though G.K. could not discriminate whether a letter fell at the same location as the stronger perceptual group. The data indicate a coupling between object- and space-based attention, such that spatial attention is drawn to the location occupied by the winner of the object-based competition for selection. In this case, what cues where. This coupling operates implicitly, even when explicit location judgments are impaired.

