Similar Documents (20 results)
1.
Recent work has found support for two dissociable and parallel neural subsystems underlying object and shape recognition in the visual domain: an abstract-category subsystem that operates more effectively in the left cerebral hemisphere than in the right, and a specific-exemplar subsystem that operates more effectively in the right hemisphere than in the left. Evidence of this asymmetry has been observed for linguistic stimuli (words, pseudoword forms) and nonlinguistic stimuli (objects). In the auditory domain, we previously found hemispheric asymmetries in priming effects using linguistic stimuli (spoken words). In the present study, we conducted four long-term repetition-priming experiments to investigate whether such hemispheric asymmetries would be observed for nonlinguistic auditory stimuli (environmental sounds) as well. The results support the dissociable-subsystems theory. Specificity effects were obtained when sounds were presented to the left ear (right hemisphere), but not when sounds were presented to the right ear (left hemisphere). Theoretical implications are discussed.

2.
To study how perceptual asymmetries in the recognition of emotion reflect developmental changes in processing affective information, a fused rhyming dichotic word test with positive, negative, and neutral stimuli was administered to adults and children. Results suggested that the hemisphere in which affective information is initially processed affects the strength of perceptual asymmetry and that children's perceptual processing of emotional information is constrained by limited computational resources. Another experiment ruled out effects of volitional shifting of attention to emotional stimuli. These data further confirm that emotional processing involves integration of neural systems across brain regions, including distributed systems that support arousal and recognition. General developmental factors, such as processing capacity, contribute to the coordination of multiple systems responsible for processing emotional information.

3.
In the visual domain, Marsolek and colleagues (1999, 2008) have found support for two dissociable and parallel neural subsystems underlying object and shape recognition: an abstract-category subsystem that operates more effectively in the left cerebral hemisphere (LH), and a specific-exemplar subsystem that operates more effectively in the right cerebral hemisphere (RH). Evidence of this asymmetry has been observed in priming specificity for linguistic (words, pseudoword forms) and nonlinguistic (objects) stimuli. In the auditory domain, the authors previously found hemispheric asymmetries in priming effects for linguistic (spoken words) and nonlinguistic (environmental sounds) stimuli. In the present study, the same asymmetrical pattern was observed in talker identification by means of two long-term repetition-priming experiments. Both experiments consisted of a familiarization phase and a final talker identification test phase, using sentences as stimuli. The results showed that specificity effects (an advantage for same-sentence priming, relative to different-sentence priming) emerged when the target stimuli were presented to the left ear (RH), but not when the target stimuli were presented to the right ear (LH). Taken together, this consistent asymmetrical pattern of data from both domains, visual and auditory, may be indicative of a more general property of the human perceptual processing system. Theoretical implications are discussed.

4.
Contrasting linguistic and nonlinguistic processing has been of interest to many researchers with different scientific, theoretical, or clinical questions. However, previous work on this type of comparative analysis and experimentation has been limited. In particular, little is known about the differences and similarities between the perceptual, cognitive, and neural processing of nonverbal environmental sounds and that of speech sounds. With the aim of contrasting verbal and nonverbal processing in the auditory modality, we developed a new on-line measure that can be administered to subjects from different clinical, neurological, or sociocultural groups. This is an on-line task of sound to picture matching, in which the sounds are either environmental sounds or their linguistic equivalents and which is controlled for potential task and item confounds across the two sound types. Here, we describe the design and development of our measure and report norming data for healthy subjects from two different adult age groups: younger adults (18–24 years of age) and older adults (54–78 years of age). We also outline other populations to which the test has been or is being administered. In addition to the results reported here, the test can be useful to other researchers who are interested in systematically contrasting verbal and nonverbal auditory processing in other populations.

5.
Language and concepts are intimately linked, but how do they interact? In the study reported here, we probed the relation between conceptual and linguistic processing at the earliest processing stages. We presented observers with sequences of visual scenes lasting 200 or 250 ms per picture. Results showed that observers understood and remembered the scenes' abstract gist and, therefore, their conceptual meaning. However, observers remembered the scenes at least as well when they simultaneously performed a linguistic secondary task (i.e., reading and retaining sentences); in contrast, a nonlinguistic secondary task (equated for difficulty with the linguistic task) impaired scene recognition. Further, encoding scenes interfered with performance on the nonlinguistic task and vice versa, but scene processing and performing the linguistic task did not affect each other. At the earliest stages of conceptual processing, the extraction of meaning from visually presented linguistic stimuli and the extraction of conceptual information from the world take place in remarkably independent channels.

6.
We propose that much of the variance among right-handed subjects in perceptual asymmetries on standard behavioral measures of laterality arises from individual differences in characteristic patterns of asymmetric hemispheric arousal. Dextrals with large right-visual-field (RVF) advantages on a tachistoscopic syllable-identification task (assumed to reflect characteristically higher left-hemisphere than right-hemisphere arousal) outperformed those having weak or no visual-field asymmetries (assumed to reflect characteristically higher right-hemisphere than left-hemisphere arousal). The two groups were equal, however, in asymmetries of error patterns that are thought to indicate linguistic or nonlinguistic encoding strategies. For both groups, relations between visual fields in the ability to discriminate the accuracy of performance followed the pattern of syllable identification itself, suggesting that linguistic and metalinguistic processes are based on the same laterally specialized functions. Subjects with strong RVF advantages had a pessimistic bias for rating performance, and those with weak or no asymmetries had an optimistic bias, particularly for the left visual field (LVF). This is concordant with evidence that the arousal level of the right hemisphere is closely related to affective mood. Finally, consistent with the arousal model, leftward asymmetries on a free-vision face-processing task became larger as RVF advantages on the syllable task diminished and as optimistic biases for the LVF, relative to the RVF, increased.

7.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.

8.
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental representations independent of the acoustic input. In a hierarchical sorting task, we found that evaluation of nonliving sounds is consistently biased toward a focus on acoustical information. However, the evaluation of living sounds focuses spontaneously on sound-independent semantic information, but can rely on acoustical information after exposure to a context consisting of nonliving sounds. We interpret these results as support for a robust iconic processing strategy for nonliving sounds and a flexible symbolic processing strategy for living sounds.

9.
Strong cross-modal interactions exist between visual and auditory processing. The relative contributions of perceptual versus decision-related processes to such interactions are only beginning to be understood. We used methodological and statistical approaches to control for potential decision-related contributions such as response interference, decisional criterion shift, and strategy selection. Participants were presented with rising-, falling-, and constant-amplitude sounds and were asked to detect change (increase or decrease) in sound amplitude while ignoring an irrelevant visual cue of a disk that grew, shrank, or stayed constant in size. Across two experiments, testing context was manipulated by varying the grouping of visual cues during testing, and cross-modal congruency showed independent perceptual and decision-related effects. Whereas a change in testing context greatly affected criterion shifts, cross-modal effects on perceptual sensitivity remained relatively consistent. In general, participants were more sensitive to increases in sound amplitude and less sensitive to sounds paired with dynamic visual cues. As compared with incongruent visual cues, congruent cues enhanced detection of amplitude decreases, but not increases. These findings suggest that the relative contributions of perceptual and decisional processing and the impacts of these processes on cross-modal interactions can vary significantly depending on asymmetries in within-modal processing, as well as consistencies in cross-modal dynamics.
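The separation the abstract draws between perceptual sensitivity and decisional criterion is the standard signal-detection-theory distinction. As a minimal sketch of how these two quantities are computed from hit and false-alarm rates (the rates below are invented illustration values, not data from the study):

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Perceptual sensitivity: d' = z(H) - z(FA)."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Decisional criterion: c = -(z(H) + z(FA)) / 2."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2

# Hypothetical example: detecting an amplitude increase paired with
# a congruent vs. an incongruent visual cue.
print(f"congruent:   d' = {dprime(0.85, 0.20):.2f}, c = {criterion(0.85, 0.20):.2f}")
print(f"incongruent: d' = {dprime(0.70, 0.20):.2f}, c = {criterion(0.70, 0.20):.2f}")
```

A shift in testing context that moves only `c` while leaving `d'` unchanged is the signature pattern described in the abstract: a decision-related effect without a change in perceptual sensitivity.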

10.
Guo Xiuyan, Psychological Science (《心理科学》), 2002, 25(5): 535-537, 534
This study aimed to explore the relative contributions of consciousness and unconsciousness to memory, using two independent variables: age and material. Age was divided into a young/middle-aged group and an elderly group, with 23 participants in each; materials were verbal and nonverbal. The conscious and unconscious contributions were estimated with the inclusion and exclusion tests of the process-dissociation procedure (PDP). Results showed that (1) the contribution of consciousness differed highly significantly across ages and materials; (2) the contribution of unconsciousness to memory for verbal materials differed highly significantly; and (3) the contribution of unconsciousness to memory for nonverbal materials, and across ages, did not differ significantly. It is inferred that the unconscious contribution to memory in the elderly has not declined, which seems to indicate that implicit memory in the elderly does not age.
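The inclusion/exclusion estimation mentioned in the abstract can be sketched with the standard process-dissociation equations (the probabilities below are made-up example values, not data from the study):

```python
# Process-dissociation procedure (PDP) estimation sketch.
#   Inclusion test:  P(inclusion) = C + U * (1 - C)
#   Exclusion test:  P(exclusion) = U * (1 - C)
# Solving these gives the estimates computed below.

def pdp_estimates(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Estimate conscious (C) and unconscious (U) contributions to memory."""
    c = p_inclusion - p_exclusion          # C = P(inc) - P(exc)
    if c >= 1.0:
        raise ValueError("conscious estimate at ceiling; U is undefined")
    u = p_exclusion / (1.0 - c)            # U = P(exc) / (1 - C)
    return c, u

# Hypothetical group means, e.g. elderly participants on verbal material.
c, u = pdp_estimates(p_inclusion=0.80, p_exclusion=0.30)
print(f"conscious C = {c:.2f}, unconscious U = {u:.2f}")  # C = 0.50, U = 0.60
```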

11.
The N400, an ERP component specific to semantic violations, has an amplitude that reflects the difficulty of semantic integration, making it an important index for studying the construction of meaning in discourse context. Discourse context can be divided into linguistic context and nonlinguistic context. A series of earlier discourse-level N400 studies showed that, from a semantic perspective, linguistic context constrains and coordinates sentence comprehension through mechanisms involving lexical-semantic priming, scenario appropriateness, textual content information, and causal inference; nonlinguistic context constrains and supplements sentence comprehension, but its mechanism remains unclear. Semantic information from these two different sources is processed simultaneously during real-time comprehension.

12.
Left-hemisphere (LH) superiority for speech perception is a fundamental neurocognitive aspect of language, and is particularly strong for consonant perception. Two key theoretical aspects of the LH advantage for consonants remain controversial, however: the processing mode (auditory vs. linguistic) and the developmental basis of the specialization (innate vs. experience dependent). Click consonants offer a unique opportunity to evaluate these theoretical issues. Brief and spectrally complex, oral clicks exemplify the acoustic properties that have been proposed for an auditorily based LH specialization, yet they retain linguistic significance only for listeners whose languages employ them as consonants (e.g., Zulu). Speakers of other languages (e.g., English) perceive these clicks as nonspeech sounds. We assessed Zulu versus English listeners' hemispheric asymmetries for clicks, in and out of syllable context, in a dichotic-listening task. Performance was good for both groups, but only Zulus showed an LH advantage. Thus, linguistic processing and experience both appear to be crucial.

13.
One (unitary) school of thought views all symbolic competences as closely related, while a rival (pluralistic) approach underscores the relative differences among modes of symbolic processing. To secure information on the plausibility of these competing hypotheses, matched groups of left- and right-hemisphere patients were given a visual symbol-recognition test. Subjects were required to choose the correctly depicted symbol among a set of four. The results challenge a strong version of the "unitary" hypothesis. What emerges instead is a view of symbol systems as a continuum: relatively linguistic symbol systems prove challenging for left-hemisphere patients, relatively nonlinguistic systems pose comparable difficulties for right-hemisphere patients. Contrary to hypothesis, the processing of numerical symbols poses special difficulty for right-hemisphere patients. Performance on trademarks (items which can be processed by linguistic or nonlinguistic strategies) suggests that organic patients with contrasting pathologies may adopt different processing strategies when confronting identical physical stimuli.

14.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.


16.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

17.
Why linguistic input facilitates nonlinguistic categorization is frequently explained in terms of children's attention to uniquely linguistic forms such as words. However, whether this facilitation is rooted in the children's attention to word forms very early in lexical learning has not been examined directly. A previous experiment (Roberts & Cuff, 1989) provided a set of conditions under which 15-month-olds did not successfully categorize in the absence of linguistic input. The two experiments reported here replicate Roberts and Cuff (1989) exactly, with the exception that either language (Experiment 1) or instrumental music (Experiment 2) was provided as accompanying input. Infants in both experiments successfully categorized and significantly increased attention during habituation. Although directly documenting the influence of language on categorization prior to the "vocabulary explosion," this influence does not appear attributable to the presence of word forms. Instead, factors common to language and music (e.g., attention-getting properties or factors influencing attention) may facilitate nonlinguistic categorization at the beginnings of word learning.

18.
In this article we evaluate current models of language processing by testing speeded classification of stimuli comprising one linguistic and one nonlinguistic dimension. Garner interference obtains if subjects are slower to classify attributes on one dimension when an irrelevant dimension is varied orthogonally than when the irrelevant dimension is held constant. With certain linguistic-nonlinguistic pairings (e.g., Experiment 1: the words high and low spoken either loudly or softly), significant Garner interference obtained when either dimension was classified; this indicated two-directional crosstalk. With other pairings (e.g., Experiment 3: spoken vowels and loudness), only the nonlinguistic dimension (e.g., loudness) displayed interference, suggesting unidirectional crosstalk downstream from a phonemic/graphemic level of analysis. Collectively, these results indicate the interaction can occur either within or across levels of information processing, being directed toward either more advanced or more primitive processes. Although poorly explained by all current models of language processing, our results are strikingly inconsistent with models that posit autonomy among levels of processing.
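The Garner-interference measure defined in the abstract reduces to a simple reaction-time difference: mean RT when the irrelevant dimension varies orthogonally minus mean RT when it is held constant. A minimal sketch, with invented RT values rather than data from the study:

```python
from statistics import mean

def garner_interference(rt_orthogonal: list[float], rt_baseline: list[float]) -> float:
    """Garner interference (ms): RT cost of orthogonal variation on
    an irrelevant dimension, relative to the constant baseline."""
    return mean(rt_orthogonal) - mean(rt_baseline)

# Hypothetical RTs (ms) for classifying the word (high/low) while the
# irrelevant loudness dimension is held constant vs. varied orthogonally.
baseline   = [510, 495, 520, 505]
orthogonal = [560, 545, 570, 555]
print(f"Garner interference = {garner_interference(orthogonal, baseline):.1f} ms")  # 50.0 ms
```

A positive value on either dimension of a pairing indicates crosstalk from the irrelevant dimension; interference in both directions is the "two-directional crosstalk" pattern the abstract describes.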

19.
Most theories of reference assume that a referent's saliency in the linguistic context determines the choice of referring expression. However, it is less clear whether cognitive factors relating to the nonlinguistic context also have an effect. We investigated whether visual context influences the choice of a pronoun over a repeated noun phrase when speakers refer back to a referent in a preceding sentence. In Experiment 1, linguistic mention as well as visual presence of a competitor with the same gender as the referent resulted in fewer pronouns for the referent, suggesting that both linguistic and visual context determined the choice of referring expression. Experiment 2 showed that even when the competitor had a different gender from the referent, its visual presence reduced pronoun use, indicating that visual context plays a role even if the use of a pronoun is unambiguous. Thus, both linguistic and nonlinguistic information affect the choice of referring expression.

20.
This study aimed at examining sensitivity to lateral linguistic and nonlinguistic information in third and fifth grade readers. A word identification task with a threshold was used, and targets were displayed foveally with or without distractors. Sensitivity to lateral information was inferred from the deterioration of the rate of correct word identification when displayed with distractors. Results show that the two reader groups were sensitive to both right and left lateral information. The area of sensitivity to this information was more extended for the identification of easy words than difficult words. Examination of the detrimental effect of distractors suggests that in both third and fifth graders, the impact of lateral information on foveal processing is the result of a general distraction effect, but also of linguistic processing whose nature remains to be clarified.
