Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Psycholegal researchers have largely ignored the relevance of nonverbal auditory information in earwitness memory and have not compared its retention with that of visual or verbal information. Memory for nonverbal auditory stimuli was investigated in two different contexts. In Experiment 1, participants recalled more sounds (i.e., nonverbal auditory stimuli) than the sounds' verbal labels. However, with a more ecologically valid method in Experiment 2, participants recalled more verbal stimuli in conjunction with visual information than they did nonverbal stimuli. Even after a 1-week delay, participants' retention of the verbal-visual combination was highest.

2.
The recognition of nonverbal emotional signals and the integration of multimodal emotional information are essential for successful social communication among humans of any age. Whereas prior studies of age dependency in the recognition of emotion often focused on either the prosodic or the facial aspect of nonverbal signals, our purpose was to create a more naturalistic setting by presenting dynamic stimuli under three experimental conditions: auditory, visual, and audiovisual. Eighty-four healthy participants (women = 44, men = 40; age range 20-70 years) were tested for their abilities to recognize emotions either mono- or bimodally on the basis of emotional (happy, alluring, angry, disgusted) and neutral nonverbal stimuli from voice and face. Additionally, we assessed visual and auditory acuity, working memory, verbal intelligence, and emotional intelligence to explore potential explanatory effects of these population parameters on the relationship between age and emotion recognition. Applying unbiased hit rates as the performance measure, we analyzed data with linear regression analyses, t tests, and mediation analyses. We found a linear, age-related decrease in emotion recognition independent of stimulus modality and emotional category. In contrast, the improvement in recognition rates associated with audiovisual integration of bimodal stimuli seems to be maintained over the life span. The reduction in emotion recognition ability at an older age could not be sufficiently explained by age-related decreases in hearing, vision, working memory, and verbal intelligence. These findings suggest alterations in social perception at a level of complexity beyond basic perceptual and cognitive abilities.

3.
Thirty-one adolescents with cerebral palsy were administered measures of verbal production, speech perception, nonverbal auditory perception, visuospatial perception, and verbal intelligence, as well as measures of reading recognition and reading comprehension. Nonverbal auditory perception and verbal intelligence were most highly correlated with both reading measures, despite the fact that most subjects were most severely impaired in visuospatial perception.

4.
Emotional communication uses verbal and nonverbal means. In case of conflicting signals, nonverbal information is assumed to have a stronger impact. It is unclear, however, whether perceptual nonverbal dominance varies between individuals and whether it is linked to emotional intelligence. Using audiovisual stimulus material comprising verbal and nonverbal emotional cues that were varied independently, perceptual nonverbal dominance profiles and their relations to emotional intelligence were examined. Nonverbal dominance was found in every participant, ranging from 55 to 100%. Moreover, emotional intelligence, particularly the ability to understand emotions, correlated positively with nonverbal dominance. Furthermore, higher overall emotional intelligence as well as a higher ability to understand emotions were linked to smaller reaction time differences between emotionally incongruent and congruent stimuli. The association between perceptual nonverbal dominance and emotional intelligence, and more specifically the ability to understand emotions, might reflect an adaptive process driven by the experience of higher authenticity in nonverbal cues.

5.
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10–12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

6.
Six same-different matching tests, both verbal and nonverbal, in three modalities, along with a set of reading tests, were administered to 120 Israeli children in second, third, and fourth grade. The main effects of all subject variables except sex (grade, socioeconomic level, and ability) were significant, as were the test factors of modality (visual, auditory, cross-modal) and form (verbal vs. nonverbal), but interactions between subject and test factors were small. Multiple regression analysis revealed that overall matching test scores accounted for 35% of the variance in reading scores, although the additional contribution of specific subtests was negligible. Performance on the visual-visual tests was virtually perfect. Auditory-auditory matches were more difficult than auditory-visual matches with nonverbal stimuli, while the reverse was true with verbal stimuli.

7.
The purpose of the present study was to investigate hemispheric deficits in individuals with paranoid schizophrenia on four kinds of tasks: dichoptic viewing tasks involving verbal and nonverbal visual stimuli, and dichotic listening tasks involving verbal and nonverbal auditory stimuli. As dependent measures, both accuracy and speed of (correct) responding were measured. The sample recruited for this study consisted of 18 patients with paranoid schizophrenia, 15 outpatients with anxiety disorders, and 20 controls with no history of psychiatric disorders. Results indicated that, relative to the controls, the paranoid schizophrenic patients were less accurate and less efficient on auditory-verbal tasks requiring right hemisphere processing. Unlike the controls, the paranoid schizophrenic patients manifested a lateralized left hemisphere advantage.

9.
Investigation of the effect that a word recognition task has on concurrent nonverbal tasks showed (a) that auditory verbal messages affected visual tracking performance but not the detection of brief light flashes in the visual periphery, and (b) that impairment of both tracking and light detection was greater when verbal messages were visual rather than auditory. With a kinaesthetic tracking task, errors increased significantly during auditory messages but were even greater during visual messages. There was no interaction between the modality of tracking error feedback (auditory or visual) and the modality of the verbal message, nor was the decrement from visual messages reduced by changing the presentation format. It is suggested that different temporal characteristics of visual and auditory information affect the attentional demands of verbal messages.

10.
Three experiments, using temporal generalization and verbal estimation methods, studied judgements of the duration of auditory (500-Hz tone) and visual (14-cm blue square) stimuli. With both methods, auditory stimuli were judged longer, and less variable, than visual ones. The verbal estimation experiments used stimuli from 77 to 1183 msec in length, and the slope of the function relating mean estimate to real length differed between modalities (but the intercept did not), consistent with the idea that a pacemaker generating duration representations ran faster for auditory than for visual stimuli. The differing variability of auditory and visual stimuli was attributed to differential variability in the operation of the switch of a pacemaker-accumulator clock, and experimental data suggested that such switch effects were separable from changes in pacemaker speed. Overall, the work showed how a clock model consistent with scalar timing theory, the leading account of animal timing, can address an issue derived from the classical literature on human time perception.
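The pacemaker-accumulator account described in the abstract above (a faster pacemaker for auditory stimuli steepens the slope of the estimate-versus-duration function while leaving the intercept unchanged) can be sketched as follows. This is a minimal illustration, not the authors' model code; the pacemaker rates, the 100-Hz calibration rate, and the 50-ms switch latency are hypothetical values chosen only to reproduce the qualitative pattern:

```python
# Sketch of a pacemaker-accumulator clock: subjective duration is the number
# of pacemaker pulses accumulated while the stimulus is on, converted back to
# milliseconds with a fixed calibration rate. A faster pacemaker for auditory
# stimuli changes the slope of the estimate-vs-duration function but not its
# intercept (here, a constant switch latency). All constants are assumptions.

def verbal_estimate(duration_ms, pacemaker_rate_hz,
                    calibration_rate_hz=100.0, switch_latency_ms=50.0):
    """Mean verbal estimate (ms) for a stimulus of the given duration."""
    pulses = pacemaker_rate_hz * duration_ms / 1000.0    # accumulated pulses
    estimate_ms = pulses / calibration_rate_hz * 1000.0  # pulses -> ms
    return estimate_ms + switch_latency_ms               # constant offset

durations = [77, 300, 700, 1183]  # stimulus range used in the experiments
auditory = [verbal_estimate(d, pacemaker_rate_hz=110.0) for d in durations]
visual = [verbal_estimate(d, pacemaker_rate_hz=100.0) for d in durations]

# Auditory slope (110/100 = 1.1) exceeds visual slope (1.0), while the
# intercept (the switch latency) is identical for both modalities.
for d, a, v in zip(durations, auditory, visual):
    print(f"{d:5d} ms -> auditory {a:7.1f} ms, visual {v:7.1f} ms")
```

Fitting the slope and intercept of such a function separately per modality is what lets the study above dissociate pacemaker-speed effects (slope) from switch effects (intercept and variability).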

11.
High and low visual imagers, defined as such primarily on the basis of spatial manipulation test performance, were required to identify tachistoscopically presented pictures, concrete words, and abstract words varying in familiarity. Two recognition paradigms were employed, recognition threshold and recognition latency. High imagers were faster in picture recognition under both paradigms when a nonverbal set or strategy was primed and when pictures were relatively unfamiliar in the threshold paradigm. No relationship was found between imagery ability and word recognition in the visual modality, nor was visual imagery ability related to the auditory recognition of verbal and nonverbal stimuli, such as words and environmental sounds. Commonalities between these findings and others in the imagery ability literature were noted.

12.
The relation between awareness of body topology and auditory comprehension of body part names was studied in 22 aphasic subjects. Two nonverbal tasks (human figure drawing and placement of individual body parts in relation to a drawn face) were compared with two auditory tests of body part comprehension. The two nonverbal and the two verbal tasks were closely correlated with each other, but there was no relation involving either of the verbal tests with either of the nonverbal tests. Selection errors in the auditory comprehension tasks were predominantly semantically based and equally distributed between functionally analogous parts and parts related by location on the body.

Dr. Benedet's effort in this study was supported by a grant from the Hispano-North American Joint Committee for Cultural and Educational Cooperation.

13.
Subjects were presented with simultaneous visual and auditory material. They were instructed to attend to only one modality but were required to recall materials from both modalities, one before the other. Results supported a previous suggestion that visual recall is minimal under conditions of simultaneous presentation. That is, in situations where the subject was allowed to divide his attention between both inputs, visual recall was no greater than when his attention was directed away from visual input and toward auditory input. It was noted that this difference in recall between the two modalities was limited to verbal material, as other data indicate the opposite effect for nonverbal material.

14.
15.
Reaction times, and the relative latency evaluated by the temporal-order-judgment method, were measured for two stimuli of different modalities (visual and auditory). The difference between reaction times for visual and auditory stimuli was about 40 ms. The relative latency was slightly shorter, however; in conflict with Rutschmann and Link's (1964) previous result, the auditory stimulus had to be delayed to be perceived as simultaneous with the visual one.

16.
The heterogeneity of schizophrenia remains an obstacle for understanding its pathophysiology. Studies using a tone discrimination screening test to classify patients have found evidence for 2 subgroups having either a specific deficit in verbal working memory (WM) or deficits in both verbal and nonverbal memory. This study aimed to (a) replicate in larger samples differences between these subgroups in auditory verbal WM; (b) evaluate their performance on tests of explicit memory and sustained attention; (c) determine the relation of verbal WM deficits to auditory hallucinations and other symptoms; and (d) examine medication effects. The verbal WM and tone discrimination performance did not differ between medicated (n = 45) and unmedicated (n = 38) patients. Patients with schizophrenia who passed the tone screening test (discriminators; n = 60) were compared with those who did not (nondiscriminators; n = 23) and healthy controls (n = 47). The discriminator subgroup showed poorer verbal WM than did controls and a deficit in verbal but not visual memory on the Wechsler Memory Scale-Revised (Wechsler, 1987), whereas the nondiscriminator subgroup showed overall poorer performance on both verbal and nonverbal tests and a marked deficit in sustained attention. Verbal WM deficits in discriminators were correlated with auditory hallucinations but not with negative symptoms. The results are consistent with a verbal memory deficit in a subgroup of schizophrenia having intact auditory perception, which may stem from dysfunction of language-related cortical regions, and a more generalized cognitive deficit in a subgroup having auditory perceptual and attentional dysfunction.

17.
Hemispheric specialization for processing different types of rapidly exposed stimuli was examined in a forced-choice reaction time task. Four recognition conditions were included: facial emotion, neutral faces, emotional words, and neutral words. Only the facial emotion condition produced a significant visual field advantage (in favor of the left visual field, LVF), but this condition did not differ significantly from the neutral face condition's LVF superiority. The verbal conditions produced significantly decreased latencies with right visual field (RVF) presentation, while LVF presentation was associated with decreased latencies in the facial conditions. These results suggested that facial recognition and affective processing cannot be separated as independent factors generating right hemisphere superiority for facial emotion perception, and that task parameters (verbal vs. nonverbal) are important influences upon effects in studies of cerebral specialization.

18.
Impairment of auditory perception and language comprehension in dysphasia
Men with chronic focal brain wounds were examined for their ability to discriminate complex tones, synthesized steady-state vowels, and synthesized consonant-vowel syllables. Subjects with left hemisphere damage, but not right hemisphere damage, were impaired in their ability to respond correctly to rapidly changing acoustic stimuli, regardless of whether stimuli were verbal or nonverbal. The degree of impairment in auditory processing correlated highly with the degree of language comprehension impairment. The pattern of impairment of the group with left hemisphere damage on these perceptual tests was similar to that found in children with developmental language disorders.

19.
Dual-process accounts of working memory have suggested distinct encoding processes for verbal and visual information in working memory, but encoding for nonspeech sounds (e.g., tones) is not well understood. This experiment modified the sentence-picture verification task to include nonspeech sounds, with a complete factorial examination of all possible stimulus pairings. Participants studied simple stimuli (pictures, sentences, or sounds) and encoded the stimuli verbally, as visual images, or as auditory images. Participants then compared their encoded representations to verification stimuli (again pictures, sentences, or sounds) in a two-choice reaction time task. With some caveats, the encoding strategy appeared to be as important as or more important than the external format of the initial stimulus in determining the speed of verification decisions. Findings suggested that: (1) auditory imagery may be distinct from verbal and visuospatial processing in working memory; (2) visual perception, but not visual imagery, may automatically activate concurrent verbal codes; and (3) the effects of hearing a sound may linger for some time despite recoding in working memory. We discuss the role of auditory imagery in dual-process theories of working memory.

20.
A mental scanning paradigm was used to examine the representation of nonspeech sounds in working memory. Participants encoded sonifications (nonspeech auditory representations of quantitative data) as either verbal lists, visuospatial images, or auditory images. The number of tones and overall frequency changes in the sonifications were also manipulated to allow for different hypothesized patterns of reaction times across encoding strategies. Mental scanning times revealed different patterns of reaction times across encoding strategies, despite the fact that all internal representations were constructed from the same nonspeech sound stimuli. Scanning times for the verbal encoding strategy increased linearly as the number of items in the verbal representation increased. Scanning times for the visuospatial encoding strategy were generally slower and increased as the metric distance (derived metaphorically from frequency change) in the mental image increased. Scanning times for the auditory imagery strategy were faster and closest to the veridical durations of the original stimuli. Interestingly, the number of items traversed in scanning a representation significantly affected scanning times across all encoding strategies. Results suggested that nonspeech sounds can be flexibly represented, and that a universal per-item scanning cost persisted across encoding strategies. Implications for cognitive theory, the mental scanning paradigm, and practical applications are discussed.
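The scanning-time pattern reported above (a universal per-item cost shared by all encoding strategies, plus a strategy-specific component) can be illustrated with a toy reaction-time model. All constants below are hypothetical values of our own choosing, not parameters estimated by the study; the sketch only reproduces the qualitative pattern:

```python
# Toy model of the mental-scanning reaction-time pattern described above:
# every encoding strategy pays the same per-item scanning cost, on top of a
# strategy-specific term. All numeric constants are illustrative assumptions.

PER_ITEM_MS = 120.0  # universal per-item scanning cost (assumed)

def scan_time_verbal(n_items, base_ms=400.0):
    # grows linearly with the number of items in the verbal list
    return base_ms + PER_ITEM_MS * n_items

def scan_time_visuospatial(n_items, metric_distance, base_ms=600.0,
                           ms_per_unit=15.0):
    # generally slower; grows with the metric distance in the mental image
    # (derived metaphorically from the sonification's frequency change)
    return base_ms + PER_ITEM_MS * n_items + ms_per_unit * metric_distance

def scan_time_auditory(n_items, stimulus_duration_ms):
    # fastest, and closest to the veridical duration of the original sound
    return stimulus_duration_ms + PER_ITEM_MS * n_items

# Example: five tones, visuospatial distance of 20 units, 800-ms sonification.
print(scan_time_verbal(5), scan_time_visuospatial(5, 20.0),
      scan_time_auditory(5, 800.0))
```

Because the per-item term appears in every strategy's equation, increasing the number of items slows all three strategies equally, which is the "universal per-item scanning cost" the abstract describes.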


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号