Similar Articles
20 similar articles found (search time: 62 ms)
1.
Similarity ratings of pairs of lipread consonants were obtained using a 5-point scale. Matrices were constructed showing mean similarity ratings and confusions among stimuli. Both the similarity and the confusion data provide normative data useful for researchers in many areas. Lipread data collected here are compared with similarity ratings of orthographically and auditorily presented consonants collected by Manning (1977). These comparisons provide information about how stimulus similarity both within and between presentation formats may affect information processing of the three types of stimuli. These data are of special interest to researchers studying the visual processing of speech and the effect of format of presentation on recall.
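The mean-similarity matrix construction described in this abstract can be sketched as follows; the consonant labels and rating values here are invented for illustration, not taken from the study.

```python
from statistics import mean

# Hypothetical data: each key is an unordered pair of lipread consonants,
# each value a list of 5-point similarity ratings (1 = very dissimilar,
# 5 = very similar) collected across participants.
ratings = {
    ("b", "p"): [5, 4, 5, 4],
    ("b", "f"): [2, 1, 2, 2],
    ("p", "f"): [2, 2, 1, 3],
}

consonants = ["b", "p", "f"]

def similarity_matrix(ratings, items):
    """Build a symmetric mean-similarity matrix from pairwise ratings."""
    matrix = {a: {b: None for b in items} for a in items}
    for a in items:
        matrix[a][a] = 5.0  # identity pairs treated as maximally similar
    for (a, b), vals in ratings.items():
        m = mean(vals)
        matrix[a][b] = m
        matrix[b][a] = m  # similarity ratings are treated as symmetric
    return matrix

m = similarity_matrix(ratings, consonants)
print(m["b"]["p"])  # 4.5
```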

2.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936.
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

3.
An attempt was made to replicate Hess and Polt's (1960) report of sex differences in pupillary responses to sex-stereotyped pictures. Some methodological refinements were used that seem desirable for future studies. College men and women were either shown or told they would be shown pictures of a semi-nude man, a semi-nude woman, a baby, and a landscape. With resting pupil size as covariate, a three-factor analysis of covariance did not show sex differences in response to visually presented stimuli. Men responded more to verbal than visual mode of presentation and more than women to verbal stimuli. Contrary to previous results, men responded as much or more than women to verbal or visual presentation of baby stimuli. The verbal or anticipatory mode seems to be at least as sensitive as the visual and eliminates problems of control of visual materials.

4.
Direct examinations of gender differences in global-local processing are sparse, and the results are inconsistent. We examined this issue with a visuospatial judgment task and with a shape judgment task. Women and men were presented with hierarchical stimuli that varied in closure (open or closed shape) or in line orientation (oblique or horizontal/vertical) at the global or local level. The task was to classify the stimuli on the basis of the variation at the global level (global classification) or at the local level (local classification). Women’s classification by closure (global or local) was more accurate than men’s for stimuli that varied in closure on both levels, suggesting a female advantage in discriminating shape properties. No gender differences were observed in global-local processing bias. Women and men exhibited a global advantage, and they did not differ in their speed of global or local classification, with only one exception. Women were slower than men in local classification by orientation when the to-be-classified lines were embedded in a global line with a different orientation. This finding suggests that women are more distracted than men by misleading global oriented context when performing local orientation judgments, perhaps because women and men differ in their ability to use cognitive schemes to compensate for the distracting effects of the global context. Our findings further suggest that whether or not gender differences arise depends not only on the nature of the visual task but also on the visual context.

5.
In two separate studies, sex differences in modal-specific elements of working memory were investigated by utilizing words and pictures as stimuli. Groups of men and women performed a free-recall task of words or pictures in which 20 items were presented concurrently and the number of correct items recalled was measured. Following stimulus presentation, half of the participants were presented a verbal-based distraction task. On the verbal working-memory task, performance of men and women was not significantly different in the no-distraction condition. However, in the distraction condition, women's recall was significantly lower than their performance in the no-distraction condition and men's performance in the distraction condition. These findings are consistent with previous research and point to sex differences in cognitive ability putatively resulting from functional neuroanatomical dissimilarities. On the visual working-memory task, women showed significantly greater recall than men. These findings are inconsistent with previous research and underscore the need for further research.

6.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160.
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted, and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).

7.
Speech alignment is the tendency for interlocutors to unconsciously imitate one another’s speaking style. Alignment also occurs when a talker is asked to shadow recorded words (e.g., Shockley, Sabadini, & Fowler, 2004). In two experiments, we examined whether alignment could be induced with visual (lipread) speech and with auditory speech. In Experiment 1, we asked subjects to lipread and shadow out loud a model silently uttering words. The results indicate that shadowed utterances sounded more similar to the model’s utterances than did subjects’ nonshadowed read utterances. This suggests that speech alignment can be based on visual speech. In Experiment 2, we tested whether raters could perceive alignment across modalities. Raters were asked to judge the relative similarity between a model’s visual (silent video) utterance and subjects’ audio utterances. The subjects’ shadowed utterances were again judged as more similar to the model’s than were read utterances, suggesting that raters are sensitive to cross-modal similarity between aligned words.

8.
In several spatial tasks in which men outperform women in the processing of visual input, the sex difference has been eliminated in matching contexts limited to haptic input. The present experiment tested whether such contrasting results would be reproduced in a mental rotation task. A standard visual condition involved two-dimensional illustrations of three-dimensional stimuli; in a haptic condition, three-dimensional replicas of these stimuli were only felt; in an additional visual condition, these replicas were seen. The results indicated that, irrespective of condition, men's response times were shorter than women's, although accuracy did not significantly differ according to sex. For both men and women, response times were shorter and accuracy was higher in the standard condition than in the haptic one, the best performances being recorded when full replicas were shown. Self-reported solving strategies also varied as a function of sex and condition. The discussion emphasizes the robustness of men's faster speed in mental rotation. With respect to both speed and accuracy, the demanding sequential processing called for in the haptic setting, relative to the standard condition, is underscored, as is the benefit resulting from easier access to depth cues in the visual context with real three-dimensional objects.

9.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (“unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely integrated with lipread speech if heard as speech than non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.

10.
Extensive research has identified individual differences associated with sex in a range of visual task performances, including susceptibility to visual illusions. The aim of this study was to identify the locus of sex differences within the context of the Poggendorf illusion. 79 women and 79 men participated within a mixed factorial design. Analyses indicated that sex differences were only present in the stimulus context with the full inducing element present. This finding replicates recent research and provides qualifying evidence as to the locus of the effect. The findings are discussed within the functional framework of perceptual processes involved in extrapolating 3-dimensional characteristics from 2-dimensional visual stimuli.

11.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.
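The adaptive psychophysical staircase mentioned in this abstract is a family of procedures; the sketch below is a generic 2-down-1-up staircase with a toy simulated observer, not the authors' modified procedure, and every parameter value (start level, step size, noise) is invented for illustration.

```python
import random

def run_staircase(threshold, start_level=1.0, step=0.05, trials=200, seed=1):
    """Minimal 2-down-1-up adaptive staircase sketch.

    Signal level decreases after two consecutive correct responses and
    increases after each error, so the track converges near the
    70.7%-correct point of the observer's psychometric function.
    Returns a threshold estimate: the mean level at the last reversals.
    """
    rng = random.Random(seed)
    level = start_level
    consecutive_correct = 0
    reversals = []
    last_direction = None
    for _ in range(trials):
        # Toy observer: detects the stimulus when its level exceeds the
        # true threshold, perturbed by a little internal noise.
        correct = level + rng.gauss(0, 0.05) > threshold
        if correct:
            consecutive_correct += 1
            if consecutive_correct < 2:
                continue  # need two in a row before stepping down
            consecutive_correct = 0
            direction = "down"
            level = max(0.0, level - step)
        else:
            consecutive_correct = 0
            direction = "up"
            level += step
        if last_direction is not None and direction != last_direction:
            reversals.append(level)  # track direction changed: a reversal
        last_direction = direction
    tail = reversals[-8:]
    return sum(tail) / len(tail)

estimate = run_staircase(threshold=0.5)
```

With the simulated observer above, the estimate should land near the true threshold of 0.5 (slightly above it, at the 70.7% point).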

12.
104 men and women were tested for visual field-hemispheric transfer of spatial information on a dot-localization task. Right-handed subjects showed significant improvement when stimuli were presented to the left visual field of the right hemisphere (LVF-RH) after practice on the same task presented to the right visual field of the left hemisphere (RVF-LH) first. No improvement was found when the task was presented in the reverse order (LVF-RH first followed by RVF-LH). It was concluded that, for right-handers, transfer of spatial information to the right hemisphere is facilitated while transfer to the left hemisphere is inhibited. Left-handed subjects demonstrated no significant improvement in either condition, suggesting inhibition or lack of transfer of spatial information in either direction. No sex differences were found in either right-handed or left-handed subjects. The findings suggest that there may be different mechanisms underlying the similarities in functional lateralization of women and left-handers.

13.
We investigated primary and secondary psychopathy and the ability to detect high-stakes, real-life emotional lies in an on-line experiment (N = 150). Using signal detection analysis, we found that lie detection ability was overall above chance level, there was a tendency towards responding liberally to the test stimuli, and women were more accurate than men. Further, sex moderated the relationship between psychopathy and lie detection ability; in men, primary psychopathy had a significant positive correlation with the ability to detect lies, whereas in women there was a significant negative correlation with deception detection. The results are discussed with reference to evolutionary theory and sex differences in processing socio-emotional information.
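The signal detection analysis this abstract relies on separates sensitivity (d′, accuracy above chance) from response bias (criterion c, the "responding liberally" tendency). A minimal sketch, with invented response counts rather than the study's data:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from raw response counts.

    Uses a standard log-linear correction (add 0.5 to each cell count's
    numerator, 1 to the denominator) so rates of exactly 0 or 1 do not
    produce infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = z(hit_rate) - z(fa_rate)          # sensitivity: d' > 0 means above chance
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: c < 0 means a liberal criterion
    return d, c

# Invented counts for one hypothetical judge classifying clips as lie/truth
d, c = dprime(hits=18, misses=7, false_alarms=12, correct_rejections=13)
```

Here d comes out positive (above-chance discrimination) and c negative (a liberal bias towards calling clips lies), matching the pattern of results the abstract reports at the group level.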

14.
The ability to selectively attend to an auditory stimulus appears to decline with age and may result from losses in the ability to inhibit the processing of irrelevant stimuli (i.e., the inhibitory deficit hypothesis; L. Hasher & R. T. Zacks, 1988). It is also possible that declines in the ability to selectively attend are a result of age-related hearing losses. Three experiments examined whether older and younger adults differed in their ability to inhibit the processing of distracting stimuli when the listening situation was adjusted to correct for individual differences in hearing. In all 3 experiments, younger and older adults were equally affected by irrelevant stimuli, unattended stimuli, or both. The implications for auditory attention research and for possible differences between auditory and visual processing are discussed.

15.
Emotion regulation deficits have been implicated in anxiety and depressive disorders, and these internalising disorders are more prevalent in women than men. Few electrophysiological studies have investigated sex differences in emotional reactivity and emotion regulation controlling for menstrual phase. Event-related potentials (ERPs) were recorded from 28 early follicular women, 29 midluteal women, and 27 men who completed an emotion regulation task. A novel finding of increased N2 amplitude during suppression was found for midluteal women compared with men. These findings suggest midluteal women may be significantly less able to suppress cortical processing of negative stimuli compared to men. This ERP finding was complemented by behavioural ratings data, which revealed that while both early follicular and midluteal women reported more distress than men, midluteal women also reported greater effort when suppressing their responses than men. P1 and N1 components were increased in midluteal women compared to men regardless of instructional set, suggesting greater early attentional processing. No sex or menstrual phase differences were apparent in P3 or LPP. This study underscores the importance of considering menstrual phase when examining sex differences in the cortical processing of emotion regulation and demonstrates that midluteal women may have deficits in down-regulating their neural and behavioural responses.

16.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.

17.
Three experiments investigated performance as a function of the visual hemifield to which verbal and spatial stimuli were presented tachistoscopically. The aim was to relate laterality effects to individual, sex and cultural differences in spatial ability within the framework of a model of hemispheric specialisation. A left-hemisphere advantage for verbal materials was obtained but no advantage for right-hemisphere presentation of visuospatial information occurred in the following samples: British and Ghanaian (experiment 1), high and low spatial ability groups (experiment 2) and males and females (experiment 3). Traditional spatial ability tests had no predictive value for performance on the tachistoscopic tasks and an interaction between presentation field and responding hand was interpreted as implying equivalent processing of spatial information in either hemisphere.

18.
Functional MRI was used to investigate sex differences in brain activation during a paradigm similar to a lexical-decision task. Six males and 6 females performed two runs of the lexical visual field task (i.e., deciding which visual field a word compared with a pseudoword was presented to). A sex difference was noted behaviorally: The reaction time data showed males had a marginal right visual field advantage and women a left visual field advantage. Imaging results showed that men had a strongly left-lateralized pattern of activation, e.g., inferior frontal and fusiform gyrus, while women showed a more symmetrical pattern in language related areas with greater right-frontal and right-middle-temporal activation. The data show evidence of task-specific sex differences in the cerebral organization of language processing.

19.
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.  
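The "race model predictions" this abstract tests against are usually formalised as Miller's (1982) race-model inequality: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V) for all t. If the bimodal RT distribution exceeds that bound anywhere, separate-channel racing cannot explain the facilitation. A minimal sketch with invented RT values (not the study's data):

```python
from bisect import bisect_right

def cdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t."""
    return bisect_right(sorted(rts), t) / len(rts)

def race_model_violated(rt_audio, rt_visual, rt_bimodal, times):
    """Check Miller's race-model inequality on empirical RT distributions.

    Returns True if, at any probed time point, the bimodal CDF exceeds
    the sum of the two unimodal CDFs, i.e. responses are faster than any
    race between independent channels could produce.
    """
    return any(
        cdf(rt_bimodal, t) > cdf(rt_audio, t) + cdf(rt_visual, t)
        for t in times
    )

# Toy data (ms): bimodal responses faster than either unimodal condition
rt_a = [420, 450, 470, 500, 530]
rt_v = [520, 550, 570, 600, 630]
rt_av = [330, 350, 360, 380, 400]
violated = race_model_violated(rt_a, rt_v, rt_av, times=range(300, 700, 10))
print(violated)  # True
```

With these toy values the fastest bimodal responses precede every unimodal response, so the inequality is violated; feeding in a bimodal distribution no faster than the unimodal ones returns False.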

20.
In this research the role of the right hemisphere (RH) in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the preceding conversation remark; on the remaining trials the target did not name the speech act that the remark performed. In both experiments, lexical decisions were facilitated for targets representing the speech act performed with the prior utterance, but only when the target was presented to the left visual field (and hence initially processed by the RH) and not when presented to the right visual field. This effect occurred at both short (Experiment 1: 250 ms) and long (Experiment 2: 1000 ms) delays. The results demonstrate the critical role played by the RH in conversation processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号