Similar Documents
20 similar documents found (search time: 15 ms)
1.
It is well known that the right side of the mouth moves more than the left during speech, but little is known about how this asymmetry affects lipreading. We investigated asymmetries in the visual expression and perception of speech using the McGurk effect, an illusion in which incongruent lip movements cause listeners to misreport sounds. Thirty right-handed participants watched film clips in which the left, the right, or neither side of the mouth was covered. The McGurk effect was attenuated when the right side of the mouth was covered, demonstrating that this side is more important to lipreading than is the left side of the mouth. Mirror-reversed images tested whether the asymmetry was the result of an observer bias toward the left hemispace. The McGurk effect was stronger in the normal than in the mirror orientation when the mouth was fully visible. Thus, observers attend more to what they think is the right side of the speaker's mouth. Asymmetries in mouth movements may reflect the gestural origins of language, which are also right lateralized.

2.
Current models of reading and speech perception differ widely in their assumptions regarding the interaction of orthographic and phonological information during language perception. The present experiments examined this interaction through a two-alternative, forced-choice paradigm, and explored the nature of the connections between graphemic and phonemic processing subsystems. Experiments 1 and 2 demonstrated a facilitation-dominant influence (i.e., benefits exceed costs) of graphemic contexts on phoneme discrimination, which is interpreted as a sensitivity effect. Experiments 3 and 4 demonstrated a symmetrical influence (i.e., benefits equal costs) of phonemic contexts on grapheme discrimination, which can be interpreted as either a bias effect, or an equally facilitative/inhibitory sensitivity effect. General implications for the functional architecture of language processing models are discussed, as well as specific implications for models of visual word recognition and speech perception.

3.
This study explored asymmetries for movement, expression and perception of visual speech. Sixteen dextral models were videoed as they articulated: 'bat,' 'cat,' 'fat,' and 'sat.' Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginning and end of the vocalization and was attenuated for words where the lips did not articulate the first consonant. To measure asymmetries in expressivity, 20 dextral observers watched silent videos and reported what was said. The model's mouth was covered so that the left, right, or both sides were visible. Fewer errors were made when the right mouth was visible compared to the left, suggesting that the right side is more visually expressive of speech. Investigation of asymmetries in perception using mirror-reversed clips revealed that participants did not preferentially attend to one side of the speaker's face. A correlational analysis revealed an association between movement and expressivity whereby a more motile right mouth led to stronger visual expressivity of the right mouth. The asymmetries are most likely driven by left hemisphere specialization for language, which causes a rightward motoric bias.

4.
Massaro, D. W., & Chen, T. H. (2008). Psychonomic Bulletin & Review, 15(2), 453-457; discussion 458-462.
Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

5.
The motor theory of speech perception revised

6.
More than 50 years after the appearance of the motor theory of speech perception, it is timely to evaluate its three main claims that (1) speech processing is special, (2) perceiving speech is perceiving gestures, and (3) the motor system is recruited for perceiving speech. We argue that to the extent that it can be evaluated, the first claim is likely false. As for the second claim, we review findings that support it and argue that although each of these findings may be explained by alternative accounts, the claim provides a single coherent account. As for the third claim, we review findings in the literature that support it at different levels of generality and argue that the claim anticipated a theme that has become widespread in cognitive science.

7.
A theory of visual movement perception

8.
9.
10.
Brain processes underlying spoken language comprehension comprise auditory encoding, prosodic analysis, and linguistic evaluation. Auditory encoding usually activates both hemispheres, while language-specific stages are lateralized: analysis of prosodic cues is right-lateralized, while linguistic evaluation is left-lateralized. Here, we investigated to what extent the absence of prosodic information influences lateralization. MEG recordings showed that syntactic violations elicited early bilateral brain responses. When the pitch of sentences was flattened to diminish prosodic cues, the brain's syntax response was lateralized to the right hemisphere, indicating that the brain automatically generated the missing pitch information. This represents a Gestalt phenomenon, since we perceive more than is actually presented.

11.
12.
A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
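The analysis described above, embedding perceptual dissimilarities via multidimensional scaling and then relating the resulting Euclidean distances to physical measurements, can be sketched with classical (Torgerson) MDS. The data below are invented for illustration (known 2-D coordinates stand in for the study's syllable dissimilarities, and a noisy copy stands in for the optical measurements):

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an (n, n) symmetric dissimilarity matrix D into k dimensions
    via classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

def pairwise(X):
    """Euclidean distances between rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Illustrative perceptual dissimilarities for 4 hypothetical syllables,
# constructed from known 2-D coordinates so MDS can recover them exactly.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D_perc = pairwise(coords)
X = classical_mds(D_perc, k=2)

# "Variance accounted for": squared correlation between perceptual-space
# distances and (here, simulated) physical stimulus distances.
rng = np.random.default_rng(0)
iu = np.triu_indices(4, k=1)
d_phys = D_perc[iu] + rng.normal(0.0, 0.1, iu[0].size)
vaf = np.corrcoef(pairwise(X)[iu], d_phys)[0, 1] ** 2
```

Because the input dissimilarities here are exactly Euclidean, the embedding reproduces them; with real confusion data the recovery is only approximate, which is precisely what the variance-accounted-for statistic quantifies.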

13.
14.
Although Gibson (1979) did not explicitly discuss the perspectival appearing of the ecological environment, his important ecological approach to visual perception can accommodate both (a) the stream of visual-perceptual experience that flows at the heart of the visual system's total activity of ordinary visual perceiving (ordinary seeing), and (b) the dimension of the visual experiential stream that is the ecological environment's perspectival appearing to the visual perceiver. In the present article, perspectival appearing is located at the level of brain centers of the visual system, where processes are determined by the spatiotemporally structured visual stimulus flux. And the stream of visual experience is interpreted as itself possessing a kind of perspective structure (as does the visual stimulus flux), including variant and invariant features that the visual system isolates and extracts from experience, producing the perceiver's cognitive visual awareness-of (Gibson, 1979) the environment and self in the environment.

15.
Three experiments examined whether image manipulations known to disrupt face perception also disrupt visual speech perception. Research has shown that an upright face with an inverted mouth looks strikingly grotesque whereas an inverted face and an inverted face containing an upright mouth look relatively normal. The current study examined whether a similar sensitivity to upright facial context plays a role in visual speech perception. Visual and audiovisual syllable identification tasks were tested under 4 presentation conditions: upright face-upright mouth, inverted face-inverted mouth, inverted face-upright mouth, and upright face-inverted mouth. Results revealed that for some visual syllables only the upright face-inverted mouth image disrupted identification. These results suggest that upright facial context can play a role in visual speech perception. A follow-up experiment testing isolated mouths supported this conclusion.

16.
Three experiments were carried out to investigate the evaluation and integration of visual and auditory information in speech perception. In the first two experiments, subjects identified /ba/ or /da/ speech events consisting of high-quality synthetic syllables ranging from /ba/ to /da/ combined with a videotaped /ba/ or /da/ or neutral articulation. Although subjects were specifically instructed to report what they heard, visual articulation made a large contribution to identification. The tests of quantitative models provide evidence for the integration of continuous and independent, as opposed to discrete or nonindependent, sources of information. The reaction times for identification were primarily correlated with the perceived ambiguity of the speech event. In a third experiment, the speech events were identified with an unconstrained set of response alternatives. In addition to /ba/ and /da/ responses, the /bda/ and /tha/ responses were well described by a combination of continuous and independent features. This body of results provides strong evidence for a fuzzy logical model of perceptual recognition.
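In its standard two-alternative form, the fuzzy logical model of perception combines the auditory and visual degrees of support multiplicatively, which is what makes the two channels continuous and independent sources of information. A minimal sketch (the particular support values are invented for illustration):

```python
def flmp_p_da(a, v):
    """FLMP two-alternative integration: probability of a /da/ response,
    given auditory support a and visual support v for /da/ (both in [0, 1]).
    The complementary supports 1 - a and 1 - v favor /ba/."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

# Ambiguous audio (a = 0.5) plus clear visual /da/ articulation (v = 0.9):
# the visual channel dominates the decision.
p = flmp_p_da(0.5, 0.9)

# A neutral visual articulation (v = 0.5) leaves the auditory support
# unchanged, so the response tracks the auditory channel alone.
p_neutral = flmp_p_da(0.7, 0.5)
```

The multiplicative form means neither channel overrides the other by fiat: a fully ambiguous source (support 0.5) simply drops out, while a near-certain source pulls the response toward its category.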

17.
18.
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

19.
We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, such as in the case of Spanish native speakers regarding the Catalan sounds /ɛ/ and /e/. Here, we tested whether adding visual information about the articulatory gestures (i.e., lip movements) could enhance this perceptual ability. We found that, for auditory-only presentations, Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals did. Yet, when the same speech events were presented audiovisually, Spanish-dominants (as well as Catalan-dominants) were sensitive to the phonemic contrast. Finally, when the stimuli were presented only visually (in the absence of sound), neither group showed clear signs of discrimination. Our results suggest that visual speech gestures enhance second language perception at the level of phonological processing, particularly by way of multisensory integration.

20.
A theory of visual interpolation in object perception
We describe a new theory explaining the perception of partly occluded objects and illusory figures, from both static and kinematic information, in a unified framework. Three ideas guide our approach. First, perception of partly occluded objects, perception of illusory figures, and some other object perception phenomena derive from a single boundary interpolation process. These phenomena differ only in respects that are not part of the unit formation process, such as the depth placement of units formed. Second, unit formation from static and kinematic information can be treated in the same general framework. Third, spatial and spatiotemporal discontinuities in the boundaries of optically projected areas are fundamental to the unit formation process. Consistent with these ideas, we develop a detailed theory of unit formation that accounts for most cases of boundary perception in the absence of local physical specification. According to this theory, discontinuities in the first derivative of projected edges are initiating conditions for unit formation. A formal notion of relatability is defined, specifying which physically given edges leading into discontinuities can be connected to others by interpolated edges. Intuitively, relatability requires that two edges be connectable by a smooth, monotonic curve. The roots of the discontinuity and relatability notions in ecological constraints on object perception are discussed. Finally, we elaborate our approach by discussing related issues, some new phenomena, connections to other approaches, and issues for future research.
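The intuitive reading of relatability, that two edges are connectable by a smooth, monotonic curve, can be approximated geometrically: the forward extension of one edge and the backward extension of the other must meet ahead of both endpoints, with a total bend of at most 90 degrees. The sketch below is a simplified rendering of that idea, not the theory's formal definition; the points and tangent directions are illustrative:

```python
def relatable(p1, d1, p2, d2, eps=1e-12):
    """Simplified relatability test in 2-D. Edge 1 ends at p1 with tangent
    direction d1; edge 2 begins at p2 with tangent direction d2. The edges
    count as relatable if the extension of edge 1 meets the backward
    extension of edge 2 ahead of both endpoints, and the bend between the
    tangents is at most 90 degrees (non-negative dot product)."""
    (p1x, p1y), (d1x, d1y) = p1, d1
    (p2x, p2y), (d2x, d2y) = p2, d2
    vx, vy = p2x - p1x, p2y - p1y
    det = d1x * d2y - d2x * d1y
    if abs(det) < eps:
        # Parallel tangents: relatable only as a straight collinear continuation.
        collinear = abs(d1x * vy - d1y * vx) < eps
        return collinear and d1x * vx + d1y * vy > 0 and d1x * d2x + d1y * d2y > 0
    # Solve p1 + t*d1 == p2 - s*d2 for the two extension lengths t, s.
    t = (vx * d2y - d2x * vy) / det
    s = (d1x * vy - vx * d1y) / det
    return t >= 0 and s >= 0 and d1x * d2x + d1y * d2y >= 0
```

For example, a horizontal edge ending at the origin relates to a collinear edge further right, and to a vertical edge offset up and to the right (a single 90-degree bend), but not to an edge whose extension meets it behind its endpoint.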
