Similar documents
20 similar documents found (search time: 46 ms)
1.
The effects of viewing the face of the talker (visual speech) on the processing of clearly presented intact auditory stimuli were investigated using two measures likely to be sensitive to the articulatory motor actions produced in speaking. The aim of these experiments was to highlight the need for accounts of the effects of audio-visual (AV) speech that explicitly consider the properties of articulated action. The first experiment employed a syllable-monitoring task in which participants were required to monitor for target syllables within foreign carrier phrases. An AV effect was found in that seeing a talker's moving face (moving face condition) assisted in more accurate recognition (hits and correct rejections) of spoken syllables than of auditory-only still face (still face condition) presentations. The second experiment examined processing of spoken phrases by investigating whether an AV effect would be found for estimates of phrase duration. Two effects of seeing the moving face of the talker were found. First, the moving face condition had significantly longer duration estimates than the still face auditory-only condition. Second, estimates of auditory duration made in the moving face condition reliably correlated with the actual durations whereas those made in the still face auditory condition did not. The third experiment was carried out to determine whether the stronger correlation between estimated and actual duration in the moving face condition might have been due to generic properties of AV presentation. Experiment 3 employed the procedures of the second experiment but used stimuli that were not perceived as speech although they possessed the same timing cues as those of the speech stimuli of Experiment 2. It was found that simply presenting both auditory and visual timing information did not result in more reliable duration estimates. Further, when released from the speech context (used in Experiment 2), duration estimates for the auditory-only stimuli were significantly correlated with actual durations. In all, these results demonstrate that visual speech can assist in the analysis of clearly presented auditory stimuli in tasks concerned with information provided by viewing the production of an utterance. We suggest that these findings are consistent with there being a processing link between perception and action such that viewing a talker speaking will activate speech motor schemas in the perceiver.  相似文献   
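The key analysis in Experiments 2 and 3 is the correlation between estimated and actual phrase durations. A minimal sketch with invented numbers (not the study's data) shows how such a comparison across the moving-face and still-face conditions might be computed:

```python
# Minimal sketch: correlate duration estimates with actual phrase durations.
# All durations below are invented for illustration; they are not the study's data.
from statistics import correlation  # Pearson's r, available in Python 3.10+

actual_ms = [1200, 1850, 2400, 3100, 3650]        # actual phrase durations (ms)
moving_face_est = [1300, 1800, 2500, 2950, 3700]  # estimates made while seeing the talker's moving face
still_face_est = [2100, 1900, 2600, 2300, 2800]   # estimates made with a still face (auditory only)

print("moving face r =", round(correlation(actual_ms, moving_face_est), 2))
print("still face r  =", round(correlation(actual_ms, still_face_est), 2))
```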

2.
Using a prototype-variation task and manipulating the modality in which the rule-based and similarity-based features were presented, this study examined how the mode of feature presentation affects the acquisition of rule and similarity knowledge in category learning. Results showed that in the auditory-visual condition significantly more participants acquired the rule than acquired similarity, whereas no significant difference was found in the visual-auditory and visual-visual conditions; moreover, in all three conditions accuracy was higher for rule acquisition than for similarity acquisition. These findings indicate that the mode of feature presentation influences the acquisition of rule and similarity features, and that participants are more inclined to classify on the basis of the rule when the rule is presented in the auditory modality.
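As a rough illustration of the rule-versus-similarity contrast that the prototype-variation task trades on (invented feature vectors, not the authors' materials), a rule-based categorizer consults one diagnostic feature while a similarity-based categorizer compares the whole item with the category prototypes:

```python
# Illustrative contrast between rule-based and similarity-based categorization.
# Stimuli are binary feature vectors; feature 0 plays the role of the single diagnostic "rule" feature.
# Prototypes and the test item are invented for illustration.

PROTO_A = (1, 1, 1, 1)
PROTO_B = (0, 0, 0, 0)

def classify_by_rule(item):
    """Assign category A if the rule feature (feature 0) is present, else B."""
    return "A" if item[0] == 1 else "B"

def classify_by_similarity(item):
    """Assign the category whose prototype shares more features with the item."""
    overlap_a = sum(f == p for f, p in zip(item, PROTO_A))
    overlap_b = sum(f == p for f, p in zip(item, PROTO_B))
    return "A" if overlap_a >= overlap_b else "B"

# A prototype-variation item: the rule feature points to B, but most features resemble A.
item = (0, 1, 1, 1)
print(classify_by_rule(item))        # -> B
print(classify_by_similarity(item))  # -> A
```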

3.
This study explored the effect of reading with reversed speech on the frequency of stuttering. Eight adults who stutter served as participants and read four 300-syllable passages under four conditions: three types of speech stimuli (normal speech in choral reading, reversed speech at normal speed, and reversed speech at half speed) and a no-auditory-feedback control. A repeated-measures analysis of variance showed a significant decrease in stuttering frequency in the choral reading condition but not with reversed speech at normal or half speed. However, the reversed-speech-at-half-speed condition showed a large effect size (ω² = 0.32). The data suggest that forward-moving speech feedback is not essential to decrease the frequency of stuttering in adults who stutter.
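To make the dependent measure concrete, stuttering frequency in reading studies of this kind is usually expressed as the percentage of syllables stuttered out of syllables read. A minimal sketch with invented counts (not the study's data, and not its ω² computation):

```python
# Sketch: percent syllables stuttered per listening condition (invented counts).
SYLLABLES_READ = 300  # each passage was about 300 syllables long

stuttered = {
    "control (no feedback)": 27,
    "choral reading": 6,
    "reversed, normal speed": 24,
    "reversed, half speed": 15,
}

for condition, count in stuttered.items():
    print(f"{condition}: {100 * count / SYLLABLES_READ:.1f}% syllables stuttered")
```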

4.
The effects of an auditory model on the learning of relative and absolute timing were examined. In 2 experiments, participants attempted to learn to produce a 1,000- or 1,600-ms sequence of 5 key presses with a specific relative-timing pattern. In each experiment, participants were, or were not, provided an auditory model that consisted of a series of tones that were temporally spaced according to the criterion relative-timing pattern. In Experiment 1, participants (n = 14) given the auditory template exhibited better relative- and absolute-timing performance than participants (n = 14) not given the auditory template. In Experiment 2, auditory and no-auditory template groups again were tested, but in that experiment each physical practice participant (n = 16) was paired during acquisition with an observer (n = 16). The observer was privy to all instructions as well as auditory and visual information that was provided the physical practice participant. The results replicated the results of Experiment 1: Relative-timing information was enhanced by the auditory template for both the physical and observation practice participants. Absolute timing was improved only when the auditory model was coupled with physical practice. Consistent with the proposal of D. M. Scully and K. M. Newell (1985), modeled timing information in physical and observational practice benefited the learning of the relative-timing features of the task, but physical practice was required to enhance absolute timing.  相似文献   
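The two dependent measures can be separated with a small worked example: absolute timing error compares the produced sequence duration with the 1,000- or 1,600-ms goal, while relative-timing error compares how the key presses divide that duration with the criterion pattern. The criterion proportions and produced intervals below are invented, and the error formulas are one common choice rather than necessarily those used in the study:

```python
# Sketch: absolute vs relative timing error for one trial (invented numbers).
GOAL_MS = 1000                         # overall sequence duration goal
CRITERION = [0.25, 0.15, 0.35, 0.25]   # hypothetical criterion proportions for the 4 inter-press intervals

produced_ms = [270, 180, 390, 260]     # produced inter-press intervals for one trial

total = sum(produced_ms)                               # 1100 ms
absolute_error = abs(total - GOAL_MS)                  # 100 ms: error in overall duration
proportions = [i / total for i in produced_ms]
relative_error = sum(abs(p - c) for p, c in zip(proportions, CRITERION))  # summed proportional deviation

print(f"absolute timing error: {absolute_error} ms")
print(f"relative timing error: {relative_error:.3f}")
```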

5.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults learning of speech parts can occur at least partially through implicit mechanisms.  相似文献   

6.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.  相似文献   

7.
The study employed a single-subject multiple baseline design to examine the ability of 9 individuals with severe Broca's aphasia or global aphasia to produce graphic symbol sentences of varying syntactical complexity using a software program that turns a computer into a speech output communication device. The sentences ranged in complexity from simple two-word phrases to those with morphological inflections, transformations, and relative clauses. Overall, results indicated that individuals with aphasia are able to access, manipulate, and combine graphic symbols to produce phrases and sentences of varying degrees of syntactical complexity. The findings are discussed in terms of the clinical and public policy implications.  相似文献   

8.
To compare the properties of inner and overt speech, Oppenheim and Dell (2008) counted participants' self-reported speech errors when reciting tongue twisters either overtly or silently and found a bias toward substituting phonemes that resulted in words in both conditions, but a bias toward substituting similar phonemes only when speech was overt. Here, we report 3 experiments revisiting their conclusion that inner speech remains underspecified at the subphonemic level, which they simulated within an activation-feedback framework. In 2 experiments, participants recited tongue twisters that could result in the errorful substitutions of similar or dissimilar phonemes to form real words or nonwords. Both experiments included an auditory masking condition, to gauge the possible impact of loss of auditory feedback on the accuracy of self-reporting of speech errors. In Experiment 1, the stimuli were composed entirely from real words, whereas, in Experiment 2, half the tokens used were nonwords. Although masking did not have any effects, participants were more likely to report substitutions of similar phonemes in both experiments, in inner as well as overt speech. This pattern of results was confirmed in a 3rd experiment using the real-word materials from Oppenheim and Dell (in press). In addition to these findings, a lexical bias effect found in Experiments 1 and 3 disappeared in Experiment 2. Our findings support a view in which plans for inner speech are indeed specified at the feature level, even when there is no intention to articulate words overtly, and in which editing of the plan for errors is implicated.

9.
It is important that tacts are controlled by stimuli across all senses, but teaching tacts to children with autism spectrum disorder (ASD) is often limited to visual stimuli. This study replicated and extended a study on the effects of antecedent-stimulus presentations on the acquisition of auditory tacts. We used a concurrent multiple probe across sets design and an embedded adapted alternating treatments design to evaluate acquisition of auditory tacts by two school-aged boys with ASD when auditory stimuli were presented alone (i.e., isolated) or with corresponding pictures (i.e., compound-with-known and compound-with-unknown). Both participants' responding met the mastery criterion with at least one set regardless of the stimulus presentation, but one participant failed to acquire one set of stimuli in the isolated condition. The isolated condition was rarely the most efficient. We conducted post-training stimulus-control probes and observed disrupted stimulus control in the isolated condition for one participant. Implications for arranging auditory tact instruction are discussed.

10.
The effects of an auditory model on the learning of relative and absolute timing were examined. In 2 experiments, participants attempted to learn to produce a 1,000- or 1,600-ms sequence of 5 key presses with a specific relative-timing pattern. In each experiment, participants were, or were not, provided an auditory model that consisted of a series of tones that were temporally spaced according to the criterion relative-timing pattern. In Experiment 1, participants (n = 14) given the auditory template exhibited better relative- and absolute-timing performance than participants (n = 14) not given the auditory template. In Experiment 2, auditory and no-auditory template groups again were tested, but in that experiment each physical practice participant (n = 16) was paired during acquisition with an observer (n = 16). The observer was privy to all instructions as well as auditory and visual information that was provided the physical practice participant. The results replicated the results of Experiment 1: Relative-timing information was enhanced by the auditory template for both the physical and observation practice participants. Absolute timing was improved only when the auditory model was coupled with physical practice. Consistent with the proposal of D. M. Scully and K. M. Newell (1985), modeled timing information in physical and observational practice benefited the learning of the relative-timing features of the task, but physical practice was required to enhance absolute timing.  相似文献   

11.
Human cognition and behavior are dominated by symbol use. This paper examines the social learning strategies that give rise to symbolic communication. Experiment 1 contrasts an individual‐level account, based on observational learning and cognitive bias, with an inter‐individual account, based on social coordinative learning. Participants played a referential communication game in which they tried to communicate a range of recurring meanings to a partner by drawing, but without using their conventional language. Individual‐level learning, via observation and cognitive bias, was sufficient to produce signs that became increasingly effective, efficient, and shared over games. However, breaking a referential precedent eliminated these benefits. The most effective, most efficient, and most shared signs arose when participants could directly interact with their partner, indicating that social coordinative learning is important to the creation of shared symbols. Experiment 2 investigated the contribution of two distinct aspects of social interaction: behavior alignment and concurrent partner feedback. Each played a complementary role in the creation of shared symbols: Behavior alignment primarily drove communication effectiveness, and partner feedback primarily drove the efficiency of the evolved signs. In conclusion, inter‐individual social coordinative learning is important to the evolution of effective, efficient, and shared symbols.  相似文献   

12.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.  相似文献   
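The abstract mentions a modified adaptive psychophysical staircase for measuring detection in noise. As a generic illustration only (not the authors' modified procedure), a standard 2-down/1-up staircase lowers the signal-to-noise ratio after two consecutive correct detections and raises it after each miss, converging near 70.7% correct; the toy observer, starting level, and step size below are assumptions:

```python
# Generic 2-down/1-up adaptive staircase (illustrative; not the authors' modified procedure).
import random

def present_trial(snr_db):
    """Toy observer: detection becomes more likely as SNR increases (stand-in for a real trial)."""
    p_correct = min(0.99, max(0.05, 0.5 + 0.04 * (snr_db + 5)))
    return random.random() < p_correct

snr_db, step_db = 0.0, 2.0
correct_streak, reversals, last_direction = 0, [], None

while len(reversals) < 8:
    if present_trial(snr_db):
        correct_streak += 1
        if correct_streak < 2:                  # only the 2nd consecutive correct changes the level
            continue
        correct_streak, direction = 0, "down"   # two correct in a row -> harder (lower SNR)
    else:
        correct_streak, direction = 0, "up"     # any miss -> easier (higher SNR)
    if last_direction is not None and direction != last_direction:
        reversals.append(snr_db)                # record the level at which the run reversed
    last_direction = direction
    snr_db += step_db if direction == "up" else -step_db

print(f"estimated threshold: {sum(reversals[-6:]) / 6:.1f} dB SNR")
```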

13.
We investigated the role of cross-modal links in spatial attention in modulating the efficiency of dual-task performance. The difficulty of combining speech shadowing with a simulated driving task was modulated by the spatial location from which the speech was presented. In both single- and dual-task conditions, participants found it significantly easier to shadow one of two auditory streams when the relevant speech was presented from directly in front of them, rather than from the side. This frontal speech advantage was more pronounced when participants performed the demanding simulated driving task at the same time as shadowing than when they performed the shadowing task alone. These results demonstrate that people process auditory information more efficiently (with a lower overall dual-task decrement) when relevant auditory and visual stimuli are presented from the same, rather than different, spatial locations. These results are related to recent findings showing that there are extensive cross-modal links in spatial attention, and have clear implications for the design of better user interfaces.

14.
Jordan TR, Abedipour L. Perception, 2010, 39(9): 1283-1285
Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.  相似文献   

15.
Immediate serial recall of visually presented verbal stimuli is impaired by the presence of irrelevant auditory background speech, the so-called irrelevant speech effect. Two of the three main accounts of this effect place restrictions on when it will be observed, limiting its occurrence either to items processed by the phonological loop (the phonological loop hypothesis) or to items that are not too dissimilar from the irrelevant speech (the feature model). A third, the object-oriented episodic record (O-OER) model, requires only that the memory task involves seriation. The present studies test these three accounts by examining whether irrelevant auditory speech will interfere with a task that does not involve the phonological loop, does not use stimuli that are compatible with those to be remembered, but does require seriation. Two experiments found that irrelevant speech led to lower levels of performance in a visual statistical learning task, offering more support for the O-OER model and posing a challenge for the other two accounts.  相似文献   

16.
Ian M. Lyons. Cognition, 2009, 113(2): 189-204
In two different contexts, we examined the hypothesis that individual differences in working memory (WM) capacity are related to the tendency to infer complex, ordinal relationships between numerical symbols. In Experiment 1, we assessed whether this tendency arises in a learning context that involves mapping novel symbols to quantities by training adult participants to associate dot-quantities with novel symbols, the overall relative order of which had to be inferred. Performance was best for participants who were higher in WM capacity (HWMs). HWMs also learned ordinal information about the symbols that lower WM individuals (LWMs) did not. In Experiment 2, we examined whether WM relates to performance when participants are explicitly instructed to make numerical order judgments about highly enculturated numerical symbols by having participants indicate whether sets of three Arabic numerals were in increasing order. All participants responded faster when sequential sets (3-4-5) were in order than when they were not. However, only HWMs responded faster when non-sequential, patterned sets (1-3-5) were in order, suggesting they were accessing ordinal associations that LWMs were not. Taken together, these experiments indicate that WM capacity plays a key role in extending symbolic number representations beyond their quantity referents to include symbol-symbol ordinal associations, both in a learning context and in terms of explicitly accessing ordinal relationships in highly enculturated stimuli.  相似文献   
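As a concrete rendering of the Experiment 2 stimuli (an illustrative sketch, not the experimental code), the order judgment asks whether a numeral triplet is strictly increasing, and the sequential versus non-sequential patterned distinction can be captured by the step size between adjacent numerals:

```python
# Sketch of the triplet classification used to describe the order-judgment stimuli (illustrative).
def is_increasing(triplet):
    a, b, c = triplet
    return a < b < c

def triplet_type(triplet):
    a, b, c = sorted(triplet)
    if b - a == 1 and c - b == 1:
        return "sequential"        # e.g., 3-4-5
    if b - a == c - b:
        return "patterned"         # constant step larger than 1, e.g., 1-3-5
    return "neither"

for t in [(3, 4, 5), (1, 3, 5), (5, 3, 1), (2, 5, 6)]:
    print(t, is_increasing(t), triplet_type(t))
```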

17.
Experiment 1 extended J. S. Nairne and W. L. McNabb's (1985) counting procedure for presenting numerical stimuli to examine the modality effect. The present authors presented participants with dots and beeps and instructed participants to count the items to derive to-be-remembered numbers. In addition, the authors presented numbers as visual and auditory symbols, and participants recalled items by using free-serial written recall. Experiment 1 demonstrated primacy effects, recency effects, and modality effects for visual and auditory symbols and for counts of dots and beeps. Experiment 2 replicated the procedure in Experiment 1 using strict-serial written recall instead of free-serial written recall. The authors demonstrated primacy and recency effects across all 4 presentation conditions and found a modality effect for numbers that the authors presented as symbols. However, the authors found no modality effect when they presented numbers as counts of beeps and dots. The authors discuss the implications of the results in terms of methods for testing modality effects.  相似文献   
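The difference between the two recall tests is essentially a scoring rule. On one common interpretation (hedged here, since the abstract does not spell out the scoring details), strict-serial scoring credits an item only when it is written in its original list position, whereas free scoring credits it wherever it appears in the output:

```python
# Sketch of the two scoring rules (one common interpretation; the study's exact rules may differ).
presented = [7, 3, 9, 2, 6, 4]
recalled  = [7, 9, 3, 2, 4, 6]

strict_serial = sum(p == r for p, r in zip(presented, recalled))   # credit only in-position items
free_score    = len(set(presented) & set(recalled))                # credit items recalled anywhere

print(f"strict-serial score: {strict_serial}/6")   # 2/6 (positions 1 and 4)
print(f"free score:          {free_score}/6")      # 6/6
```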

18.
The emergence of human communication systems is typically investigated via 2 approaches with complementary strengths and weaknesses: naturalistic studies and computer simulations. This study was conducted with a method that combines these approaches. Pairs of participants played video games requiring communication. Members of a pair were physically separated but exchanged graphic signals through a medium that prevented the use of standard symbols (e.g., letters). Communication systems emerged and developed rapidly during the games, integrating the use of explicit signs with information implicitly available to players and silent behavior-coordinating procedures. The systems that emerged suggest 3 conclusions: (a) signs originate from different mappings; (b) sign systems develop parsimoniously; (c) sign forms are perceptually distinct, easy to produce, and tolerant to variations.  相似文献   

19.
刘文理, 乐国安. 心理学报, 2012, 44(5): 585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant category continuum and found that the pure tones affected perception of the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that pure or complex tones matching the vowel's formant frequencies speeded vowel identification, showing a priming effect. Both experiments consistently showed that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also involves a prespeech stage of spectral feature analysis, consistent with the auditory theory of speech perception.
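Experiment 2's pure-tone primes can be illustrated with a few lines of synthesis code; the sample rate, duration, and the 500 Hz "formant" value below are stand-in assumptions, not the parameters actually used in the study:

```python
# Sketch: synthesize a pure-tone prime at an assumed formant frequency (illustrative parameters).
import math

SAMPLE_RATE = 16000   # Hz (assumed)
DURATION_S = 0.2      # 200 ms prime (assumed)
FORMANT_HZ = 500.0    # stand-in for a vowel's first-formant frequency

n_samples = int(SAMPLE_RATE * DURATION_S)
tone = [math.sin(2 * math.pi * FORMANT_HZ * t / SAMPLE_RATE) for t in range(n_samples)]

print(f"{n_samples} samples, peak amplitude {max(tone):.2f}")
```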

20.
In view of the frequent clinical use of external auditory stimuli in fluency building programs, the purpose of the present experiment was to compare the effects of rhythmic pacing, delayed auditory feedback, and high intensity masking noise on the frequency of stuttering by dysfluency type. Twelve normal hearing young adult stutterers completed an oral reading (approximately 250 syllables) and conversational speech task (3 min) while listening to the three auditory stimuli and during a control condition presented in random order. The results demonstrated that during oral reading all three auditory stimuli were associated with significant reductions in stuttering frequency. However, during conversational speech, only the metronome produced a significant reduction in total stuttering frequency. Individual dysfluency types were not differentially affected by the three auditory stimuli.  相似文献   
