Similar Literature
20 similar references found
1.
2.
The use of sinusoidal replicas of speech signals reveals that listeners can perceive speech solely from temporally coherent spectral variation of nonspeech acoustic elements. This sensitivity to coherent change in acoustic stimulation is analogous to the sensitivity to change in configurations of visual stimuli, as detailed by Johansson. The similarities and potential differences between these two kinds of perceptual functions are described.

3.
When speech is rapidly alternated between the two ears, intelligibility declines as rates approach 3–5 switching cycles/sec and then paradoxically returns to a good level beyond that point. The present study examines previous explanations of the phenomenon by comparing intelligibility of alternated speech with that for presentation of an interrupted message to a single ear. Results favor one of two possible explanations, and a theoretical model to account for the effect is proposed.

4.
5.
By applying new methods of analysis to the physical signal, a number of researchers have provided evidence which suggests that there may be invariant acoustic cues which serve to identify the presence of particular phonetic segments (e.g., Kewley-Port, 1980; Searle, Jacobson, & Rayment, 1979; Stevens & Blumstein, 1978). Whereas previous studies have focused upon the existence of invariant properties present in the physical stimulus, the present study examines the existence of any invariant information available in the psychological stimulus. For this purpose, subjects were asked to classify either a series of full-CV syllables ([bi], [bε], [bo], [??], [di], [dε], [do], [??]) or one of two series of chirp stimuli consisting of information available in the first 30 msec of each syllable. The full-formant chirp stimuli consisted of the first 30 msec of each syllable, whereas the two-formant chirps were composed of the first 30 msec of only the second and third formants. The object of the present study was to determine whether or not there was sufficient information available in either the full- or two-formant chirp series to allow subjects to group the stimuli into two classes corresponding to the identity of the initial consonant of the syllables (i.e., [b] or [d]). A series of classification tasks was used, ranging from a completely free sorting task to a perceptual learning task with experimenter-imposed classifications. The results suggest that there is information available in the full-formant chirps, but not in the two-formant chirps, which allows subjects to group the sounds into classes corresponding to the identity of the initial consonant sounds.

6.
We performed two experiments comparing the effects of speech production and speech comprehension on simulated driving performance. In both experiments, participants completed a speech task and a simulated driving task under single‐ and dual‐task conditions, with language materials matched for linguistic complexity. In Experiment 1, concurrent production and comprehension resulted in more variable velocity compared to driving alone. Experiment 2 replicated these effects in a more difficult simulated driving environment, with participants showing larger and more variable headway times when speaking or listening while driving than when just driving. In both experiments, concurrent production yielded better control of lane position relative to single‐task performance; concurrent comprehension had little impact on control of lane position. On all other measures, production and comprehension had very similar effects on driving. The results show, in line with previous work, that there are detrimental consequences for driving of concurrent language use. Our findings imply that these detrimental consequences may be roughly the same whether drivers are producing speech or comprehending it. Copyright © 2005 John Wiley & Sons, Ltd.

7.
8.
9.
The TRACE model of speech perception

10.
If one listens to a meaningless syllable that is repeated over and over, he will hear it undergo a variety of changes. These changes are extremely systematic in character and can be described phonetically in terms of reorganizations of the phones constituting the syllable and changes in a restricted set of distinctive features. When a new syllable is presented to a subject after he has listened to a particular syllable that was repeated, he will misreport the new (test) syllable. His misperception of the test syllable is related to the changes occurring in the representation of the original repeated syllable just prior to the presentation of the test syllable.

11.
Two studies examine the effects of speech styles and task interdependence on status conferral judgments. In both studies, participants were exposed to an individual who used either a powerful or powerless speech style in a low or high task interdependence group, and made judgments about the amount of status to confer to the individual. When task interdependence was low, participants conferred more status to powerful speakers, whereas when interdependence was high, participants conferred more status to powerless speakers. Furthermore, Study 2 demonstrated that speech styles influenced trait inferences about the speaker (agency and communality), but these traits were weighted differently in status conferral judgments across groups. These findings provide insight into both the relationship between observed behaviors and status positions and the decision process underlying status conferral judgments.

12.
The effects of SpeechEasy on stuttering frequency, stuttering severity self-ratings, speech rate, and speech naturalness for 31 adults who stutter were examined. Speech measures were compared for samples obtained with and without the device in place in a dispensing setting. Mean stuttering frequencies were reduced by 79% and 61% for the device compared to the control conditions on reading and monologue tasks, respectively. Mean severity self-ratings decreased by 3.5 points for oral reading and 2.7 for monologue on a 9-point scale. Despite dramatic reductions in stuttering frequency, mean global speech rates in the device condition increased by only 8% in the reading task and 15% for the monologue task, and were well below normal. Further, complete elimination of stuttering was not associated with normalized speech rates. Nevertheless, mean ratings of speech naturalness improved markedly in the device compared to the control condition and, at 3.3 and 3.2 for reading and monologue, respectively, were only slightly outside the normal range. These results show that SpeechEasy produced improved speech outcomes in an assessment setting. However, the findings raise the issue of a possible contribution of slowed speech rate to the stuttering reduction effect, especially given that participants were instructed to speak chorally with the delayed signal as part of the device protocol's active-listening procedure. Study of device effects in situations of daily living over the long term is necessary to fully explore its treatment potential, especially with respect to long-term stability. Educational objectives: The reader will be able to discuss and evaluate: (1) issues pertinent to evaluating treatment benefits of fluency aids and (2) the effects of SpeechEasy on stuttering frequency, speech rate, and speech naturalness during testing in a dispensing setting for a relatively large sample of adults who stutter.

13.
Studies of filled and silent pauses performed in the last two decades are reviewed in order to determine the significance of pauses for the speaker. Following a brief history, the theoretical implications of pause location are examined and the relevant studies summarized. In addition, the functional significance of pauses is considered in terms of cognitive, affective-state, and social interaction variables.

14.
This article examines caregiver speech to young children. The authors obtained several measures of the speech used to children during early language development (14-30 months). For all measures, they found substantial variation across individuals and subgroups. Speech patterns vary with caregiver education, and the differences are maintained over time. While there are distinct levels of complexity for different caregivers, there is a common pattern of increase across age within the range that characterizes each educational group. Thus, caregiver speech exhibits both long-standing patterns of linguistic behavior and adjustment for the interlocutor. This information about the variability of speech by individual caregivers provides a framework for systematic study of the role of input in language acquisition.

15.
Massaro, D. W., & Chen, T. H. (2008). Psychonomic Bulletin & Review, 15(2), 453–457; discussion 458–462.
Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.
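As a minimal illustration of the model named in this abstract: the FLMP's standard two-alternative integration rule combines the fuzzy truth values supporting a response category from each source (e.g., auditory and visual) multiplicatively and then normalizes against the support for the alternative. The sketch below assumes this published two-source, two-alternative form; the function name and example values are hypothetical, not taken from the abstract.

```python
def flmp_response_prob(auditory: float, visual: float) -> float:
    """Probability of choosing category A (say, /ba/ vs. /da/), given
    fuzzy truth values in [0, 1] for how well the auditory and visual
    information each support A.  Sources are integrated multiplicatively
    and normalized by total support (the FLMP's relative goodness rule)."""
    support_a = auditory * visual
    support_alt = (1.0 - auditory) * (1.0 - visual)
    return support_a / (support_a + support_alt)

# Ambiguous audio (0.5) combined with clear visual support (0.9):
# the unambiguous source dominates the judgment.
print(round(flmp_response_prob(0.5, 0.9), 2))  # 0.9
```

The multiplicative combination is what gives the FLMP its characteristic prediction that an ambiguous source leaves the decision to the clearer one, which is the sense in which it is a "prototypical pattern recognition model" rather than a gesture-recovery mechanism.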

16.
17.
It has been proposed that speech is specified by the eye, the ear, and even the skin. Kuhl and Meltzoff (1984) showed that 4-month-olds could lip-read to an extent. Given the age of the infants, it was not clear whether this was a learned skill or a by-product of the primary auditory process. This paper presents evidence that neonate infants (less than 33 h old) show patterns of intermodal interaction virtually identical to those of 4-month-olds. Since they are neonates, it is unlikely that learning was involved. The results indicate that human speech is specified by both eye and ear at an age when built-in structural sensitivities provide the most plausible explanation.

18.
19.
In this article, we examine the controversy over and partial ban of Barack Obama's 2009 school speech. Drawing on Lacan's analysis of Poe's "The Purloined Letter", we argue that the speech was purloined in the way the letter was in Poe's story; that is, its course was prolonged. Further, the banning of the speech had less to do with the speech's content than it did with the role of the speech as a pure signifier – a structuring moment around which political subjectivities were staked out. These subjectivities, we argue, draw on the symbolic universe that Spillers refers to as the "American Grammar", which never directly references race while being fully constituted by racial difference. The American Grammar constructs a historically and spatially contingent subject of a white, male political leader as a universal paternal authority figure whose legitimacy is based on protecting the "innocent" and the "intimate (home)" from external threat. In efforts to ban Obama's school speech and protect the nation's "innocent school children", two specific elisions were made: 1) Obama was constructed in terms of what we, following Morrison, term an Africanist presence, made external to the nation, and then labeled a threat to it, and 2) the space of the school was constructed as private and apolitical, precluding Obama's access.

20.
Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
