Similar Articles
20 similar articles found.
1.
In two studies based on Stanley Milgram’s original pilots, we present the first systematic examination of cyranoids as social psychological research tools. A cyranoid is created by cooperatively joining in real-time the body of one person with speech generated by another via covert speech shadowing. The resulting hybrid persona can subsequently interact with third parties face-to-face. We show that naïve interlocutors perceive a cyranoid to be a unified, autonomously communicating person, evidence for a phenomenon Milgram termed the “cyranic illusion.” We also show that creating cyranoids composed of contrasting identities (a child speaking adult-generated words and vice versa) can be used to study how stereotyping and person perception are mediated by inner (dispositional) vs. outer (physical) identity. Our results establish the cyranoid method as a unique means of obtaining experimental control over inner and outer identities within social interactions rich in mundane realism.

2.
Young children have an overall preference for child‐directed speech (CDS) over adult‐directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about processing of CDS at short, sub‐second timescales. How do the moment‐to‐moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word‐level contour shapes: ‘fall’, ‘rise’, ‘hill’, and ‘valley’. In Study 2, we adapted a measure from adult attention research—pupil size synchrony—to quantify real‐time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers’ attention when listening to the four word‐level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers’ learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers’ attention but guide their learning of new words. By revealing a physiological response to the real‐time dynamics of CDS, this investigation yields a new sub‐second framework for understanding young children's engagement with one of the most important signals in their environment.
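The clustering step described above can be pictured with a small sketch. The Python snippet below is a hedged illustration, not the authors' code: it assumes word-level F0 contours have already been extracted, resampled to a fixed length, and z-scored so that only shape matters, and then groups them with agglomerative (Ward) clustering into four categories analogous to ‘fall’, ‘rise’, ‘hill’, and ‘valley’. The synthetic contours and all names are illustrative.

```python
# Minimal sketch (not the authors' pipeline): clustering word-level pitch
# contours into shape categories, using synthetic contours as stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

def synthetic_contour(kind, n_points=20):
    """Toy stand-in for an F0 contour extracted over one word."""
    t = np.linspace(0, 1, n_points)
    shapes = {
        "fall": -t,
        "rise": t,
        "hill": -(t - 0.5) ** 2,
        "valley": (t - 0.5) ** 2,
    }
    return shapes[kind] + rng.normal(0, 0.05, n_points)

# Build a toy corpus of contours and z-score each one (shape, not register).
kinds = ["fall", "rise", "hill", "valley"] * 25
contours = np.array([synthetic_contour(k) for k in kinds])
contours = (contours - contours.mean(axis=1, keepdims=True)) / contours.std(axis=1, keepdims=True)

# Agglomerative (Ward) clustering, cut into four clusters.
Z = linkage(contours, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```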

3.
In forensic settings, lay (nonexpert) listeners may be required to compare voice samples for identity. In two experiments we investigated the effect of background noise and variations in speaking style on performance. In each trial, participants heard two recordings, responded whether the voices belonged to the same person, and provided a confidence rating. In Experiment 1, the first recording featured read speech and the second featured read or spontaneous speech. Both recordings were presented in quiet or with background noise. Accuracy was highest when recordings featured the same speaking style. In Experiment 2, background noise occurred in either the first or the second recording. Accuracy was higher when it occurred in the second. The overall results reveal that both speaking style and background noise can disrupt accuracy. Although there is a relationship between confidence and accuracy in all conditions, it is variable. The forensic implications of these findings are discussed.

4.
Pre-delinquent peers in Achievement Place (a community-based, family-style rehabilitation program based on a token economy) were given points (token reinforcement) to modify the articulation errors of two boys. In Experiment I, using a multiple baseline experimental design, error words involving the /l/, /r/, /th/, and /ting/ sounds were successfully treated by both a group of peers and by individual peers. Also, generalization occurred to words that were not trained. The speech correction procedure used by the peers involved a number of variables including modelling, peer approval, contingent points, and feedback. The individual role of each of these variables was not experimentally analyzed, but it was demonstrated that peers could function as speech therapists without instructions, feedback, or the presence of an adult. It was also found that payment of points to peers for detecting correct articulations produced closer agreement with the experimenter than when they were paid points for finding incorrect articulations. The results were replicated in a second experiment with another subject who had similar articulation errors. In addition, the second experiment showed that peer speech correction procedures resulted in some generalization to the correct use of target words in sentences and significant improvements on standard tests of articulation.

5.
This study longitudinally examined the production of pointing in four Spanish 1-year-old and four Spanish 2-year-old children in interactive situations with their mothers at home over the course of one year. Three aspects were analyzed: a) the functions of the pointing gesture, their accurate comprehension by the interlocutor (mother or child), and their order of emergence in the child; b) whether or not there were differences in the production of pointing according to who initiated the interaction; and c) whether maternal and child speech were related to maternal and child pointing production. The results showed that the pointing function of showing is the most frequent for both children and mothers from groups 1 and 2, and the first to emerge, followed by the informing, requesting-object, requesting-action, and requesting-cooperation functions. The accuracy with which these intentions were comprehended was found to be very high for both mother and child. Pointing production was greater when the speaker initiated the interaction than when the other person did, indicating that gestures follow the turn-taking system. Finally, the production of pointing to show in children and mothers was found to be related to maternal and child speech, while pointing to request cooperation triggered the process of joint activity between mother and child.

6.
Kim J, Sironic A, Davis C. Perception, 2011, 40(7): 853-862.
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.
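As a rough illustration of relating articulatory motion to speech acoustics, the sketch below correlates a toy face-marker motion trace with the RMS amplitude envelope of an utterance. It is an assumption-laden stand-in, not the authors' tracking or analysis pipeline; all signals and parameter values are synthetic.

```python
# Illustrative sketch: correlation between a marker-motion trace and the
# acoustic amplitude envelope, both at a common frame rate (toy data only).
import numpy as np

def amplitude_envelope(audio, frame_len):
    """RMS energy per non-overlapping frame."""
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

rng = np.random.default_rng(0)
audio = rng.normal(size=22050)              # toy 1 s of audio at 22.05 kHz
envelope = amplitude_envelope(audio, 441)   # 50 frames per second
marker_motion = envelope * 0.8 + rng.normal(0, 0.01, len(envelope))  # toy marker speed

# Pearson correlation between articulatory motion and acoustics.
r = np.corrcoef(marker_motion, envelope)[0, 1]
print(f"motion-acoustics correlation: r = {r:.2f}")
```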

7.
Speakers tend to prepare their nouns immediately before saying them, rather than preparing them further in advance. To test the limits of this last-second preparation, speakers were asked to name object pairs without pausing between names. There was not enough time to prepare the second name while articulating the first, so the speakers’ delay in starting to say the first name was based on the amount of time available to prepare the second name during speech. Before speaking, they spent more time preparing a second name (e.g., carrot) when the first name was monosyllabic (e.g., wig) rather than multisyllabic (e.g., windmill). When additional words intervened between names, the length of the first name became less important and speech began earlier. Preparation differences were reflected in speech latencies, durations, and eye movements. The results suggest that speakers are sensitive to the length of prepared words and the time needed for preparing subsequent words. They can use this information to increase fluency while minimizing word buffering.

8.
The goal of the present study was to examine the functioning of late bilinguals in their second language. Specifically, we asked how native and non-native Hebrew-speaking listeners perceive foreign-accented and natively accented Hebrew speech. To achieve this goal we used the gating paradigm to explore the ability of healthy late fluent bilinguals (Russian and Arabic native speakers) to recognize words in L2 (Hebrew) when they were spoken in an accent like their own, a native accent (Hebrew speakers), or another foreign accent (American accent). The data revealed that for Hebrew speakers, there was no effect of accent, whereas for the two bilingual groups (Russian and Arabic native speakers), stimuli spoken with an accent like their own or with the native Hebrew accent required significantly less phonological information for recognition than stimuli spoken with the other foreign accent. The results support the hypothesis that phonological assimilation works in a similar manner in these two different groups.

9.
Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but not consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise of the formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infant at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and the spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while the pitch is more extreme in IDS to 15-month-olds. These results show that enhanced positive affect is the main factor influencing Dutch mothers’ realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers’ expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be secondary effects of affect.
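One way to quantify the vowel-space difference described above is the area of the triangle spanned by the corner vowels' mean F1/F2 values. The sketch below is illustrative only; the formant values are invented and the computation is not taken from the paper.

```python
# Hedged sketch: vowel-space area from mean F1/F2 of the corner vowels,
# compared between adult-directed (ADS) and infant-directed (IDS) speech.
import numpy as np

def vowel_space_area(corners):
    """Area (Hz^2) of the triangle spanned by (F1, F2) points for /a/, /i/, /u/."""
    (x1, y1), (x2, y2), (x3, y3) = corners
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

ads = {"a": (800, 1300), "i": (300, 2300), "u": (320, 800)}   # toy ADS means (F1, F2) in Hz
ids = {"a": (780, 1400), "i": (330, 2450), "u": (360, 950)}   # toy IDS means (F1, F2) in Hz

for label, formants in (("ADS", ads), ("IDS", ids)):
    area = vowel_space_area([formants[v] for v in "aiu"])
    print(f"{label} vowel-space area: {area:,.0f} Hz^2")
```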

10.
Phonemic deficits in developmental dyslexia
Summary: The present study explored a possible relationship between reading difficulties and speech difficulties. Dyslexic and normal readers, matched for Reading Age, were compared first on a reading task and secondly on a speaking task. In the first experiment, the two groups were asked to read nonsense words aloud. Both groups were able to read one-syllable nonwords equally well, but the dyslexics had more difficulty than the normal readers when asked to read two-syllable nonwords. Moreover, they found two-syllable nonwords containing consonant clusters particularly difficult. The probability of their making an error increased with the number of consonant clusters. In the second experiment, the subjects were required to repeat real words and nonsense words of two, three, or four syllables. Both groups found nonsense words more difficult to repeat than real words. However, the relative difficulty of nonsense words over real words was greater for the dyslexic group. Their difficulty was especially marked when they had to repeat four-syllable nonsense words. Thus, in both experiments the dyslexic readers were more affected by the phonological complexity of the stimuli than the normal readers were. Hence, it was suggested that the dyslexic readers tested were subject to a general phonemic deficit which affected their ability to process both written and spoken words.

11.
When speaking to infants, adults typically alter the acoustic properties of their speech in a variety of ways compared with how they speak to other adults; for example, they use higher pitch, increased pitch range, more pitch variability, and slower speech rate. Research shows that these vocal changes happen similarly across industrialized populations, but no studies have carefully examined basic acoustic properties of infant-directed (ID) speech in traditional societies. Moreover, some scholars have suggested that ID speech is culturally specific and does not exist in some small-scale societies. We examined fundamental frequency (F0) production and speech rate in mothers speaking to both infants and adults in three cultures: Fijians, Kenyans, and North Americans. In all three cultures, speakers used higher F0 when speaking to infants relative to when speaking to other adults, and they also used significantly greater F0 variation and fewer syllables per second. Previous research has found that American mothers tend to use higher pitch than do mothers from other cultures, but when maternal education was controlled in the current study, we did not find a significant difference in average pitch across our three populations. This is the first research systematically comparing spontaneous ID and adult-directed speech prosody between Western and traditional societies, and it is consistent with a large body of evidence showing similar acoustic patterns in ID speech across industrialized populations.
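A minimal sketch of the kind of prosodic summary compared here (mean F0, F0 range, syllables per second) is given below. It assumes F0 tracks and syllable counts have already been obtained with a standard pitch tracker; the numbers are toy values, not data from the study.

```python
# Hedged sketch (not the study's pipeline): per-recording prosody summary,
# assuming an F0 track in Hz (np.nan for unvoiced frames) and a syllable count.
import numpy as np

def prosody_summary(f0_track, n_syllables, duration_s):
    """Return mean F0, F0 range in semitones, and speech rate for one recording."""
    voiced = f0_track[~np.isnan(f0_track)]
    return {
        "mean_f0_hz": float(voiced.mean()),
        "f0_range_semitones": float(12 * np.log2(voiced.max() / voiced.min())),
        "syllables_per_second": n_syllables / duration_s,
    }

# Toy comparison: a nominal infant-directed vs. adult-directed utterance.
ids = prosody_summary(np.array([250, 300, 380, np.nan, 320, 260.0]), n_syllables=4, duration_s=1.6)
ads = prosody_summary(np.array([190, 200, 210, np.nan, 205, 195.0]), n_syllables=6, duration_s=1.5)
print("IDS:", ids)
print("ADS:", ads)
```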

12.
Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early‐learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
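The kind of word-level analysis described here can be sketched with simulated norms: rank correlations between iconicity ratings, age of acquisition, and frequency in child speech. All values below are simulated and the variable names are illustrative, not the study's data or code.

```python
# Hedged sketch (simulated norms, not the study's word data): rank correlations
# between iconicity, age of acquisition, and frequency in child speech.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_words = 2000
iconicity = rng.normal(0, 1, n_words)                              # hypothetical ratings
age_of_acquisition = 5 - 0.5 * iconicity + rng.normal(0, 1, n_words)
child_frequency = np.exp(0.4 * iconicity + rng.normal(0, 1, n_words))

rho_aoa, p_aoa = stats.spearmanr(iconicity, age_of_acquisition)
rho_freq, p_freq = stats.spearmanr(iconicity, child_frequency)
print(f"iconicity vs. AoA: rho = {rho_aoa:.2f} (p = {p_aoa:.3g})")
print(f"iconicity vs. child frequency: rho = {rho_freq:.2f} (p = {p_freq:.3g})")
```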

13.
A series of experiments was conducted to investigate the effects of stimulus variability on the memory representations for spoken words. A serial recall task was used to study the effects of changes in speaking rate, talker variability, and overall amplitude on the initial encoding, rehearsal, and recall of lists of spoken words. Interstimulus interval (ISI) was manipulated to determine the time course and nature of processing. The results indicated that at short ISIs, variations in both talker and speaking rate imposed a processing cost that was reflected in poorer serial recall for the primacy portion of word lists. At longer ISIs, however, variation in talker characteristics resulted in improved recall in initial list positions, whereas variation in speaking rate had no effect on recall performance. Amplitude variability had no effect on serial recall across all ISIs. Taken together, these results suggest that encoding of stimulus dimensions such as talker characteristics, speaking rate, and overall amplitude may be the result of distinct perceptual operations. The effects of these sources of stimulus variability in speech are discussed with regard to perceptual saliency, processing demands, and memory representation for spoken words.

14.
In contrast to the many published accounts of the disfluent repetition of sounds at the beginnings of words, cases where it is predominantly the final parts of words that are repeated have been reported relatively rarely. With few exceptions, those studies that have been published have described either pre-school children or neurologically impaired subjects. The purpose of this case report was to describe final part-word repetitions in the speech of two school-age boys of normal intelligence with no known neurological lesions. Their speech was recorded during spontaneous conversation, reading, and sentence repetition. The repetitions occurred in all three speaking conditions, although the majority of instances were observed in spontaneous speech, and on both content words and function words. The participants exhibited no apparent awareness of the disfluencies, no abnormal muscle tension, and no accessory behaviours. Each child produced word-final repeated fragments whose phonological structure was highly predictable according to his individual set of rules. The results are discussed in terms of possible motor and cognitive explanations for the disfluencies. EDUCATIONAL OBJECTIVES: As a result of this activity, the participant will be able to: (1) summarize prior research into final part-word repetition; (2) describe the detailed characteristics of final part-word repetitions as displayed by two children of normal intelligence; and (3) discuss ways in which the behaviour might be explained as part of a model of speech production.

15.
The intelligibility of word lists subjected to various types of spectral filtering has been studied extensively. Although words used for communication are usually present in sentences rather than lists, there has been no systematic report of the intelligibility of lexical components of narrowband sentences. In the present study, we found that surprisingly little spectral information is required to identify component words when sentences are heard through narrow spectral slits. Four hundred twenty listeners (21 groups of 20 subjects) were each presented with 100 bandpass-filtered CID (“everyday speech”) sentences; separate groups received center frequencies of 370, 530, 750, 1100, 1500, 2100, 3000, 4200, and 6000 Hz at 70 dBA SPL. In Experiment 1, intelligibility of single 1/3-octave bands with steep filter slopes (96 dB/octave) averaged more than 95% for sentences centered at 1100, 1500, and 2100 Hz. In Experiment 2, we used the same center frequencies with extremely narrow bands (slopes of 115 dB/octave intersecting at the center frequency, resulting in a nominal bandwidth of 1/20 octave). Despite the severe spectral tilt for all frequencies of this impoverished spectrum, intelligibility remained relatively high for most bands, with the greatest intelligibility (77%) at 1500 Hz. In Experiments 1 and 2, the bands centered at 370 and 6000 Hz provided little useful information when presented individually, but in each experiment they interacted synergistically when combined. The present findings demonstrate the adaptive flexibility of mechanisms used for speech perception and are discussed in the context of the LAME model of opportunistic multilevel processing.
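For a concrete picture of a narrowband condition, the sketch below builds a 1/3-octave band-pass filter around a centre frequency with SciPy. The filter design is an assumption (a high-order Butterworth approximating the quoted 96 dB/octave slopes), not the authors' exact filtering; all signal values are toy data.

```python
# Minimal sketch (assumed design, not the authors' filters): a 1/3-octave
# band-pass around a centre frequency. A Butterworth band-pass skirt rolls off
# at roughly 6 dB/octave per design order, so order 16 approximates the
# 96 dB/octave slopes described above; second-order sections keep the
# high-order design numerically stable.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def third_octave_band(signal, fs, center_hz, order=16):
    low = center_hz / 2 ** (1 / 6)    # 1/3 octave = +/- 1/6 octave around the centre
    high = center_hz * 2 ** (1 / 6)
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Toy usage: filter one second of noise into the 1500 Hz band.
fs = 22050
noise = np.random.default_rng(0).normal(size=fs)
band = third_octave_band(noise, fs, center_hz=1500)
print(band.shape)
```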

16.
This study investigated the effects of imagining speaking aloud, sensorimotor feedback, and auditory feedback on respondents' reports of having spoken aloud and examined the relationship between responses to “spoken aloud” in the reality-monitoring task and the sense of agency over speech. After speaking aloud, lip-synching, or imagining speaking, participants were asked whether each word had actually been spoken. The number of endorsements of “spoken aloud” was higher for words spoken aloud than for those lip-synched and higher for words lip-synched than for those imagined as having been spoken aloud. When participants were prevented by white noise from receiving auditory feedback, the discriminability of words spoken aloud decreased, and when auditory feedback was altered, reports of having spoken aloud decreased even though participants had actually done so. It was also found that those who have had auditory hallucination-like experiences were less able than were those without such experiences to discriminate the words spoken aloud, suggesting that endorsements of having “spoken aloud” in the reality-monitoring task reflected a sense of agency over speech. These results were explained in terms of the source-monitoring framework, and we proposed a revised forward model of speech in order to investigate auditory hallucinations.

17.
Teachers frequently deal with unusual and perplexing behavioral problems in their classes. This study demonstrates how spontaneous and prompted speech were produced in a six-year-old mute child by a first-grade teacher and her aide. A reinforcement system for peer-prompted speech and spontaneous speech was employed in three separate school classes in a multiple-baseline fashion. The reinforcement system produced prompted and spontaneous speech in each situation. Postchecks in the second grade indicated the child was still speaking and conversing spontaneously with his peers. This study suggests a method that teachers can use in the classroom to deal with this severely handicapping condition.

18.
SPEECH EVENTS, LANGUAGE DEVELOPMENT, AND THE CLINICAL SITUATION
Psychoanalysis brings about psychic change by the mediation of speech. This paper reflects upon the significance of the structure and developmental organisation of the speech event as a verbal and non-verbal unit composed of semantically and prosodically encoded messages, interactions and emotional contact between partners. Spoken words communicate semantic meanings and the affects of a given speech event. Words carry personal emotional meanings which are inseparable from their referential significance. Such emotional meanings are very hard to articulate in words. They are conveyed by the ineffable but essential feelings present in their sound and pronunciation. Speech is an intentionally object-related and emotionally engaging social activity resulting from a child having been spoken to early in life by an adult wanting to establish affective verbal contact. The early organisation and later transformation of the structure of the speech event carries private meanings for each person's listening and speaking stance. A refined understanding of the structural and emotional complexities of verbal communicative exchanges during analysis may enhance the analyst's ability to understand the patient's manner of participation in the analytic process.

19.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was examined. EMG was positively related to a word’s judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed. This experiment was done by the first author under the supervision of the second author in partial completion of the Ph.D. degree at Columbia University. We gratefully acknowledge the advice and comments of Lois Putnam, Robert Remez, James Magnuson, Michele Miozzo, and Robert B. Tallarico, and the assistance of Stephen Krieger, Lauren Walsh, Jennifer Kim, and Jillian White.
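The amplitude-by-attribute analysis can be illustrated with a short sketch relating per-word EMG amplitude to rated concreteness. The data below are simulated, the median split is purely illustrative, and this is not the authors' analysis code.

```python
# Hedged sketch (simulated data): relating per-word EMG amplitude to rated
# word attributes, assuming amplitudes and ratings have already been extracted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_words = 60
concreteness = rng.uniform(1, 7, n_words)                           # hypothetical ratings
emg_amplitude = 0.3 * concreteness + rng.normal(0, 0.5, n_words)    # toy amplitudes

# Correlation between EMG amplitude and a conceptual attribute of the word.
r, p = stats.pearsonr(concreteness, emg_amplitude)
print(f"r = {r:.2f}, p = {p:.3f}")

# Concrete vs. abstract comparison (median split here purely for illustration).
concrete = emg_amplitude[concreteness > np.median(concreteness)]
abstract = emg_amplitude[concreteness <= np.median(concreteness)]
t, p = stats.ttest_ind(concrete, abstract)
print(f"t = {t:.2f}, p = {p:.3f}")
```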
