Similar Articles
20 similar articles found (search time: 453 ms)
2.
Does gender make a difference in the way politicians speak and are spoken to in public? This paper examines perspective in three television interviews and two radio interviews with Bill Clinton in June 2004 and in three television interviews and two radio interviews with Hillary Clinton in June 2003 with the same interviewers. Our perspectival approach assumes that each utterance has a dialogically constructed point of view. Earlier research has shown that markers of conceptual orality and literacy as well as referencing (name and pronoun use for self and other reference) do reflect perspective. This paper asks whether perspective is gendered. Our data analysis demonstrates that some markers of perspective show gender differences while others do not. Those that do include the number of syllables spoken by each interlocutor, referencing, the use of the intensifier so, the use of the hedge you know, the use of non-standard pronunciations, turn transitions, and lastly the use of laughter.

5.
Clark and Fox Tree (2002) have presented empirical evidence, based primarily on the London–Lund corpus (LL; Svartvik & Quirk, 1980), that the fillers uh and um are conventional English words that signal a speaker’s intention to initiate a minor and a major delay, respectively. We present here empirical analyses of uh and um and of silent pauses (delays) immediately following them in six media interviews of Hillary Clinton. Our evidence indicates that uh and um cannot serve as signals of upcoming delay, let alone signal it differentially: In most cases, both uh and um were not followed by a silent pause, that is, there was no delay at all; the silent pauses that did occur after um were too short to be counted as major delays; finally, the distributions of durations of silent pauses after uh and um were almost entirely overlapping and could therefore not have served as reliable predictors for a listener. The discrepancies between Clark and Fox Tree’s findings and ours are largely a consequence of the fact that their LL analyses reflect the perceptions of professional coders, whereas our data were analyzed by means of acoustic measurements with the PRAAT software (www.praat.org). A comparison of our findings with those of O’Connell, Kowal, and Ageneau (2005) did not corroborate the hypothesis of Clark and Fox Tree that uh and um are interjections: Fillers occurred typically in initial, interjections in medial positions; fillers did not constitute an integral turn by themselves, whereas interjections did; fillers never initiated cited speech, whereas interjections did; and fillers did not signal emotion, whereas interjections did. Clark and Fox Tree’s analyses were embedded within a theory of ideal delivery that we find inappropriate for the explication of these phenomena.
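The distributional argument in this abstract lends itself to a small illustration. The sketch below uses invented pause durations (not the study's measurements) to show the two quantities at issue: the proportion of filler tokens followed by no silent pause at all, and the overlap between the ranges of the non-zero pause durations after uh and after um.

```python
# Invented pause durations in seconds, one per filler token;
# 0.0 means the filler was not followed by any silent pause.
pauses_after_uh = [0.0, 0.0, 0.25, 0.0, 0.40, 0.0, 0.15]
pauses_after_um = [0.0, 0.30, 0.0, 0.0, 0.45, 0.20, 0.0]

def no_pause_rate(durations):
    """Proportion of filler tokens followed by no silent pause."""
    return sum(1 for d in durations if d == 0.0) / len(durations)

def range_overlap(a, b):
    """Width of the overlap between the ranges of the non-zero durations."""
    a_nz = [d for d in a if d > 0.0]
    b_nz = [d for d in b if d > 0.0]
    lo = max(min(a_nz), min(b_nz))
    hi = min(max(a_nz), max(b_nz))
    return max(0.0, hi - lo)

# If both no-pause rates are high and the overlap spans most of each range,
# the choice of filler cannot reliably signal the length of the upcoming delay.
```

On these invented numbers both fillers go unfollowed by any pause in a majority of tokens, and the non-zero durations occupy almost the same range, which is the pattern the authors report for the Hillary Clinton interviews.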

6.
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.
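The priming comparison described above reduces to a difference of mean shadowing latencies between unprimed and repeated words. A minimal sketch with invented RTs (the actual values are in the paper):

```python
# Invented shadowing reaction times in ms, not the study's data.
rt_unprimed = [820, 790, 805, 811]
rt_same_talker = [745, 760, 752, 749]
rt_diff_talker = [815, 798, 808, 802]

def mean(xs):
    return sum(xs) / len(xs)

def priming_effect(unprimed, repeated):
    """Positive values mean repeated words were shadowed faster."""
    return mean(unprimed) - mean(repeated)
```

On these invented numbers the same-talker condition shows a sizable speed-up while the different-talker condition shows essentially none, the qualitative pattern Experiment 1 reports for famous talkers.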

8.
This study examines affective facial expression in conversation. Experiment 1 demonstrates that the accuracy of affect-identification for conversational facial expressions generally is no better than chance. The explanation explored by Experiment 2 is that many conversational facial expressions operate as nonverbal interjections. Thus, much like verbal interjections (“gosh,” “really,” “oh please,” “jeez,” etc.), the attribution of affect for certain conversational facial expressions should depend on their verbal context. Experiment 2 supports the notion of facial expression as interjection by demonstrating that almost any conversational facial expression, regardless of its true source emotion or of the affect it signals in isolation, tends to be interpreted according to the affect associated with the verbal context in which it occurs. In addition to the identification of context-dependent interjection as yet another function of facial expression, the study suggests a pressing need for further investigation of nonverbal behavior in natural-conversation settings.

9.
Two experiments tested whether cognitive load interferes with perspective-taking in verbal communication even if feedback from the addressee is available. Participants gave instructions on the assembly of a machine model. In Experiment 1, cognitive load was demonstrated to be a function of the complexity of assembly steps. In Experiment 2, position of feedback (during simple vs. during complex steps) and type of feedback (question vs. ambiguous interjection) were manipulated. With simple steps, speakers' responses were a function of feedback type. Speakers responded differently to questions than to interjections. With complex steps, however, responses were a function of cognitive load. Regardless of the type of feedback, most speakers simply repeated their previous utterances.

10.
The purpose of this study was to examine the effect of filled pauses (uh) on the verification of words and the establishment of causal connections during the comprehension of spoken expository discourse. With this aim, we asked Spanish-speaking students to listen to excerpts of interviews with writers, and to perform a word-verification task and a question-answering task on causal connectivity. There were two versions of the excerpts: filled pause present and filled pause absent. Results indicated that filled pauses increased verification times for words that preceded them, but did not make a difference on response times to questions on causal connectivity. The results suggest that, as signals of delay, filled pauses create a break with surface information, but they do not have the same effect on the establishment of meaningful connections.

11.
It is argued in the following that the dialogical complexity of speaker perspective requires a broad empirical analysis. To date, such analyses, particularly of political discourse, have been couched in terms of narrower concepts, such as self-presentation and political positioning or involvement/distancing, and have been typically carried out by means of qualitative methods applied to pronominal usage. The present research applies complementarily both quantitative and qualitative analyses to BBC television interviews of Shimon Peres (January 29, 2001) and of Edward Said (October 18, 2000) by Tim Sebastian in a program entitled HARDtalk. In addition to pronouns, these analyses include a number of other hypothetical indicators of a broad concept of perspective on the part of both interviewer and interviewees: turn-initial words, hesitations, questions, use of yes and no, personal reference utterances (e.g., I think), interjections, number of syllables spoken, and interruptions and overlaps. Quantitative comparisons of interviewer with interviewee revealed important differences on all these measures. Qualitative analyses also confirmed subtle local dynamics of perspective. Accordingly, the findings are interpreted within a general theoretical concept of perspective, derived from Bakhtin's (1981) dialogicity.

12.
The proposition that the difference in memory span between Welsh digits and English digits is accounted for by the longer articulatory duration of Welsh digits is critically reexamined. Two methods of measuring digit duration are contrasted. One is derived from digits spoken in isolation; the other is based on digits spoken in list format. Duration of Welsh digits was greater only when spoken in lists; with isolated production Welsh digits were significantly shorter than English digits. Also, span was shorter for Welsh digits. The results are interpreted in the light of the different articulatory demands made at the junctures between words in the English and Welsh lists. A supplementary experiment, using English words, illustrated that articulatory complexity at item boundaries increased serial recall error.

13.
Ten excerpts of both President Clinton's Grand Jury Testimony of August 17, 1998 and of each of two interviews with Hillary Rodham Clinton (Today Show, January 27, 1998; Good Morning America, January 28, 1998) were analyzed. In all of them, the topic under discussion was the President's insistence on his innocence in the Lewinsky case. Comparisons between the President and First Lady revealed long and short within-speaker pauses, respectively. His replies to questions averaged more than twice the length of hers. Comparisons were also made with other speech genres, including modern presidential inaugural rhetoric. In particular, President Clinton's statement of his innocence at the conclusion of an educational press conference on January 26, 1998 and his prepared statement at the beginning of his Grand Jury Testimony were found to vary notably from all the other corpora. Both are characterized by several of Ekman's (1985, p. 286) behavioral cues for the detection of deception.

14.
The question of whether overt recall of to-be-remembered material accelerates learning is important in a wide range of real-world learning settings. In the case of verbal sequence learning, previous research has proposed that recall either is necessary for verbal sequence learning (Cohen & Johansson Journal of Verbal Learning and Verbal Behavior, 6, 139–143, 1967; Cunningham, Healy, & Williams Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 575–597, 1984), or at least contributes significantly to it (Glass, Krejci, & Goldman Journal of Memory and Language, 28, 189–199, 1989; Oberauer & Meyer Memory, 17, 774–781, 2009). In contrast, here we show that the amount of previous spoken recall does not predict learning and is not necessary for it. We suggest that previous research may have underestimated participants’ learning by using suboptimal performance measures, or by using manual or written recall. However, we show that the amount of spoken recall predicted how much interference from other to-be-remembered sequences would be observed. In fact, spoken recall mediated most of the error learning observed in the task. Our data support the view that the learning of overlapping auditory–verbal sequences is driven by learning the phonological representations and not the articulatory motor responses. However, spoken recall seems to reinforce already learned representations, whether they are correct or incorrect, thus contributing to a participant identifying a specific stimulus as either “learned” or “new” during the presentation phase.

15.
When engaging in conversation, we efficiently go back and forth with our partner, organizing our contributions in reciprocal turn-taking behavior. Using multiple auditory and visual cues, we make online decisions about when it is the appropriate time to take our turn. In two experiments, we demonstrated, for the first time, that auditory and visual information serve complementary roles when making such turn-taking decisions. We presented clips of single utterances spoken by individuals engaged in conversations in audiovisual, auditory-only or visual-only modalities. These utterances occurred either right before a turn exchange (i.e., ‘Turn-Ends’) or right before the next sentence spoken by the same talker (i.e., ‘Turn-Continuations’). In Experiment 1, participants discriminated between Turn-Ends and Turn-Continuations in order to synchronize a button-press response to the moment the talker would stop speaking. We showed that participants were best at discriminating between Turn-Ends and Turn-Continuations in the audiovisual condition. However, in terms of response synchronization, participants were equally precise at timing their responses to a Turn-End in the audiovisual and auditory-only conditions, showing no advantage of visual information. In Experiment 2, we used a gating paradigm, where increasing segments of Turn-Ends and Turn-Continuations were presented, and participants predicted if a turn exchange would occur at the end of the sentence. We found an audiovisual advantage in detecting an upcoming turn early in the perception of a turn exchange. Together, these results suggest that visual information functions as an early signal indicating an upcoming turn exchange while auditory information is used to precisely time a response to the turn end.
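The gating procedure in Experiment 2 can be sketched in a few lines; the step size below is an assumption for illustration, not the authors' actual gate duration. Each gate replays the utterance from its onset up to a progressively later offset, and the listener judges after each gate whether a turn exchange is coming.

```python
def gate_offsets(utterance_duration, step=0.5):
    """Offsets (in seconds) at which successive gates end: gate k presents
    the utterance from time 0 up to offsets[k], and the final gate always
    presents the whole utterance."""
    offsets = []
    t = step
    while t < utterance_duration:
        offsets.append(round(t, 6))
        t += step
    offsets.append(utterance_duration)
    return offsets
```

For a 2-second utterance with a 0.5-second step this yields gates ending at 0.5, 1.0, 1.5, and 2.0 seconds; the listener's earliest correct "turn is ending" judgment indexes how early the cue becomes available.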

16.
Although articulatory suppression abolishes the effect of irrelevant sound (ISE) on serial recall when sequences are presented visually, the effect persists with auditory presentation of list items. Two experiments were designed to test the claim that, when articulation is suppressed, the effect of irrelevant sound on the retention of auditory lists resembles a suffix effect. A suffix is a spoken word that immediately follows the final item in a list. Even though participants are told to ignore it, the suffix impairs serial recall of auditory lists. In Experiment 1, the irrelevant sound consisted of instrumental music. The music generated a significant ISE that was abolished by articulatory suppression. It therefore appears that, when articulation is suppressed, irrelevant sound must contain speech for it to have any effect on recall. This is consistent with what is known about the suffix effect. In Experiment 2, the effect of irrelevant sound under articulatory suppression was greater when the irrelevant sound was spoken by the same voice that presented the list items. This outcome is again consistent with the known characteristics of the suffix effect. It therefore appears that, when rehearsal is suppressed, irrelevant sound disrupts the acoustic-perceptual encoding of auditorily presented list items. There is no evidence that the persistence of the ISE under suppression is a result of interference to the representation of list items in a postcategorical phonological store.

17.
Neisser, Hoenig, and Goldstein (1969) reduced the “stimulus prefix effect” (diminished recall of seven digits preceded by a redundant prefix) when the redundant prefix and the recall digits were produced by different speakers. In the present studies, similar results were obtained using one speaker only, but with the prefix and recall digits spoken separately in different utterances and combined by tape splicing. The results support a hypothesis concerning the perception of intact, wholistically organized articulatory units. A second hypothesis, also based on the idea of intact articulatory units, was tested.

18.
A scale for assessing the complexity or density of utterances was developed using 10 categories of semantic relations (e.g., temporal ordering, causality). The categories are inferable from the particular meanings of the words (e.g., connectives, particular tense variations) used in an utterance. The scale was applied to three samples of subjects to assess its interjudge reliability and to compare the utterances of fourth-, sixth-, and eighth-grade children from middle- and working-class neighborhoods. It was also used to compare the complexity of utterances for different types of visual stimuli (used to elicit language samples). Interjudge reliabilities were more than acceptable for each of the samples, and significant differences in semantic density were found across grade, between children from working-class and middle-class neighborhoods, and for the stimuli used to elicit the utterances. When two of the three types of eliciting visual stimuli were equated for content and exposure conditions, the differences in verbal density between eliciting conditions were not replicated. The usefulness of the scale for assessing utterance density and, by implication, comprehension difficulty of utterances and of texts, is discussed. This study was supported in part by a grant from the National Institute of Education, U.S. Department of Health, Education and Welfare, Complexity in Auditory and Graphics Communication, Project No. 4-470. Points of view or opinions stated here do not necessarily represent National Institute of Education position or policy.
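The scoring idea behind such a scale can be sketched briefly. The marker lexicon below is invented for illustration and covers only three categories; the study's actual ten categories and their defining word classes are specified in the original report.

```python
# Invented marker lexicon: words whose meanings signal a semantic relation.
MARKERS = {
    "temporal_ordering": {"before", "after", "then", "while"},
    "causality": {"because", "so", "therefore", "since"},
    "condition": {"if", "unless"},
}

def density(utterance):
    """Crude density score: semantic-relation markers per word."""
    words = utterance.lower().replace(",", "").replace(".", "").split()
    hits = sum(1 for w in words for cat in MARKERS.values() if w in cat)
    return hits / len(words)
```

For example, "We left because it rained." scores 1 marker over 5 words, while an utterance chaining causal and temporal connectives scores higher, which is the sense in which denser utterances are taken to be harder to comprehend.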

19.
The aim of this study is to show that, for effective communication in interviews, media professionals sometimes speak spontaneously instead of reading their questions. The research question arises from the incompatibility of ideal delivery, on the one hand, and the concept of spontaneity as hesitant and therefore defective, on the other. In a 2 × 2 (speakers × tasks) within-subject design, readings and interviews of prominent American TV network anchormen (W. Cronkite and D. Rather) are compared in terms of temporal and hesitation variables. The results indicate differences between the two types of speaking in pause duration, variance of articulation rate per phrase-length unit, occurrence of pauses at various syntactic positions, the relationship between pause duration and pause position, and occurrence of vocal hesitations (filled pauses, repeats, and false starts). The findings are interpreted in terms of a neglected but basic concept required for any theory of language use—communicative intent.

20.
ABSTRACT— During the grammatical encoding of spoken multiword utterances, various kinds of information must be used to determine the order of words. For example, whereas in adjective-noun utterances like "red car," word order can be determined on the basis of the word's grammatical class information, in noun-noun utterances like "… by car, bus, or …," word order cannot be determined on the basis of a word's grammatical class information. We investigated whether a word's phonological properties play a role in grammatical encoding. In four experiments, participants produced multiword utterances in which the words' onset phonology was manipulated. Phonological-onset relatedness yielded inhibitory effects in noun-noun utterances, no effects in noun-adjective utterances, and facilitatory effects in adjective-noun, noun-verb, and adjective-adjective-noun utterances. These results cannot be explained by differences in the stimulus displays used to elicit the utterances and suggest that grammatical encoding is sensitive to the phonological properties of words.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号