Similar Articles
20 similar articles found (search time: 31 ms)
1.
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers with different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both the type of gesture and the proficiency of the speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

2.
Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives. A key aspect of narrative development is tracking story referents, specifying who did what to whom. Adults track referents primarily in speech by introducing a story character with a noun phrase and then following the same referent with a pronoun—a strategy that presents challenges for young children. We ask whether young children can track story referents initially in communications that combine gesture and speech by using character viewpoint in gesture to introduce new story characters, before they are able to do so exclusively in speech using nouns followed by pronouns. Our analysis of 4- to 6-year-old children showed that children introduced new characters in gesture+speech combinations with character viewpoint gestures at an earlier age than conveying the same referents exclusively in speech with the use of nominal phrases followed by pronouns. Results show that children rely on viewpoint in gesture to convey who did what to whom as they take their first steps into narratives.

3.
Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, as compared to prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing in a semi-spontaneous but controlled production task designed to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type, alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to greater syllable duration in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.
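The two acoustic measures this abstract names (syllable duration and word-level pitch range) can be computed directly from time-aligned annotations. The sketch below is purely illustrative and is not the authors' pipeline; the Interval structure and the pitch-track format are hypothetical stand-ins for annotations of the kind typically exported from tools such as Praat.

```python
# Illustrative sketch of the two acoustic measures named in the abstract:
# syllable duration and word-level pitch range. NOT the authors' pipeline;
# the data structures below are hypothetical stand-ins for time-aligned
# annotations (e.g., exported from Praat).

from dataclasses import dataclass

@dataclass
class Interval:
    label: str
    start: float  # seconds
    end: float    # seconds

def syllable_duration(syllable: Interval) -> float:
    """Duration of one annotated syllable, in seconds."""
    return syllable.end - syllable.start

def word_pitch_range(pitch_track: list[tuple[float, float]],
                     word: Interval) -> float:
    """Pitch range (max minus min F0, in Hz) over one word's time span.

    pitch_track: (time, f0_hz) samples; unvoiced frames already removed.
    """
    f0 = [hz for t, hz in pitch_track if word.start <= t <= word.end]
    if not f0:
        return 0.0  # no voiced frames inside the word
    return max(f0) - min(f0)

# Example with made-up values:
word = Interval("ballon", 1.20, 1.65)
track = [(1.25, 210.0), (1.40, 255.0), (1.60, 230.0)]
print(syllable_duration(word))        # ~0.45 s
print(word_pitch_range(track, word))  # 45.0 Hz
```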

4.
Two experiments investigated the relative influence of speech and pointing gesture information in the interpretation of referential acts. Children averaging 3 and 5 years of age and adults viewed a videotape containing the independent manipulation of speech and gestural forms of reference. A man instructed the subjects to choose a ball or a doll by vocally labeling the referent and/or pointing to it. A synthetic speech continuum between two alternatives was crossed with the pointing gesture in a factorial design. Based on research in other domains, it was predicted that all age groups would utilize gestural information, although both speech and gestures were predicted to influence children less than adults. The main effects and interactions of speech and gesture in combination with quantitative models of performance showed the following similarities in information processing between preschoolers and adults: (1) referential evaluation of gestures occurs independently of the evaluation of linguistic reference; (2) speech and gesture are continuous, rather than discrete, sources of information; (3) 5-year-olds and adults combine the two types of information in such a way that the least ambiguous source has the most impact on the judgment. Greater discriminability of both speech and gesture information for adults compared to preschoolers indicated small quantitative progressions with development in the ability to extract and utilize referential signals.
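The finding that "the least ambiguous source has the most impact on the judgment" is the signature of multiplicative cue-integration models. The abstract only says "quantitative models of performance" without naming one, so the sketch below shows one plausible instance (an FLMP-style rule) with invented support values; it is not the paper's actual model.

```python
# Hedged sketch of a multiplicative cue-integration rule (FLMP-style).
# The abstract refers only to "quantitative models of performance";
# it does not say which model was fit, so treat this as one plausible
# instance, not the paper's actual model. All values are invented.

def integrate(speech_support: float, gesture_support: float) -> float:
    """P(choose 'doll') given the degree to which each cue supports
    'doll' (0 = fully supports 'ball', 1 = fully supports 'doll').

    With multiplicative integration, an unambiguous cue (near 0 or 1)
    dominates an ambiguous one (near 0.5), which is the property the
    abstract reports for 5-year-olds and adults.
    """
    doll = speech_support * gesture_support
    ball = (1 - speech_support) * (1 - gesture_support)
    return doll / (doll + ball)

# Ambiguous speech (0.5) + clear point at the doll (0.9):
print(integrate(0.5, 0.9))  # 0.9 -> gesture dominates
# Clear speech for 'ball' (0.1) + ambiguous gesture (0.5):
print(integrate(0.1, 0.5))  # 0.1 -> speech dominates
```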

5.
6.
Past research in referential communication has indicated that normally developing children show a developmental progression in the ability to communicate a specific referent to a listener. In one paradigm, subjects were given lists of word-pairs in which one member of each pair was designated as the referent. It was shown that communicating about referents found in word-pairs associated in some way was more difficult than communicating about referents in dissimilar word-pairs. The present study extended this methodology to learning-disabled children. Learning-disabled, language-learning-disabled, and normally achieving children were asked to communicate about 30 pictured referents on three different tasks. On Tasks 1 and 2, each subject was asked to give a clue for the referent that would distinguish it from the other picture. Stimuli for Task 1 were 30 pairs of pictures that were related in some way, and stimuli for Task 2 were 30 pairs of unrelated pictures. Task 3 required the subjects to evaluate the adequacy of the examiner's clues for the Task 1 stimuli. The disabled subjects were matched to the normally achieving subjects on the basis of receptive vocabulary age. Few differences were noted among the groups' performances on these referential communication tasks. Implications include the importance of vocabulary and concept development to referential communication.

7.
GESTURE, SPEECH, AND LEXICAL ACCESS
Abstract— In a within-subjects design that varied whether speakers were allowed to gesture and the difficulty of lexical access, speakers were videotaped as they described animated action cartoons to a listener. When speakers were permitted to gesture, they gestured more often during phrases with spatial content than during phrases with other content. Speech with spatial content was less fluent when speakers could not gesture than when they could; speech with nonspatial content was not affected by gesture condition. Preventing gesturing increased the relative frequency of nonjuncture filled pauses in speech with spatial content, but not in speech with other content. Overall, the effects of preventing speakers from gesturing resembled those of increasing the difficulty of lexical access by other means, except that the effects of gesture restriction were specific to speech with spatial content. The findings support the hypothesis that gestural accompaniments to spontaneous speech can facilitate access to the mental lexicon.

8.
An eye tracking methodology was used to evaluate 3- and 4-year-old children’s sensitivity to speaker affect when resolving referential ambiguity. Children were presented with pictures of three objects on a screen (including two referents of the same kind, e.g., an intact doll and a broken doll, and one distracter item), paired with a prerecorded referentially ambiguous instruction (e.g., “Look at the doll”). The intonation of the instruction varied in terms of the speaker’s vocal affect: positive-sounding, negative-sounding, or neutral. Analyses of eye gaze patterns indicated that 4-year-olds, but not 3-year-olds, were more likely to look to the referent whose state matched the speaker’s vocal affect as the noun was heard (e.g., looked more often to the broken doll referent in the negative affect condition). These findings indicate that 4-year-olds can use speaker affect to help identify referential mappings during on-line comprehension.
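Visual-world studies of this kind are conventionally analyzed as the proportion of looking time to each referent within a window time-locked to the noun's onset. The sketch below is an illustrative reconstruction of that standard measure, not the authors' analysis code; the fixation format and region labels are hypothetical.

```python
# Illustrative sketch of a standard looking-time analysis for a
# visual-world task like the one described: proportion of fixation
# time on the affect-matching referent within a window time-locked
# to noun onset. Hypothetical data format; not the authors' code.

def proportion_looks(fixations: list[tuple[float, float, str]],
                     window: tuple[float, float],
                     target: str) -> float:
    """Share of fixation time on `target` within `window`.

    fixations: (start, end, region) triples, times in seconds
               relative to noun onset; region is e.g. 'broken_doll',
               'intact_doll', or 'distracter'.
    """
    w_start, w_end = window
    on_target = total = 0.0
    for start, end, region in fixations:
        overlap = max(0.0, min(end, w_end) - max(start, w_start))
        total += overlap
        if region == target:
            on_target += overlap
    return on_target / total if total else 0.0

# Made-up trial: negative-affect prosody, window 0.2-1.0 s after onset.
trial = [(0.0, 0.3, "distracter"),
         (0.3, 0.8, "broken_doll"),
         (0.8, 1.2, "intact_doll")]
print(proportion_looks(trial, (0.2, 1.0), "broken_doll"))  # 0.625
```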

9.
The present experiment tested the hypothesis that development of syntactic comprehension through verbal modeling is enhanced by referent concreteness as a contextual influence. Young children heard a model narrate a series of events in passive form while the model either performed the corresponding activities, showed pictures portraying the same activities, or displayed no referential aids. In accord with prediction, verbal modeling with enactive referents produced higher levels of comprehension of passives than modeling with pictorial referents or modeling without referential aids. Modeling with pictorial referents and modeling without referents did not differ in overall efficacy. However, modeling alone produced results that were less consistent across different measures of comprehension. Children who lacked understanding of passives were more dependent on concrete referents than those who had some initial comprehension of the linguistic form. The results suggest that verbal modeling with pictorial referents and verbal modeling alone facilitate comprehension of passives, whereas verbal modeling with enactive referents promotes learning. Findings of a supplemental experiment reveal that the effects of verbal modeling on comprehension are enhanced when syntactic forms occur in a meaningful verbal context.

10.
We report two experiments that investigated the widely held assumption that speakers use the addressee's discourse model when choosing referring expressions (e.g., Ariel, 1990; Chafe, 1994; Givón, 1983; Prince, 1985), by manipulating whether the addressee could hear the immediately preceding linguistic context. Experiment 1 showed that speakers increased pronoun use (and decreased noun phrase use) when the referent was mentioned in the immediately preceding sentence compared to when it was not, even though the addressee did not hear the preceding sentence, indicating that speakers used their own, privileged discourse model when choosing referring expressions. The same pattern of results was found in Experiment 2. Speakers produced more pronouns when the immediately preceding sentence mentioned the referent than when it mentioned a referential competitor, regardless of whether the sentence was shared with their addressee. Thus, we conclude that choice of referring expression is determined by the referent's accessibility in the speaker's own discourse model rather than the addressee's.

11.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

12.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., “Can you get the red one?”). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.
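The core acoustic comparison here reduces to a difference in mean F0 between bright and dark target sentences, computed separately for ambiguous and unambiguous trials. Below is a minimal sketch with invented per-sentence pitch values; it is not the study's data or analysis code.

```python
# Minimal sketch of the core acoustic comparison in this abstract:
# is mean F0 higher for "bright" than "dark" target sentences, and
# does the difference appear only on ambiguous trials? Pitch values
# are invented; this is not the study's data or analysis code.

from statistics import mean

def pitch_difference(bright_f0: list[float], dark_f0: list[float]) -> float:
    """Mean F0 (Hz) of bright sentences minus that of dark sentences."""
    return mean(bright_f0) - mean(dark_f0)

# Hypothetical per-sentence mean F0 values (Hz):
ambiguous = {"bright": [221.0, 230.0, 226.0], "dark": [204.0, 199.0, 207.0]}
unambiguous = {"bright": [212.0, 208.0, 215.0], "dark": [210.0, 214.0, 209.0]}

# ~22 Hz: speakers produce the pitch cue when lexis is underspecified.
print(pitch_difference(ambiguous["bright"], ambiguous["dark"]))
# ~1 Hz: no cue when the color word alone identifies the target.
print(pitch_difference(unambiguous["bright"], unambiguous["dark"]))
```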

13.
Two experiments investigated the extent to which literal processing occurs in comprehending figurative idiomatic expressions. Subjects read stories on a cathode-ray tube (CRT). Target phrases, some of which were idioms, contained nouns which were potential anaphors of previously mentioned referents. A method developed by Dell, McKoon and Ratcliff (1983) was used to determine whether subjects carried out semantic processing resulting in activation of the referents of those anaphors. In Experiment 1 the targets consisted of either an idiom or a literal phrase, each including the same potential anaphor, or a control phrase. Results suggest that the preceding referent was activated by the anaphor in the literal phrase, but not by the potential anaphor in the idiomatic phrase. Experiment 2 showed that these results were not due to differences in the materials used. These results are interpreted as supporting the hypothesis that when an idiomatic phrase is interpreted figuratively, full literal semantic processing of that phrase is not necessarily carried out. (This research was conducted while the author was a graduate student at the Department of Psychology, Northeastern University.)

14.
Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non‐mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co‐occur with speech may serve a function for the speaker as well as for the listener.

15.
16.
The present study investigated the contribution of lexico-semantic associations to impairments in establishing reference in schizophrenia. We examined event-related potentials as schizophrenia patients and healthy, demographically matched controls read five-sentence scenarios. Sentence 4 introduced a noun that referred back to three possible referents introduced in Sentences 1–3. These referents were contextually appropriate, contextually inappropriate but lexico-semantically associated, and contextually inappropriate and lexico-semantically nonassociated. In order to determine whether participants had correctly linked the anaphor to its referent, the final sentence reintroduced each referent, and participants indicated whether the last two sentences referred to the same entity. Results indicated that between 300 and 400 ms, patients, like healthy controls, used discourse context to link the noun with its preceding referent. However, between 400 and 500 ms, neural activity in patients was modulated only by lexico-semantic associations, rather than by discourse context. Moreover, patients were also more likely than controls to incorrectly link the noun with contextually inappropriate but lexico-semantically associated referents. These results suggest that at least some types of referential impairments may be driven by sustained activation of contextually inappropriate lexico-semantic associations.

17.
People move their hands as they talk – they gesture. Gesturing is a robust phenomenon, found across cultures, ages, and tasks. Gesture is even found in individuals blind from birth. But what purpose, if any, does gesture serve? In this review, I begin by examining gesture when it stands on its own, substituting for speech and clearly serving a communicative function. When called upon to carry the full burden of communication, gesture assumes a language-like form, with structure at word and sentence levels. However, when produced along with speech, gesture assumes a different form – it becomes imagistic and analog. Despite its form, the gesture that accompanies speech also communicates. Trained coders can glean substantive information from gesture – information that is not always identical to that gleaned from speech. Gesture can thus serve as a research tool, shedding light on speakers’ unspoken thoughts. The controversial question is whether gesture conveys information to listeners who are not trained to read it. Do spontaneous gestures communicate to ordinary listeners? Or might they be produced only for speakers themselves? I suggest these are not mutually exclusive functions – gesture serves both as a tool for communication for listeners and as a tool for thinking for speakers.

18.
When describing visual scenes, speakers typically gaze at objects while preparing their names. In a study of the relation between eye movements and speech, a corpus of self-corrected speech errors was analyzed. If errors result from rushed word preparation, insufficient visual information, or failure to check prepared names against objects, speakers should spend less time gazing at referents before uttering errors than before uttering correct names. Counter to predictions, gazes to referents before errors (e.g., gazes to an axe before saying "ham-" [hammer]) highly resembled gazes to referents before correct names (e.g., gazes to an axe before saying "axe"). However, speakers gazed at referents for more time after initiating erroneous compared with correct names, apparently while they prepared corrections. Assuming that gaze nonetheless reflects word preparation, errors were not associated with insufficient preparation. Nor were errors systematically associated with decreased inspection of objects. Like gesture, gaze may accurately reflect a speaker's intentions even when the accompanying speech does not.

19.
When describing scenes, speakers gaze at objects while preparing their names (Z. M. Griffin & K. Bock, 2000). In this study, the authors investigated whether gazes to referents occurred in the absence of a correspondence between visual features and word meaning. Speakers gazed significantly longer at objects before intentionally labeling them inaccurately with the names of similar things (e.g., calling a horse a dog) than when labeling them accurately. This held for grammatical subjects and objects as well as agents and patients. Moreover, the time spent gazing at a referent before labeling it with a novel word or accurate name was similar and decreased as speakers gained experience using the novel word. These results suggest that visual attention in speaking may be directed toward referents in the absence of any association between their visual forms and the words used to talk about them.

20.
A pragmatic account of referential communication is developed which presents an alternative to traditional Gricean accounts by focusing on cooperativeness and efficiency, rather than informativity. The results of four language‐production experiments support the view that speakers can be cooperative when producing redundant adjectives, doing so more often when color modification could facilitate the listener's search for the referent in the visual display (Experiment 1a). By contrast, when the listener knew which shape was the target, speakers did not produce redundant color adjectives (Experiment 1b). English speakers used redundant color adjectives more often than Spanish speakers, suggesting that speakers are sensitive to the differential efficiency of prenominal and postnominal modification (Experiment 2). Speakers were also cooperative when using redundant size adjectives (Experiment 3). Overall, these results show how discriminability affects a speaker's choice of referential expression above and beyond considerations of informativity, supporting the view that redundant speakers can be cooperative.
