Similar Literature (20 results)
1.
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language.

2.
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialect identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

3.
Adults use gaze and voice signals as cues to the mental and emotional states of others. We examined the influence of voice cues on children’s judgments of gaze. In Experiment 1, 6-year-olds, 8-year-olds, and adults viewed photographs of faces fixating the center of the camera lens and a series of positions to the left and right and judged whether gaze was direct or averted. On each trial, participants heard a participant-directed voice cue (e.g., “I see you”), an object-directed voice cue (e.g., “I see that”), or no voice. In 6-year-olds, the range of directions of gaze leading to the perception of eye contact (the cone of gaze) was narrower for trials with object-directed voice cues than for trials with participant-directed voice cues or no voice. This effect was absent in 8-year-olds and adults, both of whom had a narrower cone of gaze than 6-year-olds. In Experiment 2, we investigated whether voice cues would influence adults’ judgments of gaze when the task was made more difficult by limiting the duration of exposure to the face. Adults’ cone of gaze was wider than in Experiment 1, and the effect of voice cues was similar to that observed in 6-year-olds in Experiment 1. Together, the results indicate that object-directed voice cues can decrease the width of the cone of gaze, allowing more adult-like judgments of gaze in young children, and that voice cues may be especially effective when the cone of gaze is wider because of immaturity (Experiment 1) or limited exposure (Experiment 2).

4.
Cimpian A, Markman EM. Cognition, 2008, 107(1): 19-53
Sentences that refer to categories - generic sentences (e.g., "Dogs are friendly") - are frequent in speech addressed to young children and constitute an important means of knowledge transmission. However, detecting generic meaning may be challenging for young children, since it requires attention to a multitude of morphosyntactic, semantic, and pragmatic cues. The first three experiments tested whether 3- and 4-year-olds use (a) the immediate linguistic context, (b) their previous knowledge, and (c) the social context to determine whether an utterance with ambiguous scope (e.g., "They are afraid of mice", spoken while pointing to 2 birds) is generic. Four-year-olds were able to take advantage of all the cues provided, but 3-year-olds were sensitive only to the first two. In Experiment 4, we tested the relative strength of linguistic-context cues and previous-knowledge cues by putting them in conflict; in this task, 4-year-olds, but not 3-year-olds, preferred to base their interpretations on the explicit noun phrase cues from the linguistic context. These studies indicate that, from early on, children can use contextual and semantic information to construe sentences as generic, thus taking advantage of the category knowledge conveyed in these sentences.

5.
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent–child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.

6.
Thorpe K, Fernald A. Cognition, 2006, 100(3): 389-433
Three studies investigated how 24-month-olds and adults resolve temporary ambiguity in fluent speech when encountering prenominal adjectives potentially interpretable as nouns. Children were tested in a looking-while-listening procedure to monitor the time course of speech processing. In Experiment 1, the familiar and unfamiliar adjectives preceding familiar target nouns were accented or deaccented. Target word recognition was disrupted only when lexically ambiguous adjectives were accented like nouns. Experiment 2 measured the extent of interference experienced by children when interpreting prenominal words as nouns. In Experiment 3, adults used prosodic cues to identify the form class of adjective/noun homophones in string-identical sentences before the ambiguous words were fully spoken. Results show that children and adults use prosody in conjunction with lexical and distributional cues to ‘listen through’ prenominal adjectives, avoiding costly misinterpretation.

7.
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

8.
Learning a new word consists of two primary tasks that have often been conflated into a single process: referent selection, in which a child must determine the correct referent of a novel label, and referent retention, which is the ability to store this newly formed label-object mapping in memory for later use. In addition, children must be capable of performing these tasks rapidly and repeatedly as they are frequently exposed to novel words during the course of natural conversation. Here we used a preferential pointing task to investigate 2-year-olds’ (N = 72) ability to infer the referent of a novel noun from a single ambiguous exposure and their ability to retain this mapping over time. Children were asked to identify the referent of a novel label on six critical trials distributed throughout the course of a 10-min study involving many familiar and novel objects. On these critical trials, images of a known object and a novel object (e.g., a ball and a nameless artifact constructed in the laboratory) appeared on two computer screens and a voice asked children to “point at the _____ [e.g., glark].” Following label onset, children were allowed only 3 s during which to infer the correct referent, point at it, and potentially store this new word-object mapping. In a final posttest trial, all previously labeled novel objects appeared and children were asked to point to one of them (e.g., “Can you find the glark?”). To succeed, children needed to have initially mapped the novel labels correctly and retained these mappings over the course of the study. Despite the difficult demands of the current task, children successfully identified the target object on the retention trial. We conclude that 2-year-olds are able to fast map novel nouns during a brief single exposure under ambiguous labeling conditions.

9.
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant–ant for big–small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

10.
How do children evaluate the veracity of printed text? We examined children’s handling of unexpected suggestions conveyed via print versus orally. In Experiment 1 (N = 131), 3- to 6-year-olds witnessed a speaker either read aloud an unexpected but not completely implausible printed label (e.g., fish for a bird-like animal with some fish features) or speak the label without accompanying text. Pre-readers accepted labels in both conditions. Early readers often rejected spoken labels yet accepted them in the print condition, and in Experiment 2 (N = 55) 3- to 6-year-olds continued to apply them even after the print was obscured. Early readers accept printed testimony that they reject if only spoken, and the influence of text endures even when it is no longer visible.

11.
In this study, 2.5-, 3-, and 4-year-olds (N = 108) participated in a novel noun generalization task in which background context was manipulated. During the learning phase of each trial, children were presented with exemplars in one or multiple background contexts. At test, children were asked to generalize to a novel exemplar in either the same or a different context. The 2.5-year-olds’ performance was supported by matching contexts; otherwise, children in this age group demonstrated context-dependent generalization. The 3-year-olds’ performance was also supported by matching contexts; however, children in this age group were aided by training in multiple contexts as well. Finally, the 4-year-olds demonstrated high performance in all conditions. The results are discussed in terms of the relationship between word learning and memory processes; both general memory development and memory developments specific to word learning (e.g., retention of linguistic labels) are likely to support word learning and generalization.

12.
Different kinds of speech sounds are used to signify possible word forms in every language. For example, lexical stress is used in Spanish (/‘be.be/, ‘he/she drinks’ versus /be.’be/, ‘baby’), but not in French (/‘be.be/ and /be.’be/ both mean ‘baby’). Infants learn many such native language phonetic contrasts in their first year of life, likely using a number of cues from parental speech input. One such cue could be parents’ object labeling, which can explicitly highlight relevant contrasts. Here we ask whether phonetic learning from object labeling is abstract—that is, if learning can generalize to new phonetic contexts. We investigate this issue in the prosodic domain, as the abstraction of prosodic cues (like lexical stress) has been shown to be particularly difficult. One group of 10-month-old French-learners was given consistent word labels that contrasted on lexical stress (e.g., Object A was labeled /‘ma.bu/, and Object B was labeled /ma.’bu/). Another group of 10-month-olds was given inconsistent word labels (i.e., mixed pairings), and stress discrimination in both groups was measured in a test phase with words made up of new syllables. Infants trained with consistently contrastive labels showed an earlier effect of discrimination compared to infants trained with inconsistent labels. Results indicate that phonetic learning from object labeling can indeed generalize, and suggest one way infants may learn the sound properties of their native language(s).

13.
Listeners are exposed to inconsistencies in communication; for example, when speakers’ words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers’ affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers’ affect consistency when the attribute being judged was related to information acquisition (speakers’ believability, “weird” speech), but not general characteristics (speakers’ friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

14.
When learning new words, do children use a speaker's eye gaze because it reveals referential intent? We conducted two experiments that addressed this question. In Experiment 1, the experimenter left while two novel objects were placed where the child could see both, but the experimenter would be able to see only one. The experimenter returned, looked directly at the mutually visible object, and said either, "There's the [novel word]!" or "Where's the [novel word]?" Two- through 4-year-olds selected the target of the speaker's gaze more often on there trials than on where trials, although only the older children identified the referent correctly at above-chance levels on trials of both types. In Experiment 2, the experimenter placed a novel object where only the child could see it and left while the second object was similarly hidden. When she returned and asked, "Where's the [novel word]?" 2- through 4-year-olds chose the second object at above-chance levels. Preschoolers do not blindly follow gaze, but consider the linguistic and pragmatic context when learning a new word.

15.
A fundamental assumption regarding spoken language is that the relationship between sound and meaning is essentially arbitrary. The present investigation questioned this arbitrariness assumption by examining the influence of potential non-arbitrary mappings between sound and meaning on word learning in adults. Native English-speaking monolinguals learned meanings for Japanese words in a vocabulary-learning task. Spoken Japanese words were paired with English meanings that: (1) matched the actual meaning of the Japanese word (e.g., “hayai” paired with fast); (2) were antonyms for the actual meaning (e.g., “hayai” paired with slow); or (3) were randomly selected from the set of antonyms (e.g., “hayai” paired with blunt). The results showed that participants learned the actual English equivalents and antonyms for Japanese words more accurately and responded faster than when learning randomly paired meanings. These findings suggest that natural languages contain non-arbitrary links between sound structure and meaning and further, that learners are sensitive to these non-arbitrary relationships within spoken language.

16.
The present study was designed to examine age differences in the ability to use voice information acquired intentionally (Experiment 1) or incidentally (Experiment 2) as an aid to spoken word identification. Following both implicit and explicit voice learning, participants were asked to identify novel words spoken either by familiar talkers (ones they had been exposed to in the training phase) or by 4 unfamiliar voices. In both experiments, explicit memory for talkers' voices was significantly lower in older than in young listeners. Despite this age-related decline in voice recognition, however, older adults exhibited equivalent, and in some cases greater, benefit than young listeners from having words spoken by familiar talkers. Implications of the findings for age-related changes in explicit versus implicit memory systems are discussed.

17.
Across a series of four experiments with 3‐ to 4‐year‐olds we demonstrate how cognitive mechanisms supporting noun learning extend to the mapping of actions to objects. In Experiment 1 (n = 61) the demonstration of a novel action led children to select a novel, rather than a familiar object. In Experiment 2 (n = 78) children exhibited long‐term retention of novel action‐object mappings and extended these actions to other category members. In Experiment 3 (n = 60) we showed that children formed an accurate sensorimotor record of the novel action. In Experiment 4 (n = 54) we demonstrate limits on the types of actions mapped to novel objects. Overall these data suggest that certain aspects of noun mapping share common processing with action mapping and support a domain‐general account of word learning.

18.
It is widely accepted that adults show an advantage for deontic over epistemic reasoning. Two published studies (Cummins, 1996b; Harris and Núñez, 1996, Experiment 4) found evidence of this “deontic advantage” in preschool-aged children and are frequently cited as evidence that preschoolers show the same deontic advantage as adults. However, neither study has been replicated, and it is not clear from either study that preschoolers were showing the deontic advantage under the same conditions as adults. The current research investigated these issues. Experiment 1 attempted to replicate both Cummins’s and Harris and Núñez’s studies with 3- and 4-year-olds (N = 56), replicating the former with 4-year-olds and the latter with both 3- and 4-year-olds. Experiment 2 modified Cummins’s task to remove the contextual differences between conditions, making it more similar to adult tasks, finding that 4-year-olds (n = 16) show no evidence of the deontic advantage when no authority figure is present in the deontic condition, whereas both 7-year-olds (n = 16) and adults (n = 28) do. Experiment 3 removed the authority figure from the deontic condition in Harris and Núñez’s task, again finding that 3- and 4-year-olds (N = 28) show no evidence of the deontic advantage under these conditions. These results suggest that for preschoolers, the deontic advantage is reliant on particular contextual cues, such as the presence of an authority figure in the deontic condition. By 7 years of age, however, children are reasoning like adults and show evidence of the advantage when no such contextual cues are present.

19.
Because children hear language in environments that contain many things to talk about, learning the meaning of even the simplest word requires making inferences under uncertainty. A cross-situational statistical learner can aggregate across naming events to form stable word-referent mappings, but this approach neglects an important source of information that can reduce referential uncertainty: social cues from speakers (e.g., eye gaze). In four large-scale experiments with adults, we tested the effects of varying referential uncertainty in cross-situational word learning using social cues. Social cues shifted learners away from tracking multiple hypotheses and towards storing only a single hypothesis (Experiments 1 and 2). In addition, learners were sensitive to graded changes in the strength of a social cue, and when it became less reliable, they were more likely to store multiple hypotheses (Experiment 3). Finally, learners stored fewer word-referent mappings in the presence of a social cue even when given the opportunity to visually inspect the objects for the same amount of time (Experiment 4). Taken together, our data suggest that the representations underlying cross-situational word learning of concrete object labels are quite flexible: In conditions of greater uncertainty, learners store a broader range of information.

20.
Consonants and vowels have been shown to play different relative roles in different processes, including retrieving known words from pseudowords during adulthood or simultaneously learning two phonetically similar pseudowords during infancy or toddlerhood. The current study explores the extent to which French-speaking 3- to 5-year-olds exhibit a so-called “consonant bias” in a task simulating word acquisition, that is, when learning new words for unfamiliar objects. In Experiment 1, the to-be-learned words differed both by a consonant and a vowel (e.g., /byf/-/duf/), and children needed to choose which of the two objects to associate with a third one whose name differed from both objects by either a consonant or a vowel (e.g., /dyf/). In such a conflict condition, children needed to favor (or neglect) either consonant information or vowel information. The results show that only 3-year-olds preferentially chose the consonant identity, thereby neglecting the vowel change. The older children (and adults) did not exhibit any response bias. In Experiment 2, children needed to pick up one of two objects whose names differed on either consonant information or vowel information. Whereas 3-year-olds performed better with pairs of pseudowords contrasting on consonants, the pattern of asymmetry was reversed in 4-year-olds, and 5-year-olds did not exhibit any significant response bias. Interestingly, girls showed overall better performance and exhibited earlier changes in performance than boys. The changes in consonant/vowel asymmetry in preschoolers are discussed in relation with developments in linguistic (lexical and morphosyntactic) and cognitive processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号