Similar articles
20 similar articles found.
1.
We evaluated the impact of visual similarity and action similarity on visual object identification. We taught participants to associate novel objects with nonword labels and verified that in memory visually similar objects were confused more often than visually dissimilar objects. We then taught participants to associate novel actions with nonword labels and verified that similar actions were confused more often than dissimilar actions. We then paired specific objects with specific actions. Visually similar objects paired with similar actions were confused more often in memory than when these same objects were paired with dissimilar actions. Hence the actions associated with objects served to increase or decrease their separation in memory space, and influenced the ease with which these objects could be identified. These experiments ultimately demonstrated that when identifying stationary objects, the memory of how these objects were used dramatically influenced the ability to identify them.

2.
Humans intuitively think about the actions of others in terms of mental states: beliefs, desires, emotions and intentions. This 'theory of mind' plays a central role in how children learn the meanings of certain words. First, it underlies how they determine the reference of a novel word. When children hear a new object name (e.g. 'Look at the fendle'), they do not use spatio-temporal contiguity to determine what the word describes; instead they focus on cues to the referential intention of the speaker, such as direction of gaze. Second, an understanding of purpose and design is sometimes necessary to enable the child to understand the entities and actions that nouns and verbs refer to. This is particularly relevant for nouns that refer to collections of objects such as 'family' and 'game', and for verbs that refer to actions defined in terms of an actor's goals, such as 'give' and 'make'. Finally, intentional considerations partially underlie the generalization of names for artifact categories, such as 'chair' and 'clock', which can refer to entities of highly dissimilar appearance.

3.
In these studies, we examined how a default assumption about word meaning, the mutual exclusivity assumption, and an intentional cue, gaze direction, interacted to guide 24‐month‐olds' object‐word mappings. In Expt 1, when the experimenter's gaze was consistent with the mutual exclusivity assumption, novel word mappings were facilitated. When the experimenter's eye‐gaze was in conflict with the mutual exclusivity cue, children demonstrated a tendency to rely on the mutual exclusivity assumption rather than follow the experimenter's gaze to map the label to the object. In Expt 2, children relied on the experimenter's gaze direction to successfully map both a first label to a novel object and a second label to a familiar object. Moreover, infants mapped second labels to familiar objects to the same degree that they mapped first labels to novel objects. These findings are discussed with regard to children's use of convergent and divergent cues in indirect word mapping contexts.

4.
We directly compare children learning argument-expressing and argument-dropping languages on the use of verb meaning and syntactic cues, by examining enactments of transitive and intransitive verbs given in transitive and intransitive syntactic frames. Our results show similarities in the children’s knowledge: (1) Children were somewhat less likely to perform an action when the core meaning of a verb was in conflict with the frame in which it was presented; (2) Children enacted the core meaning of the verb with considerable accuracy in all conditions; and (3) Children altered their actions to include or not include explicit objects appropriately to the frame. The results suggest that 3-year-olds learning languages that present them with very different structural cues still show similar knowledge about and sensitivity to the core meanings of transitive and intransitive verbs as well as the implications of the frames in which they appear.

5.
Imai, M., Kita, S., Nagumo, M., & Okada, H. (2008). Cognition, 109(1), 54-65.
Some words are sound-symbolic in that they involve a non-arbitrary relationship between sound and meaning. Here, we report that 25-month-old children are sensitive to cross-linguistically valid sound-symbolic matches in the domain of action and that this sound symbolism facilitates verb learning in young children. We constructed a set of novel sound-symbolic verbs whose sounds were judged to match certain actions better than others, as confirmed by adult Japanese and English speakers, and by 2- and 3-year-old Japanese-speaking children. These sound-symbolic verbs, together with other novel non-sound-symbolic verbs, were used in a verb learning task with 3-year-old Japanese children. In line with the previous literature, 3-year-olds could not generalize the meaning of novel non-sound-symbolic verbs on the basis of the sameness of action. However, 3-year-olds could correctly generalize the meaning of novel sound-symbolic verbs. These results suggest that iconic scaffolding by means of sound symbolism plays an important role in early verb learning.

6.
This study explores a common assumption made in the cognitive development literature that children will treat gestures as labels for objects. Without doubt, researchers in these experiments intend to use gestures symbolically as labels. The present studies examine whether children interpret these gestures as labels. In Study 1, two-, three-, and four-year-olds tested in a training paradigm learned gesture–object pairs for both iconic and arbitrary gestures. Iconic gestures became more accurate with age, while arbitrary gestures did not. Study 2 tested the willingness of children aged 40–60 months to fast map novel nouns, iconic gestures and arbitrary gestures to novel objects. Children used fast mapping to choose objects for novel nouns, but treated gesture as an action associate, looking for an object that could perform the action depicted by the gesture. They were successful with iconic gestures but chose objects randomly for arbitrary gestures and did not fast map. Study 3 tested whether this effect was a result of the framing of the request and found that results did not change regardless of whether the request was framed with a deictic phrase (“this one 〈gesture〉”) or an article (“a 〈gesture〉”). Implications for preschool children’s understanding of iconicity, and for their default interpretations of gesture, are discussed.

7.
Children (4 to 6 years of age) were taught to associate printed 3- or 4-letter abbreviations, or cues, with spoken words (e.g., bfr for beaver). All but 1 of the letters in the cue corresponded to phonemes in the spoken target word. Two types of cues were constructed: phonetic cues, in which the medial letter was phonetically similar to the target word, and control cues, in which the central phoneme was phonetically dissimilar. In Experiment 1, children learned the phonetic cues better than the control cues, and learning correlated with measures of phonological skill and knowledge of the meanings of the words taught. In Experiment 2, the target words differed on a semantic variable, imageability, and learning was influenced by both the phonetic properties of the cue and the imageability of the words used.

8.
In the first study using point-light displays (lights corresponding to the joints of the human body) to examine children's understanding of verbs, 3-year-olds were tested to see if they could perceive familiar actions that corresponded to motion verbs (e.g., walking). Experiment 1 showed that children could extend familiar motion verbs (e.g., walking and dancing) to videotaped point-light actions shown in the intermodal preferential looking paradigm. Children watched the action that matched the requested verb significantly more than they watched the action that did not match the verb. In Experiment 2, the findings of Experiment 1 were validated by having children spontaneously produce verbs for these actions. The use of point-light displays may illuminate the factors that contribute to verb learning.

9.
Four experiments explored the processing of pointing gestures comprising hand and combined head and gaze cues to direction. The cross-modal interference effect exerted by pointing hand gestures on the processing of spoken directional words, first noted by S. R. H. Langton, C. O'Malley, and V. Bruce (1996), was found to be moderated by the orientation of the gesturer's head-gaze (Experiment 1). Hand and head cues also produced bidirectional interference effects in a within-modalities version of the task (Experiment 2). These findings suggest that both head-gaze and hand cues to direction are processed automatically and in parallel up to a stage in processing where a directional decision is computed. In support of this model, head-gaze cues produced no influence on nondirectional decisions to social emblematic gestures in Experiment 3 but exerted significant interference effects on directional responses to arrows in Experiment 4. It is suggested that the automatic analysis of head, gaze, and pointing gestures occurs because these directional signals are processed as cues to the direction of another individual's social attention.

10.
Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory.

11.
Because children hear language in environments that contain many things to talk about, learning the meaning of even the simplest word requires making inferences under uncertainty. A cross-situational statistical learner can aggregate across naming events to form stable word-referent mappings, but this approach neglects an important source of information that can reduce referential uncertainty: social cues from speakers (e.g., eye gaze). In four large-scale experiments with adults, we tested the effects of varying referential uncertainty in cross-situational word learning using social cues. Social cues shifted learners away from tracking multiple hypotheses and towards storing only a single hypothesis (Experiments 1 and 2). In addition, learners were sensitive to graded changes in the strength of a social cue, and when it became less reliable, they were more likely to store multiple hypotheses (Experiment 3). Finally, learners stored fewer word-referent mappings in the presence of a social cue even when given the opportunity to visually inspect the objects for the same amount of time (Experiment 4). Taken together, our data suggest that the representations underlying cross-situational word learning of concrete object labels are quite flexible: In conditions of greater uncertainty, learners store a broader range of information.
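The cross-situational mechanism described in this abstract, aggregating word-referent co-occurrences across individually ambiguous naming events, can be illustrated with a minimal toy sketch in Python. This is only an illustration under assumed toy data, not the model or procedure used in the cited experiments; the words, objects, and simple counting scheme below are hypothetical.

```python
from collections import defaultdict

# Toy cross-situational learner: each "naming event" pairs a heard word with
# the set of candidate referents visible in that scene. The learner counts
# word-referent co-occurrences and, after several ambiguous events, picks the
# referent most often present when the word was heard.
# (Illustrative sketch only -- not the model from the cited experiments.)

def learn(events):
    counts = defaultdict(lambda: defaultdict(int))
    for word, referents in events:
        for ref in referents:
            counts[word][ref] += 1
    # For each word, choose the referent with the highest co-occurrence count.
    return {word: max(refs, key=refs.get) for word, refs in counts.items()}

# Hypothetical ambiguous naming events: "dax" always co-occurs with the ball,
# but each individual scene is ambiguous between two objects.
events = [
    ("dax", {"ball", "cup"}),
    ("dax", {"ball", "shoe"}),
    ("blick", {"cup", "shoe"}),
    ("blick", {"cup", "ball"}),
]

print(learn(events))  # {'dax': 'ball', 'blick': 'cup'}
```

In this toy framing, a social cue such as speaker gaze could be thought of as pruning the candidate referent set on cued trials, which corresponds loosely to the shift from tracking multiple hypotheses to storing a single one that the experiments probe.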

12.
In two experiments, this paper examines how the labels used to describe interpersonal interactions can affect perceivers' judgments of who caused the interaction. Two universal, connotative dimensions of word meaning underlying the labels, evaluation and potency, influenced expectations about interactants' behaviors and experiences, which in turn affected perceivers' causal attributions. Evaluation and potency ratings for a set of experiencer verbs, a set of action verbs, a set of trait labels (Experiment 1) and a set of social category labels (Experiment 2) were used to construct sentences describing interactions between two people. The complete set of sentences contained all possible combinations of high or low evaluation and potency for all the sentence constituents. Participants were asked to judge who caused the event—subject or object—without having been told that evaluation and potency were the dimensions of interest. When the sentence subject and object differed in evaluation, the evaluative match between the sentence subject and the verb was the most important factor influencing attributions. The potency of the constituents and the class of the verb (experiencer or action) affected the magnitude of the attributions. When the sentence subject and object shared the same valence, attributions were based on verb class. The results highlight the important role of language in interpreting social behavior.

13.
Sound‐symbolism is the nonarbitrary link between the sound and meaning of a word. Japanese‐speaking children performed better in a verb generalization task when they were taught novel sound‐symbolic verbs, created based on existing Japanese sound‐symbolic words, than novel nonsound‐symbolic verbs (Imai, Kita, Nagumo, & Okada, 2008). A question remained as to whether the Japanese children had picked up regularities in the Japanese sound‐symbolic lexicon or were sensitive to universal sound‐symbolism. The present study aimed to provide support for the latter. In a verb generalization task, English‐speaking 3‐year‐olds were taught novel sound‐symbolic verbs, created based on Japanese sound‐symbolism, or novel nonsound‐symbolic verbs. English‐speaking children performed better with the sound‐symbolic verbs, just like Japanese‐speaking children. We concluded that children are sensitive to universal sound‐symbolism and can utilize it in word learning and generalization, regardless of their native language.

14.
This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a single dimension (e.g., big vs. small flower) and heard a recorded voice asking them, for example, “Can you get the blicket one?” spoken with either meaningful or neutral prosody. The 4-year-olds failed to map prosodic cues to their corresponding meaning, whereas the 5-year-olds succeeded (Experiment 1). However, 4-year-olds successfully mapped prosodic cues to word meaning following a training phase that reinforced children’s attention to prosodic information (Experiment 2). These studies constitute the first empirical demonstration that young children are able to use prosody-to-meaning correlates as a cue to novel word interpretation.

15.
Adults use gaze and voice signals as cues to the mental and emotional states of others. We examined the influence of voice cues on children’s judgments of gaze. In Experiment 1, 6-year-olds, 8-year-olds, and adults viewed photographs of faces fixating the center of the camera lens and a series of positions to the left and right and judged whether gaze was direct or averted. On each trial, participants heard a participant-directed voice cue (e.g., “I see you”), an object-directed voice cue (e.g., “I see that”), or no voice. In 6-year-olds, the range of directions of gaze leading to the perception of eye contact (the cone of gaze) was narrower for trials with object-directed voice cues than for trials with participant-directed voice cues or no voice. This effect was absent in 8-year-olds and adults, both of whom had a narrower cone of gaze than 6-year-olds. In Experiment 2, we investigated whether voice cues would influence adults’ judgments of gaze when the task was made more difficult by limiting the duration of exposure to the face. Adults’ cone of gaze was wider than in Experiment 1, and the effect of voice cues was similar to that observed in 6-year-olds in Experiment 1. Together, the results indicate that object-directed voice cues can decrease the width of the cone of gaze, allowing more adult-like judgments of gaze in young children, and that voice cues may be especially effective when the cone of gaze is wider because of immaturity (Experiment 1) or limited exposure (Experiment 2).

16.
Scott, R. M., & Fisher, C. (2012). Cognition, 122(2), 163-180.
Recent evidence shows that children can use cross-situational statistics to learn new object labels under referential ambiguity (e.g., Smith & Yu, 2008). Such evidence has been interpreted as support for proposals that statistical information about word-referent co-occurrence plays a powerful role in word learning. But object labels represent only a fraction of the vocabulary children acquire, and arguably represent the simplest case of word learning based on observations of world scenes. Here we extended the study of cross-situational word learning to a new segment of the vocabulary, action verbs, to permit a stronger test of the role of statistical information in word learning. In two experiments, on each trial 2.5-year-olds encountered two novel intransitive (e.g., "She's pimming!"; Experiment 1) or transitive verbs (e.g., "She's pimming her toy!"; Experiment 2) while viewing two action events. The consistency with which each verb accompanied each action provided the only source of information about the intended referent of each verb. The 2.5-year-olds used cross-situational consistency in verb learning, but also showed significant limits on their ability to do so as the sentences and scenes became slightly more complex. These findings help to define the role of cross-situational observation in word learning.

17.
Children's ability to understand symmetrical verbs was investigated, along with adults' use of linguistic and visual cues to learn novel symmetrical verbs. Symmetrical verbs encode a relationship r between two entities such that X r Y entails Y r X. In Experiment 1, sixteen children (mean age 4;8) acted out two types of sentences with symmetrical and asymmetrical verbs. Eight adult judges viewed videotapes of the children's performance and tried to guess what sentence type was being enacted. Judges' performance was predicted (p < .05) by the verb type, symmetrical or asymmetrical. In Experiment 2, seventy-two adult subjects received visual and linguistic cues to the meanings of novel verbs. Both cue types affected subjects' judgments about whether the new verbs were symmetrical or asymmetrical (p < .05).

18.
The role of the semantic radicals of Chinese characters in the semantic cognition of Chinese action verbs
张积家, 陈新葵. 《心理学报》 (Acta Psychologica Sinica), 2005, 37(4), 434-441.
Four experiments examined the role of the semantic radicals of Chinese characters in the cognition of the meanings of Chinese action verbs. Experiment 1 examined the influence of semantic radicals on the cognition of the action-organ meaning of action verbs. The results showed that when the radical was consistent with the action organ, cognition of the verb's action-organ meaning was facilitated; when the radical was inconsistent with the action organ, it was inhibited. Experiment 2 examined the influence of semantic radicals on the cognition of the action-instrument meaning of action verbs. The results showed that when the radical was consistent with the action instrument, cognition of the verb's action-instrument meaning was facilitated; when the radical was inconsistent with or unrelated to the action instrument, it was inhibited. Experiment 3 examined whether the effect of the semantic radical varied with word frequency. When the radical was consistent with the action organ, there was no significant difference between high- and low-frequency words in the cognition of the action-organ meaning; when it was inconsistent, the action-organ meaning was recognized faster for high-frequency words than for low-frequency words. Experiment 4 examined whether the consistency between the radical and the action organ affected the cognition of the action-organ meaning for action verbs differing in concreteness. When the radical was consistent with the action organ, there was no significant difference between high- and low-concreteness words; when it was inconsistent, the action-organ meaning was recognized faster for high-concreteness words than for low-concreteness words. Overall, the study showed that the structural characteristics of Chinese verbs influence the cognition of the action-organ and action-instrument meanings of action verbs.

19.
Infant signs are intentionally taught/learned symbolic gestures which can be used to represent objects, actions, requests, and mental states. Through infant signs, parents and infants begin to communicate specific concepts earlier than children’s first spoken language. This study examines whether cultural differences in language are reflected in children’s and parents’ use of infant signs. Parents speaking East Asian languages with their children utilize verbs more often than do English-speaking mothers; and compared to their English-learning peers, Chinese children are more likely to learn verbs as they first acquire spoken words. By comparing parents’ and infants’ use of infant signs in the U.S. and Taiwan, we investigate cultural differences of noun/object versus verb/action bias before children’s first language. Parents reported their own and their children's use of first infant signs retrospectively. Results show that cultural differences in parents’ and children’s infant sign use were consistent with research on early words, reflecting cultural differences in communication functions (referential versus regulatory) and child-rearing goals (independent versus interdependent). The current study provides evidence that intergenerational transmission of culture through symbols begins prior to oral language.

20.
One advantage of living in a social group is the opportunity to use information provided by other individuals. Social information can be based on cues provided by a conspecific or even by a heterospecific individual (e.g., gaze direction, vocalizations, pointing gestures). Although the use of human gaze and gestures has been extensively studied in primates, and is increasingly studied in other mammals, there is no documentation of birds using these cues in a cooperative context. In this study, we tested the ability of three African gray parrots to use different human cues (pointing and/or gazing) in an object-choice task. We found that one subject spontaneously used the most salient pointing gesture (looking and steady pointing with the hand at about 20 cm from the baited box). The two others were also able to use this cue after 15 trials. None of the parrots spontaneously used the steady gaze cues (combined head and eye orientation), but one learned to do so effectively after only 15 trials when the distance between the head and the baited box was about 1 m. However, none of the parrots were able to use the momentary pointing or the distal pointing and gazing cues. These results are discussed in terms of sensitivity to joint attention as a prerequisite to understanding pointing gestures, as it is to the referential use of labels.
