Similar Literature
Query returned 20 similar documents (search time: 0 ms)
1.
Prepositions name spatial relationships (e.g., book on a table). But they are also used to convey abstract, non‐spatial relationships (e.g., Adrian is on a roll)—raising the question of how the abstract uses relate to the concrete spatial uses. Despite considerable success in delineating these relationships, no general account exists for the two most frequently extended prepositions: in and on. We test the proposal that what is preserved in abstract uses of these prepositions is the relative degree of control between the located object (the figure) and the reference object (the ground). Across four experiments, we find a continuum of greater figure control for on (e.g., Jordan is on a roll) and greater ground control for in (e.g., Casey is in a depression). These findings bear on accounts of semantic structure and language change, as well as on second language instruction.

2.
The recent emergence of a new sign language among deaf children and adolescents in Nicaragua provides an opportunity to study how grammatical features of a language arise and spread, and how new language environments are constructed. The grammatical regularities that underlie language use reside largely outside the domain of explicit awareness. Nevertheless, knowledge of these regularities must be transmitted from one generation to the next to survive as part of the language. During this transmission, language form and use are shaped both by the characteristics of ontogenetic development within individual users and by historical changes in patterns of interaction between users. To capture this process, the present study follows the emergence of spatial modulations in Nicaraguan Sign Language (NSL). A comprehension task examining interpretations of spatially modulated verbs reveals that new form-function mappings arise among children who functionally differentiate previously equivalent forms. The new mappings are then acquired by their age peers (who are also children), and by subsequent generations of children who learn the language, but not by adult contemporaries. As a result, language emergence is characterized by a convergence on form within each age cohort, and a mismatch in form from one age cohort to the cohort that follows. In this way, each age cohort, in sequence, transforms the language environment for the next, enabling each new cohort of learners to develop further than its predecessors.

3.
This study investigated how young children’s increasingly flexible use of spatial reference frames enables accurate search for hidden objects by using a task that 3-year-olds have been shown to perform with great accuracy and 2-year-olds have been shown to perform inaccurately. Children watched as an object was rolled down a ramp, behind a panel of doors, and stopped at a barrier visible above the doors. In two experiments, we gave 2- and 2.5-year-olds a strong reference frame by increasing the relative salience and stability of the barrier. In Experiment 1, 2.5-year-olds performed at above-chance levels with the more salient barrier. In Experiment 2, we highlighted the stability of the barrier (or ramp) by maximizing the spatial extent of each reference frame across the first four training trials. Children who were given a stable barrier (and moving ramp) during these initial trials performed at above-chance levels and significantly better than children who were given a stable ramp (and moving barrier). This work highlights that factors central to spatial cognition and motor planning—aligning egocentric and object-centered reference frames—play a role in the ramp task during this transitional phase in development.

4.
Spatial terms that encode support (e.g., “on”, in English) are among the first to be understood by children across languages (e.g., Bloom, 1973; Johnston & Slobin, 1979). Such terms apply to a wide variety of support configurations, including Support-From-Below (SFB; cup on table) and Mechanical Support, such as stamps on envelopes, coats on hooks, etc. Research has yet to delineate infants’ semantic space for the term “on” when considering its full range of usage. Do infants initially map “on” to a very broad, highly abstract category, one including cups on tables, stamps on envelopes, etc.? Or do infants begin with a much more restricted interpretation, mapping “on” to certain configurations over others? Much infant cognition research suggests that SFB is an event category that infants learn about early, by five months of age (Baillargeon & DeJong, 2017), raising the possibility that they may also begin by interpreting the word “on” as referring to configurations like cups on tables, rather than stamps on envelopes. Further, studies examining language production suggest that children and adults map the basic locative expression (BE on, in English) to SFB over Mechanical Support (Landau et al., 2016). We tested the hypothesis that this privileging of SFB in early infant cognition and in child and adult language also characterizes infants’ language comprehension. Using the Intermodal Preferential Looking Paradigm in combination with infant eye-tracking, 20-month-olds were presented with two support configurations: SFB and Mechanical Support-Via-Adhesion (henceforth, SVA). Infants preferentially mapped “is on” to SFB (rather than SVA), suggesting that infants differentiate between these two quite different kinds of support configurations when mapping spatial language, and, further, that SFB is privileged in infants’ early understanding of the English spatial term “on”.

5.
Generic statements express generalizations about categories and present a unique semantic profile that is distinct from quantified statements. This paper reports two studies examining the development of children's intuitions about the semantics of generics and how they differ from statements quantified by all, most, and some. Results reveal that, like adults, preschoolers (a) recognize that generics have flexible truth conditions and are capable of representing a wide range of prevalence levels; and (b) interpret novel generics as having near‐universal prevalence implications. Results further show that by age 4, children are beginning to differentiate the meaning of generics and quantified statements; however, even 7‐ to 11‐year‐olds are not adultlike in their intuitions about the meaning of most‐quantified statements. Overall, these studies suggest that by preschool, children interpret generics in much the same way that adults do; however, mastery of the semantics of quantified statements follows a more protracted course.

6.
Lozano SC, Hard BM, Tversky B (2007). Cognition, 103(3), 480-490.
Embodied approaches to cognition propose that our own actions influence our understanding of the world. Do other people's actions also have this influence? The present studies show that perceiving another person's actions changes the way people think about objects in a scene. In Study 1, participants viewed a photograph and answered a question about the location of one object relative to another. The question either did or did not call attention to an action being performed in the scene. Studies 2 and 3 focused on whether depicting an action in a scene influenced perspective choice. Across all studies, drawing attention to action, whether verbally or pictorially, led observers to encode object locations from the actor's spatial perspective. Study 4 demonstrated that the tendency to adopt the actor's perspective might be mediated by motor experience.

7.
We present a dynamical systems account of how simple social information influences perspective-taking. Our account is motivated by the notion that perspective-taking may obey common dynamic principles with perceptuomotor coupling. We turn to the prominent HKB dynamical model of motor coordination, drawing from basic principles of self-organization to describe how conversational perspective-taking unfolds in a low-dimensional attractor landscape. We begin by simulating experimental data taken from a simple instruction-following task, in which participants have different expectations about their interaction partner. By treating belief states as different values of a control parameter, we show that data generated by a basic dynamical process fits overall egocentric and other-centric response distributions, the time required for participants to enact a response on a trial-by-trial basis, and the action dynamics exhibited in individual trials. We end by discussing the theoretical significance of dynamics in dialog, arguing that high-level coordination such as perspective-taking may obey similar dynamics as perceptuomotor coordination, pointing to common principles of adaptivity and flexibility during dialog.
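Editor's note: for readers unfamiliar with the HKB model invoked above, the Python sketch below (not the authors' implementation; all parameter values are illustrative) integrates the standard HKB relative-phase equation. The coupling ratio b/a acts as the control parameter: with b/a large, both in-phase (phi = 0) and anti-phase (phi = pi) coordination are stable attractors; once b/a falls below 1/4, the anti-phase attractor vanishes. The paper treats belief states analogously, as different values of a control parameter.

    import numpy as np

    def hkb_flow(phi, a=1.0, b=0.5):
        # Standard HKB relative-phase dynamics: dphi/dt = -a*sin(phi) - 2b*sin(2*phi)
        return -a * np.sin(phi) - 2 * b * np.sin(2 * phi)

    def settle(phi0, a=1.0, b=0.5, dt=0.01, steps=20000):
        # Euler-integrate the relative phase until it settles near an attractor
        phi = phi0
        for _ in range(steps):
            phi += dt * hkb_flow(phi, a, b)
        return np.mod(phi, 2 * np.pi)

    # b (relative to a) is the control parameter, analogous to belief state here
    for b in (1.0, 0.2):
        print(f"b={b}: phi settles near {settle(phi0=2.5, b=b):.2f} rad")
    # b=1.0 -> ~3.14 (anti-phase survives); b=0.2 -> ~0.00 (only in-phase remains)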

8.
There is mounting evidence that language comprehension involves the activation of mental imagery of the content of utterances (Barsalou, 1999; Bergen, Chang, & Narayan, 2004; Bergen, Narayan, & Feldman, 2003; Narayan, Bergen, & Weinberg, 2004; Richardson, Spivey, McRae, & Barsalou, 2003; Stanfield & Zwaan, 2001; Zwaan, Stanfield, & Yaxley, 2002). This imagery can have motor or perceptual content. Three main questions about the process remain under‐explored, however. First, are lexical associations with perception or motion sufficient to yield mental simulation, or is the integration of lexical semantics into larger structures, like sentences, necessary? Second, what linguistic elements (e.g., verbs, nouns, etc.) trigger mental simulations? Third, how detailed are the visual simulations that are performed? A series of behavioral experiments address these questions, using a visual object categorization task to investigate whether up‐ or down‐related language selectively interferes with visual processing in the same part of the visual field (following Richardson et al., 2003). The results demonstrate that either subject nouns or main verbs can trigger visual imagery, but only when used in literal sentences about real space—metaphorical language does not yield significant effects—which implies that it is the comprehension of the sentence as a whole and not simply lexical associations that yields imagery effects. These studies also show that the evoked imagery contains detail as to the part of the visual field where the described scene would take place.

9.
Object imagery refers to the ability to construct pictorial images of objects. Individuals with high object imagery (high-OI) produce more vivid mental images than individuals with low object imagery (low-OI), and they encode and process both mental images and visual stimuli in a more global and holistic way. In the present study, we investigated whether and how level of object imagery may affect the way in which individuals identify visual objects. High-OI and low-OI participants were asked to perform a visual identification task with spatially-filtered pictures of real objects. Each picture was presented at nine levels of filtering, starting from the most blurred (level 1: only low spatial frequencies—global configuration) and gradually adding high spatial frequencies up to the complete version (level 9: global configuration plus local and internal details). Our data showed that high-OI participants identified stimuli at a lower level of filtering than participants with low-OI, indicating that they were better able than low-OI participants to identify visual objects at lower spatial frequencies. Implications of the results and future developments are discussed.
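Editor's note: graded low-spatial-frequency filtering of this kind is typically implemented as a low-pass filter in the Fourier domain. Below is a minimal Python sketch assuming a circular FFT mask and nine evenly spaced cutoffs; the study's actual filter shape and cutoff values are not given here, so treat every number as a placeholder.

    import numpy as np

    def low_pass(image, cutoff):
        # Keep only spatial frequencies below `cutoff` (in cycles per image)
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        spectrum[radius > cutoff] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

    # Level 1 keeps only the global configuration; level 9 adds fine local detail
    rng = np.random.default_rng(0)
    picture = rng.random((128, 128))  # stand-in for a grayscale object photograph
    levels = [low_pass(picture, c) for c in np.linspace(4, 64, 9)]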

10.
Gorniak P, Roy D (2007). Cognitive Science, 31(2), 197-231.
We introduce a computational theory of situated language understanding in which the meaning of words and utterances depends on the physical environment and the goals and plans of communication partners. According to the theory, concepts that ground linguistic meaning are neither internal nor external to language users, but instead span the objective-subjective boundary. To model the possible interactions between subject and object, the theory relies on the notion of perceived affordances: structured units of interaction that can be used for prediction at multiple levels of abstraction. Language understanding is treated as a process of filtering perceived affordances. The theory accounts for many aspects of the situated nature of human language use and provides a unified solution to a number of demands on any theory of language understanding including conceptual combination, prototypicality effects, and the generative nature of lexical items. To support the theory, we describe an implemented system that understands verbal commands situated in a virtual gaming environment. The implementation uses probabilistic hierarchical plan recognition to generate perceived affordances. The system has been evaluated on its ability to correctly interpret free-form spontaneous verbal commands recorded from unrehearsed game play between human players. The system is able to "step into the shoes" of human players and correctly respond to a broad range of verbal commands in which linguistic meaning depends on social and physical context. We quantitatively compare the system's predictions in response to direct player commands with the actions taken by human players and show generalization to unseen data across a range of situations and verbal constructions.
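Editor's note: to make the "filtering perceived affordances" idea concrete, here is a deliberately schematic Python toy. It bears no relation to Gorniak and Roy's actual probabilistic hierarchical plan recognizer; the data structures, scores, and matching rule are all invented for illustration. Candidate interactions carry plan-derived probabilities, the utterance filters them, and the most plan-consistent survivor is selected.

    from dataclasses import dataclass

    @dataclass
    class Affordance:
        # A structured unit of possible interaction, scored by plan recognition
        action: str
        target: str
        plan_probability: float  # stubbed here; the real system derives this

    def understand(utterance, affordances):
        # Understanding as filtering: keep affordances the utterance could name,
        # then choose the most plan-consistent survivor
        words = set(utterance.lower().split())
        matches = [a for a in affordances if a.action in words or a.target in words]
        return max(matches, key=lambda a: a.plan_probability)

    scene = [
        Affordance("open", "door", 0.7),
        Affordance("push", "door", 0.2),
        Affordance("open", "chest", 0.4),
    ]
    print(understand("open the door", scene))  # -> the high-probability door-opening affordance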

11.
The ability to determine how many objects are involved in physical events is fundamental for reasoning about the world that surrounds us. Previous studies suggest that infants can fail to individuate objects in ambiguous occlusion events until their first birthday and that learning words for the objects may play a crucial role in the development of this ability. The present eye-tracking study tested whether the classical object individuation experiments underestimate young infants’ ability to individuate objects and the role word learning plays in this process. Three groups of 6-month-old infants (N = 72) saw two opaque boxes side by side on the eye-tracker screen so that the content of the boxes was not visible. During a familiarization phase, two visually identical objects emerged sequentially from one box and two visually different objects from the other box. For one group of infants the familiarization was silent (Visual Only condition). For a second group of infants the objects were accompanied by nonsense words, so that the objects’ shapes and the linguistic labels indicated the same number of objects in the two boxes (Visual & Language condition). For the third group of infants, the objects’ shapes and the linguistic labels were in conflict (Visual vs. Language condition). Following familiarization, it was revealed that both boxes contained the same number of objects (one or two). In the Visual Only condition, infants looked longer at the box with the incorrect number of objects at test, showing that they could individuate objects using visual cues alone. In the Visual & Language condition, infants showed the same looking pattern. However, in the Visual vs. Language condition, infants looked longer at the box with the incorrect number of objects according to the linguistic labels. The results show that infants can individuate objects in a complex object individuation paradigm considerably earlier than previously thought and that linguistic cues can impose their own preference in object individuation. The results are consistent with the idea that when language and visual information are in conflict, language can exert an influence on how young infants reason about the visual world.

12.
In order to investigate whether addressees can make immediate use of speaker-based constraints during reference resolution, participant addressees’ eye movements were monitored as they helped a confederate cook follow a recipe. Objects were located in the helper’s area, which the cook could not reach, and in the cook’s area, which both could reach. Critical referring expressions matched one object (helper’s area) or two objects (helper’s and cook’s areas), and were produced when the cook’s hands were empty or full, which defined the cook’s reaching-ability constraints. Helpers’ first and total fixations showed that they restricted their domain of interpretation to their own objects when the cook’s hands were empty, and widened it to include the cook’s objects only when the cook’s hands were full. These results demonstrate that addressees can quickly take into account task-relevant constraints to restrict their referential domain to referents that are plausible given the speaker’s goals and constraints.

13.
Recent research has demonstrated an asymmetry between the origins and endpoints of motion events, with preferential attention given to endpoints rather than beginnings of motion in both language and memory. Two experiments explore this asymmetry further and test its implications for language production and comprehension. Experiment 1 shows that both adults and 4‐year‐old children detect fewer within‐category changes in source than goal objects when tested for memory of motion events; furthermore, these groups produce fewer references to source than goal objects when describing the same motion events. Experiment 2 asks whether the specificity of encoding source/goal relations differs in both spatial memory and the comprehension of novel spatial vocabulary. Results show that endpoint configuration changes are detected more accurately than source configuration changes by both adults and young children. Furthermore, when interpreting novel motion verbs, both age groups expect more fine‐grained lexical distinctions in the domain of endpoint configurations compared to that of source configurations. These studies demonstrate that a cognitive‐attentional bias in spatial representation and memory affects both the detail of linguistic encoding during the use of spatial language and the specificity of hypotheses about spatial referents that learners build during the acquisition of the spatial lexicon.

14.
The question of when and how bottom‐up input is integrated with top‐down knowledge has been debated extensively within cognition and perception, and particularly within language processing. A long-running debate about the architecture of the spoken‐word recognition system has centered on the locus of lexical effects on phonemic processing: does lexical knowledge influence phoneme perception through feedback, or post‐perceptually in a purely feedforward system? Elman and McClelland (1988) reported that lexically restored ambiguous phonemes influenced the perception of the following phoneme, supporting models with feedback from lexical to phonemic representations. Subsequently, several authors have argued that these results can be fully accounted for by diphone transitional probabilities in a feedforward system (Cairns et al., 1995; Pitt & McQueen, 1998). We report results strongly favoring the original lexical feedback explanation: lexical effects were present even when transitional probability biases were opposite to those of lexical biases.
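Editor's note: since the feedforward counter-explanation rests on diphone transitional probabilities, a minimal sketch of how such probabilities are estimated may help. The toy corpus below is invented (each character stands for one phoneme) and is unrelated to the stimuli in the studies cited.

    from collections import Counter

    def diphone_transition_probs(corpus):
        # Estimate P(next phoneme | current phoneme) from phoneme strings
        pair_counts, context_counts = Counter(), Counter()
        for word in corpus:
            for a, b in zip(word, word[1:]):
                pair_counts[(a, b)] += 1
                context_counts[a] += 1
        return {pair: n / context_counts[pair[0]] for pair, n in pair_counts.items()}

    corpus = ["tas", "das", "kat"]  # toy phoneme strings, one character per phoneme
    probs = diphone_transition_probs(corpus)
    # In a purely feedforward account, biases like these would stand in for
    # lexical knowledge when an ambiguous phoneme follows "a"
    print(probs[("a", "s")], probs[("a", "t")])  # ~0.67 vs. ~0.33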

15.
The role of language in acquiring object kind concepts in infancy
Xu F (2002). Cognition, 85(3), 223-250.
Four experiments investigated whether 9-month-old infants could use the presence of labels to help them establish a representation of two distinct objects in a complex object individuation task. We found that the presence of two distinct labels facilitated object individuation, but the presence of one label for both objects, two distinct tones, two distinct sounds, or two distinct emotional expressions did not. These findings suggest that language may play an important role in the acquisition of sortal/object kind concepts in infancy: words may serve as "essence placeholders". Implications for the relationship between language and conceptual development are discussed.

16.
We investigate a possible universal constraint on spatial meaning. It has been proposed that people attend preferentially to the endpoints of spatial motion events, and that languages may therefore make finer semantic distinctions at event endpoints than at event beginnings. We test this proposal. In Experiment 1, we show that people discriminate the endpoints of spatial motion events more readily than they do event beginnings, suggesting a non-linguistic attentional bias toward endpoints. In Experiment 2, speakers of Arabic, Chinese, and English each described a set of spatial events displayed in video clips. Although the spatial systems of these languages differ, speakers of all three languages made finer semantic distinctions at event endpoints, compared to event beginnings. These findings are consistent with the proposal that event endpoints are privileged over event beginnings, in both language and perception.

17.
This study investigated the relative contribution of perception/cognition and language-specific semantics in nonverbal categorization of spatial relations. English and Korean speakers completed a video-based similarity judgment task involving containment, support, tight fit, and loose fit. Both perception/cognition and language served as resources for categorization, and allocation between the two depended on the target relation and the features contrasted in the choices. Whereas perceptual/cognitive salience for containment and tight-fit features guided categorization in many contexts, language-specific semantics influenced categorization where the two features competed for similarity judgment and when the target relation was tight support, a domain where spatial relations are perceptually diverse. In the latter contexts, each group categorized more in line with the semantics of their language, that is, containment/support for English and tight/loose fit for Korean. We conclude that language guides spatial categorization when perception/cognition alone is not sufficient. In this way, language is an integral part of our cognitive domain of space.

18.
We tested young children’s spatial reasoning in a match-to-sample task, manipulating the objects in the task (abstract geometric shapes, line drawings of realistic objects, or both). Korean 4- and 5-year-old children (N = 161) generalized the target spatial configuration (i.e., on, in, above) more easily when the sample used geometric shapes and the choices used realistic objects than the reverse (i.e., realistic-object sample to geometric-shape choices). With within-type stimuli (i.e., sample and choices were both geometric shapes or both realistic objects), 5-year-old, but not 4-year-old, children generalized the spatial relations more easily with geometric shapes than realistic objects. In addition, children who knew more locative terms (e.g., “in”, “on”) performed better on the task, suggesting a link to children’s spatial vocabulary. The results demonstrate an advantage of geometric shapes over realistic objects in facilitating young children’s performance on a match-to-sample spatial reasoning task.

19.
Children's overextensions of spatial language are often taken to reveal spatial biases. However, it is unclear whether extension patterns should be attributed to children's overly general spatial concepts or to a narrower notion of conceptual similarity allowing metaphor‐like extensions. We describe a previously unnoticed extension of spatial expressions and use a novel method to determine its origins. English‐ and Greek‐speaking 4‐ and 5‐year‐olds used containment expressions (e.g., English into, Greek mesa) for events where an object moved into another object but extended such expressions to events where the object moved behind or under another object. The pattern emerged in adult speakers of both languages and also in speakers of 10 additional languages. We conclude that learners do not have an overly general concept of Containment. Nevertheless, children (and adults) perceive similarities across Containment and other types of spatial scenes, even when these similarities are obscured by the conventional forms of the language.

20.
We investigated the coupling between a speaker's and a listener's eye movements. Some participants talked extemporaneously about a television show whose cast members they were viewing on a screen in front of them. Later, other participants listened to these monologues while viewing the same screen. Eye movements were recorded for all speakers and listeners. According to cross-recurrence analysis, a listener's eye movements most closely matched a speaker's eye movements at a delay of 2 sec. Indeed, the more closely a listener's eye movements were coupled with a speaker's, the better the listener did on a comprehension test. In a second experiment, low-level visual cues were used to manipulate the listeners' eye movements, and these, in turn, influenced their latencies to comprehension questions. Just as eye movements reflect the mental state of an individual, the coupling between a speaker's and a listener's eye movements reflects the success of their communication.
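Editor's note: the lag analysis described above can be sketched simply for categorical gaze streams (which cast member is fixated at each sample). The Python toy below is an assumption-laden simplification of cross-recurrence analysis, not the authors' pipeline: it just measures, for each lag, how often the listener's fixated region matches the speaker's earlier one.

    import numpy as np

    def cross_recurrence(speaker, listener, max_lag):
        # For each lag, the proportion of samples where the listener's fixated
        # region at time t+lag matches the speaker's at time t
        rates = {}
        for lag in range(max_lag + 1):
            s = np.array(speaker[: len(speaker) - lag])
            l = np.array(listener[lag:])
            rates[lag] = float(np.mean(s == l))
        return rates

    # Toy gaze streams over screen regions (cast-member IDs), one sample per second;
    # this listener simply follows the speaker two samples late
    speaker = [1, 1, 2, 3, 3, 2, 1, 4, 4, 2, 2, 3]
    listener = [0, 0] + speaker[:-2]
    rates = cross_recurrence(speaker, listener, max_lag=4)
    print(max(rates, key=rates.get))  # peak coupling at lag 2, echoing the 2-sec delay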
