Similar Articles
20 similar articles found (search time: 15 ms)
1.
Young children occasionally make scale errors – they attempt to fit their bodies into extremely small objects, or to fit a larger object into a much smaller one. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and researchers' informal accounts of these behaviors. However, scale errors have only been documented using laboratory procedures designed to promote their occurrence. To formally document the occurrence of scale errors in everyday settings, we posted a survey on the internet. Across two studies, participants reported many examples of everyday scale errors that are similar to those observed in our labs and were committed by children of the same age. These findings establish that scale errors occur in the course of children's daily lives, lending further support to the account that these behaviors stem from general aspects of visual processing.

2.
Scale errors are observed when young children make mistakes by attempting to put their bodies into miniature versions of everyday objects. Such errors have been argued to arise from children’s insufficient integration of size into their object representations. The current study investigated whether Japanese and UK children’s (18–24 months old, N = 80) visual exploration in a categorization task related to their scale error production. UK children who showed greater local processing made more scale errors, whereas Japanese children, who overall showed greater global processing, showed no such relationship. These results raise the possibility that children’s suppression of scale errors emerges not from attention to size per se, but from a critical integration of global (i.e., size) and local (i.e., object features) information during object processing, and provide evidence that this mechanism differs cross-culturally.

3.
Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. Existing explanations of these curious action errors assume (but have never explicitly tested) that children show decreased attention to object size information. This study investigated attention to object size information in scale error performers. Two groups of children aged 18–25 months (N = 52) and 48–60 months (N = 23) were tested in two consecutive tasks: an action task that replicated the original scale error elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding – that children who made scale errors in the action task subsequently paid less attention to size changes in the looking task than children who made none – suggests that scale errors originate already at the perceptual level, not at the action level.

4.
Young children sometimes attempt an action on an object that is inappropriate because of the object's size—they make scale errors. Existing theories suggest that scale errors may result from immaturities in children's action planning system, which might be overpowered by the increased complexity of object representations or by a developing teleofunctional bias. We used computational modelling to emulate children's learning to associate objects with actions and to select appropriate actions, given object shape and size. A computational Developmental Deep Model of Action and Naming (DDMAN) was built on the dual‐route theory of action selection, in which actions on objects are selected via a direct (nonsemantic or visual) route or an indirect (semantic) route. As in the case of children, DDMAN produced scale errors: the number of errors was high at the beginning of training and decreased linearly but did not disappear completely. Inspection of emerging object–action associations revealed that these were coarsely organized by shape, leading DDMAN to initially select actions based on shape rather than size. With experience, DDMAN gradually learned to use size in addition to shape when selecting actions. Overall, our simulations demonstrate that children's scale errors are a natural consequence of learning to associate objects with actions.
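The learning dynamic this abstract describes can be illustrated with a bare delta-rule sketch (not the actual DDMAN deep model; the feature coding, the 80/20 full-size/miniature mix, and the learning rate are all illustrative assumptions): shape–action associations form first because full-size exemplars dominate experience, so miniatures initially elicit the full-size action ("scale errors"), which decline as a size weight is learned.

```python
import math
import random

random.seed(0)

N_SHAPES = 3
LR = 0.5

def make_object():
    # Assumption (not from the paper): full-size exemplars dominate
    # the training stream, so shape-action links form before size matters.
    shape = random.randrange(N_SHAPES)
    size = 1.0 if random.random() < 0.8 else 0.0  # 1 = full-size, 0 = miniature
    return shape, size

def features(shape, size):
    x = [0.0] * N_SHAPES  # one-hot shape code
    x[shape] = 1.0
    x.append(size)        # single size feature
    return x

def predict(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))  # P(choose the full-size action)

w = [0.0] * (N_SHAPES + 1)
errors_per_block = []
block_errors = 0
for t in range(1, 2001):
    shape, size = make_object()
    x = features(shape, size)
    target = size                        # appropriate action matches size
    p = predict(w, x)
    chosen = 1.0 if p > 0.5 else 0.0
    if chosen != target:
        block_errors += 1                # on miniatures this is a "scale error"
    for i in range(len(w)):              # delta-rule weight update
        w[i] += LR * (target - p) * x[i]
    if t % 500 == 0:
        errors_per_block.append(block_errors)
        block_errors = 0

print(errors_per_block)  # error counts decline across training blocks
```

Because the size feature is silent on miniature trials early in training, action choice there is driven by the shape weights alone, reproducing the shape-before-size pattern the abstract reports.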

5.
A recent article by DeLoache et al. has documented an intriguing phenomenon in the development of action planning in young children. When children act on toy replicas of larger objects they make scale errors that are consistent with the full-sized object. Although the actions selected are inappropriate, their execution accurately takes into account the true size of the target. This phenomenon permits tests of the predictions of the perception-action and planning-control models of vision for action.

6.
Sensorimotor prediction and memory in object manipulation.  (Total citations: 5; self-citations: 0; citations by others: 5)
When people lift objects of different size but equal weight, they initially employ too much force for the large object and too little force for the small object. However, over repeated lifts of the two objects, they learn to suppress the size-weight association used to estimate force requirements and appropriately scale their lifting forces to the true and equal weights of the objects. Thus, sensorimotor memory from previous lifts comes to dominate visual size information in terms of force prediction. Here we ask whether this sensorimotor memory is transient, preserved only long enough to perform the task, or more stable. After completing an initial lift series in which they lifted equally weighted large and small objects in alternation, participants then repeated the lift series after delays of 15 minutes or 24 hours. In both cases, participants retained information about the weights of the objects and used this information to predict the appropriate fingertip forces. This preserved sensorimotor memory suggests that participants acquired internal models of the size-weight stimuli that could be used for later prediction.
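The over/undershoot-then-converge pattern described above can be sketched as simple error-corrective prediction (the prior forces, true weight, and correction rate are illustrative assumptions, not the paper's model): the first lift of each object follows a size-based prior, and each subsequent lift moves the prediction toward the felt weight.

```python
TRUE_WEIGHT = 5.0                          # both objects actually weigh the same
SIZE_PRIOR = {"large": 8.0, "small": 2.0}  # size-based initial force guesses
ALPHA = 0.5                                # error-correction rate per lift

def simulate_lifts(n_lifts):
    """Return the force applied on each lift of the large and small objects."""
    forces = {"large": [], "small": []}
    predicted = dict(SIZE_PRIOR)
    for _ in range(n_lifts):
        for obj in ("large", "small"):
            forces[obj].append(predicted[obj])
            # sensorimotor memory: correct the prediction toward the felt weight
            predicted[obj] += ALPHA * (TRUE_WEIGHT - predicted[obj])
    return forces

forces = simulate_lifts(8)
# First lifts overshoot (large) and undershoot (small); later lifts converge on 5.0
print([round(f, 2) for f in forces["large"]])
print([round(f, 2) for f in forces["small"]])
```

The paper's delay result corresponds to keeping the converged `predicted` values across sessions rather than resetting them to `SIZE_PRIOR`.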

7.
When passing through apertures, individuals scale their actions to their shoulder width and rotate their shoulders or avoid apertures that are deemed too small for straight passage. Carrying objects wider than the body produces a person-plus-object system that individuals must account for in order to pass through apertures safely. The present study aimed to determine whether individuals scale their critical point to the widest horizontal dimension (shoulder or object width). Two responses emerged: fast adapters adapted to the person-plus-object system by maintaining a consistent critical point regardless of whether the object was carried, while slow adapters initially increased their critical point (overestimated) before adapting back to their original critical point. The results suggest that individuals can account for increases in body width by scaling actions to the size of the object width, but that people adapt at different rates.

8.
Prior research has suggested that priming on perceptual implicit tests is insensitive to changes in stimulus size and reflection. The present experiments were performed to investigate whether size and reflection effects can be obtained in priming under conditions that encourage the processing of this information at study and at test, as predicted by transfer-appropriate processing. The results indicate that priming was affected by a change in the physical size of an object when study and test tasks required a judgment about the real size of pictorial objects (e.g., deciding whether a zebra presented small or large on the screen was larger or smaller than a typical chair), and when the test task required the identification of fragmented pictures. However, a change in left-right orientation had no effect on priming when study and test tasks required a judgment about the left-right orientation of familiar objects, or when the test task involved the identification of fragmented pictures. This difference between size and reflection effects is discussed in terms of the differential importance of size and reflection information in shape identification.

9.
Children sometimes make scale errors, attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to 3.5-year-olds made frequent scale errors with tools in a free-play session. Study 2 utilized a novel forced-choice method, representing a stronger test by handing 2-year-olds a feasible alternative for goal achievement, but children continued to make scale errors. Study 3 confirmed that errors were not based in perceptual immaturity. Results are explained using a framework of teleofunctional (purpose-based) reasoning as a powerful and early developing influence on children's actions.

10.
Visual scenes contain information on both a local scale (e.g., a tree) and a global scale (e.g., a forest). The question of whether the visual system prioritizes local or global elements has been debated for over a century. Given that visual scenes often contain distinct individual objects, here we examine how regularities between individual objects prioritize local or global processing. Participants viewed Navon-like figures consisting of small local objects making up a global object, and were asked to identify either the shape of the local objects or the shape of the global object, as fast and accurately as possible. Unbeknown to the participants, local regularities (i.e., triplets) or global regularities (i.e., quadruples) were embedded among the objects. We found that the identification of the local shape was faster when individual objects reliably co-occurred immediately next to each other as triplets (local regularities, Experiment 1). This result suggested that local regularities draw attention to the local scale. Moreover, the identification of the global shape was faster when objects co-occurred at the global scale as quadruples (global regularities, Experiment 2). This result suggested that global regularities draw attention to the global scale. No participant was explicitly aware of the regularities in the experiments. The results suggest that statistical regularities can determine whether attention is directed to the individual objects or to the entire scene. The findings provide evidence that regularities guide the spatial scale of attention in the absence of explicit awareness.

11.
Infants as young as 5 months of age view familiar actions such as reaching as goal-directed (Woodward, 1998), but how do they construe the goal of an actor's reach? Six experiments investigated whether 12-month-old infants represent reaching actions as directed to a particular individual object, to a narrowly defined object category (e.g., an orange dump truck), or to a more broadly defined object category (e.g., any truck, vehicle, artifact, or inanimate object). The experiments provide evidence that infants are predisposed to represent reaching actions as directed to categories of objects at least as broad as the basic level, both when the objects represent artifacts (trucks) and when they represent people (dolls). Infants do not use either narrower category information or spatiotemporal information to specify goal objects. Because spatiotemporal information is central to infants' representations of inanimate object motions and interactions, the findings are discussed in relation to the development of object knowledge and action representations.

12.
13.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action.

14.
Prominent theories of action recognition suggest that during recognition the physical pattern of an action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or as a Beetle) and that categorization performance is based on the visual and motor-movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic-level object categories were associated with a clear recognition advantage over subordinate recognition, whereas basic-level social interaction categories provided only a small recognition advantage. Moreover, basic-level object categories were more strongly associated with similar visual and motor cues than basic-level social interaction categories. The results suggest that the cognitive categories underlying the recognition of objects and social interactions are associated with different performance. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or as greeting).

15.
The visual system has been suggested to integrate different views of an object in motion. We investigated differences in the way moving and static objects are represented by testing for priming effects to previously seen ("known") and novel object views. We showed priming effects for moving objects across image changes (e.g., mirror reversals, changes in size, and changes in polarity) but not over temporal delays. The opposite pattern of results was observed for objects presented statically; that is, static objects were primed over temporal delays but not across image changes. These results suggest that representations for moving objects are: (1) updated continuously across image changes, whereas static object representations generalize only across similar images, and (2) more short-lived than static object representations. These results suggest two distinct representational mechanisms: a static object mechanism rather spatially refined and permanent, possibly suited for visual recognition, and a motion-based object mechanism more temporary and less spatially refined, possibly suited for visual guidance of motor actions.

16.
17.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye‐movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip‐art scenes and object arrays, raising the possibility that anticipatory eye‐movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real‐world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real‐world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co‐presence of the scene, or whether memory representations can be utilized instead. The same real‐world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object‐based visual indices.

18.
It is well documented that in the first year after birth, infants are able to identify self-performed actions. This ability has been regarded as the basis of conscious self-perception. However, it is not yet known whether infants are also sensitive to aspects of the self when they cannot control the sensory feedback by means of self-performed actions. Therefore, we investigated the contribution of visual–tactile contingency to self-perception in infants. In Experiment 1, 7- and 10-month-olds were presented with two video displays of lifelike baby doll legs. The infant’s left leg was stroked contingently with only one of the video displays. The results showed that 7- and 10-month-olds looked significantly longer at the contingent display than at the non-contingent display. Experiment 2 was conducted to investigate the role of morphological characteristics in contingency detection. Ten-month-olds were presented with video displays of two neutral objects (i.e., oblong wooden blocks of approximately the same size as the doll legs) being stroked in the same way as in Experiment 1. No preference was found for either the contingent or the non-contingent display, but our results confirm a significant decrease in looking time to the contingent display compared to Experiment 1. These results indicate that detection of visual–tactile contingency as one important aspect of self-perception is present very early in ontogeny. Furthermore, this ability appears to be limited to the perception of objects that strongly resemble the infant’s body, suggesting an early sensitivity to the morphology of one’s own body.

19.
A change detection task was used to estimate the visual short-term memory storage capacity for either the orientation or the size of objects. On each trial, several objects were briefly presented, followed by a blank interval and then by a second display of objects that either was identical to the first display or had a single object that was different (the object changed either orientation or size, in separate experiments). The task was to indicate whether the two displays were the same or different, and the number of objects remembered was estimated from the percent correct on this task. Storage capacity for a feature was nearly twice as large when that feature was defined by the object boundary, rather than by the surface texture of the object. This dramatic difference in storage capacity suggests that a particular feature (e.g., right tilted or small) is not stored in memory with an invariant abstract code. Instead, there appear to be different codes for the boundary and surface features of objects, and memory operates on boundary features more efficiently than it operates on surface features.
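The abstract says only that capacity was estimated from percent correct; the standard formulas for such estimates (an assumption here, since the paper's exact method is not stated) are Pashler's K for whole-display change detection and Cowan's K for single-probe designs. The example numbers below are illustrative, not from the study.

```python
def pashler_k(n, hit_rate, fa_rate):
    # Whole-display change detection (Pashler, 1988):
    # K = N * (H - FA) / (1 - FA)
    return n * (hit_rate - fa_rate) / (1.0 - fa_rate)

def cowan_k(n, hit_rate, fa_rate):
    # Single-probe change detection (Cowan, 2001):
    # K = N * (H - FA)
    return n * (hit_rate - fa_rate)

# Hypothetical example: 6 objects, 80% hits, 20% false alarms
print(round(pashler_k(6, 0.8, 0.2), 2))  # 4.5
print(round(cowan_k(6, 0.8, 0.2), 2))    # 3.6
```

The "nearly twice as large" boundary-vs-surface difference reported above would show up as roughly a factor of two between the K estimates computed for the two feature types.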

20.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号