Similar Documents
20 similar documents were retrieved (search time: 15 ms)
1.
Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored representations are involved in generating visual impressions of forces and causality in object motion and interactions. Kinematic information provided by vision is matched to kinematic features of stored representations, and the information about forces and causality in those representations then forms part of the perceptual interpretation. I apply this account to the perception of interactions between objects and to motions of objects that do not have perceived external causes, in which motion tends to be perceptually interpreted as biological or internally caused. I also apply it to internal simulations of events involving mental imagery, such as mental rotation, trajectory extrapolation and judgment, visual memory for the location of moving objects, and the learning of perceptual judgments and motor skills. Simulations support more accurate judgments when they represent the underlying dynamics of the event simulated. Mechanoreception gives us whatever limited ability we have to perceive interactions and object motions in terms of forces and resistances; it supports our practical interventions on objects by enabling us to generate simulations that are guided by inferences about forces and resistances, and it helps us learn novel, visually based judgments about object behavior.

2.
When a moving object (A) contacts a stationary one (B) and Object B then moves, visual impressions of force occur along with a visual impression of causality. It is shown that findings about force impressions that occur with launching effect stimuli generalize to other forms of phenomenal causality, namely entraining, enforced disintegration, and shattering stimuli. In particular, evidence is reported for generality of the force asymmetry, in which the amount of perceived force exerted by Object A is greater than the amount of perceived resistance put up by Object B. Effects of manipulations of kinematic variables also resembled those found in previous experiments. Some unpredicted findings occurred. It is argued that these reflect a change in perceptual interpretation when both objects are in motion prior to contact, due to both objects being perceived as in autonomous motion. The results are consistent with a theoretical account in which force impressions occur by a process of matching kinematic information in visual stimuli to stored representations of actions on objects, which supply information about forces.

3.
Some things look more complex than others. For example, a crenulate and richly organized leaf may seem more complex than a plain stone. What is the nature of this experience—and why do we have it in the first place? Here, we explore how object complexity serves as an efficiently extracted visual signal that the object merits further exploration. We algorithmically generated a library of geometric shapes and determined their complexity by computing the cumulative surprisal of their internal skeletons—essentially quantifying the “amount of information” within each shape—and then used this approach to ask new questions about the perception of complexity. Experiments 1–3 asked what kind of mental process extracts visual complexity: a slow, deliberate, reflective process (as when we decide that an object is expensive or popular) or a fast, effortless, and automatic process (as when we see that an object is big or blue)? We placed simple and complex objects in visual search arrays and discovered that complex objects were easier to find among simple distractors than simple objects were among complex distractors—a classic search asymmetry indicating that complexity is prioritized in visual processing. Next, we explored the function of complexity: Why do we represent object complexity in the first place? Experiments 4–5 asked subjects to study serially presented objects in a self-paced manner (for a later memory test); subjects dwelled longer on complex objects than simple objects—even when object shape was completely task-irrelevant—suggesting a connection between visual complexity and exploratory engagement. Finally, Experiment 6 connected these implicit measures of complexity to explicit judgments. Collectively, these findings suggest that visual complexity is extracted efficiently and automatically, and even arouses a kind of “perceptual curiosity” about objects that encourages subsequent attentional engagement.
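
The abstract does not give the exact formula, but a cumulative-surprisal measure of this general kind can be sketched as follows (the feature set and probability model below are illustrative assumptions, not the authors' exact procedure): if a shape's internal skeleton is summarized by a sequence of local features $\theta_1, \ldots, \theta_n$ (for example, turning angles along skeletal branches) with estimated probabilities $p(\theta_i)$, the complexity score is the total information content
\[ C \;=\; \sum_{i=1}^{n} -\log_2 p(\theta_i), \]
so shapes whose skeletons contain many low-probability (surprising) local features receive higher complexity values.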

4.
When two objects interact they exert equal and opposite forces on each other. According to the causal asymmetry hypothesis, however, when one object has been identified as causal and the other as that in which the effect occurs, the causal object is perceived as exerting greater force on the effect object than the latter is perceived as exerting on the former. An example of this is a stimulus in which one object moves toward another stationary one, and when contact occurs the former stops and the latter moves away. In this situation the initially moving object is identified as causal, so the causal asymmetry hypothesis predicts that more force will be judged to be exerted by the moving object on the stationary one than by the stationary one on the moving one. Participants’ judgments consistently supported this hypothesis for a variety of stimuli in which kinematic parameters were varied, even when the initially moving object reversed direction after contact.
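
For reference, the physical symmetry that this perceived asymmetry departs from is Newton's third law: in any interaction the contact forces are equal in magnitude and opposite in direction,
\[ \vec{F}_{A \to B} \;=\; -\vec{F}_{B \to A}, \]
so any judged difference between the force exerted by the causal object and the resistance offered by the effect object reflects perception rather than the underlying mechanics. (This equation is added as background; it is not part of the original abstract.)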

5.
Sensorimotor prediction and memory in object manipulation. Cited by: 5 (self-citations: 0, external citations: 5)
When people lift objects of different size but equal weight, they initially employ too much force for the large object and too little force for the small object. However, over repeated lifts of the two objects, they learn to suppress the size-weight association used to estimate force requirements and appropriately scale their lifting forces to the true and equal weights of the objects. Thus, sensorimotor memory from previous lifts comes to dominate visual size information in terms of force prediction. Here we ask whether this sensorimotor memory is transient, preserved only long enough to perform the task, or more stable. After completing an initial lift series in which they lifted equally weighted large and small objects in alternation, participants then repeated the lift series after delays of 15 minutes or 24 hours. In both cases, participants retained information about the weights of the objects and used this information to predict the appropriate fingertip forces. This preserved sensorimotor memory suggests that participants acquired internal models of the size-weight stimuli that could be used for later prediction.

6.
Hubbard TL. Psychological Bulletin, 2012, 138(4): 616-623; discussion 624-627.
White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account appear consistent with previous findings and theories, other elements do not appear consistent with previous findings and theories or are in need of clarification. Some of the latter elements include the (a) differences between perception and impression (representation of force; relationship of force and resistance; role and necessity of stored representations and of concurrent simulation; roles of rules, cues, and heuristics), (b) characteristics of object motion and human movement (whether motion is internally generated or externally generated and whether motion is biological or nonbiological; generalization of human action and the extent to which perceived force depends upon similarity of object movement to human patterns of movement), (c) related perceptual and cognitive phenomena (representational momentum, imagery, psychophysics of force perception, perception of causality), and (d) scope and limitations of White's account (attributions of intentionality, falsifiability).

7.
Several tendencies found in explicit judgments about object motion have been interpreted as evidence that people possess a naive theory of impetus. The theory states that objects that are caused to move by other objects acquire force that determines the kind of motion exhibited by the object, and that this force gradually dissipates over time. I argue that the findings can better be understood as manifestations of a general understanding of externally caused motion based on experiences of acting on objects. Experiences of acting on objects yield the idea that properties of the cause of motion are transmitted to the effect object. This idea functions as a heuristic for explicit predictions of object motion under conditions of uncertainty. This accounts not only for the findings taken as evidence for the impetus theory, but also for several findings that fall outside the scope of the impetus theory. It has also been claimed that judgments about the location at which a moving object disappeared are influenced by the impetus theory. I argue that these judgments are better explained in a different way, as best-guess extrapolations made by the visual system as a practical guide to interactions with the object, such as interception.

8.
People often judge it unacceptable to directly harm a person, even when this is necessary to produce an overall positive outcome, such as saving five other lives. We demonstrate that similar judgments arise when people consider damage to owned objects. In two experiments, participants considered dilemmas where saving five inanimate objects required destroying one. Participants judged this unacceptable when it required violating another’s ownership rights, but not otherwise. They also judged that sacrificing another’s object was less acceptable as a means than as a side-effect; judgments did not depend on whether property damage involved personal force. These findings inform theories of moral decision-making. They show that utilitarian judgment can be decreased without physical harm to persons, and without personal force. The findings also show that the distinction between means and side-effects influences the acceptability of damaging objects, and that ownership impacts utilitarian moral judgment.

9.
In two experiments we investigated people's ability to judge the relative mass of two objects involved in a collision. It was found that judgments of relative mass were made on the basis of two heuristics. Roughly stated, these heuristics were (a) an object that ricochets backward upon impact is less massive than the object that it hit, and (b) faster moving objects are less massive. A heuristic model of judgment is proposed that postulates that different sources of information in any event may have different levels of salience for observers and that heuristic access is controlled by the rank ordering of salience. It was found that observers ranked dissimilarity in mass on the basis of the relative salience of angle and velocity information and not proportionally to the distal mass ratio. This heuristic model was contrasted with the notion that people can veridically extract dynamic properties of motion events when the kinematic data are sufficient for their specification.
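
As a point of contrast with the heuristic account, the kinematics of an idealized collision do in principle specify the mass ratio. For a one-dimensional, momentum-conserving collision (an illustrative simplification added here; the actual stimuli also varied ricochet angle), with pre-collision velocities $u_A, u_B$ and post-collision velocities $v_A, v_B$,
\[ m_A u_A + m_B u_B = m_A v_A + m_B v_B \quad\Longrightarrow\quad \frac{m_A}{m_B} = \frac{v_B - u_B}{u_A - v_A}, \]
so the mass ratio is fully determined by the observable velocities, even though on the heuristic account observers do not exploit this.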

10.
The utilization of static and kinetic information for depth by Malawian children and young adults in making monocular relative size judgments was investigated. Subjects viewed pairs of objects or photographic slides of the same pairs and judged which was the larger of each pair. The sizes and positions of the objects were manipulated such that the more distant object subtended a visual angle equal to, 80% of, or 70% of that of the nearer object. Motion parallax information was manipulated by allowing or preventing head movement. All subjects displayed sensitivity to static information for depth when the two objects subtended equal visual angles. When the more distant object was larger but subtended a smaller visual angle than the nearer object, subjects tended to base their judgments on retinal size. Motion parallax information increased accuracy of judgments of three-dimensional displays but reduced accuracy of judgments of pictorial displays. Comparisons are made between these results and those for American subjects.
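
The size and visual-angle manipulation rests on a standard geometric relation, added here for clarity rather than quoted from the study: an object of physical size $s$ at distance $d$ subtends a visual angle
\[ \theta = 2\arctan\!\left(\frac{s}{2d}\right) \approx \frac{s}{d} \quad \text{(small-angle approximation)}, \]
so a more distant object can be made physically larger while subtending an equal or smaller angle than the nearer object; judgments based on retinal (angular) size alone will then misorder the physical sizes.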

11.
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person's direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people's ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.
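
The radial pattern and the FOE follow from standard projective geometry; the following textbook formulation is added as background and is not part of the original abstract. For pure observer translation $\vec{T} = (T_x, T_y, T_z)$ with $T_z > 0$ and focal length $f$, the image velocity of a scene point at depth $Z$ and image position $(x, y)$ is
\[ (\dot{x}, \dot{y}) \;=\; \frac{T_z}{Z}\,\bigl(x - x_{FOE},\; y - y_{FOE}\bigr), \qquad (x_{FOE}, y_{FOE}) = \left(\frac{f\,T_x}{T_z}, \frac{f\,T_y}{T_z}\right), \]
so every flow vector points directly away from the FOE, and the FOE's image position specifies the direction of heading.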

12.
Physical imagery: kinematic versus dynamic models. Cited by: 1 (self-citations: 0, external citations: 1)
Physical imagery occurs when people imagine one object causing a change to a second object. To make inferences through physical imagery, people must represent information that coordinates the interactions among the imagined objects. The current research contrasts two proposals for how this coordinating information is realized in physical imagery. In the traditional kinematic formulation, imagery transformations are coordinated by geometric information in analog spatial representations. In the dynamic formulation, transformations may also be regulated by analog representations of force and resistance. Four experiments support the dynamic formulation. They show, for example, that without making changes to the spatial properties of a problem, dynamic perceptual information (e.g., torque) and beliefs about physical properties (e.g., viscosity) affect the inferences that people draw through imagery. The studies suggest that physical imagery is not so much an analog of visual perception as it is an analog of physical action. A simple model that represents force as a rate helps explain why inferences can emerge through imagined actions even though people may not know the answer explicitly. It also explains how and when perception, beliefs, and learning can influence physical imagery.

13.
The recognition heuristic (RH) theory states that, in comparative judgments (e.g., Which of two cities has more inhabitants?), individuals infer that recognized objects score higher on the criterion (e.g., population) than unrecognized objects. Indeed, it has often been shown that recognized options are judged to outscore unrecognized ones (e.g., recognized cities are judged as larger than unrecognized ones), although different accounts of this general finding have been proposed. According to the RH theory, this pattern occurs because the binary recognition judgment determines the inference and no other information will reverse this. An alternative account posits that recognized objects are chosen because knowledge beyond mere recognition typically points to the recognized object. A third account can be derived from the memory-state heuristic framework. According to this framework, underlying memory states of objects (rather than recognition judgments) determine the extent of RH use: When two objects are compared, the one associated with a “higher” memory state is preferred, and reliance on recognition increases with the “distance” between their memory states. The three accounts make different predictions about the impact of subjective recognition experiences—whether an object is merely recognized or recognized with further knowledge—on RH use. We estimated RH use for different recognition experiences across 16 published data sets, using a multinomial processing tree model. Results supported the memory-state heuristic in showing that RH use increases when recognition is accompanied by further knowledge.

14.
Hierarchical coding in the perception and memory of spatial layouts. Cited by: 4 (self-citations: 0, external citations: 4)
Two experiments were performed to investigate the organization of spatial information in perception and memory. Participants were confronted with map-like configurations of objects which were grouped by color (Experiment 1) or shape (Experiment 2) so as to induce cognitive clustering. Two tasks were administered: speeded verification of spatial relations between objects and unspeeded estimation of the Euclidean distance between object pairs. In both experiments, verification times, but not distance estimations, were affected by group membership. Spatial relations of objects belonging to the same color or shape group were verified faster than those of objects from different groups, even if the spatial distance was identical. These results did not depend on whether judgments were based on perceptually available or memorized information, suggesting that perceptual, not memory processes were responsible for the formation of cognitive clusters.

15.
Images of moving objects presented on computer screens may be perceived as animate or inanimate. A simple hypothesis, consistent with much research evidence, is that objects are perceived as inanimate if there is a visible external contact from another object immediately prior to the onset of motion, and as animate if that is not the case. Evidence is reported that is not consistent with that hypothesis. Objects (targets) moving on contact from another object (launcher) were perceived as actively resisting the impact of the launcher on them if the targets slowed rapidly. Rapid slowing is consistent with the laws of mechanics for objects moving in an environment that offers friction and air resistance. Despite that, ratings of inanimate motion were lower than ratings of active resistance for objects that slowed rapidly. The results are consistent with the hypothesis that there is a perceptual impression of active (animate) resistance that is evoked by the kinematic pattern of rapid slowing from an initial speed after contact from another object.
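
For background (not part of the original abstract), the mechanically expected slowing can be sketched as follows: an object sliding against kinetic friction with coefficient $\mu_k$ decelerates at a constant rate,
\[ v(t) = v_0 - \mu_k g\, t, \]
and linear air drag adds an exponential term of the form $v(t) = v_0 e^{-(b/m)t}$. The specific friction and drag models are illustrative assumptions; the study's point is that observers nevertheless read sufficiently rapid slowing as active, animate resistance rather than as this kind of passive dissipation.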

16.
Research on patients with apraxia, a deficit in skilled action, has shown that the ability to use objects may be differentially impaired relative to knowledge about object function. Here we show, using a modified neuropsychological test, that similar dissociations can be observed in response times in healthy adults. Participants were asked to decide which two of three presented objects shared the same manipulation or the same function; triads were presented in picture and word format, and responses were made manually (button press) or with a basic-level naming response (verbally). For manual responses (Experiment 1), participants were slower to make manipulation judgments for word stimuli than for picture stimuli, while there was no difference between word and picture stimuli for function judgments. For verbal-naming responses (Experiment 2), participants were again slower for manipulation judgments over word stimuli, as compared with picture stimuli; however, and in contrast to Experiment 1, function judgments over word stimuli were faster than function judgments over picture stimuli. These data support the hypotheses that knowledge of object function and knowledge of object manipulation correspond to dissociable types of object knowledge and that simulation over motor information is not necessary in order to retrieve knowledge of object function.

17.
Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers.
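
The geometric relation being exploited is implied rather than stated in the abstract: for an object resting on the floor and viewed from eye height $h$, an angular declination $\alpha$ below eye level specifies the ground distance
\[ d = \frac{h}{\tan \alpha}, \]
so even a glimpse that supports only a coarse directional localization of the object carries, in principle, the information needed for a distance judgment.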

19.
Understanding natural dynamics. Cited by: 1 (self-citations: 0, external citations: 1)
When making dynamical judgments, people can make effective use of only one salient dimension of information present in the event. People do not make dynamical judgments by deriving multidimensional quantities. The adequacy of dynamical judgments, therefore, depends on the degree of dimensionality that is both inherent in the physics of the event and presumed to be present by the observer. There are two classes of physical motion contexts in which objects may appear. In the simplest class, there exists only one dynamically relevant object parameter: the position over time of the object's center of mass. In the other class of motion contexts, there are additional object attributes, such as mass distribution and orientation, that are of dynamical relevance. In the former class, objects may be formally treated as extensionless point particles, whereas in the latter class some aspect of the object's extension in space is coupled into its motion. A survey of commonsense understandings showed that people are relatively accurate when specific dynamical judgments can be accurately based on a single information dimension; however, erroneous judgments are pervasive when simple motion contexts are misconstrued as being multidimensional, and when multidimensional quantities are the necessary basis for accurate judgments.
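
A worked contrast, added here for illustration rather than taken from the article, shows how extension couples into motion: a point particle sliding down a frictionless incline of angle $\beta$ accelerates at $a = g\sin\beta$ regardless of its mass, whereas a round object rolling without slipping accelerates at
\[ a = \frac{g\sin\beta}{1 + I/(m r^2)}, \]
so its mass distribution becomes dynamically relevant through the moment of inertia $I$: a hoop ($I = m r^2$) descends more slowly than a uniform sphere ($I = \tfrac{2}{5} m r^2$).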

20.
Bolte A, Goschke T. Cognition, 2008, 108(3): 608-616.
Intuition denotes the ability to judge stimulus properties on the basis of information that is activated in memory, but not consciously retrieved. In three experiments we show that participants discriminated, at better than chance levels, fragmented line drawings depicting meaningful objects (coherent fragments) from fragments consisting of randomly displaced line segments (incoherent fragments) or from fragments which were rotated by 180 degrees (inverted fragments), even if participants did not consciously recognize the objects. Unrecognized coherent, but not incoherent or inverted fragments produced reliable priming of correct object names in a lexical decision task, indicating that coherent fragments activated an unconscious semantic object representation. Priming effects were larger for coherent fragments judged as coherent compared to coherent fragments judged as incoherent. We conclude that intuitive gestalt judgments for coherent fragments rested on the activation of semantic object representations, which biased participants' intuitive impression of "gestalt-ness" even when the underlying object representations remained unconscious.
