Similar articles (20 results)
1.
Pavlova M, Sokolov A, Sokolov A. Perception, 2005, 34(9): 1107-1116
Perception of intentions and dispositions of others is an essential ingredient of adaptive daily-life social behaviour. Dynamics of moving images leads to veridical perception of social attributes. Anecdotal observations in art, science, and popular culture indicate that dynamic imbalance can be revealed in static images. Here, we ask whether perceived dynamics of abstract figures is related to emotional attribution. Participants first estimated instability of geometric shapes rotated in 15-degree steps in the image plane, and then rated the intensity of basic emotions that can be ascribed to the figures. We found no substantial link between the deviation of the figures from the vertical orientation and perceived instability. Irrespective of shape, a strong positive correlation was found between negative emotions and perceived instability. By contrast, positive emotions were inversely linked with deviation of the figure from vertical orientation. The work demonstrates for the first time that dynamics conveyed by static images enables specific emotional attributions, and agrees well with the assumption that neural networks for production of movements and understanding the dispositions of others are intimately linked. The findings are also of importance for exploring the ability to reveal social properties through dynamics in normal and abnormal development, for example in patients with early brain injury or autistic spectrum disorders.

2.
This article proposes that visual encoding learning improves reading fluency by widening the span over which letters are recognized from a fixated text image so that fewer fixations are needed to cover a text line. Encoder is a connectionist model that learns to convert images like the fixated text images human readers encode into the corresponding letter sequences. The computational theory of classification learning predicts that fixated text-image size makes this learning difficult but that reducing image variability and biasing learning should help. Encoder confirms these predictions. It fails to learn as image size increases but achieves humanlike visual encoding accuracy when image variability is reduced by regularities in fixation positions and letter sequences and when learning is biased to discover mapping functions based on the sequential, componential structure of text. After training, Encoder exhibits many humanlike text familiarity effects.
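To make the image-to-letter-sequence mapping concrete, here is a minimal sketch in the spirit of the abstract: a toy per-slot softmax classifier over synthetic glyph templates. The dimensions, names, and data are illustrative assumptions, not the authors' Encoder architecture.

```python
# Toy "fixated text image" -> letter sequence mapping (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
SLOTS = 5            # letters recognized per fixation (the "span")
GLYPH = 7 * 5        # pixels per rendered letter (7 x 5 grid, flattened)

# Fake letter templates stand in for rendered glyphs.
templates = rng.normal(size=(len(ALPHABET), GLYPH))

def render(word):
    """Concatenate noisy glyph templates into one fixated image vector."""
    idx = [ALPHABET.index(c) for c in word]
    return np.concatenate([templates[i] + 0.3 * rng.normal(size=GLYPH) for i in idx])

# One weight matrix per letter slot: image -> letter logits.
W = [np.zeros((len(ALPHABET), SLOTS * GLYPH)) for _ in range(SLOTS)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train(words, epochs=20, lr=0.1):
    for _ in range(epochs):
        for w_str in words:
            x = render(w_str)
            for s, ch in enumerate(w_str):
                p = softmax(W[s] @ x)
                t = np.zeros(len(ALPHABET)); t[ALPHABET.index(ch)] = 1.0
                W[s] += lr * np.outer(t - p, x)   # gradient step on cross-entropy

def decode(word):
    x = render(word)
    return "".join(ALPHABET[int(np.argmax(W[s] @ x))] for s in range(SLOTS))

words = ["table", "plant", "crane", "stone", "beach"]
train(words)
print([(w, decode(w)) for w in words])
```

The sketch only shows the flavour of the learning problem (one classifier per letter position within the fixated window); the actual model's architecture and training regime may differ.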

3.
A single experiment investigated how younger (aged 18-32 years) and older (aged 62-82 years) observers perceive 3D object shape from deforming and static boundary contours. On any given trial, observers were shown two smoothly-curved objects, similar to water-smoothed granite rocks, and were required to judge whether they possessed the "same" or "different" shape. The objects presented during the "different" trials produced differently-shaped boundary contours. The objects presented during the "same" trials also produced different boundary contours, because one of the objects was always rotated in depth relative to the other by 5, 25, or 45 degrees. Each observer participated in 12 experimental conditions formed by the combination of 2 motion types (deforming vs. static boundary contours), 2 surface types (objects depicted as silhouettes or with texture and Lambertian shading), and 3 angular offsets (5, 25, and 45 degrees). When there was no motion (static silhouettes or stationary objects presented with shading and texture), the older observers performed as well as the younger observers. In the moving object conditions with shading and texture, the older observers' performance was facilitated by the motion, but the amount of this facilitation was reduced relative to that exhibited by the younger observers. In contrast, the older observers obtained no benefit in performance at all from the deforming (i.e., moving) silhouettes. The reduced ability of older observers to perceive 3D shape from motion is probably due to a low-level deterioration in the ability to detect and discriminate motion itself.

4.
The static form of the size-distance invariance hypothesis asserts that a given proximal stimulus size (visual angle) determines a unique and constant ratio of perceived object size to perceived object distance. A proposed kinetic invariance hypothesis asserts that a changing proximal stimulus size (an expanding or contracting solid visual angle) produces a constant perceived size and a changing perceived distance such that the instantaneous ratio of perceived size to perceived distance is determined by the instantaneous value of visual angle. The kinetic invariance hypothesis requires a new concept, an operating constraint, to mediate between the proximal expansion or contraction pattern and the perception of rigid object motion in depth. As a consequence of the operating constraint, expansion and contraction patterns are automatically represented in consciousness as rigid objects. In certain static situations, the operation of this constraint produces the anomalous perceived-size-perceived-distance relations called the size-distance paradox.
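One common way to write the two hypotheses formally (our notation; the paper may state them differently): the static hypothesis fixes the ratio of perceived size S' to perceived distance D' by the visual angle theta, while the kinetic hypothesis holds S' constant as theta(t) changes, so the change is perceived as motion in depth.

```latex
% Static size-distance invariance (our notation, an illustrative sketch):
\frac{S'}{D'} = \tan\theta
% Kinetic version: the operating constraint enforces a rigid percept,
% S' = \mathrm{const}, so a changing visual angle is perceived as motion in depth:
D'(t) = \frac{S'}{\tan\theta(t)}, \qquad \frac{S'}{D'(t)} = \tan\theta(t).
```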

5.
Recent studies of perceptual learning have explored and commented on variation in learning trajectories. Although several factors have been suggested to account for this variation, thus far the idea that humans vary in their perceptual learning capacities has received scant attention. In the present experiment, we aimed at providing a detailed picture of the variation in this capacity by investigating the perceptual learning trajectories of a considerable number of participants. The learning process was studied using the paradigm of length perception by dynamic touch. The results showed that there are substantial individual differences in the way perceivers respond to feedback. Indeed, after feedback, the participants' perceptual performances diverged. We conclude that humans vary in their perceptual learning capacities. The implications of this finding for recent discussions on variation in perception are explored.

6.
Novice observers differ from each other in the kinematic variables they use for the perception of kinetic properties, but they converge on more useful variables after practice with feedback. The colliding-balls paradigm was used to investigate how the convergence depends on the relations between the candidate variables and the to-be-perceived property, relative mass. Experiment 1 showed that observers do not change in the variables they use if the variables with which they start allow accurate performance. Experiment 2 showed that, at least for some observers, convergence can be facilitated by reducing the correlations between commonly used nonspecifying variables and relative mass but not by keeping those variables constant. Experiments 3a and 3b further demonstrated that observers learn not to rely on a particular nonspecifying variable if the correlation between that variable and relative mass is reduced.
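For readers unfamiliar with the paradigm, here is a hedged sketch of the underlying kinematics (our illustration, not the paper's stimuli or analysis): the ratio of velocity changes specifies the mass ratio exactly via momentum conservation, whereas a cue such as the exit-speed ratio is only correlated with it.

```python
# Specifying vs. nonspecifying variables in a 1-D collision (illustrative).
import random

def collide_1d(m1, m2, v1, v2, e=0.9):
    """1-D collision with restitution e; returns post-collision velocities."""
    u1 = (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / (m1 + m2)
    u2 = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / (m1 + m2)
    return u1, u2

def specifying_mass_ratio(v1, v2, u1, u2):
    """m1/m2 from momentum conservation: m1*(u1-v1) = -m2*(u2-v2)."""
    return -(u2 - v2) / (u1 - v1)

def nonspecifying_exit_speed_ratio(u1, u2):
    """A heuristic cue observers often start with: relative exit speed."""
    return abs(u2) / abs(u1)

random.seed(1)
for _ in range(3):
    m1, m2 = random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)
    v1, v2 = 1.0, 0.0                      # ball 1 strikes a resting ball 2
    u1, u2 = collide_1d(m1, m2, v1, v2)
    print(f"true m1/m2 = {m1/m2:.2f}  "
          f"specifying = {specifying_mass_ratio(v1, v2, u1, u2):.2f}  "
          f"exit-speed ratio = {nonspecifying_exit_speed_ratio(u1, u2):.2f}")
```

The restitution value and velocities are illustrative assumptions; the point is only the contrast between a variable that equals the mass ratio and one that merely covaries with it.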

7.
We examined expert meteorologists as they created a weather forecast while working in a naturalistic environment. We recorded the type of external representation they chose to examine (a static image, a sequence of static images, or a dynamic display) and the kind of information they extracted from those representations (static or dynamic). We found that even though weather is an extremely dynamic domain, expert meteorologists examined very few animations, examining primarily static images. However, meteorologists did extract large amounts of dynamic information from these static images, suggesting that they reasoned about the weather by mentally animating the static images rather than letting the software do it for them.

8.
9.
We quantitatively investigated the halt and recovery of illusory motion perception in static images. With steady fixation, participants viewed images causing four different motion illusions. The results showed that the time courses of the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion (i.e., "Rotating Snakes") were very similar, while the Ouchi and Enigma illusions showed quite a different trend. When participants viewed images causing the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion, they typically experienced disappearance of the illusory motion within several seconds. After a variable interstimulus interval (ISI), the images were presented again in the same retinal position. The magnitude of the illusory motion from the second image presentation increased as the ISI became longer. This suggests that the same adaptation process either directly causes or attenuates both the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion.

10.
In cart-pole balancing, one moves a cart in 1 dimension so as to balance an attached inverted pendulum. We approached perception-action and learning in this task from an ecological perspective. This entailed identifying a space of informational variables that balancers use as they perform the task and demonstrating that they improve by traversing the space to the loci of more useful variables. We presented a novel information space, including fractional derivatives of pendulum angle (e.g., halfway between angle and angular velocity), as possible information for balancing. Fourteen college students tried to meet a criterion of balancing the pole for 30 s on 3 of 5 successive trials, up to a maximum of 150 attempts. Loci in the fractional derivative space predicted the time series of force production well. Systematic differences were seen in loci as a function of success, and systematic changes in locus were seen with learning. The fractional derivatives were shown to predict pole angles a short time interval into the future, allowing balancers to prospectively control the action and thereby nullify visuomotor delay. In addition to loci in the information space, we analyzed loci in a calibration space, reflecting the gain relating force to information.
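As an illustration of what a fractional derivative of pendulum angle is (a sketch under our own assumptions, not the authors' analysis code), the Grünwald–Letnikov estimate below interpolates between the angle (order 0) and the angular velocity (order 1):

```python
# Grunwald-Letnikov fractional derivative of a toy pendulum-angle signal.
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Order-alpha Grunwald-Letnikov derivative of a sampled signal x."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                      # w_k = (-1)^k * C(alpha, k)
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.empty(n)
    for i in range(n):
        d[i] = np.dot(w[: i + 1], x[i::-1]) / dt ** alpha
    return d

dt = 0.01
t = np.arange(0, 2, dt)
theta = 0.2 * np.sin(2 * np.pi * 1.0 * t)      # made-up pendulum angle (rad)

half = gl_fractional_derivative(theta, 0.5, dt)   # "between" angle and velocity
vel = gl_fractional_derivative(theta, 1.0, dt)    # ~ angular velocity

# One locus in the information space: force proportional to a single
# fractional derivative; the gain k plays the role of calibration.
k = 5.0
force = k * half
print(theta[:3], half[:3], vel[:3], force[:3])
```

The signal, the 0.5 order, and the gain are illustrative; the study searched over loci in such a space rather than fixing them in advance.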

11.
Bowers JS, Davis CJ, Hanley DA. Cognition, 2005, 97(3): B45-B54
We assessed the impact of visual similarity on written word identification by having participants learn new words (e.g. BANARA) that were neighbours of familiar words that previously had no neighbours (e.g. BANANA). Repeated exposure to these new words made it more difficult to semantically categorize the familiar words. There was some evidence of interference following an initial training phase, and clear evidence of interference the following day (without any additional training); interference was larger still following more training on the second day. These findings lend support to models of reading that include lexical competition as a key process.
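A minimal sketch of lexical competition of the kind such models assume (in the spirit of interactive-activation accounts; the parameters and update rule are our illustrative assumptions, not the authors' model): adding the neighbour BANARA to the lexicon increases the number of cycles BANANA needs to reach a recognition threshold.

```python
# Toy lexical-competition dynamics (illustrative parameters only).
def letter_overlap(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cycles_to_recognize(stimulus, lexicon, threshold=0.9, max_steps=200,
                        excite=0.4, inhibit=0.3, decay=0.1):
    act = {w: 0.0 for w in lexicon}
    for step in range(1, max_steps + 1):
        new = {}
        for w in lexicon:
            support = excite * letter_overlap(stimulus, w)
            competition = inhibit * sum(max(act[v], 0.0) for v in lexicon if v != w)
            new[w] = min(1.0, max(0.0, act[w] + support - competition - decay * act[w]))
        act = new
        if act[stimulus] >= threshold:
            return step
    return max_steps

print("cycles to recognize BANANA, no neighbour :",
      cycles_to_recognize("BANANA", ["BANANA"]))
print("cycles to recognize BANANA, with BANARA  :",
      cycles_to_recognize("BANANA", ["BANANA", "BANARA"]))
```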

12.
In contrast to the dominant discrepancy reduction model, which favors the most difficult items, people, given free choice, devoted most time to medium-difficulty items and studied the easiest items first. When study time was experimentally manipulated, best performance resulted when most time was given to the medium-difficulty items. Empirically determined information uptake functions revealed steep initial learning for easy items with little subsequent increase. For medium-difficulty items, initial gains were smaller but more sustained, suggesting that the strategy people had used, when given free choice, was largely appropriate. On the basis of the information uptake functions, a negative spacing effect was predicted and observed in the final experiment. Overall, the results favored the region of proximal learning framework.
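The following sketch illustrates what an information uptake function of the kind described above might look like (the functional form and parameter values are our illustrative assumptions, not the reported data): easy items rise steeply and saturate, medium items gain more slowly but more persistently, so the largest marginal gain shifts from easy to medium items once initial study is done.

```python
# Illustrative uptake functions and where the next unit of study pays most.
import math

ITEMS = {                 # (asymptote, learning rate) -- made-up values
    "easy":   (0.95, 2.0),
    "medium": (0.85, 0.6),
    "hard":   (0.50, 0.2),
}

def recall(item, t):
    a, r = ITEMS[item]
    return a * (1 - math.exp(-r * t))

def marginal_gain(item, t):
    a, r = ITEMS[item]
    return a * r * math.exp(-r * t)          # d(recall)/dt

for t in (0.0, 1.0, 3.0):
    gains = {item: round(marginal_gain(item, t), 3) for item in ITEMS}
    best = max(gains, key=gains.get)
    print(f"after {t} units of study, marginal gains = {gains}; study {best} next")
```

With these toy parameters the model reproduces the reported pattern: study the easiest items first, then devote most time to medium-difficulty items.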

13.
14.
Vision is based on spatial correspondences between physically different structures: in environment, retina, brain, and perception. An examination of the correspondence between environmental surfaces and their retinal images showed that this consists of 2-dimensional 2nd-order differential structure (effectively 4th-order) associated with local surface shape, suggesting that this might be a primitive form of spatial information. Next, experiments on hyperacuities for detecting relative motion and binocular disparity among separated image features showed that spatial positions are visually specified by the surrounding optical pattern rather than by retinal coordinates, minimally affected by random image perturbations produced by 3-D object motions. Retinal image space, therefore, involves 4th-order differential structure. This primitive spatial structure constitutes information about local surface shape.
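One way to make "two-dimensional second-order differential structure" concrete (our formalization, not necessarily the author's notation) is the second-order term in the Taylor expansion of a local depth map, whose Hessian carries the local surface shape:

```latex
% Local surface shape up to second order (illustrative notation): for a
% depth map z = f(x, y) around a fixated point,
f(x,y) \approx f(0,0) + \nabla f \cdot (x,y)^{\mathsf T}
        + \tfrac{1}{2}\,(x,y)\, H \,(x,y)^{\mathsf T},
\qquad
H = \begin{pmatrix} f_{xx} & f_{xy} \\ f_{xy} & f_{yy} \end{pmatrix}.
% The eigenvalues of H (principal curvatures of the normalized patch)
% summarize the local shape that the image structure is claimed to specify.
```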

15.
The mechanisms of perceptual learning are analyzed theoretically, probed in an orientation-discrimination experiment involving a novel nonstationary context manipulation, and instantiated in a detailed computational model. Two hypotheses are examined: modification of early cortical representations versus task-specific selective reweighting. Representation modification seems neither functionally necessary nor implied by the available psychophysical and physiological evidence. Computer simulations and mathematical analyses demonstrate the functional and empirical adequacy of selective reweighting as a perceptual learning mechanism. The stimulus images are processed by standard orientation- and frequency-tuned representational units, divisively normalized. Learning occurs only in the "read-out" connections to a decision unit; the stimulus representations never change. An incremental Hebbian rule tracks the task-dependent predictive value of each unit, thereby improving the signal-to-noise ratio of their weighted combination. Each abrupt change in the environmental statistics induces a switch cost in the learning curves as the system temporarily works with suboptimal weights.
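A stripped-down sketch of the selective-reweighting idea (our simplification of the model class described, not the authors' simulation): fixed orientation-tuned, divisively normalized units feed a decision unit, and only the read-out weights change, via an incremental teacher-driven Hebbian rule.

```python
# Selective reweighting sketch: representations fixed, read-out weights learn.
import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(-40, 40, 17)              # preferred orientations (deg)

def representation(theta, sigma=10.0, noise=0.1):
    r = np.exp(-0.5 * ((theta - prefs) / sigma) ** 2)
    r = r + noise * rng.normal(size=r.shape)
    return r / (1.0 + r.sum())                # divisive normalization (fixed)

w = np.zeros_like(prefs)                      # read-out weights: the only thing that learns
lr = 0.5
accuracy = []
for trial in range(400):
    label = rng.choice([-1.0, 1.0])           # clockwise vs counterclockwise of reference
    theta = label * 5.0                       # +/- 5 deg from the reference orientation
    r = representation(theta)
    decision = np.sign(w @ r) or 1.0
    accuracy.append(decision == label)
    w += lr * label * r                       # incremental Hebbian update

print("accuracy, first 50 trials :", np.mean(accuracy[:50]))
print("accuracy, last 50 trials  :", np.mean(accuracy[-50:]))
```

The tuning widths, learning rate, and task geometry are illustrative assumptions; the point is that performance improves while the stimulus representations themselves never change.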

16.
Terrestrial gravity restricts human locomotion to surfaces in which turns involve rotations around the body axis. Because observers are usually upright, one might expect the effects of gravity to induce differences in the processing of vertical versus horizontal turns. Subjects observed visual scenes of bending tunnels, either statically or dynamically, as if they were moving passively through the visual scene and were then asked to reproduce the turn deviation of the tunnel with a trackball. In order to disentangle inertia-related (earth-centered) from vision-related (body-centered) factors, the subjects were either upright or lying on their right side during the observations. Furthermore, the availability of continuous optic flow, geometrical cues, and eye movement was manipulated in three experiments. The results allowed us to characterize the factors' contributions as follows. Forward turns (pitch down) with all cues were largely overestimated, as compared with backward turns (pitch up). First, eye movements, known to be irregular for vertical stimulation, were largely responsible for this asymmetry. Second, geometry-based estimations are, to some extent, asymmetrical. Third, a cognitive effect corresponding to the evaluation of navigability for upward and downward turns was found (i.e., top-down influences, such as the often-reported fear of falling), which tended to increase the estimation of turns in the direction of gravity.

17.
This paper presents an approach to imitation learning in robotics focusing on low-level behaviours, so that they do not need to be encoded into sets and rules, but learnt in an intuitive way. Its main novelty is that, rather than trying to analyse natural human actions and adapting them to robot kinematics, humans adapt themselves to the robot via a proper interface to make it perform the desired action. As an example, we present a successful experiment to learn a purely reactive navigation behaviour using robotic platforms. Using Case Based Reasoning, the platform learns from a human driver how to behave in the presence of obstacles, so that no kinematics studies or explicit rules are required.
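A minimal sketch of the case-based approach described above (our toy formulation; the sensor layout, commands, and similarity metric are illustrative assumptions, not the paper's system): cases pair a perception recorded while a human drives the robot with the command the human issued, and at run time the nearest stored case is reused.

```python
# Toy case-based reasoning for reactive obstacle avoidance (illustrative).
import math

class CaseBase:
    def __init__(self):
        self.cases = []                      # list of (sonar_vector, command)

    def learn(self, sonar, command):
        """Record one demonstrated (perception, action) pair."""
        self.cases.append((list(sonar), command))

    def act(self, sonar):
        """Reuse the command of the most similar stored case."""
        def dist(case):
            return math.dist(case[0], sonar)
        return min(self.cases, key=dist)[1]

cb = CaseBase()
# Demonstrations: (left, front, right) obstacle distances in metres -> command
cb.learn((2.0, 2.0, 2.0), "forward")
cb.learn((0.3, 2.0, 2.0), "turn right")      # obstacle close on the left
cb.learn((2.0, 2.0, 0.3), "turn left")       # obstacle close on the right
cb.learn((2.0, 0.3, 2.0), "turn right")      # obstacle ahead

print(cb.act((1.8, 1.9, 2.0)))               # -> forward
print(cb.act((0.4, 1.5, 2.0)))               # -> turn right
```

Nearest-neighbour retrieval is only one possible CBR retrieval scheme; the paper's system may index and adapt cases differently.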

18.
Therrien ME, Collin CA. Perception, 2010, 39(8): 1043-1064
Visual navigation is a task that involves processing two-dimensional light patterns on the retinas to obtain knowledge of how to move through a three-dimensional environment. Therefore, modifying the basic characteristics of the two-dimensional information provided to navigators should have important and informative effects on how they navigate. Despite this, few basic research studies have examined the effects of systematically modifying the available levels of spatial visual detail on navigation performance. In this study, we tested the effects of a range of visual blur levels (approximately equivalent to various degrees of low-pass spatial frequency filtering) on participants' visually guided route-learning performance using desktop virtual renderings of the Hebb-Williams mazes. Our findings show that the function relating blur to time to finish the mazes follows a sigmoidal pattern, with the inflection point around +2 D of experienced defocus. This suggests that visually guided route learning is fairly robust to blur, with the threshold level being just above the limit for legal blindness. These findings have implications for models of route learning, as well as for practical situations in which humans must navigate under conditions of blur.
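To illustrate the reported sigmoidal relation (the functional form and parameter values below are our illustration, not the fitted values from the study), completion time can be modeled as a logistic function of defocus with its inflection near +2 D:

```python
# Illustrative logistic model of maze completion time vs. blur (diopters).
import numpy as np

def completion_time(blur_d, t_min=60.0, t_max=240.0, inflection=2.0, slope=2.0):
    """Completion time (s) rising from t_min toward t_max as blur increases."""
    return t_min + (t_max - t_min) / (1.0 + np.exp(-slope * (blur_d - inflection)))

blur = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(dict(zip(blur.tolist(), np.round(completion_time(blur), 1).tolist())))
# Times stay near t_min until blur approaches the inflection (~ +2 D, just
# above the legal-blindness criterion) and then climb steeply toward t_max.
```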

19.
Matching unfamiliar faces is a difficult task. Here we ask whether it is possible to improve performance by providing multiple images to support matching. In two experiments we observe that accuracy improves as viewers are provided with additional images on which to base their match. This technique leads to fast learning of an individual, but the effect is identity-specific: Despite large improvements in viewers’ ability to match a particular person's face, these improvements do not generalize to other faces. Experiment 2 demonstrated that trial-by-trial feedback provided no additional benefits over the provision of multiple images. We discuss these results in terms of familiar and unfamiliar face processing and draw out some implications for training regimes.

20.