Similar Articles
20 similar articles found.
1.
In two experiments, participants were trained to recognize a playground scene from four vantage points and were subsequently asked to recognize the playground from a novel perspective between the four learned viewing perspectives, as well as from the trained perspectives. In both experiments, people recognized the novel view more efficiently than the views they had recently used to learn the scene. Additionally, in Experiment 2, participants who viewed a novel stimulus on their very first test trial correctly recognized it more quickly (and also tended to recognize it more accurately) than did participants whose first test trial was a familiar view of the scene. These findings call into question the idea that scenes are recognized by comparing them with single previous experiences, and support a growing body of literature on the existence of psychological mechanisms that combine spatial information from multiple views of a scene.

2.
Internet of Video Things (IoVT) has been proposed and studied as a scenario where video cameras are ubiquitous and continuously acquiring data from their surroundings. In order to handle the large amount of data generated in IoVT architectures, robust autonomous video processing must be performed. An important application is the recognition of different actions performed by humans in the context of security. This research extends previously published work by reducing the input dimensionality to the recognition system, making it more robust to variations in the position of the body in each video frame, and by using a Multilayer Perceptron Artificial Neural Network whose hyperparameters are optimized by a Genetic Algorithm. Significant improvements in the recognition rate have been obtained, despite the use of a more straightforward pre-processing phase and the increase in the number of viewpoints from the video cameras.
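The genetic-algorithm hyperparameter search described in this abstract can be sketched as follows. This is a minimal, self-contained illustration of the general technique only: the search space, the surrogate fitness function, and all function names are illustrative assumptions, not the authors' actual implementation (which would evaluate real MLP validation accuracy on video data).

```python
import random

# Hypothetical hyperparameter search space for an MLP (illustrative values).
SEARCH_SPACE = {
    "hidden_units": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def fitness(genome):
    # Surrogate for validation accuracy; in a real system this would
    # train and evaluate the MLP. Peaks at (64 units, lr=0.01).
    units, lr = genome
    return 1.0 - abs(units - 64) / 128 - abs(lr - 0.01)

def random_genome(rng):
    return (rng.choice(SEARCH_SPACE["hidden_units"]),
            rng.choice(SEARCH_SPACE["learning_rate"]))

def crossover(a, b, rng):
    # Swap one hyperparameter between the two parents.
    return (a[0], b[1]) if rng.random() < 0.5 else (b[0], a[1])

def mutate(genome, rng, rate=0.2):
    # With some probability, resample the genome entirely.
    return random_genome(rng) if rng.random() < rate else genome

def evolve(generations=20, pop_size=10, seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation is carried over unchanged, the best fitness found never decreases across generations, which is the usual elitist-GA guarantee.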

3.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

4.
This experiment examined the processing of information from multiple element visual displays, using techniques derived from the theory of signal detectability. The method allows one to specify how observers integrate information from individual elements of a display. The experiment tested numerical and graphical displays having different display sizes, durations, and arrangements of elements. Observer performance increased with the number, m, of display elements, but at less than the ideal √m rate. Observer performance was consistent with a model of information integration constrained by internal noise. Linear arrays of elements resulted in better performance than did square arrays. Graphically coded elements resulted in better performance than did numerical elements. Observer decision weighting of element information from graphical displays was approximately uniform across spatial position, but the weighting of information from numerical displays was concentrated on elements near the fixation point.
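The ideal √m rate mentioned in this abstract follows from standard signal detection theory: if each of the m display elements yields a statistically independent observation of equal sensitivity d′, an ideal observer that optimally combines the evidence attains (generic SDT notation, not the authors' own):

$$d'_{\text{ideal}} = \sqrt{\sum_{i=1}^{m} (d'_i)^2} = \sqrt{m}\, d' \quad \text{when } d'_i = d' \text{ for all } i.$$

Observed performance growing with m but more slowly than √m is therefore the signature of the internal-noise constraint the abstract describes.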

5.
Three experiments investigated scene recognition across viewpoint changes, involving same/different judgements on scenes consisting of three objects on a desktop. On same trials, the comparison scene appeared either from the same viewpoint as the standard scene or from a different viewpoint with the desktop rotated about one or more axes. Different trials were created either by interchanging the locations of two or three of the objects (location change condition), or by rotating either one or all three of the objects around their vertical axes (orientation change condition). Response times and errors increased as a function of the angular distance between the standard and comparison views, but this effect was bigger for rotations around the vertical axis than for those about the line of sight or horizontal axis. Furthermore, the time to detect location changes was less than that to detect orientation changes, and this difference increased with increasing angular disparity between the standard and comparison scenes. Rotation times estimated in a double-axis rotation were no longer than other rotations in depth, indicating that alignment was not necessarily simpler around a "natural" axis of rotation. These results are consistent with the hypothesis that scenes, like many objects, may be represented in a viewpoint dependent manner and recognized by aligning standard and comparison views, but that the alignment of scenes is not a holistic process.

6.
Environmental scenes are the settings in which human action occurs; since they constrain behavior, they are of interest to social, personality, and environmental psychologists. Scenes can also be viewed as a spatial generalization of objects, as well as the spatial contexts in which objects appear. As such, they are studied in perception and memory. Previous approaches to characterizing environments have relied on scaling techniques to yield a manageable number of dimensions or attributes by which environments can be compared. In contrast, the present research demonstrates development of a taxonomy of kinds of environmental scenes, where perceived attributes are obtained as a byproduct. A basic or preferred level of categorization in the taxonomy is also identified, based on measures of cognition, behavior, and communication. The basic level, for example, school, home, beach, mountains, corresponds to the level commonly used in the study of scene schemas in perception, memory, and environmental psychology, as well as to the level apparently most useful in other domains of knowledge concerned with environments, for example, architecture and geography.

7.
Visualization of compound scenes
J R Beech, D A Allport. Perception, 1978, 7(2): 129-138

8.
Tomkins [Tomkins, S. (1979). Script theory: Differential magnification of affects. In: C. Keasey (Ed.), Nebraska Symposium on Motivation, Vol. 26 (pp. 201–236). Lincoln: University of Nebraska Press; Tomkins, S. (1987). Script theory. In: J. Aronoff, A. Rabin, & R. Zucker (Eds.), The Emergence of Personality (pp. 147–216). New York: Springer] proposed that personality is built from the experience of scenes, which minimally consist of an emotion and an event evoking that emotion. This study sought to identify a taxonomy of emotion-eliciting events. Examples of specific events eliciting love, joy, sadness, anger, and fear were collected from 200 participants, another 120 participants independently sorted these examples for similarity, and a hierarchical cluster analysis was run on these similarity sorts. The resulting tree diagram displayed categories of events at varying levels of abstraction. Individual differences were found in the types of events offered for each of the five emotions.

9.
Knowledge about scene categories, the so-called gist, can be extracted very rapidly, while recognition and naming of individual scene objects is a more effortful process. We investigate this phenomenon by presenting action scenes involving two actors for durations varying between 100 and 300 ms. Incoherence was created by mirroring individual scene actors. Upon masked presentation participants had to report content, actors and objects and to indicate whether the scene was meaningful or not. Scene coherence was judged correctly at all presentation durations. Actors were correctly identified in about one-third of the cases even with presentation durations of 100 ms, and identification rate increased up to 80% with longer durations. Identification depended on scene coherence, on the position of agents in the scene, and on the position of actors relative to the fixation cross. These interdependencies of scene and object perception indicate that the visual system seems to be very sensitive to meaningful interactions of living entities. A series of fixations is not necessary to identify actors of a scene.

10.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.

11.
We report a picture-memory phenomenon in which subjects' recall and recognition of photographed scenes reveal a pronounced extension of the pictures' boundaries. After viewing 20 pictures for 15 s each, 37 undergraduates exhibited this striking distortion; 95% of their drawings included information that had not been physically present but that would have been likely to have existed just outside the camera's field of view (Experiment 1). To determine if boundary extension is limited to recall and drawing ability, Experiment 2 tested recognition memory for boundaries. Eighty-five undergraduates rated targets and distractors on a boundary-placement scale. Subjects rated target pictures as being closer up than before and frequently mistook extended-boundary distractors as targets. Results are discussed in terms of picture comprehension and memory. In addition to its theoretical value, discovery of the phenomenon demonstrates the importance of more widespread use of open-ended tests in picture-memory methodology.

12.
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0°–360° in 36° increments) around the scene, and participants judged whether the objects’ positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.

13.
A visual search reaction time task was used to examine the relationships among encoding, comparison, and search processes. Targets, either pictures of objects or their names, were briefly presented and followed, at stimulus onset asynchronies (SOAs) ranging from 200 to 2000 ms, by pictorial scenes having a 50% probability of containing the target. Generally, subjects responded more rapidly with picture than word targets, this difference diminishing at longer SOAs. Searches appeared to be restricted to certain portions of the scene depending on the congruency between the target and the scene. The sequence of decisions in the search task was discussed.

14.
The visual world exists all around us, yet this information must be gleaned through a succession of eye fixations in which high visual acuity is limited to the small foveal region of each retina. In spite of these physiological constraints, we experience a richly detailed and continuous visual world. Research on transsaccadic memory, perception, picture memory and imagination of scenes will be reviewed. Converging evidence suggests that the representation of visual scenes is much more schematic and abstract than our immediate experience would indicate. The visual system may have evolved to maximize comprehension of discrete views at the expense of representing unnecessary detail, but through the action of attention it allows the viewer to access detail when the need arises. This capability helps to maintain the 'illusion' of seeing a rich and detailed visual world at every glance.

15.
One of the key perceptual errors that contributes to accidents on the road is ‘looking but failing to see’. Though this has previously been attributed to failures of attention or time gaps, the recent change blindness literature suggests another alternative. Researchers have proposed that we have a poor memory for the visual world, and as such, participants find it very hard to notice a change between two successive pictures provided the transients that normally catch attention are masked. Such masking can occur naturally due to blinks and saccadic suppression. It is suggested that these effects may contribute to accident liability. An experiment was undertaken to test the application of the change blindness paradigm to the driving domain. It was predicted that experienced drivers may have greater visual persistence for changed targets in a road scene provided they are relevant to a driver’s parsing of the road (i.e. if the targets are potential hazards such as pedestrians, rather than changes in background scenery). The experiment required drivers and non-drivers to view a complex driving-related visual scene that was constantly interrupted by a flash once per second. During the flashes one item in the scene was changed. This target was manipulated according to location and semantic relevance. Results showed an interaction between central and peripheral items with semantic relevance. Participants found it hard to detect central items that were inconsequential, relative to other classifications of targets. No effect of experience was noted. The results are discussed in relation to the general theoretical literature and their potential applications to the driving domain.

16.
In four experiments we explored the accuracy of memory for human action using displays with continuous motion. In Experiment 1, a desktop virtual environment was used to visually simulate ego-motion in depth, as would be experienced by a passenger in a car. Using a task very similar to that employed in typical studies of representational momentum we probed the accuracy of memory for an instantaneous point in space/time, finding a consistent bias for future locations. In Experiment 2, we used the same virtual environment to introduce a new “interruption” paradigm in which the sensitivity to displacements during a continuous event could be assessed. Thresholds for detecting displacements in ego-position in the direction of motion were significantly higher than those opposite the direction of motion. In Experiments 3 and 4 we extended previous work that has shown anticipation effects for frozen action photographs or isolated human figures by presenting observers with short video sequences of complex crowd scenes. In both experiments, memory for the stopping position of the video was shifted forward, consistent with representational momentum. Interestingly, when the video sequences were played in reverse, the magnitude of this forward bias was larger. Taken together, the results of all four experiments suggest that even when presented with complex, continuous motion, the visual system may sometimes try to anticipate the outcome of our own and others' actions.

17.
Eye movements of 30 4-month-olds were tracked as infants viewed animals and vehicles in “natural” scenes and, for comparison, in homogeneous “experimental” scenes. Infants showed equivalent looking time preferences for natural and experimental scenes overall, but fixated natural scenes and objects in natural scenes more than experimental scenes and objects in experimental scenes and shifted fixations between objects and contexts more in natural than in experimental scenes. The findings show how infants treat objects and contexts in natural scenes and suggest that they treat more commonly used experimental scenes differently.

18.
Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective priming. This occurred even when visual attention was focused on a concurrent verbal task and when foveal gaze-contingent masking prevented overt attention to the primes but only if these had been preexposed and appeared in the left visual field. The preexposure and laterality patterns were different for affective priming and semantic category priming. Affective priming was independent of the nature of the task (i.e., affective or category judgment), whereas semantic priming was not. The authors conclude that affective processing occurs without overt attention--although it is dependent on resources available for covert attention--and that prior experience of the stimulus is required and right-hemisphere dominance is involved.

19.
In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries but this effect of motion was not consistent with representational momentum of the self (memory being further forward in a motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.

20.
Recognition memory was investigated for individual frames extracted from temporally continuous, visually rich film segments of 5–15 min. Participants viewed a short clip from a film in either a coherent or a jumbled order, followed by a recognition test of studied frames. Foils came either from an earlier or a later part of the film (Experiment 1) or from deleted segments selected from random cuts of varying duration (0.5 to 30 s) within the film itself (Experiment 2). When the foils came from an earlier or later part of the film (Experiment 1), recognition was excellent, with the hit rate far exceeding the false-alarm rate (.78 vs. .18). In Experiment 2, recognition was far worse, with the hit rate (.76) exceeding the false-alarm rate only for foils drawn from the longest cuts (15 and 30 s) and matching the false-alarm rate for the 5 s segments. When the foils were drawn from the briefest cuts (0.5 and 1.0 s), the false-alarm rate exceeded the hit rate. Unexpectedly, jumbling had no effect on recognition in either experiment. These results are consistent with the view that memory for complex visual temporal events is excellent, with its integrity unperturbed by disruption of the global structure of the visual stream. Disruption of memory was observed only when foils were drawn from embedded segments of duration less than 5 s, an outcome consistent with the view that memory at these shortest durations is consolidated with expectations drawn from the previous stream.
