Similar articles
20 similar articles found (search time: 20 ms)
1.
Six experiments investigated the nature of the object-file representation supporting object continuity. Participants viewed preview displays consisting of 2 stimuli (either line drawings or words) presented within square frames, followed by a target display consisting of a single stimulus (either a word or a picture) presented within 1 of the frames. The relationship between the target and preview stimuli was manipulated. The first 2 experiments found that participants responded more quickly when the target was identical to the preview stimulus in the same frame (object-specific priming). In Experiments 3, 4, 5, and 6, the physical form of the target stimulus (a word or picture in 1 frame) was changed completely from that of either preview stimulus (pictures or words in both frames). Despite this physical change, object-specific priming was observed. It is suggested that object files encode postcategorical information, rather than precise physical information.

2.
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in a between-subjects experiment and was found to be suitable for a broad range of potential applications in the cognitive and neuroscientific study of biological motion and voice, perception of social interactions and multisensory integration. We also detail in this paper a number of supplementary materials, comprising AVI movie files for each interaction, along with text files specifying the three-dimensional coordinates of each point-light in each frame of the movie, as well as unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/.
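Because each clip ships with a text file of per-frame point coordinates, the set lends itself to scripted analysis. The C sketch below assumes a hypothetical whitespace-separated layout (frame index, point index, x, y, z on each line) and simply counts the samples it reads; the actual files from motioninsocial.com may be organized differently, and the filename here is made up.

```c
/* Minimal sketch (not the authors' code): reads a hypothetical whitespace-
 * separated coordinate file with one line per point per frame:
 *   frame_index point_index x y z
 * The real files from motioninsocial.com may use a different layout. */
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("interaction01_points.txt", "r");  /* hypothetical filename */
    if (!fp) { perror("fopen"); return 1; }

    int frame, point;
    double x, y, z;
    long n = 0;
    while (fscanf(fp, "%d %d %lf %lf %lf", &frame, &point, &x, &y, &z) == 5) {
        n++;                      /* here one would buffer coordinates per frame */
    }
    printf("read %ld point samples\n", n);
    fclose(fp);
    return 0;
}
```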

3.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which the visual speaker's identity either corresponded or did not correspond to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with a slight auditory lag. Moreover, when faces of different familiarity were presented with a voice, recognition accuracy suffered only between near synchrony and slight auditory lag. These results provide the first evidence for a temporal window for AVI in person recognition, extending from approximately 100 ms of auditory lead to 300 ms of auditory lag.

4.
We replicated and extended studies showing that contextual cues for matching stimuli from 2 separate equivalence classes control the same derived relations as contextual cues for opposition frames in RFT studies. We conducted 2 experiments with 6 college students. In Phase 1, they received training in a conditional discrimination AB. Then, they received training for maintaining AB with X1 as context and for reversing the sample–comparison relations of AB with X2 as context. In Phase 2, X1 functioned as context for matching same-class stimuli, and X2 functioned as context for matching separate-class stimuli. In Phase 3, X2 controlled the same derived arbitrary relations as cues for opposition frames in RFT studies. This functional equivalence may suggest that X2 functioned as a cue for opposition frames. In Phase 4, participants matched different stimuli with X2 as context, instead of matching the most different (opposite) stimuli. In addition, Different, a cue for matching different stimuli, controlled the same derived arbitrary relations as X2. These results are incompatible with X2 being a cue for opposition frames. Contextual control over equivalence and responding by exclusion can explain these outcomes. The implications of these findings for RFT studies on opposition frames are discussed.

5.
Four experiments investigated the role of reference frames during the acquisition and development of spatial knowledge, when learning occurs incrementally across views. In two experiments, participants learned overlapping spatial layouts. Layout 1 was first studied in isolation, and Layout 2 was later studied in the presence of Layout 1. The Layout 1 learning view was manipulated, whereas the Layout 2 view was held constant. Manipulation of the Layout 1 view influenced the reference frame used to organize Layout 2, indicating that reference frames established during early environmental exposure provided a framework for organizing locations learned later. Further experiments demonstrated that reference frames established after learning served to reorganize an existing spatial memory. These results indicate that existing reference frames can structure the acquisition of new spatial memories and that new reference frames can reorganize existing spatial memories.

6.
Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

7.
This study examined the selection of spatial frames of reference for target localization in visual search. Participants searched for local target characters in global character configurations. The local targets could be localized relative to the character configuration in which they were embedded or relative to the presentation screen on which the configurations were displayed. We investigated under which conditions the configurations or the screen served as the frame of reference for target localization. Three experiments revealed an increasing impact of screen-related target localization with decreasing spatial uncertainty of targets in screen-related coordinates. The results indicate the capability of the visual system to localize relevant visual stimuli with respect to those frames of reference that yield the most redundant spatial distribution of these stimuli.

8.
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
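To make the spatiotopic/retinotopic distinction concrete, the C sketch below computes the two candidate motion directions for an object viewed during smooth pursuit, using the standard kinematic relation that retinal motion equals screen motion minus eye motion. The velocity values are invented, and this is only an illustration of the geometry, not the authors' actual vector-decomposition analysis.

```c
/* Minimal sketch (assumed relation, not the authors' exact analysis):
 * during smooth pursuit, the retinotopic (retinal) motion of an object
 * is its spatiotopic (screen) motion minus the eye's pursuit motion.
 * A reported direction can then be compared against the two predictions. */
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

typedef struct { double x, y; } Vec2;

/* direction of a 2-D motion vector, in degrees */
static double direction_deg(Vec2 v) {
    return atan2(v.y, v.x) * 180.0 / PI;
}

int main(void) {
    Vec2 screen_motion = { 3.0, 4.0 };  /* deg/s on the display (spatiotopic) */
    Vec2 eye_motion    = { 3.0, 0.0 };  /* deg/s smooth-pursuit eye velocity */

    /* retinotopic motion = screen motion minus eye motion */
    Vec2 retinal = { screen_motion.x - eye_motion.x,
                     screen_motion.y - eye_motion.y };

    printf("spatiotopic direction: %5.1f deg\n", direction_deg(screen_motion));
    printf("retinotopic direction: %5.1f deg\n", direction_deg(retinal));
    return 0;
}
```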

9.
Four experiments utilizing tachistoscopic presentation of verbal and spatial stimuli to visual half-fields are presented. Three experiments failed to find any cerebral lateralization effect of the type predicted from existing models of cerebral lateralization processes. One experiment found marked lateralization effects. Since the experiments differ only in the ratio of trials to experimental stimuli, it is argued that cerebral lateralization experiments are detecting only a memory process occurring after subjects have learned all the stimuli to be presented. When new stimuli are presented on each trial, no cerebral lateralization effects are found, suggesting that active ongoing cognitive processing is independent of lateralization.

10.
11.
Psychophysical experiments involving moving stimuli require the rapid presentation of animated sequences of images. Although the Macintosh computer is widely used as a color graphics computer in research laboratories, its animation capabilities are generally ignored because of the speed limitations of drawing to the screen. New off-screen color graphics structures help to avoid the speed limitations so that real-time color or gray-scale visual motion stimuli may be generated. Precomputed animation frames are stored in off-screen memory and then rapidly transferred to the screen sequentially. The off-screen graphics structures may also be saved to disk in “Picture” form as “resources” for later retrieval and playback, allowing the experimenter to build in advance a collection of moving stimuli to use in future experiments. Code examples in the C programming language are provided, and the relative strengths and weaknesses of Macintosh color-frame animation for psychophysical experimentation are discussed.
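The precompute-then-blit strategy this abstract describes can be illustrated without the Macintosh Toolbox. The C sketch below is a platform-neutral stand-in: frames are filled into off-screen buffers ahead of time, and presentation then reduces to one fast copy per frame into a stand-in "screen" buffer. The buffer sizes, the fill pattern, and the screen array itself are assumptions, not the article's actual Picture/resource calls.

```c
/* Minimal, platform-neutral sketch of the precompute-then-blit idea
 * (the original uses Macintosh off-screen color graphics structures;
 * the buffers and the memcpy "blit" below are generic stand-ins). */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define WIDTH   256
#define HEIGHT  256
#define NFRAMES 60

static uint8_t offscreen[NFRAMES][WIDTH * HEIGHT];  /* precomputed frames */
static uint8_t screen[WIDTH * HEIGHT];              /* stand-in for the display */

int main(void) {
    /* 1. Precompute every animation frame before the trial starts. */
    for (int f = 0; f < NFRAMES; f++)
        for (int i = 0; i < WIDTH * HEIGHT; i++)
            offscreen[f][i] = (uint8_t)((i + f) & 0xFF);   /* placeholder pattern */

    /* 2. During the trial, only a fast copy per frame touches the "screen". */
    for (int f = 0; f < NFRAMES; f++) {
        memcpy(screen, offscreen[f], sizeof screen);
        /* on real hardware one would wait for the vertical refresh here */
    }
    printf("presented %d precomputed frames\n", NFRAMES);
    return 0;
}
```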

12.
The perceived spatial organization of cutaneous patterns was examined in three experiments. People identified letters or numbers traced on surfaces of their body when the relative spatial orientations and positions of the body surfaces and of the stimuli were varied. Stimuli on the front or back of the head were perceived with respect to a frame of reference positioned behind those surfaces, independent of the surfaces' position and orientation. This independence may relate to the way in which the sensory apparatus on the front of the head is used in planning action. Stimuli on other surfaces of the head and body were perceived in relation to the position and orientation of the surface with respect to the whole body or trunk (most of which was usually upright). Stimuli on all transverse/horizontal surfaces were perceived with respect to frames of reference associated with the head/upper chest area. These frames were also used for stimuli on frontoparallel surfaces in front of the upper body. These observations may result from the use of "central" frames of reference that are independent of the head and are associated with the upper body. Stimuli on surfaces in other positions and orientations (with two exceptions) were perceived "externally," that is, in frames of reference directly facing the stimulated surface. The spatial information processing we found may be fairly general because several of our main findings were also observed in very young children and blind adults and in paradigms studying perception by "active touch" and the spatial organization of the motor production of patterns.

13.
Two experiments examined the discrimination by pigeons of relative motion using computer-generated video stimuli. Using a go/no-go procedure, pigeons were tested with video stimuli in which the camera's perspective went either "around" or "through" an approaching object in a semi-realistic context. Experiment 1 found that pigeons could learn this discrimination and transfer it to videos composed from novel objects. Experiment 2 found that the order of the video's frames was critical to the discrimination of the videos. We hypothesize that the pigeons perceived a three-dimensional representation of the objects and the camera's relative motion and used this as the primary basis for discrimination. It is proposed that the pigeons might be able to form generalized natural categories for the different kinds of motions portrayed in the videos.

14.
Experiments using two different methods and three types of stimuli tested whether stimuli at nonadjacent locations could be selected simultaneously. In one set of experiments, subjects attended to red digits presented in multiple frames with green digits. Accuracy was no better when red digits appeared successively than when pairs of red digits occurred simultaneously, implying allocation of attention to the two locations simultaneously. Different tasks involving oriented grating stimuli produced the same result. The final experiment demonstrated split attention with an array of spatial probes. When the probe at one of two target locations was correctly reported, the probe at the other target location was more often reported correctly than were any of the probes at distractor locations, including those between the targets. Together, these experiments provide strong converging evidence that when two targets are easily discriminated from distractors by a basic property, spatial attention can be split across both locations.

15.
Experiments using two different methods and three types of stimuli tested whether stimuli at non-adjacent locations could be selected simultaneously. In one set of experiments, subjects attended to red digits presented in multiple frames with green digits. Accuracy was no better when red digits appeared successively than when pairs of red digits occurred simultaneously, implying allocation of attention to the two locations simultaneously. Different tasks involving oriented grating stimuli produced the same result. The final experiment demonstrated split attention with an array of spatial probes. When the probe at one of two target locations was correctly reported, the probe at the other target location was more often reported correctly than were any of the probes at distractor locations, including those between the targets. Together, these experiments provide strong converging evidence that when two targets are easily discriminated from distractors by a basic property, spatial attention can be split across both locations.

16.
The eXpTools Library is a general-purpose tool for developing psychological experiments that combine animation with tachistoscopic presentation. The library’s C++ classes and assembly language functions are specialized for the creation of visual response time experiments. Its use is limited to variants of standard, 16-color, VGA high-graphics modes. However, it extends the capabilities of these modes through bit-plane animation techniques and a new, nonstandard, high-resolution graphics mode that will work with standard VGA cards and register-compatible cards. These techniques make possible a powerful animation class for managing complex animation or tachistoscopic presentations consisting of hundreds or thousands of frames. The library also combines such features as page flipping, screen blanking, video-refresh synchronization, interrupt-driven millisecond timing, interrupt-driven keyboard response collection, graphics primitives, bitmaps, and screen fonts. Utilities allow for the conversion of PCX graphics files and the creation of new screen fonts from monochrome bitmap files. The technologies and techniques underlying the library are presented along with an example program.
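As a rough modern stand-in for the interrupt-driven millisecond timing and keyboard response collection the library provides (the original hooks PC timer and keyboard interrupts and synchronizes to the video refresh), the C sketch below times a keypress against a stimulus "onset" using the POSIX monotonic clock. It is an assumption-laden illustration, not part of eXpTools.

```c
/* Minimal sketch of millisecond response timing (POSIX CLOCK_MONOTONIC and
 * a blocking getchar() stand in for the library's interrupt-driven timer
 * and keyboard handlers). */
#include <stdio.h>
#include <time.h>

static double ms_since(const struct timespec *t0) {
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0->tv_sec) * 1000.0 +
           (t1.tv_nsec - t0->tv_nsec) / 1.0e6;
}

int main(void) {
    struct timespec onset;
    printf("press Enter as soon as the 'stimulus' appears\n* STIMULUS *\n");
    clock_gettime(CLOCK_MONOTONIC, &onset);   /* ideally synchronized to refresh */
    getchar();                                /* crude response collection */
    printf("response time: %.1f ms\n", ms_since(&onset));
    return 0;
}
```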

17.
The effectiveness of the relative size cue to distance as a function of directional separation was investigated with the successive presentation of two luminous frames of the same shape but different visual angle. Either 0° or 180° of separation occurred between the successive presentations. In Exp. I, the frames at 5 feet were viewed monocularly through a restrictive aperture. In Exp. II, the same frames were viewed monocularly through a lens such that the frames were at an accommodative distance of 100 feet. Reports of both the size and distance of the frames were obtained in each of the experiments. The results were analyzed in terms of the absolute size cue to distance occurring on the first presentations as well as the relative size cue to distance occurring between presentations. There was a tendency (not always statistically significant) to report the smaller frame at a farther distance than the larger frame, but only in Exp. II was there evidence that this tendency was greater on second than on first presentations. Therefore, in Exp. II the relative size cue to distance was demonstrated to occur independently of the absolute size cue, and it occurred at least as readily for the 180° as for the 0° separation between successive presentations. In both experiments the results from the size reports provide evidence for the presence of the relative size cue between successive presentations for both the 180° and 0° separations. It is concluded that the relative size cue is as effective when O must turn 180° to view the successive stimuli as when the successive stimuli are presented along the same line of sight. The procedure, sometimes used, of employing large directional separations in an attempt to avoid the relative size cue is therefore considered inappropriate. The results of the study were discussed in relation to a distinction between cognitive and perceptual processes in judgments of size and distance. Both the data of the present study and other results in the literature, such as the size-distance paradox, were analyzed in terms of a schema in which perceptual processes provided a basis for the rapid utilization of cognitive information. This investigation was supported by PHS Research Grant No. NS 08883 from the National Institute of Neurological Diseases and Stroke.

18.
Two reference frames for visual perception in two gravity conditions (total citations: 2; self-citations: 0; citations by others: 2)
The processing and storage of visual information concerning the orientation of objects in space is carried out in anisotropic reference frames in which not all orientations are treated equally. The perceptual anisotropies, and the implicit reference frames that they define, are evidenced by the observation of 'oblique effects', in which performance on a given perceptual task is better for horizontally and vertically oriented stimuli than for oblique ones. The question remains how the preferred horizontal and vertical reference frames are defined. In these experiments, cosmonaut subjects reproduced the remembered orientation of a visual stimulus in 1g (on the ground) and in 0g, both while attached to a chair and while free-floating within the International Space Station. Results show that while the remembered orientation of a visual stimulus may be stored in a multimodal reference frame that includes gravity, an egocentric reference is sufficient to elicit the oblique effect when all gravitational and haptic cues are absent.

19.
A FORTRAN system for constructing various kinds of stimulus materials is described. A user enters the basic components of an experiment (stimulus items, presentation parameters, trial identifiers, etc.) as files, and uses the system to combine the basic files, automatically constructing the desired stimuli. The system contains file-manipulation functions for combining files (including factorial combination), functions for separating out parts of a file, and functions for randomizing files. The user can use the standard FORTRAN function-embedding and function-definition features to easily specify elaborate operations on the basic files to construct complex stimulus files.
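The core file-combination idea generalizes beyond FORTRAN. Below is a minimal C sketch (not the described system) of factorial combination: every line of one component file is crossed with every line of another to produce trial specifications. The file names and the tab-separated output format are assumptions.

```c
/* Minimal sketch of factorial combination of two component files
 * (file names are hypothetical; the described system is in FORTRAN). */
#include <stdio.h>
#include <string.h>

#define MAXLINES 256
#define MAXLEN   128

/* read up to MAXLINES lines from a file, stripping trailing newlines */
static int read_lines(const char *path, char lines[][MAXLEN]) {
    FILE *fp = fopen(path, "r");
    int n = 0;
    if (!fp) { perror(path); return 0; }
    while (n < MAXLINES && fgets(lines[n], MAXLEN, fp)) {
        lines[n][strcspn(lines[n], "\r\n")] = '\0';
        n++;
    }
    fclose(fp);
    return n;
}

int main(void) {
    static char items[MAXLINES][MAXLEN], params[MAXLINES][MAXLEN];
    int ni = read_lines("stimulus_items.txt", items);        /* hypothetical */
    int np = read_lines("presentation_params.txt", params);  /* hypothetical */

    /* factorial combination: every item crossed with every parameter set */
    for (int i = 0; i < ni; i++)
        for (int j = 0; j < np; j++)
            printf("%s\t%s\n", items[i], params[j]);
    return 0;
}
```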

20.
Many models of the Simon effect assume that categorical spatial representations underlie the phenomenon. The present study tested this assumption explicitly in two experiments, both of which involved eight possible spatial positions of imperative stimuli arranged horizontally on the screen. In Experiment 1, the eight stimulus locations were marked with eight square boxes that appeared at the same time during a trial. Results showed gradually increasing Simon effects from the central locations to the outer locations. In Experiment 2, the eight stimulus locations consisted of a combination of three frames of spatial reference (hemispace, hemifield, and position relative to fixation), with each frame appearing at a different time. In contrast to Experiment 1, results showed an oscillating pattern of the Simon effect across the horizontal positions. These findings are discussed in terms of grouping factors involved in the Simon task. The locations seem to be coded as a single continuous dimension when all are visible at once, as in Experiment 1, but they are represented as a combination of the lateral categories (“left” vs. “right”) with multiple frames of reference when the reference frames are presented successively, as in Experiment 2.
