Found 20 similar documents; search took 0 ms
1.
W. B. Templeton, Attention, Perception & Psychophysics, 1973, 14(3), 451-457
It is argued that, contrary to the views of some theorists, the role of gravitational cues is essentially one of maintaining orientation constancy. In support of this claim, it is shown that the loss of relevant gravitational information when the body is supine results in a significant increase in the disorienting effects of both a tilted visual frame and tilt of the head relative to the trunk.
2.
Recent research on social cognition suggests that lifelike visual and vocal information about a person may strongly mediate the impact of prior social categorical knowledge on social judgements. Other research, however, on the contribution of visual cues to impression formation, suggests that they have relatively little impact. This study sought to resolve these conflicting findings by examining the effect of visual cues on social judgements when subjects possess prior social categorical knowledge varying in salience to the experimental task. Videotaped target interviews were monitored by observers in either sound and vision or sound only, and measures were taken of the targets' perceived personality, their ‘actual’ and ‘predicted’ social performance, and social acceptance by observers. Whilst salience of categorization strongly influenced the quality of judgements, visual cues had little if any effect. However, visual cues strongly influenced subjects' confidence in all three sets of judgements, sound and vision subjects being consistently more confident than their sound only counterparts. The findings are discussed in relation to previous research in both social cognition and visual cues.
3.
The ability to process center-embedded structures has been claimed to represent a core function of the language faculty. Recently, several studies have investigated the learning of center-embedded dependencies in artificial grammar settings. Yet some of the results seem to question the learnability of these structures in artificial grammar tasks. Here, we tested under which exposure conditions learning of center-embedded structures in an artificial grammar is possible. We used naturally spoken syllable sequences and varied the presence of prosodic cues. The results suggest that mere distributional information does not suffice for successful learning. Prosodic cues marking the boundaries of the major relevant units, however, can lead to learning success. Thus, our data are consistent with the hypothesis that center-embedded syntactic structures can be learned in artificial grammar tasks if language-like acoustic cues are provided.
4.
By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via: (i) visually perceived target distance, or (ii) traversed distance through either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, constant error was minimal when visual information was absent, whereas overestimation was observed when visual information was present. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.
5.
Recent research [e.g., Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research, 147, 426-436; Lemay, M., Bertrand, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16-32] reported that egocentric and allocentric visual frames of reference can be integrated to facilitate the accuracy of goal-directed reaching movements. In the present investigation, we sought to specifically examine whether or not a visual background can facilitate the online, feedback-based control of visually guided (VG), open-loop (OL), and memory-guided (i.e., 0 and 1000 ms of delay: D0 and D1000) reaches. Two background conditions were examined in this investigation. In the first background condition, four illuminated LEDs positioned in a square surrounding the target location provided a context for allocentric comparisons (visual background: VB). In the second condition, the target object was singularly presented against an empty visual field (no visual background: NVB). Participants (N=14) completed reaching movements to three midline targets in each background (VB, NVB) and visual condition (VG, OL, D0, D1000) for a total of 240 trials. VB reaches were more accurate and less variable than NVB reaches in each visual condition. Moreover, VB reaches elicited longer movement times and spent a greater proportion of the reaching trajectory in the deceleration phase of the movement. Supporting the benefit of a VB for online control, the proportion of endpoint variability explained by the spatial location of the limb at peak deceleration was less for VB as opposed to NVB reaches. These findings suggest that participants are able to make allocentric comparisons between a VB and target (visible or remembered) in addition to egocentric limb and VB comparisons to facilitate online reaching control.
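Editorial note: the "proportion of endpoint variability explained by the spatial location of the limb at peak deceleration" is conventionally computed as the R² from regressing trial endpoints on limb position at peak deceleration; a low R² implies that trajectories were corrected online after that kinematic marker. The sketch below illustrates the computation only; the trial count and coordinate values are invented, not taken from the study.

```python
import random

# Illustrative sketch (invented data): regress reach endpoints on limb
# position at peak deceleration; a low R^2 suggests online corrections
# occurred after peak deceleration.
random.seed(1)
n_trials = 30
pos_at_peak_decel = [random.gauss(250.0, 8.0) for _ in range(n_trials)]  # mm
# Endpoints only weakly coupled to position at peak deceleration.
endpoints = [0.3 * p + random.gauss(175.0, 4.0) for p in pos_at_peak_decel]

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

print(f"R^2 = {r_squared(pos_at_peak_decel, endpoints):.2f}")
```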
6.
7.
Four experiments were conducted to compare valid and invalid cue conditions for peripheral and central cues. Experiments 1, 3, and 4 used reaction time (RT) as the dependent variable. Experiment 2 used a threshold measure. Peripheral and central cues were presented on each trial. The peripheral cue was uninformative in all experiments. The central cue was informative in Experiments 1 and 2, where it predicted stimulus side on 70% of the trials. Experiment 3 included 50% and 100% central-cue prediction conditions as well as the 70% treatment. Experiment 4 included 60%, 75%, and 90% central-cue prediction conditions. The effects of the central and peripheral cues were independent and additive in all four experiments, indicating that: (1) both cue types can act simultaneously, and the relationship between them is additive under the conditions used in these experiments, (2) informativeness is not a necessary condition for attentional effects with peripheral cues, and (3) covert visual orientation influences sensory thresholds and RT in similar ways. The results of Experiments 3 and 4 demonstrated that the facilitation associated with peripheral cues was insensitive to manipulations which demonstrate that subjects use the informational value of the central cue to direct voluntary attention. The results are discussed with reference to two issues: first, the proposition that central and peripheral cues exert their influence on performance in independent information-processing stages, following the additive factor method, and, second, the problems raised for the additive factor method when cues elicit both an “explicit” response (regarding the presence or absence of a specified letter) and an “implicit” response (involving the planning and possible execution of eye and hand movements).
8.
Shape-from-shading for matte and glossy objects
We wanted to find out whether the presence of specular highlights on otherwise matte objects would make a difference to the perceived surface relief. Six different, globally convex objects were displayed on a computer screen. The depicted objects were either matte or glossy and were illuminated from one of two different directions. Shape-from-shading was evaluated with two different paradigms. In Experiment 1, observers were asked to set a number of local surface attitude probes such that the probes looked as if they were tangent to the objects' surfaces. In Experiment 2, observers were instructed to trace the contours of the depicted objects in the horizontal and vertical planes. Although the two tasks target different aspects of the perceived surface, they gave essentially similar results here. In both tasks we found differences induced by changing the illumination direction. Surprisingly, no systematic difference was found between the results for matte and glossy objects. We must, therefore, conclude that the current study provides no evidence that glossiness influences shape perception, although matte and glossy objects look quite different to the observer.
9.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.
10.
Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually, we replicated the emotion-induced impairment found in other studies. Our results suggest that emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities.
11.
R. A. Kinchla, J. Townsend, J. I. Yellott, R. C. Atkinson, Attention, Perception & Psychophysics, 1966, 1(1), 67-73
Two experiments investigated the effects on auditory signal detection of introducing visual cues that were partially correlated with the signal events. The results were analyzed in terms of a detection model that assumes that such cue-signal correlations will not affect sensitivity, but will instead cause the subject to develop separate response biases for each cue. The model specifies a functional relationship between the asymptotic values of these cue-contingent biases. The overall results of the experiments supported the detection assumptions of the model and the general bias learning assumption, but indicated a more complex learning process than that specified by the model.
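Editorial note: the assumption "correlated cues shift bias, not sensitivity" can be made concrete with a standard equal-variance Gaussian signal-detection sketch, in which d' stays fixed while each visual cue induces its own response criterion. This is a minimal illustration, not the authors' exact model; the d' and criterion values below are hypothetical.

```python
from statistics import NormalDist

# Equal-variance Gaussian signal-detection sketch: sensitivity d' is fixed,
# but each cue carries its own criterion c, so hit and false-alarm rates
# vary with the cue while d' does not.
Z = NormalDist()  # standard normal

def rates(d_prime: float, criterion: float) -> tuple[float, float]:
    """Hit and false-alarm rates; signal ~ N(+d'/2, 1), noise ~ N(-d'/2, 1)."""
    hit = 1 - Z.cdf(criterion - d_prime / 2)   # P(yes | signal)
    fa = 1 - Z.cdf(criterion + d_prime / 2)    # P(yes | noise)
    return hit, fa

d_prime = 1.5  # hypothetical, held constant across cues
# Hypothetical asymptotic criteria: liberal after a cue that usually
# accompanies the signal, conservative after one that usually does not.
for cue, c in [("signal-likely cue", -0.5), ("signal-unlikely cue", +0.5)]:
    hit, fa = rates(d_prime, c)
    print(f"{cue}: hit={hit:.3f}, false alarms={fa:.3f}")
```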
12.
The haptic perception of vertical, horizontal, +45°-oblique, and +135°-oblique orientations was studied in adults. The purpose was to establish whether the gravitational cues provided by the scanning arm-hand system were involved in the haptic oblique effect (lower performances in oblique orientations than in vertical-horizontal ones) and more generally in the haptic coding of orientation. The magnitude of these cues was manipulated by changing gravity constraints, and their variability was manipulated by changing the planes in which the task was performed (horizontal, frontal, and sagittal). In Experiment 1, only the horizontal plane was tested, either with the forearm resting on the disk supporting the rod (“supported forearm” condition) or with the forearm unsupported in the air. In the latter case, antigravitational forces were elicited during scanning. The oblique effect was present in the “unsupported” condition and was absent in the “supported” condition. In Experiment 2, the three planes were tested, either in a “natural” or in a “lightened forearm” condition in which the gravitational cues were reduced by lightening the subject’s forearm. The magnitude of the oblique effect was lower in the “lightened” condition than in the “natural” one, and there was no plane effect. In Experiment 3, the subject’s forearm was loaded with either a 500- or a 1,000-g bracelet, or it was not loaded. The oblique effect was the same in the three conditions, and the plane effect (lower performances in the horizontal plane than in the frontal and sagittal ones) was present only when the forearm was loaded. Taken together, these results suggested that gravitational cues may play a role in haptic coding of orientation, although the effects of decreasing or increasing these cues are not symmetrical.
13.
M. Cheal, Genetic, Social, and General Psychology Monographs, 2001, 127(4), 409-457
To distinguish between theoretical concepts of how attention is allocated, participants were presented with different types of precues in 6 experiments. In 1 condition with 100% valid precues (Experiments 1 and 2), the time course of attention effects revealed that (a) higher accuracy was obtained with dynamic multiple-element precues (MEPs in which the unique element was defined by apparent motion) than with static MEPs, in which the elements did not move once they were presented (Cheal & Chastain, 1998); (b) a longer precue-target interval (stimulus-onset asynchrony; SOA) was needed to reach asymptote accuracy with dynamic MEPs than with dynamic single-element precues (SEPs); and (c) all dynamic precues (both MEPs and SEPs) resulted in a decline in accuracy at long SOAs. These results suggest that static and dynamic MEPs result in delayed engagement of attention relative to SEPs. Further, a decline in accuracy at long intervals is associated with static and dynamic SEPs and dynamic MEPs, but not with static MEPs. With irrelevant precues (Experiments 3 to 5), there was capture by precues in which the unique element moved briskly, smoothly, or abruptly, or simply flashed on and off, although there were differences in the amount of capture. The strongest capture occurred with smooth movement in static background elements and the weakest with smooth movement in abruptly moving background elements. It was shown in Experiment 6 that a static MEP will not capture attention if one element changes to a unique brightness near the time of precue onset, but if the element changes after 1,000 ms, it will capture attention. The authors suggest that different types of precues result in unequal influence of endogenous and exogenous components of attention, even when the same targets are used. In addition, they show that neither singleton detection mode nor contingent involuntary orienting is necessary for the capture of attention.
14.
Visual short-term memory (VSTM) has traditionally been thought to have a very limited capacity of around 3-4 objects. However, recently several researchers have argued that VSTM may be limited in the amount of information retained rather than by a specific number of objects. Here we present a study of the effect of long-term practice on VSTM capacity. We investigated four age groups ranging from pre-school children to adults and measured the change in VSTM capacity for letters and pictures. We found a clear increase in VSTM capacity for letters with age but not for pictures. Our results indicate that VSTM capacity is dependent on the level of expertise for specific types of stimuli.
15.
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for colour and orientation resulted from methodological differences by testing the consolidation of colour information with methods used in prior orientation experiments. We briefly presented two colour patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target. Participants identified the target colour via a button press (Experiment 1) or by clicking a location on a colour wheel (Experiment 2). Although these methods have previously demonstrated that two orientations are consolidated in a strictly serial fashion, here we found equivalent performance in the sequential and simultaneous conditions, suggesting that two colours can be consolidated in parallel. To investigate whether this difference reflected different consolidation mechanisms or a common mechanism in which different features consume different amounts of bandwidth, Experiment 3 presented a colour patch and an oriented grating either sequentially or simultaneously. We found lower performance in the simultaneous than in the sequential condition, with orientation showing a larger impairment than colour. These results suggest that consolidation of the two features shares common mechanisms. However, colour seems to require less information to be encoded than orientation. As a result, two colours can be consolidated in parallel without exceeding the bandwidth limit, whereas two orientations, or an orientation and a colour, exceed the bandwidth and appear to be consolidated serially.
16.
17.
18.
19.
In 4 studies, the authors tested the contributions of visual, kinesthetic, and verbal knowledge of results to the adaptive control of reaching movements toward visual targets. The same apparatus was used in all experiments, but the procedures differed in the sensory modality of the feedback that participants (Ns = 5, 5, 6, and 6, respectively, in Experiments 1, 2, 3, and 4) received about their performance. Using biased visual, proprioceptive, or verbal feedback, the authors introduced a 5° shift in the visuomanual relationship. Results showed no significant difference in the final amount of adaptation to the mismatch: on average, participants adapted to 79% of the perturbation. That finding is consistent with the view that adaptation is a multisensory, highly flexible process whose efficiency does not depend on the sensory channel conveying the error signal.
20.
Quarterly Journal of Experimental Psychology, 2013, 66(2), 260-274
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables exhibiting the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps direct attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful for both infants and second-language learners, not only in facilitating speech segmentation but also in detecting word–object relationships in natural environments.
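Editorial note: the distributional statistic referenced in this abstract (and in entry 3 above) is the transitional probability between adjacent syllables, TP(A→B) = freq(AB) / freq(A); within-word transitions have high TP while word boundaries show local dips. A minimal sketch follows; the syllable stream and the boundary threshold are invented for illustration.

```python
from collections import Counter

# Syllable-to-syllable transitional probabilities:
# TP(A -> B) = freq(AB) / freq(A). Local dips in TP mark candidate
# word boundaries. The stream below (three invented "words":
# bidaku, padoti, golabu) is for demonstration only.
stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
          "bi da ku pa do ti go la bu").split()

unigrams = Counter(stream[:-1])              # counts as first element of a pair
bigrams = Counter(zip(stream, stream[1:]))   # adjacent syllable pairs

for (a, b), n in sorted(bigrams.items()):
    tp = n / unigrams[a]
    # 0.8 is an arbitrary cutoff for this toy stream, where within-word
    # transitions have TP = 1.0 and between-word transitions are lower.
    boundary = "  <- candidate boundary" if tp < 0.8 else ""
    print(f"TP({a} -> {b}) = {tp:.2f}{boundary}")
```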