Similar Articles (20 results)
1.
An information processing investigation was performed to quantify the Chevreul pendulum effect: the tendency of a small pendulum, when suspended from the hand and imaginatively concentrated on, to oscillate seemingly of its own accord. Using a time exposure photographic measurement technique, electronically automated visual and auditory imaginal prompts were presented to the subject during imaginal processing tasks. It was found that the pendulum effect was enhanced when vision of actual pendulum oscillations was permitted and visual or auditory spatially oscillating stimuli were present. Visual spatially oscillating stimuli were superior to their auditory counterparts. Results were discussed in terms of ideomotor and visual capture interpretations of signal and imaginal processing.

2.
Four experiments examined judgements of the duration of auditory and visual stimuli. Two used a bisection method, and two used verbal estimation. Auditory/visual differences were found when durations of auditory and visual stimuli were explicitly compared and when durations from both modalities were mixed in partition bisection. Differences in verbal estimation were also found both when people received a single modality and when they received both. In all cases, the auditory stimuli appeared longer than the visual stimuli, and the effect was greater at longer stimulus durations, consistent with a “pacemaker speed” interpretation of the effect. Results suggested that Penney, Gibbon, and Meck's (2000) “memory mixing” account of auditory/visual differences in duration judgements, while correct in some circumstances, was incomplete, and that in some cases people were basing their judgements on some preexisting temporal standard.

3.
In this study the ability of newborn infants to learn arbitrary auditory–visual associations in the absence versus presence of amodal (redundant) and contingent information was investigated. In the auditory-noncontingent condition 2-day-old infants were familiarized to two alternating visual stimuli (differing in colour and orientation), each accompanied by its ‘own’ sound: when the visual stimulus was presented the sound was continuously presented, independently of whether the infant looked at the visual stimulus. In the auditory-contingent condition the auditory stimulus was presented only when the infant looked at the visual stimulus: thus, presentation of the sound was contingent upon infant looking. On the post-familiarization test trials attention recovered strongly to a novel auditory–visual combination in the auditory-contingent condition, but remained low, and indistinguishable from attention to the familiar combination, in the auditory-noncontingent condition. These findings are a clear demonstration that newborn infants’ learning of arbitrary auditory–visual associations is constrained and guided by the presence of redundant (amodal) contingent information. The findings give strong support to Bahrick’s theory of early intermodal perception.

4.
The modality effect occurs when audio/visual instructions are superior to visual only instructions. The effect was explored in two experiments conducted within a cognitive load theory framework. In Experiment 1, two groups of primary school students (N = 24) were presented with either audio/visual or visual only instructions on how to read a temperature graph. The group presented with visual text and a diagram rather than audio text and a diagram was superior, reversing most previous data on the modality effect. It was hypothesized that the reason for the reversal was that the transitory auditory text component was too long to be processed easily in working memory compared to more permanent written information. Experiment 2 (N = 64) replicated the experiment with the variation of a reduced length of both auditory and visual text instructions. Results indicated a reinstatement of the modality effect with audio/visual instructions proving superior to visual only instructions. Copyright © 2011 John Wiley & Sons, Ltd.

5.
In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas it resulted in a single fused percept when corresponding areas were stimulated. With stimulation of noncorresponding areas, mean RT was shorter to redundant than to single stimulation, replicating the redundant signals effect (RSE) commonly found with visual stimuli. With stimulation of corresponding areas, however, no RSE was observed. This suggests that the RSE is driven by the number of percepts rather than by the number of stimulated receptors or sensory organs. These results are consistent with previous findings in the auditory modality and have implications for models of the RSE.

7.
Six experiments examined the issue of whether one single system or separate systems underlie visual and auditory orienting of spatial attention. When auditory targets were used, reaction times were slower on trials in which cued and target locations were at opposite sides of the vertical head-centred meridian than on trials in which cued and target locations were at opposite sides of the vertical visual meridian or were not separated by any meridian. The head-centred meridian effect for auditory stimuli was apparent when targets were cued by either visual (Experiments 2, 3, and 6) or auditory cues (Experiment 5). Also, the head-centred meridian effect was found when targets were delivered either through headphones (Experiments 2, 3, and 5) or external loudspeakers (Experiment 6). Conversely, participants showed a visual meridian effect when they were required to respond to visual targets (Experiment 4). These results strongly suggest that auditory and visual spatial attention systems are indeed separate, as far as endogenous orienting is concerned.

8.
To investigate the effect of semantic congruity on audiovisual target responses, participants detected a semantic concept that was embedded in a series of rapidly presented stimuli. The target concept appeared as a picture, an environmental sound, or both; and in bimodal trials, the audiovisual events were either consistent or inconsistent in their representation of a semantic concept. The results showed faster detection latencies to bimodal than to unimodal targets and a higher rate of missed targets when visual distractors were presented together with auditory targets, in comparison to auditory targets presented alone. The findings of Experiment 2 showed a cross-modal asymmetry, such that visual distractors interfered with the accuracy of auditory target detection, whereas auditory distractors had no effect on either the speed or the accuracy of visual target detection. The biased-competition theory of attention (Desimone & Duncan, 1995, Annual Review of Neuroscience, 18; Duncan, Humphreys, & Ward, 1997, Current Opinion in Neurobiology, 7, 255–261) was invoked to explain these findings because, when the saliency of the visual stimuli was reduced by the addition of a noise filter in Experiment 4, visual interference with auditory target detection was diminished. Additionally, the results showed faster and more accurate target detection when semantic concepts were represented in a visual rather than an auditory format.

9.
Both auditory and visual emotional memories can be made less emotional by loading working memory (WM) during memory recall. Taxing WM during recall can be modality-specific (an auditory [visuospatial] load during recall of an auditory [visual] memory) or cross-modal (an auditory load during visual recall, or vice versa). We tested whether modality-specific loading taxes WM to a larger extent than cross-modal loading. Ninety-six participants undertook a visual and an auditory baseline Random Interval Repetition task (i.e. responding as fast as possible to a visual or auditory stimulus by pressing a button). Then, participants recalled a distressing visual and a distressing auditory memory while performing the same visual and auditory Random Interval Repetition task. Increased reaction times (compared to baseline) were indicative of WM loading. Using Bayesian statistics, we compared five models in terms of general and modality-specific taxation. There was support for the model describing the effect on WM of dual tasking in general, irrespective of modality specificity, and for the model describing the effect of modality-specific loading. The two models combined gained the most support. The results suggest a general effect of dual tasking on taxing WM and a superimposed effect of taxing in a matched modality.

10.
Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10 s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

11.
Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized responses. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect, but there was still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion.

12.
Synchronization of finger taps with an isochronous event sequence becomes difficult when the event rate exceeds a certain limit. In Experiment 1, the synchronization threshold was reached at interonset intervals (IOIs) above 100 ms with auditory tone sequences (in a 1:4 tapping task) but at IOIs above 400 ms with visual flash sequences (1:1 tapping). Using IOIs above those limits, the author investigated in Experiment 2 the reduction in the variability of asynchronies that tends to occur when the intervals between target events are subdivided by additional identical events (1:1 vs 1:n tapping). The subdivision benefit was found to decrease with IOI duration and to turn into a cost at IOIs of 200–250 ms in auditory sequences and at IOIs of 450–500 ms in visual sequences. The auditory results are relevant to the limits of metrical subdivision and beat rate in music. The visual results demonstrate the remarkably weak rhythmicity of (nonmoving) visual stimuli.

13.
Seven subjects took part in an experiment on the relation between signal modality and the effect of foreperiod (FP) duration on RT. With visual signals the usually reported systematic increase of RT as a function of FP duration (1, 5, and 15 s) was confirmed; with auditory signals no difference was found between FPs of 1 and 5 s, while the effect at 15 s was equivalent to that found at 5 s with the visual signal. The results suggest that, besides factors such as time uncertainty, the FP effect is also largely dependent on the arousing quality of the signal.

14.
Repetition blindness (RB; Kanwisher, 1987) is the term used to describe people’s failure to detect or report an item that is repeated in a rapid serial visual presentation (RSVP) stream. Although RB is, by definition, a visual deficit, whether it is affected by an auditory signal remains unknown. In the present study, we added two sounds before, simultaneous with, or after the onset of the two critical visual items during RSVP to examine the effect of sound on RB. The results show that the addition of the sounds effectively reduced RB when they appeared at, or around, the critical items. These results indicate that it is easier to perceive an event containing multisensory information than unisensory ones. Possible mechanisms of how visual and auditory information interact are discussed.

15.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

16.
Numerous studies of two-choice reaction tasks, including auditory and visual Simon tasks (i.e., tasks in which stimulus location is irrelevant) and visual compatibility tasks, have found that only spatial stimulus-response (S-R) correspondence affected S-R compatibility. Their results provided no indication that stimulus-hand correspondence was a significant factor. However, Wascher et al. (2001) suggested that hand coding plays a role in visual and auditory Simon tasks when the instructions are in terms of the finger/hand used for responding. The present experiments examined whether instructing subjects in terms of response locations or fingers/hands influenced the Simon effect for visual and auditory tasks. In Experiments 1-3, only spatial S-R correspondence contributed significantly to the Simon effect, even when the instructions were in terms of the fingers/hands. However, in Experiment 4, which used auditory stimuli and finger/hand instructions, the contribution of stimulus-hand correspondence increased with practice.

17.
Taking a cross-modal perspective, this study used a magnitude comparison task to examine the characteristics of the spatial representation of number under unimodal visual and auditory conditions and under cross-modal conditions, as well as the mutual influences that arise during representation. The results showed a SNARC effect in both the visual and the auditory modality. In the cross-modal task, regardless of whether the priming modality was visual or auditory, the SNARC effect in the primary modality was unchanged when the numerical magnitude information in the priming modality was congruent with, or unrelated to, that in the primary modality; but when the magnitude information in the priming modality was incongruent with the primary modality, the SNARC effect in the primary modality was significantly affected, being reduced or eliminated. These findings further confirm that the SNARC effect is context dependent, and show that in cross-modal spatial representation of number, auditory numerical information influences the visual spatial representation of number more strongly than visual numerical information influences the auditory one.

18.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support the biased-competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and enjoy a processing advantage during multisensory integration.

19.
In the McGurk effect, visual information specifying a speaker’s articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects’ auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words “pluck” and “bow” were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information’s having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

20.
Rodway, P. (2005). Acta Psychologica, 120(2), 199-226
Which is better, a visual or an auditory warning signal? Initial findings suggested that an auditory signal was more effective, speeding reaction to a target more than a visual warning signal, particularly at brief foreperiods [Bertelson, P., & Tisseyre, F. (1969). The time-course of preparation: Confirmatory results with visual and auditory warning signals. In W. G. Koster (Ed.), Attention and Performance II, Acta Psychologica, 30, 145-154; Davis, R., & Green, F. A. (1969). Intersensory differences in the effect of warning signals on reaction time. In W. G. Koster (Ed.), Attention and Performance II, Acta Psychologica, 30, 155-167]. This led to the hypothesis that an auditory signal is more alerting than a visual warning signal [Sanders, A. F. (1975). The foreperiod effect revisited. Quarterly Journal of Experimental Psychology, 27, 591-598; Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83, 157-171]. Recently, however, Turatto, Benso, Galfano, and Umilta [(2002). Nonspatial attentional shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance, 28, 628-639] found no evidence for an auditory warning signal advantage and showed that at brief foreperiods a signal in the same modality as the target facilitated responding more than a signal in a different modality. They accounted for this result in terms of the modality shift effect, with the signal exogenously recruiting attention to its modality and thereby facilitating responding to targets arriving in the modality to which attention had been recruited. The present study conducted six experiments to understand the cause of these conflicting findings. The results suggest that an auditory warning signal is not more effective than a visual warning signal. Previous reports of an auditory superiority appear to have been caused by the use of different locations for the visual warning signal and the visual target, so that the target arrived at an unattended location when the foreperiod was brief. Turatto et al.'s results were replicated, with a modality shift effect at brief foreperiods. However, it is also suggested that previous measures of the modality shift effect may still have been confounded by a location cuing effect.
