Similar Articles
20 similar articles found.
1.
《Memory (Hove, England)》2013,21(3):321-342
A late parietal positivity (P3) and behavioural measures were studied during performance of a two-item memory-scanning task. Stimuli were digits presented as memorised items in one modality (auditory or visual) while the following probe, also a digit, was presented in the same or the other modality. In a separate set of experiments, P3 and behaviour were similarly studied using only visual stimuli that were either lexical (digits) or non-lexical (novel fonts with the same contours as the digits) to which subjects assigned numerical values. Reaction times (RTs) and P3 latencies were prolonged to non-lexical compared to lexical stimuli. Although RTs were longer to auditory than to visual stimuli, P3 latencies to memorised items were prolonged in response to visually compared to auditorily presented memorised items, and were further prolonged when preceding visual probes. P3 amplitudes were smaller to auditory than to visual stimuli, and were smaller for the second memorised item when lexical/non-lexical comparisons were involved. The most striking finding was scalp distribution variations indicating changes in relative contributions of brain structures involved in processing memorised items, according to the probes that followed. These findings are compatible, in general, with a phonological memorisation, but they suggest that the process is modified by memorising the item in the same terms as the expected probe that follows.

2.
The design of a versatile and programmable transducer amplifier device with analogue display for self-monitoring of autonomic responses is described. The design features low cost, portability, and flexibility across direct-current transducer options (e.g., photoplethysmograph or thermistor). The device can be used for the visual or auditory display of continuous blood volume pulse or temperature measures where the relative amplitude or pulse rate is of concern. Auditory or visual biofeedback may be provided via the choice of a stacked bar-graph display or piezoelectric buzzer. A common circuit design to allow programming options for the estimation of heart rate, inter-beat interval, or pulse duration is provided.
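The heart-rate and inter-beat-interval estimation the device makes programmable can be illustrated in software. Below is a minimal sketch, not the device's circuitry or firmware: it assumes a digitised blood-volume-pulse trace and derives both measures by peak detection; the function names and thresholds are assumptions.

```python
# Illustrative only -- not the device's firmware. Estimate heart rate and
# inter-beat interval (IBI) from a sampled blood-volume-pulse (BVP) trace;
# the 0.33 s spacing and prominence threshold are assumed values.
import numpy as np
from scipy.signal import find_peaks

def pulse_metrics(bvp, fs):
    """Return (heart_rate_bpm, mean_ibi_seconds) for a BVP trace sampled at fs Hz."""
    # Require peaks at least 0.33 s apart (caps detection near 180 bpm)
    # and clearly raised above the surrounding baseline.
    peaks, _ = find_peaks(bvp, distance=int(0.33 * fs),
                          prominence=0.5 * np.std(bvp))
    ibi = np.diff(peaks) / fs            # inter-beat intervals in seconds
    return 60.0 / ibi.mean(), ibi.mean()

# Synthetic 72-bpm pulse sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t) ** 21  # one sharp peak per cycle
print(pulse_metrics(bvp, fs))            # ~ (72.0, 0.833)
```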

3.
Perception of visual speech and the influence of visual speech on auditory speech perception is affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.

4.
Ontario Institute for Studies in Education, University of Toronto, Toronto, Canada M5S 1V6. Auditory space has been characterized as an entity without bound or dimension, as opposed to visual space which is limited in three dimensions. While there is evidence that visual space may be represented mentally in terms of contrastive values on these dimensions, no evidence exists concerning the representation of auditory space. Two experiments used an auditory Stroop-like task to investigate (1) whether the linguistic code for auditory space is comprised of component dimensions as in vision or whether it is unitary, and (2) whether auditory space can also be encoded in a nonlinguistic fashion. Subjects were required to respond to the spatial location of a locative term whose meaning could be congruent, incongruent, or neutral with respect to its location. The findings pointed to the conclusion that when subjects are encouraged to code auditory space linguistically, the code is an undifferentiated symbol or name. Furthermore, some tentative evidence existed that auditory space may also be encoded in a nonlinguistic manner.

5.
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present (“positive” cuing) or absent (“negative” cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.

6.
The neuronal system to process and transfer auditory information to the higher motor areas was investigated using fMRI. Two different types of internal modulation of auditory pacing (1 Hz) were combined to design a 2×2 condition experiment, and the activation was compared with that under visual guidance. The bilateral anterior portion of the BA22 (ant-BA22) and the left BA41/42 were more extensively activated by the combined modulation condition under the auditory cue than that under the visual cue. Among the four auditory conditions with or without the two types of internal modulation, the activation in the ant-BA22 was augmented only on the left side by the combined modulation condition. The left ant-BA22 may be especially involved in integrating the external auditory cue with internal modulation, while the activation on the right side did not depend on the complexity. The role of the left BA41/42 in motor regulation may be more specific to the processing of an auditory cue than that on the right side. These two areas in the left temporal lobe may be organized as a subsystem to handle the timing of complex movements under auditory cues, while the higher motor areas in the frontal lobe support both sensory modalities for the cue. This architecture may be considered as ‘audio-motor control’, which is similar to the visuo-motor control of the fronto-parietal network.

7.
Dual-process accounts of working memory have suggested distinct encoding processes for verbal and visual information in working memory, but encoding for nonspeech sounds (e.g., tones) is not well understood. This experiment modified the sentence–picture verification task to include nonspeech sounds with a complete factorial examination of all possible stimulus pairings. Participants studied simple stimuli (pictures, sentences, or sounds) and encoded the stimuli verbally, as visual images, or as auditory images. Participants then compared their encoded representations to verification stimuli (again pictures, sentences, or sounds) in a two-choice reaction time task. With some caveats, the encoding strategy appeared to be as important or more important than the external format of the initial stimulus in determining the speed of verification decisions. Findings suggested that: (1) auditory imagery may be distinct from verbal and visuospatial processing in working memory; (2) visual perception but not visual imagery may automatically activate concurrent verbal codes; and (3) the effects of hearing a sound may linger for some time despite recoding in working memory. We discuss the role of auditory imagery in dual-process theories of working memory.

8.
Both auditory and visual emotional memories can be made less emotional by loading working memory (WM) during memory recall. Taxing WM during recall can be modality specific (giving an auditory [visuospatial] load during recall of an auditory [visual] memory) or cross modal (an auditory load during visual recall or vice versa). We tested whether modality specific loading taxes WM to a larger extent than cross modal loading. Ninety-six participants undertook a visual and auditory baseline Random Interval Repetition task (i.e. responding as fast as possible to a visual or auditory stimulus by pressing a button). Then, participants recalled a distressing visual and auditory memory, while performing the same visual and auditory Random Interval Repetition task. Increased reaction times (compared to baseline) were indicative of WM loading. Using Bayesian statistics, we compared five models in terms of general and modality specific taxation. There was support for the model describing the effect on WM of dual tasking in general, irrespective of modality specificity, and for the model describing the effect of modality specific loading. Both models combined gained the most support. The results suggest a general effect of dual tasking on taxing WM and a superimposed effect of taxing in matched modality.
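For readers unfamiliar with the model-comparison step, one common route is to approximate Bayes factors from fitted models via BIC. The sketch below is a generic illustration of that technique, not the analysis reported in the abstract, and every number in it is invented.

```python
# Generic sketch of Bayes-factor model comparison via the BIC
# approximation, BF_AB ~ exp((BIC_B - BIC_A) / 2). Not the authors'
# analysis; the log-likelihoods below are invented for illustration.
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower means better fit per parameter."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def approx_bayes_factor(bic_a, bic_b):
    """Approximate evidence for model A over model B."""
    return math.exp((bic_b - bic_a) / 2.0)

# Hypothetical fits: a 'general dual-task' vs. a 'modality-specific' model
bic_general  = bic(log_likelihood=-410.2, n_params=3, n_obs=96)
bic_specific = bic(log_likelihood=-409.0, n_params=4, n_obs=96)
print(approx_bayes_factor(bic_general, bic_specific))  # ~3: modest support
```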

9.
It is important that tacts are controlled by stimuli across all senses, but teaching tacts to children with autism spectrum disorder (ASD) is often limited to visual stimuli. This study replicated and extended a study on the effects of antecedent-stimulus presentations on the acquisition of auditory tacts. We used a concurrent multiple probe across sets design and an embedded adapted alternating treatments design to evaluate acquisition of auditory tacts when auditory stimuli were presented alone (i.e., isolated) or with corresponding pictures (i.e., compound-with-known and compound-with-unknown) with two school-aged boys with ASD. Both participants' responding met the mastery criterion regardless of the stimulus presentation with at least one set, but one participant failed to acquire one set of stimuli in the isolated condition. The isolated condition was rarely the most efficient. We conducted post-training stimulus-control probes, and we observed disrupted stimulus control in the isolated condition for one participant. Implications for arranging auditory tacts instruction are discussed.

10.
Two experiments examined any inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

11.
Doyle MC  Snowden RJ 《Perception》2001,30(7):795-810
Can auditory signals influence the processing of visual information? The present study examined the effects of simple auditory signals (clicks and noise bursts) whose onset was simultaneous with that of the visual target, but which provided no information about the target. It was found that such a signal enhances performance in the visual task: the accessory sound reduced response times for target identification with no cost to accuracy. The spatial location of the sound (whether central to the display or at the target location) did not modify this facilitation. Furthermore, the same pattern of facilitation was evident whether the observer fixated centrally or moved their eyes to the target. The results were not altered by changes in the contrast (and therefore visibility) of the visual stimulus or by the perceived utility of the spatial location of the sound. We speculate that the auditory signal may promote attentional 'disengagement' and that, as a result, observers are able to process the visual target sooner when sound accompanies the display relative to when visual information is presented alone.

12.
MacProbe is a program that turns an Apple Macintosh with a 68020 processor or greater and a floating point unit into an experimenter’s workstation for implementing a large class of experimental paradigms characteristic of the interdisciplinary fields constituting the cognitive sciences. The core of MacProbe is a structured, interpreted programming language with over 200 high-level commands that provide support for all facets of experimentation from design and presentation of visual and auditory probes, to real-time experiment control, to the analyses and management of experimental data and the presentation of results. The programming language is supplemented by a graphical user interface for such tasks as text and waveform editing and determining the placement of visual probes.
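To give a flavour of the kind of trial scripting such an experiment-control language handles, here is a bare-bones Python analogue of a probe-presentation and reaction-time loop. It is not MacProbe syntax, and every name in it is illustrative.

```python
# Not MacProbe code: a minimal Python analogue of the probe presentation,
# reaction-time capture, and data summary that an experiment-control
# script performs. Console I/O stands in for real display/response hardware.
import random
import time

def run_trial(probe):
    """Present a text probe and time the keyboard response."""
    print(probe)                       # stand-in for a visual probe
    start = time.perf_counter()
    input("respond with Enter: ")      # stand-in for a response key
    return time.perf_counter() - start

probes = random.sample(["X", "O"] * 5, 10)   # shuffled 10-trial block
rts = [run_trial(p) for p in probes]
print(f"mean RT: {sum(rts) / len(rts):.3f} s")
```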

13.
Using a spatial task-switching paradigm and controlling the salience of the visual and auditory stimuli, this study examined the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support the biased-competition account: in cross-modal audiovisual interaction, visual stimuli are the more salient and hold a processing advantage during multisensory integration.

14.
Using a visual and an acoustic sample set that appeared to favour the auditory modality of the monkey subjects, in Experiment 1 retention gradients generated in closely comparable visual and auditory matching (go/no-go) tasks revealed a more durable short-term memory (STM) for the visual modality. In Experiment 2, potentially interfering visual and acoustic stimuli were introduced during the retention intervals of the auditory matching task. Unlike the case of visual STM, delay-interval visual stimulation did not affect auditory STM. On the other hand, delay-interval music decreased auditory STM, confirming that the monkeys maintained an auditory trace during the retention intervals. Surprisingly, monkey vocalizations injected during the retention intervals caused much less interference than music. This finding, which was confirmed by the results of Experiments 3 and 4, may be due to differential processing of “arbitrary” (the acoustic samples) and species-specific (monkey vocalizations) sounds by the subjects. Although less robust than visual STM, auditory STM was nevertheless substantial, even with retention intervals as long as 32 sec.

15.
Contrasting the traditional focus on alcohol‐related visual images, this study examined the impact of both alcohol‐related auditory cues and visual stimuli on inhibitory control (IC). Fifty‐eight participants completed a Go/No‐Go Task, with alcohol‐related and neutral visual stimuli presented with or without short or continuous auditory bar cues. Participants performed worse when presented with alcohol‐related images and auditory cues. Problematic alcohol consumption and higher effortful control (EC) were associated with better IC performance for alcohol images. It is postulated that those with higher EC may be better able to ignore alcohol‐related stimuli, while those with problematic alcohol consumption are unconsciously less attuned to these. This runs contrary to current dogma and highlights the importance of examining both auditory and visual stimuli when investigating IC.

16.
In two experiments, we investigated how the number of auditory stimuli affected the apparent motion induced by visual stimuli. The multiple visual stimuli that induced the apparent motion on the frontoparallel plane, or in the depth dimension in terms of the binocular disparity cue, were accompanied by multiple auditory stimuli. Observers reported the number of visual stimuli (Experiments 1 and 2) and the displacement of the apparent motion that was defined by the distance between the first and last visual stimuli (Experiment 2). When the number of auditory stimuli was more/less than that of the visual stimuli, observers tended to perceive more/less visual stimuli and a larger/smaller displacement than when the numbers of the auditory and visual stimuli were the same, regardless of the dimension of motion. These results suggest that auditory stimulation may modify the visual processing of motion by modulating the spatiotemporal resolution and extent of the displacement.

17.
When making decisions as to whether or not to bind auditory and visual information, temporal and stimulus factors both contribute to the presumption of multimodal unity. In order to study the interaction between these factors, we conducted an experiment in which auditory and visual stimuli were placed in competitive binding scenarios, whereby an auditory stimulus was assigned to either a primary or a secondary anchor in a visual context (VAV) or a visual stimulus was assigned to either a primary or secondary anchor in an auditory context (AVA). Temporal factors were manipulated by varying the onset of the to-be-bound stimulus in relation to the two anchors. Stimulus factors were manipulated by varying the magnitudes of the visual (size) and auditory (intensity) signals. The results supported the dominance of temporal factors in auditory contexts, in that effects of time were stronger in AVA than in VAV contexts, and stimulus factors in visual contexts, in that effects of magnitude were stronger in VAV than in AVA contexts. These findings indicate the precedence for temporal factors, with particular reliance on stimulus factors when the to-be-assigned stimulus was temporally ambiguous. Stimulus factors seem to be driven by high-magnitude presentation rather than cross-modal congruency. The interactions between temporal and stimulus factors, modality weighting, discriminability, and object representation highlight some of the factors that contribute to audio–visual binding.

18.
Using a probe-recognition technique the signal detection theory parameters d' and beta were estimated for three types of probe (common surnames, uncommon surnames and synonyms) for material contained in a prose passage. Subjects were presented with the prose passage either in the presence of noise (85 dBA) or in quiet (60 dBA). In two experiments the effects of noise on auditory and visual presentation of the passage were studied. In both cases the recognition test took place in quiet. Noise decreased values of beta for rare names and increased beta for common names in both auditory and visual versions of the task. Noise influenced d' values in the auditory version only, with d' increasing for common names in loud noise. The results support the view that noise influences performance by disturbing the pigeon-holing mechanism with the qualification that when material may not be recapitulated (as in the auditory presentation in the present study) greater attention may be allocated to easily recognizable material. The findings give little support to theories of noise-induced deficits in performance based on the masking of inner speech.
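The detection-theory quantities the abstract manipulates follow directly from hit and false-alarm rates. Below is a minimal sketch under the standard equal-variance Gaussian model; the example rates are invented for illustration.

```python
# Signal detection measures from hit (H) and false-alarm (FA) rates under
# the equal-variance Gaussian model: d' = z(H) - z(FA), ln(beta) = c * d'.
import math
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Return (d_prime, beta) for one probe type."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f              # sensitivity to the probe type
    c = -(z_h + z_f) / 2             # criterion placement
    beta = math.exp(c * d_prime)     # likelihood-ratio criterion
    return d_prime, beta

# e.g., rare names recognised liberally in noise: H = .80, FA = .30
print(sdt_measures(0.80, 0.30))      # d' ~ 1.37, beta ~ 0.81 (< 1 = liberal)
```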

19.
The metronome response task (MRT)—a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome—was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers’ designs require a visual-based primary task.
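The response-variability index both tasks rest on can be sketched as the spread of response-to-onset asynchronies in a window preceding each probe. The windowing and names below are assumptions for illustration, not the published analysis.

```python
# Hedged sketch: response variability as the standard deviation of
# response-to-metronome asynchronies over a sliding window of responses.
# The 5-response window is an assumed choice, not the papers' parameter.
import numpy as np

def response_variability(onset_times, response_times, window=5):
    """SD of asynchronies over the `window` responses before each position."""
    asynchronies = np.asarray(response_times) - np.asarray(onset_times)
    return np.array([asynchronies[i - window:i].std()
                     for i in range(window, len(asynchronies) + 1)])

# A 1.3-s metronome with jittered (mind-wandering-like) responses
onsets = np.arange(20) * 1.3
rng = np.random.default_rng(0)
responses = onsets + rng.normal(0.0, 0.05, size=onsets.size)
print(response_variability(onsets, responses)[:3])
```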

20.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.
