Similar Literature
20 matching documents found (search time: 46 ms)
1.
Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio–visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio–visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, responded more strongly to Ellipse-Speech and Circle-Speech, whereas regions identified as multi-modal for Ellipse-Speech always responded most strongly to Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.

2.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.

3.
Correlational research investigating the relationship between scores on self-report imagery questionnaires and measures of socially desirable responding has shown only a weak association. However, researchers have argued that this research may have underestimated the size of the relationship because it relied primarily on the Marlowe-Crowne scale (MC; Crowne & Marlowe, Journal of Consulting Psychology, 24, 349-354, 1960), which loads primarily on the least relevant form of socially desirable responding for this particular context, the moralistic bias. Here we report the analysis of data correlating the Vividness of Visual Imagery Questionnaire (VVIQ; Marks, Journal of Mental Imagery, 19, 153-166, 1973) with the Balanced Inventory of Desirable Responding (BIDR; Paulhus, 2002) and the MC scale under anonymous testing conditions. The VVIQ correlated significantly with the Self-Deceptive Enhancement (SDE) and Agency Management (AM) BIDR subscales and with the MC. The largest correlation was with SDE. The ability of SDE to predict VVIQ scores was not significantly enhanced by adding either AM or MC. Correlations between the VVIQ and the BIDR egoistic scales were larger when the BIDR was scored continuously rather than dichotomously. This analysis indicates that the relationship between self-reported imagery and socially desirable responding is likely to be stronger than previously thought.
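The incremental-prediction test described in this abstract (does AM or MC add anything beyond SDE?) amounts to comparing R² between nested regression models. The following is an illustrative sketch only, on simulated data with hypothetical variable names, not the authors' analysis:

```python
import numpy as np

def ols_r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), np.atleast_2d(X.T).T])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
sde = rng.normal(size=n)                    # simulated SDE scores
am = rng.normal(size=n)                     # simulated AM scores (pure noise here)
vviq = 0.4 * sde + rng.normal(size=n)       # simulated VVIQ driven by SDE only

r2_sde = ols_r2(sde, vviq)                                   # baseline: SDE alone
r2_sde_am = ols_r2(np.column_stack([sde, am]), vviq)         # SDE + AM
delta_r2 = r2_sde_am - r2_sde                                # incremental variance explained
```

In this simulated scenario, `delta_r2` stays near zero, mirroring the paper's conclusion that AM adds no significant predictive power beyond SDE; a formal test would compare the nested models with an F-test.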

4.
This study examined the auditory and visual comprehension of Japanese idioms having both literal and figurative meanings. Experiment I obtained ratings of the semantic distance between the two meanings. Experiment II investigated differences in comprehension between semantically distant and semantically close idioms, with the materials presented in isolation, both auditorily and visually. Experiment III repeated this investigation with the idioms embedded in literally and figuratively biased contexts. Experiment IV reexamined the findings obtained in the previous experiments. The results show that, in isolation, comprehension is better with visual than with auditory presentation, and that in both modalities semantically distant idioms are comprehended more accurately than semantically close ones.

5.
Lucia M. Vaina. Synthese, 1990, 83(1): 49-91
In this paper we focus on the modularity of visual functions in the human visual cortex, that is, the specific problems that the visual system must solve in order to achieve recognition of objects and visual space. The computational theory of early visual functions is briefly reviewed and is then used as a basis for suggesting computational constraints on the higher-level visual computations. The remainder of the paper presents neurological evidence for the existence of two visual systems in man, one specialized for spatial vision and the other for object vision. We show further clinical evidence for the computational hypothesis that these two systems consist of several visual modules, some of which can be isolated on the basis of specific visual deficits which occur after lesions to selected areas in the visually responsive brain. We provide examples of visual modules which solve information-processing tasks that are mediated by specific anatomic areas. We show that behavioral data from studies of monkeys (Ungerleider and Mishkin 1984) support the distinction between two visual systems in monkeys: the what system, involved in object vision, and the where system, involved in spatial vision. I thank Carole Graybill for editorial help.

6.
Prior research has established significant relations between measures of sensory ability and cognitive function in adults of different ages, and several explanations for this relation have been proposed. One explanation is that sensory abilities restrict cognitive processing, a second is that cognitive abilities influence assessments of sensory ability, and a third is that both sensory function and cognition are affected by a common, potentially age-based, third factor. These explanations were investigated using mediation and moderation analyses, with near visual acuity as the sensory measure and scores on visual speed tests and auditory memory tests as the cognitive measures. Measures of visual acuity, speed, and memory were obtained from three moderately large samples, two cross-sectional (N?=?380, N?=?4,779) and one longitudinal (N?=?2,258), with participants ranging from 18 to 90 years of age. The visual acuity and cognitive measures had different age trajectories, and the visual acuity–cognition relations were similar in each 5-year age band. The results suggest that the age-related differences and changes in near visual acuity are unlikely to contribute to the age-related differences and changes in speed and memory measures.
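The mediation logic used in this abstract can be sketched in regression terms: if acuity mediated the age–cognition relation, the age coefficient should shrink once acuity is controlled. The code below is an illustrative simulation with hypothetical variable names and effect sizes, set up to match the paper's conclusion (acuity carries little of the age effect); it is not the study's analysis:

```python
import numpy as np

def coefs(X, y):
    """OLS coefficients of y on X, intercept first."""
    X = np.column_stack([np.ones(len(y)), np.atleast_2d(X.T).T])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(18, 90, n)
# Age affects acuity and speed through separate paths in this simulation,
# so acuity should carry almost none of the age effect on speed.
acuity = -0.02 * age + rng.normal(scale=0.5, size=n)
speed = -0.5 * age + rng.normal(scale=5.0, size=n)

c_total = coefs(age, speed)[1]                               # total age effect
c_direct = coefs(np.column_stack([age, acuity]), speed)[1]   # age effect controlling acuity
indirect = c_total - c_direct                                # portion "mediated" by acuity
```

Here `indirect` comes out near zero, the pattern the abstract reports; a real mediation analysis would also bootstrap a confidence interval for the indirect effect.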

7.
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers can search and track within the same trial significantly better than would be predicted if the 2 tasks were mutually exclusive. In fact, the AOC for tracking and search is similar to that for tracking and auditory monitoring. The results of additional experiments support an attention-switching account of this high level of dual-task performance, in which a single attentional resource is efficiently switched between tracking and search. The results provide important constraints for architectures of visual selective attention and the mechanisms of multielement tracking.

8.
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes' images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81–93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) "popped out," producing very shallow Reaction Time (RT) × Set Size functions. In this study, we have revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than with a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features, like luminance and orientation, in visual search tasks.
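The per-item efficiencies quoted above (16 vs. 30 ms/item) are slopes of RT × Set Size functions. A minimal sketch of how such a slope is estimated from mean reaction times, using synthetic numbers rather than data from the study:

```python
import numpy as np

set_sizes = np.array([4, 8, 12, 16])   # items per search display
# Hypothetical mean correct RTs (ms): ~500 ms baseline plus ~16 ms per item,
# with a little measurement noise added by hand
mean_rts = 500 + 16 * set_sizes + np.array([3.0, -2.0, 1.0, -1.0])

# The slope of the RT x Set Size function is the search efficiency in ms/item;
# the intercept reflects set-size-independent processing time
slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rts, 1)
```

A slope near zero is the classic signature of "pop-out," while steeper slopes (tens of ms/item) indicate inefficient, attention-demanding search.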

9.
Hering’s principles of visual direction are summarized axiomatically, and deductions are presented. The deductions allow one to predict when veridical or nonveridical judgments of visual direction should occur and when apparent movement should be seen. These predictions agree with the results obtained in special viewing conditions. However, one of the limitations of the principles is that some of the predictions fail in “normal” viewing conditions.

10.
Attention, Perception, & Psychophysics - This research compared the relative contributions of odor and visual cues in determining young children’s preferences. Thirty-two children were...

11.
12.
Examinations of interference between verbal and visual materials in working memory have produced mixed results. If there is a central form of storage (e.g., the focus of attention; N. Cowan, 2001), then cross-domain interference should be obtained. The authors examined this question with a visual-array comparison task (S. J. Luck & E. K. Vogel, 1997) combined with various verbal memory load conditions. Interference between tasks occurred if there was explicit retrieval of the verbal load during maintenance of a visual array. The effect was localized in the maintenance period of the visual task and was not the result of articulation per se. Interference also occurred when especially large silent verbal and visual loads were held concurrently. These results suggest central storage along with code-specific passive storage.

13.
When making decisions as to whether or not to bind auditory and visual information, temporal and stimulus factors both contribute to the presumption of multimodal unity. In order to study the interaction between these factors, we conducted an experiment in which auditory and visual stimuli were placed in competitive binding scenarios, whereby an auditory stimulus was assigned to either a primary or a secondary anchor in a visual context (VAV) or a visual stimulus was assigned to either a primary or a secondary anchor in an auditory context (AVA). Temporal factors were manipulated by varying the onset of the to-be-bound stimulus in relation to the two anchors. Stimulus factors were manipulated by varying the magnitudes of the visual (size) and auditory (intensity) signals. The results supported the dominance of temporal factors in auditory contexts, in that effects of time were stronger in AVA than in VAV contexts, and of stimulus factors in visual contexts, in that effects of magnitude were stronger in VAV than in AVA contexts. These findings indicate a general precedence of temporal factors, with particular reliance on stimulus factors when the to-be-assigned stimulus was temporally ambiguous. Stimulus factors seem to be driven by high-magnitude presentation rather than by cross-modal congruency. The interactions between temporal and stimulus factors, modality weighting, discriminability, and object representation highlight some of the factors that contribute to audio–visual binding.

14.
Adaptation of perceived movement during head motion (apparent concomitant motion, ACM) and the subsequent elimination of adaptation were studied in two experiments. During the adaptation phase of both experiments, subjects performed voluntary 1-Hz head oscillations for 6 min while fixating a stimulus moving either in the same (with) direction as or the opposite (against) direction of head movements. In Experiment 1, ACM adaptation was measured following either a 1- or a 4-min delay after the adaptation phase. Results indicated some loss of adaptation during the additional 3-min delay, demonstrating a tendency of the system linking head and image to return to its preadaptation state following removal of an adaptation stimulus. In Experiment 2, subjects viewed a stimulus after adaptation that appeared to move minimally in the same manner as the adaptation stimulus during 3 min of head oscillations. No loss of adaptation was measured in these subjects between the beginning and the end of the 3-min interval. In another condition, subjects viewed a stimulus that appeared to move alternately in the same direction as and in the opposite direction of the adaptation stimulus during a similar 3-min interval following adaptation. ACM adaptation was substantially reduced during this 3-min interval. These results implicate two mechanisms that operate to either maintain or eliminate ACM adaptation. One is passive and operates in the absence of visual feedback to eliminate the short-term adapted state, and the other responds to postadaptation visual feedback.

15.
16.
In an attempt to facilitate visual recall when material is presented under bisensory simultaneous conditions (i.e., visual and auditory stimuli are presented together), auditory material was delayed up to 1/4 sec relative to the onset of the visual material. Visual recall, however, remained stable across the auditory delays, suggesting a limitation in the visual system beyond that associated with the simultaneous occurrence of auditory material.

17.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking; looking and pointing; and looking, pointing, and verbalizing) on 9-month-olds’ engagement in shared visual attention. In Experiment 1 we also varied target object locations (i.e., in front of, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parents’ head turns to targets within as well as outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social experience accounts of shared visual attention.

18.
The present experiments introduce a new search technique for disentangling contributions of preattentive guidance and postattentive template matching to search efficiency. Participants performed searches (for negative or positive faces in Experiment 1; pop-out search in Experiment 2; conjunction search in Experiment 3) under either standard viewing conditions or a new restricted viewing condition in which items were occluded by black placeholders and revealed only when a participant moved the mouse pointer over the black square. Under full viewing conditions, search performance can be aided by both preattentive and postattentive mechanisms, whereas the mouse-contingent search relies solely on postattentive template-matching processes. Results demonstrate the utility of this new methodology for distinguishing contributions of preattentive guidance and postattentive template-matching processes in ambiguous search situations. Furthermore, application of the new restricted viewing method to search for emotionally expressive faces suggested that emotional information is processed preattentively and influences the allocation of focal attention.

19.
Skrzypulec, Błażej. Synthese, 2021, 198(3): 2101-2127
Synthese - It is commonly believed that human perceptual experiences can be, and usually are, multimodal. What is more, a stronger thesis is often proposed that some perceptual multimodal...

20.
Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target or a distractor was rated on an emotional scale (patterns, cheerfulness; faces, trustworthiness). In general, distractors were rated more negatively than targets. Moreover, distractors presented near the target during search were rated significantly more negatively than those presented far from the target. Target-distractor proximity affected distractor ratings following both simple-feature and difficult-conjunction search, even when items appeared at different locations during evaluation than during search and when faces previously tinted during search were presented in grayscale at evaluation. An attentional inhibition account is offered to explain these effects of attention on emotional evaluation.


Copyright © 北京勤云科技发展有限公司 | 京ICP备09084417号