Similar Articles

20 similar articles found.
1.
According to Stevens' explanation of cross-modal matching, the recruitment-like effects of masking seen in intramodal loudness judgements should be reflected in a brightness-to-loudness matching task. In an experiment with child observers, this failed to occur. The results are explicable in terms of category mediation of the cross-modal, but not the intramodal, task. In support of this account, it is shown that cross-modal judgements are unaffected by explicit category mediation. However, intramodal judgements, explicitly mediated in the same way, produce a pattern of results similar to those obtained in the cross-modal task. The experiments suggest that cross-modal matching does not provide a useful test of loudness recruitment in the bilaterally hearing-impaired.
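Stevens' account treats a cross-modal match as an equation of two power-law magnitudes, which predicts that the matching function is itself a power law whose exponent is the ratio of the two unimodal exponents. A minimal sketch of that prediction, using approximate textbook exponents that are illustrative assumptions, not values from this study:

```python
import numpy as np

# Stevens' power law: psychological magnitude psi = k * I**a.
# Illustrative exponents (approximate textbook values, not from this study):
A_LOUD = 0.6     # loudness vs. sound pressure
A_BRIGHT = 0.33  # brightness vs. luminance

def predicted_match(intensity, a_standard, a_adjusted):
    """Cross-modal match: set the adjusted stimulus J so that
    k2 * J**a_adjusted == k1 * I**a_standard (constants absorbed),
    giving J proportional to I**(a_standard / a_adjusted)."""
    return intensity ** (a_standard / a_adjusted)

intensities = np.logspace(0, 3, 10)
matches = predicted_match(intensities, A_LOUD, A_BRIGHT)

# Recover the matching exponent from the simulated matches (log-log slope);
# it equals the ratio of the two unimodal exponents, 0.6 / 0.33.
slope = np.polyfit(np.log(intensities), np.log(matches), 1)[0]
print(round(slope, 3))
```

The abstract's point is that this prediction failed for the cross-modal task, which is what motivates the category-mediation account.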

2.
In a same-different judgement task with successively presented signals, subjects matched dots in different vertical positions and tones of different frequencies intramodally and intermodally. The first and second stimuli of trials in each of the four modality conditions were drawn from a set consisting of two, three or five alternatives. In all intermodal set size conditions, the dimensions of pitch and vertical position were related by the same equivalence rule. While intramodal performance improvement depended only on the total number of practice trials at matching on the relevant dimensions, intermodal performance improvement appeared to be related to the number of trials of practice with each heteromodal stimulus pairing in a particular set. After performance had approached asymptotic level neither intramodal nor intermodal matching reaction time depended on set size. Mean “same” reaction time was less than mean “different” reaction time, and this difference was greater for intermodal matching than for intramodal matching. The results indicated that intermodal equivalence exists between discrete stimulus values on heteromodal dimensions rather than between the dimensions themselves.

3.
The issues of visual-haptic perceptual equivalence and the impact of nonequivalence upon cross-modal performance were explored. A measure of cross-modal nonequivalence was developed from multidimensional scaling models of the perceptual structures of 24 nonrepresentational three-dimensional stimuli. In Experiment 1 the visual and haptic perceptual structures and measures of cross-modal nonequivalence were shown to be replicable. Experiments 2 and 3 employed sets of stimuli selected as cross-modally similar or dissimilar (based upon the results of Experiment 1) and tested the impact of perceptual nonequivalence upon cross-modal performance with shape information. The experiments used somewhat different tasks and produced converging results. There was poorer cross-modal performance when cross-modally dissimilar stimuli were involved than when only cross-modally similar stimuli were involved, but there was no such pattern for intramodal performance. The findings are related to the theoretical notions of perceptual equivalence (Gibson, 1966; Marks, 1978) and the theoretical and practical importance of understanding the perceptual properties of stimuli used in cognitive tasks (Garner, 1970; Monahan & Lockhead, 1970).
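One standard way to quantify the kind of cross-modal nonequivalence described above is to compare the two modalities' MDS configurations after optimally aligning them. A minimal sketch using Procrustes analysis; the coordinates here are synthetic stand-ins, since the study derived its configurations from similarity judgements:

```python
import numpy as np
from scipy.spatial import procrustes

# Synthetic 2-D "MDS solutions" for 24 stimuli in two modalities:
# the haptic configuration is a noisy copy of the visual one.
rng = np.random.default_rng(0)
visual = rng.normal(size=(24, 2))
haptic = visual + rng.normal(scale=0.3, size=(24, 2))

# Disparity = residual sum of squares after optimal translation,
# scaling, and rotation; 0 would mean perfectly equivalent structures,
# so larger values indicate greater cross-modal nonequivalence.
_, _, disparity = procrustes(visual, haptic)
print(round(disparity, 3))
```

Whether the paper used exactly this alignment step is an assumption; the sketch only illustrates how a single nonequivalence score can be read off two perceptual structures.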

4.
Subjects matched successively presented stimuli within and across modalities. In conditions in which they were informed of the modalities of the two stimuli, no differences in matching performance were obtained between the four types of match (visual-visual, auditory-auditory, visual-auditory, and auditory-visual). Thus, there appeared to be no difference between the modalities in ability to perceive or retain the particular stimuli used. In conditions in which subjects were informed of the modality of the first stimulus but only of the modality in which the second stimulus would appear on 80% of trials, there was again no significant difference between auditory-auditory and visual-visual matching. However, auditory-visual matching was much faster than visual-auditory matching when the second stimulus appeared in the unexpected modality. The results suggest that subjects prepare for both possible types of match when uncertain of the second stimulus modality and that the cross-modal asymmetry reflects the additional attentional load that this incurs.

5.
A series of four experiments explored how cross-modal similarities between sensory attributes in vision and hearing reveal themselves in speeded, two-stimulus discrimination. When subjects responded differentially to stimuli in one modality, speed and accuracy of response were greater on trials accompanied by informationally irrelevant "matching" versus "mismatching" stimuli from the other modality. Cross-modal interactions appeared in (a) responses to dim/bright lights and to dark/light colors accompanied by low-pitched/high-pitched tones; (b) responses to low-pitched/high-pitched tones accompanied by dim/bright lights or by dark/light colors; (c) responses to dim/bright lights, but not to dark/light colors, accompanied by soft/loud sounds; and (d) responses to rounded/sharp forms accompanied by low-pitched/high-pitched tones. These results concur with findings on cross-modal perception, synesthesia, and synesthetic metaphor, which reveal similarities between pitch and brightness, pitch and lightness, loudness and brightness, and pitch and form. The cross-modal interactions in response speed and accuracy may take place at a sensory/perceptual level of processing or after sensory stimuli are encoded semantically.

6.
Techniques for constructing auditory stimulus patterns in tests of cross-modal and intramodal matching ability are discussed. An example of a set of tests is given, and the method of construction for the auditory stimuli is described.

7.
This experiment investigated the effect of modality on temporal discrimination in children aged 5 and 8 years and adults using a bisection task with visual and auditory stimuli ranging from 200 to 800 ms. In the first session, participants were required to compare stimulus durations with standard durations presented in the same modality (within-modality session), and in the second session in different modalities (cross-modal session). Psychophysical functions were orderly in all age groups, with the proportion of long responses (judgement that a duration was more similar to the long than to the short standard) increasing with the stimulus duration, although functions were flatter in the 5-year-olds than in the 8-year-olds and adults. Auditory stimuli were judged to be longer than visual stimuli in all age groups. The statistical results and a theoretical model suggested that this modality effect was due to differences in the pacemaker speed of the internal clock. The 5-year-olds also judged visual stimuli as more variable than auditory ones, indicating that their temporal sensitivity was lower in the visual than in the auditory modality.
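The bisection analysis described above is conventionally done by fitting a psychometric function to the proportion of "long" responses and reading off the bisection point; a flatter fitted curve, as reported for the 5-year-olds, corresponds to lower temporal sensitivity. A minimal sketch with made-up data (not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative proportions of "long" responses across the 200-800 ms range;
# these numbers are invented for the sketch.
durations = np.array([200, 300, 400, 500, 600, 700, 800])  # ms
p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98])

def logistic(t, bp, steepness):
    """bp = bisection point (duration judged 'long' on 50% of trials);
    steepness indexes temporal sensitivity -- flatter functions,
    as in the 5-year-olds, give a smaller value."""
    return 1.0 / (1.0 + np.exp(-steepness * (t - bp)))

(bp, steepness), _ = curve_fit(logistic, durations, p_long, p0=(500, 0.01))
print(round(bp), round(steepness, 4))
```

A modality effect like the auditory-longer-than-visual finding would show up as a leftward shift of the auditory curve, i.e. a smaller fitted bisection point for auditory stimuli.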

8.
Accuracy in matching was investigated cross-modally (touch and vision) and intra-modally. The stimuli were surfaces of sandpaper. Two experiments showed (a) that matched stimuli are linearly related to the standard stimuli for the two intra-modal conditions, whereas non-linear relations (power functions) were obtained in the two cross-modal conditions (touch-vision and vision-touch), (b) that the variance of matched stimuli is approximately three times as large when matching is performed cross-modally as compared to intra-modal matching. Data were analyzed according to a model which suggests that the correlation between discriminal dispersions belonging to the same modality is higher than the correlation between discriminal dispersions arising from two modalities.
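The two candidate matching functions in this abstract are easy to distinguish analytically: a power function y = k·xⁿ becomes linear in log-log coordinates, so a log-log regression recovers its exponent, while the intramodal relation stays linear in raw coordinates. A minimal sketch with synthetic data (the coefficients are arbitrary, not the study's estimates):

```python
import numpy as np

# Synthetic standard magnitudes (arbitrary roughness units).
standards = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

linear_matches = 0.9 * standards + 0.5   # intramodal pattern: linear
power_matches = 1.2 * standards ** 0.7   # cross-modal pattern: power function

# Exponent of the power relation via the log-log slope:
exponent = np.polyfit(np.log(standards), np.log(power_matches), 1)[0]

# Slope of the linear relation in raw coordinates:
raw_slope = np.polyfit(standards, linear_matches, 1)[0]
print(round(exponent, 2), round(raw_slope, 2))
```

Fitting both forms and comparing residuals is one way the linear-versus-power contrast reported here could be tested on real matching data.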

9.
One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (i.e., within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were either given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.

10.
Two groups of children, one of below average intelligence and one of above average intelligence, were administered nine tasks involving matching information between auditory and visual modalities and temporal and spatial presentations. The below average intelligence children made more matching errors than the above average intelligence group, with no indication that matching problems were particular to one modality or particular to cross-modal as opposed to intramodal matching. Factor analyses did suggest, however, that matching processes varied according to levels of intelligence. The results are discussed in terms of the differences between modality-specific and nonmodal theoretical views of sensory systems, and it is noted that neither view alone is adequate to account for the variations found for the two intelligence groups. It is suggested that the results are consistent with Luria's research on neurological organization.

11.
Extant results motivate 3 hypotheses on the role of attention in perceptual implicit memory. The first proposes that only intramodal manipulations of attention reduce perceptual priming. The second attributes reduced priming to the effects of distractor selection operating in a central bottleneck process. The third proposes that manipulations of attention only affect priming via disrupted stimulus identification. In Experiment 1, a standard cross-modal manipulation did not disrupt priming in perceptual identification. However, when study words and distractors were presented synchronously, cross-modal and intramodal distraction reduced priming. Increasing response frequency in the distractor task produced effects of attention regardless of target-distractor synchrony. These effects generalized to a different category of distractors, arguing against domain-specific interference. The results support the distractor-selection hypothesis.

12.
The purpose of the present study was to investigate possible effects of exposure upon suprathreshold psychological responses when auditory magnitude estimation and cross-modal matching with audition as the standard are conducted within the same experiment. Four groups of 10 subjects each, with an overall age range of 18 to 23 years, were employed. During the cross-modal matching task, the Groups 1 and 2 subjects adjusted a vibrotactile stimulus presented to the dorsal surface of the tongue, and the Groups 3 and 4 subjects adjusted a vibrotactile stimulus presented to the thenar eminence of the right hand, to match binaurally presented auditory stimuli. The magnitude-estimation task was conducted before the cross-modal matching task for Groups 1 and 3, and the cross-modal matching task was conducted before the magnitude-estimation task for Groups 2 and 4. The psychophysical methods of magnitude estimation and cross-modal matching showed no effect of one upon the other when used in the same experiment.

13.
Whispered speech is very different acoustically from normally voiced speech, yet listeners appear to have little trouble perceiving whispered speech. Two selective adaptation experiments explored the basis for the common perception of whispered and voiced speech, using two synthetic /ba/-/wa/ continua (one voiced, and one whispered). In the first experiment the endpoints of each series were used as adaptors, along with several nonspeech adaptors. Speech adaptors produced reliable labeling shifts of syllables matching in periodicity (i.e., whispered-whispered or voiced-voiced); somewhat smaller effects were found with mismatched periodicity. A periodic nonspeech tone with short rise time (the "pluck") produced adaptation effects like those for /ba/. These shifts occurred for whispered test syllables as well as voiced ones, indicating a common abstract level of representation for voiced and whispered stimuli. Experiment 2 replicated and extended Experiment 1, using same-ear and cross-ear adaptation conditions. There was perfect cross-ear transfer of the nonspeech adaptation effect, again implicating an abstract level of representation. The results support the existence of two levels of processing for complex acoustic signals. The commonality of whispered and voiced speech arises at the second, abstract level. Both this level, and the earlier, more directly acoustic level, are susceptible to adaptation effects.

14.
In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.

15.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as crossmodal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the crossmodal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the crossmodal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

16.
Efficient navigation of our social world depends on the generation, interpretation, and combination of social signals within different sensory systems. However, the influence of healthy adult aging on multisensory integration of emotional stimuli remains poorly explored. This article comprises 2 studies that directly address issues of age differences on cross-modal emotional matching and explicit identification. The first study compared 25 younger adults (19-40 years) and 25 older adults (60-80 years) on their ability to match cross-modal congruent and incongruent emotional stimuli. The second study looked at performance of 20 younger (19-40) and 20 older adults (60-80) on explicit emotion identification when information was presented congruently in faces and voices or only in faces or in voices. In Study 1, older adults performed as well as younger adults on tasks in which congruent auditory and visual emotional information were presented concurrently, but there were age-related differences in matching incongruent cross-modal information. Results from Study 2 indicated that though older adults were impaired at identifying emotions from 1 modality (faces or voices alone), they benefited from congruent multisensory information as age differences were eliminated. The findings are discussed in relation to social, emotional, and cognitive changes with age.

17.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
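The clock-rate mechanism invoked here (and in the bisection study above) comes from scalar expectancy theory: subjective duration is the number of pacemaker pulses accumulated during an event, so any context that speeds the pacemaker lengthens perceived duration. A deterministic toy sketch of that idea; the rates are arbitrary illustrations, and the full model adds scalar variability to the pulse count:

```python
# Pacemaker-accumulator sketch: subjective duration = pulses accumulated
# while the event is timed. A faster pacemaker yields a longer estimate
# of the same physical duration.
def perceived_duration(real_ms, pulses_per_ms):
    """Pulse count for an event of real_ms milliseconds (deterministic
    simplification; scalar expectancy theory adds noise to this count)."""
    return real_ms * pulses_per_ms

BASE_RATE = 1.0     # arbitrary baseline pacemaker rate
SPED_UP_RATE = 1.1  # e.g., clock rate increased by auditory context

baseline_estimate = perceived_duration(500, BASE_RATE)
sped_up_estimate = perceived_duration(500, SPED_UP_RATE)
print(baseline_estimate, sped_up_estimate)  # sped-up clock -> longer estimate
```

This is the sense in which the abstract's Experiments 3 and 4 can attribute audiovisual context effects to a change in internal clock rate rather than to shifted perceived onsets and offsets.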

18.
In Experiments 1 and 2, we investigated long-term repetition priming effects in Serbian under cross-alphabet and cross-modal conditions. In both experiments, results followed the same pattern: significant priming in all conditions and no significant reduction in priming in the cross-modal as opposed to the cross-alphabet condition. These results are different from those obtained in English (Experiment 3), in which a modality shift led to a reduction in priming. The findings are discussed within a theoretical framework, in which long-term priming is a by-product of learning within the language system. A full list of word stimuli for all three experiments presented in this article can be found at www.psychonomic.org/archive.

19.
Barsalou has recently argued against the strategy of identifying amodal neural representations by using their cross-modal responses (i.e., their responses to stimuli from different modalities). I agree that there are indeed modal structures that satisfy this “cross-modal response” criterion (CM), such as distributed and conjunctive modal representations. However, I argue that we can distinguish between modal and amodal structures by looking into differences in their cross-modal responses. A component of a distributed cell assembly can be considered unimodal because its responses to stimuli from a given modality are stable, whereas its responses to stimuli from any other modality are not (i.e., these are lost within a short time, plausibly as a result of cell assembly dynamics). In turn, conjunctive modal representations, such as superior colliculus cells in charge of sensory integration, are multimodal because they have a stable response to stimuli from different modalities. Finally, some prefrontal cells constitute amodal representations because they exhibit what has been called ‘adaptive coding’. This implies that their responses to stimuli from any given modality can be lost when the context and task conditions are modified. We cannot assign them a modality because they have no stable relation with any input type.

Abbreviations: CM: cross-modal response criterion; CCR: conjunctive cross-modal representations; fMRI: functional magnetic resonance imaging; MVPA: multivariate pattern analysis; pre-SMA: pre-supplementary motor area; PFC: prefrontal cortex; SC: superior colliculus; GWS: global workspace

20.
On the cross-modal perception of intensity
Are cross-modality matches based on absolute equivalences between the intensities of perceptual experiences in different senses, or are they based on relative positions within the respective sets of stimuli? To help answer this question, we conducted a series of three experiments; in each the levels of stimulus magnitude in one modality stayed constant while the levels in the other changed from session to session. Results obtained by two methods, magnitude matching and cross-modal difference estimation, agreed in revealing the following: First, the cross-modality matches seem to represent in all cases a compromise between absolute equivalence and relative (contextual) comparison, the compromise being about 50-50 for both auditory loudness versus vibratory loudness and auditory loudness versus visual brightness, but more nearly, though not wholly, absolute for perceived auditory duration versus visual duration. Second, individual variations abounded, with some subjects evidencing totally absolute matching, others totally relative matching (with little consistency, however, between tasks or between comparisons of different pairs of modalities). Third, the judgments of cross-modal difference were consistent with a model of linear subtraction, and in the case of loudness, the underlying scale was roughly compatible with Stevens's sone scale. Finally, a model designed to describe sequential dependencies in response can account for at least part of the context-induced changes in cross-modal equivalence.
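The "about 50-50 compromise" reported above can be expressed as a weighted mixture of two predictions: what a purely absolute equivalence would produce and what a purely relative (position-in-set) comparison would produce, with the weight estimated by least squares from the observed matches. A minimal sketch with invented numbers; the mixture form is one natural reading of the abstract, not necessarily the authors' exact model:

```python
import numpy as np

# Predictions for five stimulus levels under the two pure strategies
# (all values are illustrative, not from the study):
absolute = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # absolute-equivalence prediction
relative = np.array([25.0, 27.5, 30.0, 32.5, 35.0])   # relative-position prediction

# Simulated observed matches: an exact 50-50 compromise.
true_w = 0.5
observed = true_w * absolute + (1 - true_w) * relative

# Least-squares estimate of the weight w from the observed matches:
# observed - relative = w * (absolute - relative).
x = absolute - relative
y = observed - relative
w_hat = float(np.dot(x, y) / np.dot(x, x))
print(round(w_hat, 2))
```

Fitting w per subject would also capture the individual differences the abstract describes: w near 1 for the totally absolute matchers and near 0 for the totally relative ones.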
