Similar documents
 20 similar documents found (search time: 46 ms)
1.
Models of duration bisection have focused on the effects of stimulus spacing and stimulus modality. However, interactions between stimulus spacing and stimulus modality have not been examined systematically. Two duration bisection experiments that address this issue are reported. Experiment 1 showed that stimulus spacing influenced the classification of auditory, but not visual, stimuli. Experiment 2 used a wider stimulus range, and showed stimulus spacing effects for both visual and auditory stimuli, although the effects were larger for auditory stimuli. A version of Temporal Range Frequency Theory was applied to the data, and was used to demonstrate that the qualitative pattern of results can be captured with the single assumption that the durations of visual stimuli are less discriminable from one another than are the durations of auditory stimuli.
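Temporal Range Frequency Theory builds on Parducci's classic range–frequency compromise, in which a stimulus's judged magnitude blends its position within the stimulus range with its rank in the presented set. The sketch below illustrates only that general compromise, not the paper's actual model: the function, the equal weighting, the 0.5 criterion, and the stimulus sets are all illustrative assumptions.

```python
def range_frequency_judgment(stimuli, w=0.5):
    """Range-frequency compromise (Parducci): the judged magnitude of
    each stimulus is a weighted blend of its range value (position
    within the min-max span) and its frequency value (its rank in the
    presented set).  Assumes distinct stimulus values."""
    lo, hi = min(stimuli), max(stimuli)
    ranked = sorted(stimuli)
    n = len(stimuli)
    out = {}
    for s in stimuli:
        r = (s - lo) / (hi - lo)           # range value, 0..1
        f = ranked.index(s) / (n - 1)      # frequency (rank) value, 0..1
        out[s] = w * r + (1 - w) * f
    return out

# Positively skewed spacing (durations bunched near the short end):
skewed = [200, 250, 300, 400, 800]
# Evenly spaced set over the same 200-800 ms range:
even = [200, 350, 500, 650, 800]

j_skewed = range_frequency_judgment(skewed)
j_even = range_frequency_judgment(even)
```

Because the frequency term depends only on ranks, bunching durations near the short end inflates the judged magnitude of a mid-range duration (400 ms ranks fourth of five in the skewed set), which is the kind of spacing effect the abstract describes.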

2.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

3.
The joint effects of stimulus modality, stimulus intensity, and foreperiod (FP) on simple RT were investigated. In Experiment 1 an interaction was found between stimulus intensity, both visual and auditory, and a variable FP, such that the intensity effect on RT was largest at the shortest FP. Experiment 2 provided a successful replication with smaller and weaker visual stimuli. No interaction was observed with a constant FP, although the visual stimuli were identical, and the auditory ones psychophysically equivalent, to the visual stimuli of Experiment 1. It is proposed that an additive or interactive relationship between stimulus intensity and FP can be inferred only when the mental processes called for by the various uses of FP are simultaneously considered. Another precondition is an adequate sampling of the intensity continuum, with special reference to the retinal size of visual stimuli.

4.
Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of auditory and visual stimuli was assessed during a subsequent testing phase. In Experiment 2, the familiarity of the auditory or visual stimulus was systematically manipulated by prefamiliarizing infants to either the auditory or visual stimulus prior to the experiment proper. With the exception of the prefamiliarized auditory condition in Experiment 2, infants in the multimodal conditions failed to increase looking when the visual component changed at test. This finding is noteworthy given that infants discriminated the same visual stimuli when presented unimodally, and there was no evidence that multimodal presentation attenuated auditory processing. Possible factors underlying these effects are discussed.

5.
This experiment investigated the effect of modality on temporal discrimination in children aged 5 and 8 years and adults using a bisection task with visual and auditory stimuli ranging from 200 to 800 ms. In the first session, participants were required to compare stimulus durations with standard durations presented in the same modality (within-modality session), and in the second session in different modalities (cross-modal session). Psychophysical functions were orderly in all age groups, with the proportion of long responses (judgement that a duration was more similar to the long than to the short standard) increasing with the stimulus duration, although functions were flatter in the 5-year-olds than in the 8-year-olds and adults. Auditory stimuli were judged to be longer than visual stimuli in all age groups. The statistical results and a theoretical model suggested that this modality effect was due to differences in the pacemaker speed of the internal clock. The 5-year-olds also judged visual stimuli as more variable than auditory ones, indicating that their temporal sensitivity was lower in the visual than in the auditory modality.
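The pacemaker-speed account can be caricatured as a pacemaker-accumulator simulation: ticks accumulate faster during auditory than visual stimuli, while the bisection criterion is stored in ticks at a common remembered rate. This is an illustrative sketch under invented assumptions (all rates, the scalar noise level, and the geometric-mean criterion are ours, not the authors' fitted model):

```python
import math
import random

def p_long(duration_ms, rate_hz, memory_rate_hz=110,
           short_ms=200, long_ms=800, noise_cv=0.2,
           trials=10_000, seed=0):
    """Pacemaker-accumulator sketch: ticks accumulate at `rate_hz`
    during the stimulus and the noisy count is compared against a
    criterion placed at the geometric mean of the short/long standards,
    stored in ticks at a shared remembered rate.  A faster pacemaker
    yields more ticks for the same physical duration, so the stimulus
    is classified 'long' more often."""
    rng = random.Random(seed)
    criterion = memory_rate_hz * math.sqrt(short_ms * long_ms) / 1000
    mean_ticks = rate_hz * duration_ms / 1000
    longs = 0
    for _ in range(trials):
        ticks = rng.gauss(mean_ticks, noise_cv * mean_ticks)  # scalar noise
        longs += ticks > criterion
    return longs / trials

# Illustrative rates: a faster internal clock for auditory stimuli.
p_aud = p_long(400, rate_hz=120)
p_vis = p_long(400, rate_hz=100)
```

With these assumed rates, a 400-ms tone accumulates more ticks than a 400-ms light, so the tone is classified "long" more often: the direction of the modality effect the abstract reports.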

6.
The visual confusability of uppercase letters was manipulated in a successive same-different task to study the conditions under which visual generation from auditory inputs would occur and to investigate the figural specificity of the generated representations. Prior experiments have shown that visual confusions do occur when the initial stimulus is auditory and the second one is visual, which indicates that auditory stimuli can be encoded into visual forms. There has been some suggestion, however, that the generated visual code may have been too abstract to differentiate between the two cases in which letters can appear. In the present experiment, although the confusion effect was not eliminated when the subjects had no advance knowledge regarding the case in which the visual stimulus would appear, the marked confusion effect obtained when the visual stimulus was an uppercase letter was substantially attenuated when the letter appeared in lowercase. This was taken to indicate that the visual characteristics of a generated visual representation may be relatively specific. The results also suggested that subjects may wait until after the second stimulus is presented before they generate the visual representation of the initial auditory stimulus.

7.
Four experiments examined transfer of noncorresponding spatial stimulus-response associations to an auditory Simon task for which stimulus location was irrelevant. Experiment 1 established that, for a horizontal auditory Simon task, transfer of spatial associations occurs after 300 trials of practice with an incompatible mapping of auditory stimuli to keypress responses. Experiments 2-4 examined transfer effects within the auditory modality when the stimuli and responses were varied along vertical and horizontal dimensions. Transfer occurred when the stimuli and responses were arrayed along the same dimension in practice and transfer but not when they were arrayed along orthogonal dimensions. These findings indicate that prior task-defined associations have less influence on the auditory Simon effect than on the visual Simon effect, possibly because of the stronger tendency for an auditory stimulus to activate its corresponding response.

8.
Visual dominance in the pigeon
In Experiment 1, three pigeons were trained to obtain grain by depressing one foot treadle in the presence of a 746-Hertz tone stimulus and by depressing a second foot treadle in the presence of a red light stimulus. Intertrial stimuli included white light and the absence of tone. The latencies to respond on auditory element trials were as fast, or faster, than on visual element trials, but pigeons always responded on the visual treadle when presented with a compound stimulus composed of the auditory and visual elements. In Experiment 2, pigeons were trained on the auditory-visual discrimination task using as trial stimuli increases in the intensity of auditory or visual intertrial stimuli. Again, pigeons showed visual dominance on subsequent compound stimulus test trials. In Experiment 3, on compound test trials, the onset of the visual stimulus was delayed relative to the onset of the auditory stimulus. Visual treadle responses generally occurred with delay intervals of less than 500 milliseconds, and auditory treadle responses generally occurred with delay intervals of greater than 500 milliseconds. The results are discussed in terms of Posner, Nissen, and Klein's (1976) theory of visual dominance in humans.

9.
A further experiment is reported on reaction times to stimuli separated by short intervals. On this occasion an auditory stimulus was followed by a visual stimulus. Results indicate that the pattern of delays at short intervals is the same as the pattern of delays when the stimuli are presented in one modality only. This suggests a model of the human operator functioning as a single channel through which information from both sense modalities has to pass before appropriate responses are organized. An attempt is also made to reconcile data with the known facts about the peripheral and central components of reaction time and the possibility that delays are the result of occupation of the channel for a central time plus a central refractory time is suggested.

10.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

11.
Studies on teaching tacts to individuals with autism spectrum disorder (ASD) have primarily focused on visual stimuli, despite published clinical recommendations to teach tacts of stimuli in other sensory domains as well. In the current study, two children with ASD were taught to tact auditory stimuli under two stimulus-presentation arrangements: isolated (auditory stimuli presented without visual cues) and compound (auditory stimuli presented with visual cues). Results indicate that compound stimulus presentation was a more effective teaching procedure, but that it interfered with prior object-name tacts. A modified compound arrangement in which object-name tact trials were interspersed with auditory-stimulus trials mitigated this interference.

12.
In the McGurk effect, visual information specifying a speaker’s articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli—the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects’ auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words pluck and bow were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information’s having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

13.
The ability of an experimentally experienced female California sea lion to form transitive relations across sensory modalities was tested using a matching-to-sample procedure. The subject was trained by trial-and-error, using differential reinforcement, to relate an acoustic sample stimulus to one member from each of two previously established visual classes. Once the two auditory–visual relations were formed, she was tested to determine whether untrained transitive relations would emerge between each of the acoustic stimuli and the remaining stimuli of each 10-member visual class. During testing, the sea lion demonstrated immediate transfer by responding correctly on 89% of the 18 novel transfer trials compared to 88% on familiar baseline trials. We then repeated this training and transfer procedure twice more with new auditory–visual pairings with similar positive results. Finally, the six explicitly trained auditory–visual relations and the 56 derived auditory–visual relations were intermixed in a single session, and the subject’s performance remained stable at high levels. This sea lion’s transfer performance indicates that a nonhuman animal is capable of forming new associations through cross-modal transitivity.

14.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

15.
The ability of auditory stimuli to modulate rats' tendency to orient to visual targets was assessed. In Experiment 1, trials where an auditory stimulus (A) signaled one visual array (X) were intermixed with unsignaled presentations of a second array (Y). Comparison of the orienting responses (ORs) to X and Y revealed that A produced a transient (unconditioned) and an emerging (conditioned) disruptive influence on the OR to X. In Experiments 2 and 3, trials where A signaled X were intermixed with others where another auditory stimulus (B) signaled Y. Stimulus A's ability to modulate the OR to X was then assessed by presenting A prior to test arrays containing both X and Y. Control rats were more likely to orient to Y than X (Experiments 2 and 3) and rats with excitotoxic lesions of the hippocampus were more likely to orient to X than Y (Experiment 3). These results show that auditory stimuli exert distinct modulatory influences on the OR to visual stimuli with which they are associated.

16.
When making decisions as to whether or not to bind auditory and visual information, temporal and stimulus factors both contribute to the presumption of multimodal unity. In order to study the interaction between these factors, we conducted an experiment in which auditory and visual stimuli were placed in competitive binding scenarios, whereby an auditory stimulus was assigned to either a primary or a secondary anchor in a visual context (VAV) or a visual stimulus was assigned to either a primary or secondary anchor in an auditory context (AVA). Temporal factors were manipulated by varying the onset of the to-be-bound stimulus in relation to the two anchors. Stimulus factors were manipulated by varying the magnitudes of the visual (size) and auditory (intensity) signals. The results supported the dominance of temporal factors in auditory contexts, in that effects of time were stronger in AVA than in VAV contexts, and stimulus factors in visual contexts, in that effects of magnitude were stronger in VAV than in AVA contexts. These findings indicate the precedence for temporal factors, with particular reliance on stimulus factors when the to-be-assigned stimulus was temporally ambiguous. Stimulus factors seem to be driven by high-magnitude presentation rather than cross-modal congruency. The interactions between temporal and stimulus factors, modality weighting, discriminability, and object representation highlight some of the factors that contribute to audio–visual binding.

17.
Simultaneous auditory discrimination.
Stimuli in many visual stimulus control studies typically are presented simultaneously; in contrast, the stimuli in auditory discrimination studies are presented successively. Many everyday auditory stimuli that control responding occur simultaneously. This suggests that simultaneous auditory discriminations should be readily acquired. The purpose of the present experiment was to train rats in a simultaneous auditory discrimination. The apparatus consisted of a cage with two response levers mounted on one wall and a speaker mounted adjacent to each lever. A feeder was mounted on the opposite wall. In a go-right/go-left procedure, two stimuli were presented on each trial, a wide-band noise burst through one speaker and a 2-kHz complex signal through the other. The stimuli alternated randomly from side to side across trials, and the stimulus correlated with reinforcement for presses varied across subjects. The rats acquired the discrimination in 400 to 700 trials, and no response position preference developed during acquisition. The ease with which the simultaneous discrimination was acquired suggests that procedures, such as matching to sample, that require simultaneous presentation of stimuli can be used with auditory stimuli in animals having poor vision.

18.
The interaction between nonassociative learning (presentation frequencies) and associative learning (reinforcement rates) in stimulus discrimination performance was investigated. Subjects were taught to discriminate lists of visual pattern pairs. When they chose the stimulus designated as right they were symbolically rewarded and when they chose the stimulus designated as wrong they were symbolically penalised. Subjects first learned one list and then another list. For a "right" group the pairs of the second list consisted of right stimuli from the first list and of novel wrong stimuli. For a "wrong" group it was the other way round. The right group transferred some discriminatory performance from the first to the second list while the control and wrong groups initially only performed near chance with the second list. When the first list involved wrong stimuli presented twice as frequently as right stimuli, the wrong group exhibited a better transfer than the right group. In a final experiment subjects learned lists which consisted of frequent right stimuli paired with scarce wrong stimuli and frequent wrong stimuli paired with scarce right stimuli. In later test trials these stimuli were shown in new combinations and additionally combined with novel stimuli. Subjects preferred to choose the most rewarded stimuli and to avoid the most penalised stimuli when the test pairs included at least one frequent stimulus. With scarce/scarce or scarce/novel stimulus combinations they performed less well or even chose randomly. A simple mathematical model that ascribes stimulus choices to a Cartesian combination of stimulus frequency and stimulus value succeeds in matching all these results with satisfactory precision.
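One way to read "a Cartesian combination of stimulus frequency and stimulus value" is to represent each stimulus as a (frequency, value) coordinate and let familiarity temper the learned value. The sketch below is purely illustrative (the scoring rule, the softmax temperature, and all numbers are our assumptions, not the paper's fitted model), but it reproduces the qualitative pattern of confident choices when a frequent stimulus is present and chance performance for novel pairs:

```python
import math

def choice_prob(a, b, beta=2.0):
    """Softmax choice between two stimuli, each given as a
    (frequency, value) pair.  Frequency scales confidence in the
    learned value: a stimulus seen rarely (or never) contributes a
    score near zero, so pairs of unfamiliar stimuli yield
    near-chance choice probabilities."""
    def score(freq, value):
        return value * freq / (1 + freq)   # value tempered by familiarity
    sa, sb = score(*a), score(*b)
    return 1 / (1 + math.exp(-beta * (sa - sb)))  # P(choose a)

frequent_right = (20, +1.0)   # often shown, symbolically rewarded
scarce_wrong   = (2, -1.0)    # rarely shown, symbolically penalised
novel          = (0,  0.0)    # never seen before

p_easy  = choice_prob(frequent_right, scarce_wrong)  # strong preference
p_novel = choice_prob(novel, novel)                  # exactly chance
```

The familiarity term freq / (1 + freq) is one simple saturating choice; any function that grows from 0 toward 1 with presentation count would produce the same qualitative ordering.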

19.
To investigate whether learning to discriminate between visual compound stimuli depends on decomposing them into constituent features, hens were first trained to discriminate four features (red, green, horizontal, vertical) from two dimensions (colour, line orientation). After acquisition, hens were trained with compound stimuli made up from these dimensions in two ways: a separable (line on a coloured background) stimulus and an integral one (coloured line). This compound training included a reversal of reinforcement of only one of the two dimensions (half-reversal). After having achieved the compound stimulus discrimination, a second dimensional training identical to the first was performed. Finally, in the second compound training the other dimension was reversed. Two major results were found: (1) an interaction between the dimension reversed and the type of compound stimulus: in compound training with colour reversal, separable compound stimuli were discriminated worse than integral compounds, and vice versa in compound training with line orientation reversed. (2) Performance in the second compound training was worse than in the first one. The first result points to a similar mode of processing for separable and integral compounds, whereas the second result shows that the whole stimulus is psychologically superior to its constituent features. Experiment 2 repeated Experiment 1 using line orientation stimuli of reversed line and background brightness. Nevertheless, the results were similar to Experiment 1. Results are discussed in the framework of a configural exemplar theory of discrimination that assumes the representation of the whole stimulus situation combined with transfer based on a measure of overall similarity.

20.
Microgenesis as traced by the transient paired-forms paradigm
Two successive, spatially overlapping human faces were exposed for recognition with SOAs ranging from 20 to 160 msec. The subjects effectively perceived one face, which at short SOAs mostly resembled the first stimulus and with increasing SOAs gradually shifted towards the appearance of the second, dimmer stimulus. These results replicated those from the study by Calis et al. and extended them to the experimental conditions of controlled simultaneity of each of the two temporally separate, extremely brief stimuli and to the conditions of personally unfamiliar stimulus-subjects. In the second experiment we employed a direct measurement of the microgenetic focus in real time by using a procedure by which the subjects' judgments about the relative temporal order of the critical visual stimulus and an auditory click were recorded. Via this procedure it was shown that one of the effects of the first visual stimulus is to speed up the microgenetic process for the second stimulus, which then appears subjectively earlier as compared to the single-stimulus control.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号