Similar Articles
20 similar articles found (search time: 0 ms)
1.
2.
For some stimuli, dynamic changes are crucial for identifying just what the stimuli are. For example, spoken words (or any auditory stimuli) require change over time to be recognized. Kallman and Cameron (1989) have proposed that this sort of dynamic change underlies the enhanced recency effect found for auditory stimuli, relative to visual stimuli. The results of three experiments replicate and extend Kallman and Cameron's finding that dynamic visual stimuli (that is, visual stimuli in which movement is necessary to identify the stimuli), relative to static visual stimuli, engender enhanced recency effects. In addition, an analysis based on individual differences is used to demonstrate that the processes underlying enhanced recency effects for auditory and dynamic visual stimuli are substantially similar. These results are discussed in the context of perceptual grouping processes.

3.
Changes in the distribution of attention among auditory and peripheral visual stimuli were examined in a choice reaction time paradigm. Two variables were manipulated: predictability of stimulus locations and arousal state of subjects. The arousal level of some subjects was raised by occasionally exposing them to brief, mild electric shocks. On most trials either a tone or a light was presented alone (single-stimulus trials). However, on 20% of the trials both a tone and light were presented simultaneously (dual trials). Two dependent variables were used to assess dominance of attention: reaction time (on all trials) and percentage of time each modality was chosen on dual trials. Neither modality was dominant when subjects were in a nonaroused state and stimulus locations were unpredictable. However, peripheral vision dominated when stimulus locations were predictable or when the subjects' level of arousal was raised. The results are discussed with reference to previous research on sensory dominance and on the facilitating or inhibiting effects of auditory stimuli on reaction time.

4.
Selective attention to multidimensional auditory stimuli (cited 3 times: 0 self-citations, 3 by others)
Event-related brain potentials (ERPs) elicited by multidimensional auditory stimuli were recorded from the scalp in a selective-attention task. Subjects listened to tone pips varying orthogonally between two levels each of pitch, location, and duration and responded to longer duration target stimuli having specific values of pitch and location. The discriminability of the pitch and location attributes was varied between groups. By examining the ERPs to tones that shared pitch and/or locational cues with the designated target, we inferred interrelationships among the processing of these attributes. In all groups, stimuli that failed to match the target tone in an easily discriminable cue elicited only transitory ERP signs of selective processing. Tones sharing the "easy" but not the "hard" cue with the target elicited ERPs that indicated more extensive processing, but not as extensive as stimuli sharing both cues. In addition, reaction times and ERP latencies to the designated targets were not influenced by variations in the discriminability of pitch and location. This pattern of results is consistent with parallel, self-terminating models and holistic models of processing and contradicts models specifying either serial or exhaustive parallel processing of these dimensions. Both the parallel, self-terminating models and the holistic models provide a generalized mechanism for hierarchical stimulus selections that achieve an economy of processing, an underlying goal of classic, multiple-stage theories of selective attention.

5.
6.
Auditory thresholds were determined by a modified method of limits, in introverts, ambiverts and extraverts, under three intensities of light. Five determinations were made: one before, one during and three (30 sec., c. 8 and 16 min.) after the light stimulation. An analysis of variance of the data showed significant results for the interval conditions, the personality type and the interaction of these parameters, under the weak light. Interval effects were also significant under medium and strong light. A weak light increased, and a strong light decreased, auditory sensitivity when threshold determinations began 30 sec., but not more than about 8 min., after the light stimulation, in introverts.
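The "modified" method of limits used in this study is not described further; as a rough illustration only, a generic (unmodified) method-of-limits estimate averages the levels at which the listener's response reverses across descending and ascending runs. The sketch below uses entirely hypothetical response data.

# Minimal sketch of threshold estimation by a generic method of limits.
# The threshold of each run is the level at which the yes/no response first
# changes; the overall threshold is the mean of those transition levels.
# All levels and responses here are hypothetical.

def run_threshold(levels, responses):
    """Return the level (dB) at which the yes/no response first changes within a run."""
    for i in range(1, len(levels)):
        if responses[i] != responses[i - 1]:
            return levels[i]
    return levels[-1]  # no reversal observed; fall back to the last level tested

# Hypothetical descending run: start audible, lower the level in 2-dB steps.
descending = ([20, 18, 16, 14, 12, 10], ["yes", "yes", "yes", "yes", "no", "no"])
# Hypothetical ascending run: start inaudible, raise the level in 2-dB steps.
ascending = ([6, 8, 10, 12, 14], ["no", "no", "no", "yes", "yes"])

threshold = sum(run_threshold(levels, resp) for levels, resp in (descending, ascending)) / 2
print(f"Estimated auditory threshold: {threshold:.1f} dB")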

7.
8.
We investigated implicit task sequence learning with auditory stimuli. In previous studies only visual stimuli have been used and thus learning may have been due to visuoperceptual learning. Further, we explored the generality of the correlated streams account, which holds that correlated streams of information are necessary for implicit sequence learning to occur. We used three classification tasks with auditory stimuli. The presence or absence of a task sequence was orthogonally manipulated with that of a response sequence. Sequence-specific learning was found, but only in the condition with both a task and a response sequence. No learning was found in the conditions with a single task sequence and with a single response sequence. These results show that task–response sequence learning occurs with auditory stimuli and that visuoperceptual learning is not necessary. Moreover, they underscore the importance of correlated streams of information for implicit sequence learning.

9.
Emotional and neutral sounds rated for valence and arousal were used to investigate the influence of emotions on timing in reproduction and verbal estimation tasks with durations from 2 s to 6 s. Results revealed an effect of emotion on temporal judgment, with emotional stimuli judged to be longer than neutral ones for a similar arousal level. Within scalar expectancy theory (J. Gibbon, R. Church, & W. Meck, 1984), this suggests that emotion-induced activation generates an increase in pacemaker rate, leading to a longer perceived duration. A further exploration of self-assessed emotional dimensions showed an effect of valence and arousal. Negative sounds were judged to be longer than positive ones, indicating that negative stimuli generate a greater increase of activation. High-arousing stimuli were perceived to be shorter than low-arousing ones. Consistent with attentional models of timing, this seems to reflect a decrease of attention devoted to time, leading to a shorter perceived duration. These effects, robust across the 2 tasks, are limited to short intervals and overall suggest that both activation and attentional processes modulate the timing of emotional events.
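The pacemaker-accumulator reading of this result can be illustrated numerically. The sketch below is only a toy rendering of scalar expectancy theory: the pulse rates and the size of the arousal effect are arbitrary illustrative values, not parameters estimated in the study.

# Toy pacemaker-accumulator model (scalar expectancy theory).
# Perceived duration is the number of pulses accumulated during the stimulus,
# read out against a baseline pulse rate; an arousal-driven increase in
# pacemaker rate therefore lengthens perceived duration. All values are illustrative.

def perceived_duration(true_duration_s, pacemaker_hz, readout_hz):
    pulses = pacemaker_hz * true_duration_s  # pulses accumulated while the stimulus lasts
    return pulses / readout_hz               # duration implied by the baseline pulse rate

baseline_hz = 10.0  # hypothetical resting pacemaker rate
aroused_hz = 11.5   # hypothetical rate under emotion-induced activation

for d in (2.0, 4.0, 6.0):
    neutral = perceived_duration(d, baseline_hz, baseline_hz)
    emotional = perceived_duration(d, aroused_hz, baseline_hz)
    print(f"{d:.0f} s stimulus: neutral {neutral:.2f} s, emotional {emotional:.2f} s")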

10.
Pigeons were exposed to multiple second-order schedules of paired and unpaired brief stimuli in which responding on the main key was reinforced according to a fixed-interval thirty-second schedule by a brief stimulus (a tone in the paired schedule) and advancement to the next segment of the second-order schedule. In Experiment 1, a response on the second key was required during the tone in its fourth and final presentation to produce food. Responses during earlier brief stimuli indicated the extent to which the final brief stimulus was discriminated from preceding ones. Responding was comparable during all tones, extending prior findings with visual paired brief stimuli and weakening explanations of subjects' failure to discriminate between brief-stimulus presentations in terms of elicited responding. In Experiment 2 the number of fixed-interval segments comprising the second-order schedules varied from one through eight. Although main-key response rates increased across segments in both experiments, they increased much less sharply with a variable number of segments. These results suggest that the increase in main-key response rates across segments is due primarily to a degree of temporal discrimination not reflected on the second key. Main-key response rates were higher on paired auditory brief-stimulus schedules than on unpaired visual brief-stimulus schedules, especially in Experiment 2, thus further extending findings with visual brief stimuli to second-order schedules with auditory brief stimuli.

11.
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction.

12.
The purpose of this study was to examine the effects of various levels of alcohol consumption on human response to auditory and visual stimuli in terms of reaction time, movement time, total reaction time, and error rate. A placebo and three low-level alcohol doses were randomly assigned to 20 male university student volunteers. Thirty minutes after consuming the alcohol or placebo, participants responded to either auditory or visual stimuli. Total reaction time increased significantly at the mid-low dose of alcohol (0.3 g/kg). For alcohol doses less than 0.5 g/kg, the change in total reaction time was confined to reaction time, i.e., the processing time between onset of stimulus and onset of movement. Effects of alcohol were significantly more pronounced in the choice-type tests. Notably, the effects of alcohol on total reaction time and error rate were significant for auditory but not visual stimuli.

13.
A computer system for generation of auditory stimuli is described. The system produces natural-speech or software-generated stimuli for monaural, binaural, or dichotic presentation. Stimuli have been generated for experiments run both on-line and off-line.
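The original system ran on laboratory hardware of its era; as a loose modern analogue rather than a description of that system, the following sketch generates a pure tone and routes it to the left channel, the right channel, or both, as needed for monaural, binaural, or dichotic presentation. All parameter values are illustrative.

# Minimal sketch of software-generated auditory stimuli (not the original system).
# Generates a pure tone and writes a 16-bit stereo WAV file with the tone routed
# to the left ear, the right ear, or both ears.
import wave
import numpy as np

def write_tone(path, freq_hz=1000.0, dur_s=0.5, ear="both", rate=44100, amp=0.5):
    t = np.arange(int(rate * dur_s)) / rate
    tone = amp * np.sin(2 * np.pi * freq_hz * t)
    silence = np.zeros_like(tone)
    left = tone if ear in ("left", "both") else silence
    right = tone if ear in ("right", "both") else silence
    samples = (np.column_stack([left, right]) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(2)      # stereo: left and right ears
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(rate)
        f.writeframes(samples.tobytes())

write_tone("tone_left.wav", ear="left")   # monaural presentation to the left ear
write_tone("tone_both.wav", ear="both")   # same tone to both ears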

14.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.
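The abstract does not state how coactivation was assessed; a standard diagnostic in the redundant-signals literature is Miller's (1982) race-model inequality, sketched below on hypothetical reaction-time samples. A reliable violation (the redundant-condition distribution exceeding the sum of the two single-stimulus distributions) is usually taken as evidence of coactivation rather than a simple race.

# Sketch of a race-model inequality check (Miller, 1982) on hypothetical RT data.
# Violations occur where P(RT <= t | redundant) exceeds
# P(RT <= t | left alone) + P(RT <= t | right alone).
import numpy as np

def ecdf(rts, t):
    rts = np.asarray(rts)
    return np.mean(rts[:, None] <= t, axis=0)  # P(RT <= t) at each probe time

rng = np.random.default_rng(0)
rt_left = rng.normal(320, 40, 200)        # hypothetical single-stimulus RTs (ms)
rt_right = rng.normal(325, 40, 200)
rt_redundant = rng.normal(290, 35, 200)   # hypothetical redundant-stimulus RTs (ms)

t = np.linspace(150, 500, 200)
violation = ecdf(rt_redundant, t) - (ecdf(rt_left, t) + ecdf(rt_right, t))
print("Maximum race-model violation:", round(float(violation.max()), 3))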

15.
Two experiments were conducted. In the first experiment, psychometric functions were generated for 988-Hz tones of 16 and 64 ms duration. The results indicate that for detection, both durations fall within the critical duration, i.e., equal detection levels were obtained for stimuli of equal energy. In the second experiment, pairs of equal-energy, equally detectable tones of 16 and 64 ms were used to test the ability of subjects to discriminate between them. The results indicate that equal-energy, equally detectable tones of different intensities and durations are discriminable from one another, although the durations do not exceed the limits of complete reciprocity when the response measure is detection. Two different interpretations are presented and discussed. The authors would like to thank Jacob Gutgold and Miriam Izaks for their aid with the equipment and experiment.
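Equal energy here means that intensity trades off against duration. As a worked example using the durations from the abstract (the decibel arithmetic is generic, not a value reported in the study), a 16-ms tone must be about 6 dB more intense than a 64-ms tone for the two to carry equal energy:

# Worked example of the equal-energy trade-off between intensity and duration.
# Energy is proportional to intensity x duration, so a tone a quarter as long
# needs 10*log10(4) ~= 6 dB more intensity to match in energy. Values are illustrative.
import math

short_ms, long_ms = 16, 64
level_difference_db = 10 * math.log10(long_ms / short_ms)
print(f"The {short_ms} ms tone must exceed the {long_ms} ms tone by {level_difference_db:.1f} dB")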

16.
This study explored whether males and females differ in facial muscle activity when exposed to tone stimuli of different intensities. Males and females were repeatedly exposed to 95 dB and 75 dB 1000 Hz tones while their facial electromyographic (EMG) activity from corrugator and zygomatic muscle regions was measured. Skin conductance responses were also measured. It was found that 95 dB but not 75 dB tones evoked increased corrugator activity. This effect differed significantly between males and females: only females reacted with a significantly increased corrugator response to the high-intensity tone. While facial responses differed between the sexes, the skin conductance response patterns did not. Consistent with previous research, it is concluded that females are more facially expressive than are males.

17.
Electronically synthesized tone sequences with systematic manipulation of amplitude and pitch variation, pitch level and contour, tempo, envelope, and filtration were rated on emotional expressiveness. The results show that two-thirds to three-quarters of the variance in the emotion attributions can be explained by the manipulation of the acoustic cues, and that a linear model of the judges' cue utilization seems to be a good approximation to their response system. Implications for phylogenetic and ontogenetic aspects of the vocal expression of emotion and for the psychology of music are discussed. This paper is based on research supported by NIMH grant MH 19-569-01 to the first author while at the University of Pennsylvania. The authors gratefully acknowledge comments and suggestions by Paul Ekman, Ursula Scherer, and two anonymous reviewers.
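A linear cue-utilization model of this kind can be sketched as an ordinary least-squares regression of expressiveness ratings on coded acoustic cues, with R² as the share of variance explained. The predictors, weights, and ratings below are hypothetical, for illustration only.

# Minimal sketch of a linear cue-utilization model: regress emotion ratings on
# coded acoustic cues and report the variance explained (R^2). Data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 60
cues = rng.standard_normal((n, 4))                 # e.g. tempo, pitch level, amplitude variation, envelope
true_weights = np.array([0.8, -0.5, 0.3, 0.2])     # hypothetical cue weights of a judge
ratings = cues @ true_weights + rng.normal(0, 0.5, n)  # hypothetical expressiveness ratings

X = np.column_stack([np.ones(n), cues])            # add an intercept column
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
r2 = 1 - np.sum((ratings - X @ beta) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
print("Fitted cue weights:", np.round(beta[1:], 2), " R^2 =", round(float(r2), 2))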

18.
This study examined whether facial electromyographic (EMG) reactions differentiate between identical tone stimuli that subjects perceive as differently unpleasant. Subjects were repeatedly exposed to a 1000 Hz 75 dB tone stimulus while their facial EMG from the corrugator and zygomatic muscle regions was measured. Skin conductance and heart rate responses were also measured. The subjects rated the unpleasantness of the stimulus and, based on these ratings, were divided into two groups, high and low in perceived unpleasantness. As predicted, the facial EMG activity reflected the perceived unpleasantness: the high group but not the low group reacted with an increased corrugator response. The autonomic data, on the other hand, did not differ between groups. The results are consistent with the proposition that the facial muscles function as a readout system for emotional reactions and that facial muscle activity is intimately related to the experiential level of the emotional response system.

19.
A listener presented with two speech signals must at times sacrifice the processing of one signal in order to understand the other. This study was designed to distinguish costs related to interference from a second signal (selective attention) from costs related to performing two tasks simultaneously (divided attention). Listeners presented with two processed speech-in-noise stimuli, one to each ear, either (1) identified keywords in both or (2) identified keywords in one and detected the presence of speech in the other. Listeners either knew which ear to report in advance (single task) or were cued afterward (partial-report dual task). When the dual task required two identification judgments, performance suffered relative to the single-task condition (as measured by percent correct judgments). Two different tasks (identification for one stimulus and detection for the other) resulted in much smaller reductions in performance when the cue came afterward. We concluded that the degree to which listeners can simultaneously process dichotic speech stimuli seems to depend not only on the amount of interference between the two stimuli, but also on whether there is competition for limited processing resources. We suggest several specific hypotheses as to the structural mechanisms that could constitute these limited resources.

20.
Hardware/software packages that digitize sound on the Apple Macintosh can help researchers prepare and present auditory materials needed for their experiments. Common features and benefits of commercially available sound digitizing packages are discussed in terms of some possible applications to cognitive psychology experiments.
