Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Individuals with developmental disabilities may fail to attend to multiple features in compound stimuli (e.g., arrays of pictures, letters within words) with detrimental effects on learning. Participants were 5 children with autism spectrum disorder who had low to intermediate accuracy scores (35% to 84%) on a computer‐presented compound matching task. Sample stimuli were pairs of icons (e.g., chair–tree), the correct comparison was identical to the sample, and each incorrect comparison had one icon in common with the sample (e.g., chair–sun, airplane–tree). A 5‐step tabletop sorting‐to‐matching training procedure was used to teach compound matching. The first step was sorting 3 single pictures; subsequent steps gradually changed the task to compound matching. If progress stalled, tasks were modified temporarily to prompt observing behavior. After tabletop training, participants were retested on the compound matching task; accuracy improved to at least 95% for all children. This procedure illustrates one way to improve attending to multiple features of compound stimuli.

2.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but do not require temporal coincidence of that information.

3.
In two experiments, each including a simple reaction time (RT) task, a localization task, and a passive oddball paradigm, the physical similarity between two dichotically presented auditory stimuli was manipulated. In both experiments, a redundant signals effect (RSE), high localization performance, and a reliable mismatch negativity (MMN) were observed for largely differing stimuli, suggesting that these are coded separately in auditory memory. In contrast, no RSE and a localization rate close to chance level (experiment 1) or at chance (experiment 2) were observed for stimuli differing to a lesser degree. Crucially, for such stimuli a small (experiment 1) or no (experiment 2) MMN was observed. These MMN results indicate that such stimuli tend to fuse into a single percept and that this fusion occurs rather early within information processing.

4.
The current study evaluated the effectiveness of a go/no-go successive matching-to-sample procedure (S-MTS) to establish auditory–visual equivalence classes with college students. A sample and a comparison were presented, one at a time, in the same location. During training, after an auditory stimulus was presented, a green box appeared in the center of the screen for participants to touch to produce the comparison. Touching the visual comparison that was related to the auditory sample (e.g., A1B1) produced points, while touching or refraining from touching an unrelated comparison (e.g., A1B2) produced no consequences. Following AB/AC training, participants were tested on untrained relations (i.e., BA/CA and BC/CB), as well as tacting and sorting. During BA/CA relations tests, after touching the visual sample, the auditory stimulus was presented along with a white box for participants to respond. During BC/CB relations tests, after touching the visual sample, a visual comparison appeared. Across 2 experiments, all participants met emergence criterion for untrained relations and for sorting. Additionally, 14 out of 24 participants tacted all visual stimuli correctly. Results suggest the auditory–visual S-MTS procedure is an effective alternative to simultaneous MTS for establishing conditional relations and auditory–visual equivalence classes.
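The go/no-go contingency described in this abstract (touching a related comparison earns points; touching, or withholding a touch to, an unrelated comparison has no programmed consequence) can be sketched as follows. All identifiers and the trial function are hypothetical illustrations, not the authors' software:

```python
# Trained sample-comparison relations (hypothetical stimulus names, AB relations only)
RELATED = {("A1", "B1"), ("A2", "B2"), ("A3", "B3")}

def smts_trial(sample, comparison, touched, points):
    """One go/no-go S-MTS trial: touching the comparison related to the
    sample earns a point; any other outcome has no programmed consequence."""
    if touched and (sample, comparison) in RELATED:
        return points + 1
    return points

points = 0
points = smts_trial("A1", "B1", touched=True, points=points)   # related "go": reinforced
points = smts_trial("A1", "B2", touched=True, points=points)   # unrelated touch: no change
points = smts_trial("A2", "B1", touched=False, points=points)  # correct withholding: no change
```

Testing for derived relations (e.g., B1A1) would then simply query pairs never trained directly.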

5.
In view of the frequent clinical use of external auditory stimuli in fluency building programs, the purpose of the present experiment was to compare the effects of rhythmic pacing, delayed auditory feedback, and high intensity masking noise on the frequency of stuttering by dysfluency type. Twelve normal hearing young adult stutterers completed an oral reading (approximately 250 syllables) and conversational speech task (3 min) while listening to the three auditory stimuli and during a control condition, presented in random order. The results demonstrated that during oral reading all three auditory stimuli were associated with significant reductions in stuttering frequency. However, during conversational speech, only the metronome produced a significant reduction in total stuttering frequency. Individual dysfluency types were not differentially affected by the three auditory stimuli.

6.
In a first experiment, we recorded event-related potentials (ERPs) to "the" followed by meaningful words (Story) versus "the" followed by nonsense syllables (Nonse). Left and right lateral anterior positivities (LAPs) were seen from the onset of "the" up to 200 ms in both conditions. Later than 200 ms following the onset of "the", the left and right LAPs continued for "the" in the Story condition, but were replaced by a negativity in the Nonse condition. In a second experiment, ERPs were recorded to "the" in the Story and Nonse contexts mixed together under two different task instructions (attend to the auditory stimuli versus ignore the auditory stimuli). The same pattern of findings as in Experiment 1 was observed for the Story and Nonse contexts when the participants attended to the auditory stimuli. Ignoring the auditory stimuli led to an attenuation of the right LAP, supporting the hypothesis that it is an index of discourse processing.

7.
A dichotic color-naming VRT task and a dichotic digits recognition task were administered. The first required the naming of five colors paired with their backward pronunciation; the second, the recognition of five nonsimilar digits. Forty-eight right-handed male and female subjects were used, controlling for the factors of sex, handedness, and familial sinistrality. A highly significant left-ear advantage was obtained, suggesting that previous results of dichotic studies may pertain only to a limited class of phonemic stimuli. Results implicate different cerebral processes in the analysis of familiar and novel auditory and phonemic stimuli.

8.
The purpose of the present study was to investigate possible effects of exposure upon suprathreshold psychological responses when auditory magnitude estimation and cross-modal matching with audition as the standard are conducted within the same experiment. Four groups of 10 subjects each, with an overall age range of 18 to 23 years, were employed. During the cross-modal matching task, the Groups 1 and 2 subjects adjusted a vibrotactile stimulus presented to the dorsal surface of the tongue, and the Groups 3 and 4 subjects adjusted a vibrotactile stimulus presented to the thenar eminence of the right hand, to match binaurally presented auditory stimuli. The magnitude-estimation task was conducted before the cross-modal matching task for Groups 1 and 3, and the cross-modal matching task was conducted before the magnitude-estimation task for Groups 2 and 4. The psychophysical methods of magnitude estimation and cross-modal matching showed no effect of one upon the other when used in the same experiment.
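Magnitude estimation data of the kind described above are conventionally summarized with Stevens' power law, ψ = k·φⁿ, whose exponent n is estimated by log–log regression. A minimal sketch with synthetic data (not the study's), where the fitted exponent recovers the generating one:

```python
import math

def fit_power_law(intensities, estimates):
    """Least-squares fit of log(estimate) = n*log(intensity) + log(k),
    returning the Stevens exponent n and the scale factor k."""
    xs = [math.log(v) for v in intensities]
    ys = [math.log(v) for v in estimates]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - n * mx)
    return n, k

# Synthetic magnitude estimates generated from psi = 2 * phi ** 0.6
phi = [10.0, 20.0, 40.0, 80.0]
psi = [2.0 * p ** 0.6 for p in phi]
exponent, scale = fit_power_law(phi, psi)
```

In cross-modal matching, the analogous analysis regresses the log of the adjusted (vibrotactile) intensity on the log of the auditory standard.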

9.
The authors measured postural sway while participants (N = 20 in each experiment) stood on a rigid or a compliant surface, with their eyes open or closed, and while they did or did not perform a short-term memory (STM) task. In Experiment 1, the STM stimuli were presented visually; in Experiment 2, the stimuli were presented auditorily. In both experiments, fine-scaled, mediolateral postural-sway variability decreased as the cognitive load imposed by the STM task increased. That effect was independent of support surface and vision manipulations. The spatiotemporal profile of postural sway was affected by both visual and auditory STM tasks, but to a greater degree by the auditory task. The authors discuss implications of the results for theories and models of postural control.

10.
Two experiments assessed contextual dependencies in a predictive-learning task. Subjects learned to associate each of four pictorial stimuli with the occurrence or non-occurrence of a specific outcome. Each of these stimuli, the intentional stimuli, was presented against one of two different visual (Experiment 1) or auditory (Experiment 2) context stimuli. These context stimuli were incidental: subjects were not explicitly instructed to pay any attention to them and each of them in isolation was not predictive of the outcome. During acquisition and testing, subjects expressed the expected relationship between intentional stimulus and outcome by an appropriate key press. At test, intentional stimuli were presented either with the same contextual stimulus as also present during acquisition (same trials), or with the other one (switched trials). The response latency was slower on switched trials than on same trials in each experiment, a result extending previous findings on the effect of environmental contextual stimuli on task performance. Results are discussed in the framework of contextual occasion setting and habituation to contextual stimuli.

12.
We investigated the effects of specific stimulus information on the use of rule information in a category learning task in 2 experiments, one presented here and an intercategory transfer task reported in an earlier article. In the present experiment photograph–name combinations, called identifiers, were associated with 4 demographic attributes. The same attribute information was shown to all participants. However, for one group of participants, half of the identifiers were paired with attribute values repeated over presentation blocks. For the other group the identifier information was new for each presentation block. The first group performed less well than the second group on stimuli with nonrepeated identifiers, indicating a negative effect of specific stimulus information on processing rule information. Application of a network model to the 2 experiments, which provided for the growth of connections between attribute values in learning, indicated that repetition of identifiers produced a unitizing effect on stimuli. Results suggested that unitization produced interference through connections between irrelevant attribute values.

13.
Reaction times to detect a known or unknown digit in paired or single auditory test stimuli were measured. The results suggest that in classification or matching tasks with stimuli belonging to separate verbal classes, parallel or selective processing may be possible. There was no interaction of type of task (classify vs match) with either dichotic vs mixed monaural presentation, or pairs vs single stimuli, or negative vs positive responses. An attempt was made to suggest the separate processing stages underlying performance in this task.

14.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.
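Coactivation evidence of the kind this abstract reports is standardly tested with Miller's (1982) race model inequality, F_both(t) ≤ F_left(t) + F_right(t): if the redundant-target RT distribution exceeds the summed single-target distributions at fast quantiles, a parallel race cannot explain the gain. A minimal sketch with hypothetical reaction times in milliseconds (not the study's data):

```python
def ecdf(sample, t):
    """Empirical cumulative distribution of a reaction-time sample at time t."""
    return sum(rt <= t for rt in sample) / len(sample)

def violates_race_model(rt_left, rt_right, rt_both, t):
    """True if the redundant-target CDF exceeds Miller's race-model bound
    at time t, taken as evidence of coactivation."""
    bound = min(1.0, ecdf(rt_left, t) + ecdf(rt_right, t))
    return ecdf(rt_both, t) > bound

# Hypothetical single-ear and both-ear reaction times (ms)
rt_left = [300, 320, 340]
rt_right = [310, 330, 350]
rt_both = [240, 250, 260]
fast_quantile_violation = violates_race_model(rt_left, rt_right, rt_both, t=260)
```

In practice the inequality is evaluated across a range of quantiles, not a single t, and with far larger samples.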

15.
This paper describes an experimental demonstration of stimulus equivalence classes consisting entirely of auditory stimuli. Stimuli were digitized arbitrary syllables (e.g., “cug,” “vek”) presented via microcomputer. Training and testing were conducted with a two-choice auditory successive conditional discrimination procedure. On each trial, auditory samples and comparisons were presented successively. As each comparison was presented, a response location (a rectangle) appeared on the computer screen. After all stimuli for a trial were presented, subjects selected one of the response locations. Six subjects acquired the conditional discrimination baseline, 4 subjects demonstrated the formation of three-member auditory equivalence classes resulting from sample-S+ relations, and 1 subject demonstrated equivalence classes resulting from sample-S- relations. Four subjects received additional training and subsequently demonstrated expansion of the three-member classes to four members each.

16.
Two experiments on the internal representation of auditory stimuli compared the pairwise and grouping methodologies as means of deriving similarity judgements. A total of 45 undergraduate students participated in each experiment, judging the similarity of short auditory stimuli, using one of the methodologies. The experiments support and extend Bonebright's (1996) findings, using a further 60 stimuli. Results from both methodologies highlight the importance of category information and acoustic features, such as root mean square (RMS) power and pitch, in similarity judgements. Results showed that the grouping task is a viable alternative to the pairwise task with N > 20 sounds whilst highlighting subtle differences, such as cluster tightness, between the different task results. The grouping task is more likely to yield category information as underlying similarity judgements.

17.
The effects of viewing the face of the talker (visual speech) on the processing of clearly presented intact auditory stimuli were investigated using two measures likely to be sensitive to the articulatory motor actions produced in speaking. The aim of these experiments was to highlight the need for accounts of the effects of audio-visual (AV) speech that explicitly consider the properties of articulated action. The first experiment employed a syllable-monitoring task in which participants were required to monitor for target syllables within foreign carrier phrases. An AV effect was found in that seeing a talker's moving face (moving face condition) assisted in more accurate recognition (hits and correct rejections) of spoken syllables than auditory-only (still face condition) presentations. The second experiment examined processing of spoken phrases by investigating whether an AV effect would be found for estimates of phrase duration. Two effects of seeing the moving face of the talker were found. First, the moving face condition had significantly longer duration estimates than the still face auditory-only condition. Second, estimates of auditory duration made in the moving face condition reliably correlated with the actual durations whereas those made in the still face auditory condition did not. The third experiment was carried out to determine whether the stronger correlation between estimated and actual duration in the moving face condition might have been due to generic properties of AV presentation. Experiment 3 employed the procedures of the second experiment but used stimuli that were not perceived as speech although they possessed the same timing cues as those of the speech stimuli of Experiment 2. It was found that simply presenting both auditory and visual timing information did not result in more reliable duration estimates. Further, when released from the speech context (used in Experiment 2), duration estimates for the auditory-only stimuli were significantly correlated with actual durations. In all, these results demonstrate that visual speech can assist in the analysis of clearly presented auditory stimuli in tasks concerned with information provided by viewing the production of an utterance. We suggest that these findings are consistent with there being a processing link between perception and action such that viewing a talker speaking will activate speech motor schemas in the perceiver.

18.
This research explored the effect of teaching conditional discriminations with three procedures on the derivation of 36 stimulus relations (derived relations). The stimuli consisted of three characteristics of musical instruments, along with the corresponding pictures. In the first experiment, six university students were trained with simple stimuli and tested with compound auditory–visual samples; therefore, a one‐to‐many structure was used. In the second experiment, the auditory stimuli used as samples were replaced by visual stimuli, with new students. A third experiment was implemented with an extra phase of training with compound stimuli for six new students. The structure of the experiments was: pretests (Xbcd–A; Xacd–B; Xabd–C; Xabc–D), training (A–B; A–C; A–D), and posttests (same as pretests). The difference between these conditions was the kind of stimuli used and the new teaching phase used in Condition 3 (Xbcd–A). The results indicate that training with simple stimuli on discriminations that include stimuli that are easy to discriminate from each other (words and sounds) is a sufficient condition for good posttest performance. However, when comparisons are made difficult (words only), participants show better performance on new tests if they have a learning history with compound stimuli.

19.
Dual-task methodology was used to assess a multiple-resources account of information processing in which each cerebral hemisphere is assumed to have access to its own finite amount of attentional resources. A visually presented verbal memory task was paired with an auditory tone memory task, and subjects were paid to emphasize one task more than the other. When subjects were trying to remember tones presented to the right ear, they could trade performance between tasks as a function of the emphasis condition, whereas on left-ear trials they could not. In addition, a control session indicated that stimuli presented to the unattended ear demanded processing resources, even when it was to the detriment of performance. The data support the assumption of independence between the hemispheres' resource supplies.

20.
B. Magnani, F. Pavani, F. Frassinetti. Cognition, 2012, 125(2): 233–243
The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the participant. The time bisection task was performed either on right or left stimuli regardless of their pitch (Spatial experiment), or on high or low tones regardless of their location (Tonal experiment). Duration of left stimuli was underestimated relative to that of right stimuli in the Spatial but not in the Tonal experiment, suggesting that a spatial representation of auditory time emerges selectively when spatial encoding is enforced. Further, when we introduced spatial-attention shifts using the prismatic adaptation procedure, we found modulations of auditory time processing as a function of prismatic deviation, which correlated with the interparticipant adaptation effect. These novel findings reveal a spatial representation of auditory time, modulated by spatial attention.
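In time bisection tasks like the one above, the empirical bisection point often falls near the geometric mean of the anchor durations, and a lateralized underestimation can be represented as a shift of perceived duration. A sketch under those assumptions (anchor values and bias magnitude are hypothetical, not taken from the study):

```python
import math

def bisection_point(short_anchor, long_anchor):
    """Geometric mean of the anchor durations, a common empirical
    bisection point in time bisection tasks."""
    return math.sqrt(short_anchor * long_anchor)

def classify(duration_ms, short_anchor, long_anchor, bias_ms=0.0):
    """Classify a duration as 'short' or 'long'. `bias_ms` shifts perceived
    duration; a negative bias models underestimation (e.g., the left-side
    effect reported above), pushing responses toward 'short'."""
    perceived = duration_ms + bias_ms
    return "long" if perceived > bisection_point(short_anchor, long_anchor) else "short"

# A 420-ms tone with 200/800-ms anchors (bisection point = 400 ms):
right_response = classify(420, 200, 800)             # no bias
left_response = classify(420, 200, 800, bias_ms=-30) # underestimated
```

The same physical duration can thus flip categories depending on the spatial bias, which is the signature effect the abstract describes.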


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号