Similar Documents
20 similar documents found.
1.
Eye movements directed at emotional and neutral pictures were investigated. Participants were instructed to fixate on a target and to avoid looking at a non‐target. Trials containing emotional and neutral pictures were analyzed according to participants’ evaluations of the stimuli obtained before the experiment. Results indicated that fixations on targets were more numerous and longer than fixations on non‐targets. Further, the probability of first fixation was lower for neutral targets than for emotional targets. This suggests that unpleasant pictures capture visual attention and that this attentional orienting occurs automatically. The possibility that top‐down processes of visual attention may be involved in attentional capture is also discussed.

2.
In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense. However, this finding varies significantly across development. Here we studied speeded responding to audio‐visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together compared with either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal ‘race model’, as well as with a major recent extension of this model and a comparable ‘pooling’ (coactivation) model. The findings and analyses can reconcile results from previous audio‐visual studies, in which infants showed speed gains exceeding race model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence for early and sustained abilities to integrate visual and auditory signals for spatial localization from a young age.
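For context on the race-model comparison described in this abstract, the following sketch shows the standard Miller race-model inequality test, which bounds the redundant-target CDF by the sum of the two unisensory CDFs; violations of the bound are commonly taken as evidence for pooling/coactivation. The data are fabricated for illustration and the code does not reproduce the authors' analysis.

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of response times at or below each time in t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Difference between the audio-visual CDF and the race-model bound
    min(F_A(t) + F_V(t), 1); positive values are violations of the bound."""
    bound = np.minimum(empirical_cdf(rt_a, t_grid) + empirical_cdf(rt_v, t_grid), 1.0)
    return empirical_cdf(rt_av, t_grid) - bound

# Fabricated response times (ms) for auditory, visual, and audio-visual trials.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(400, 55, 200)
rt_av = rng.normal(355, 50, 200)  # faster than either single modality

t_grid = np.linspace(200, 600, 81)
violation = race_model_violation(rt_a, rt_v, rt_av, t_grid)
print(f"maximum violation of the race-model bound: {violation.max():.3f}")
```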

3.
To successfully interact with a rich and ambiguous visual environment, the human brain learns to differentiate visual stimuli and to produce the same response to subsets of these stimuli despite their physical difference. Although this visual categorization function is traditionally investigated from a unisensory perspective, its early development is inherently constrained by multisensory inputs. In particular, an early‐maturing sensory system such as olfaction is ideally suited to support the immature visual system in infancy by providing stability and familiarity to a rapidly changing visual environment. Here, we test the hypothesis that rapid visual categorization of salient visual signals for the young infant brain, human faces, is shaped by another highly relevant human‐related input from the olfactory system, the mother's body odor. We observe that a right‐hemispheric neural signature of single‐glance face categorization from natural images is significantly enhanced in the maternal versus a control odor context in individual 4‐month‐old infant brains. A lack of difference between odor conditions for the common brain response elicited by both face and non‐face images rules out a mere enhancement of arousal or visual attention in the maternal odor context. These observations show that face‐selective neural activity in infancy is mediated by the presence of a (maternal) body odor, providing strong support for multisensory inputs driving category acquisition in the developing human brain and having important implications for our understanding of human perceptual development.

4.
The abilities to flexibly allocate attention, select between conflicting stimuli, and make anticipatory gaze movements are important for young children's exploration and learning about their environment. These abilities constitute voluntary control of attention and show marked improvements in the second year of a child's life. Here we investigate the effects of visual distraction and delay on 18-month-olds’ ability to predict the location of an occluded target in an experiment that requires switching of attention, and compare their performance to that of adults. Our results demonstrate that by 18 months of age children can readily overcome a previously learned response, even under a condition that involves visual distraction, but have difficulties with correctly updating their prediction when presented with a longer time delay. Further, the experiment shows that, overall, the 18-month-olds’ allocation of visual attention is similar to that of adults, the primary difference being that adults demonstrate a superior ability to maintain attention on task and update their predictions over a longer time period.

5.
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment “at hand”, we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6‐ (n = 14; Experiment 1) and 4‐month‐old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants’ visual preferences for spatially congruent and incongruent auditory‐tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4‐ and 6‐month‐old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age‐wise shift in infants’ preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory‐tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.

6.
Herding in financial markets refers to the tendency of investors to be influenced by other investors. This study addresses the importance of consistency for herding. It is suggested that, in financial markets, perceptions of consistency are based on repeated observations over time. Consistency may then be perceived as the agreement across time between investors' predictions. In addition, consistency may be related to variance over time in each investor's predictions. In an experiment using a Multiple Cue Probability Learning paradigm, 96 undergraduates made multi‐trial predictions of future stock prices given information about the current price and the predictions made by five fictitious others. Consistency was varied between the others' predictions (correlation) and within the others' predictions (variance). The results showed that the predictions were significantly influenced by the others' predictions when these were correlated. No effect of variance was observed. Hence, participants were influenced by the others when the others were in agreement, regardless of whether they varied their predictions over trials.

7.
Research on tact acquisition by children with autism spectrum disorder (ASD) has often focused on teaching participants to tact visual stimuli. It is important to evaluate procedures for teaching tacts of nonvisual stimuli (e.g., olfactory, tactile). The purpose of the current study was to extend the literature on secondary target instruction and tact training by evaluating the effects of a discrete‐trial instruction procedure involving (a) echoic prompts, a constant prompt delay, and error correction for primary targets; (b) inclusion of secondary target stimuli in the consequent portion of learning trials; and (c) multiple exemplar training on the acquisition of item tacts of olfactory stimuli, emergence of category tacts of olfactory stimuli, generalization of category tacts, and emergence of category matching, with three children diagnosed with ASD. Results showed that all participants learned the item and category tacts following teaching, participants demonstrated generalization across category tacts, and category matching emerged for all participants.

8.
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants’ behaviour with the predictions of alternative information processing models. This lets us see when and how—during development, and with experience—the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
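As a concrete illustration of the model-based approach mentioned above, the sketch below computes the predictions of a standard maximum-likelihood ("optimal") cue-integration model, in which each cue is weighted by its reliability. This is the generic textbook formulation rather than the specific models used in the cited work, and all numerical values are assumed.

```python
import numpy as np

def optimal_integration(mu_vis, sigma_vis, mu_prop, sigma_prop):
    """Maximum-likelihood combination of two noisy location estimates.
    Each cue is weighted in inverse proportion to its variance, and the
    combined estimate is more precise than either cue alone."""
    w_vis = sigma_prop**2 / (sigma_vis**2 + sigma_prop**2)
    mu_comb = w_vis * mu_vis + (1 - w_vis) * mu_prop
    sigma_comb = np.sqrt(sigma_vis**2 * sigma_prop**2 /
                         (sigma_vis**2 + sigma_prop**2))
    return mu_comb, sigma_comb

# Assumed single-cue estimates (cm): vision more precise than proprioception.
mu, sigma = optimal_integration(mu_vis=10.0, sigma_vis=1.0,
                                mu_prop=12.0, sigma_prop=2.0)
print(f"combined estimate = {mu:.2f} cm, combined sd = {sigma:.2f} cm")
```

Comparing observed cue weights and response variability against predictions of this kind is one common way to test whether, and at what age, observers integrate signals efficiently.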

9.
The purpose of this three‐experiment study was to evaluate whether performance consistent with the formation of equivalence classes could be established after training adults to tact and intraverbally relate the names of visual stimuli. Fourteen participants were exposed to tact training, listener testing, and intraverbal training (A'B' and B'C') prior to matching‐to‐sample (MTS) and intraverbal posttests presented in different sequences across experiments. All participants demonstrated emergent MTS and intraverbal relations consistent with equivalence class formation. More importantly, all participants emitted experimentally defined or self‐generated tacts or intraverbally named the correct sample‐comparison pairs at some point during posttests. These results are consistent with the intraverbal naming account (Horne & Lowe, 1996) in that participants who passed novel relations MTS tests also demonstrated emergence of corresponding intraverbal relations. However, verbal reports and latency data suggest that participants did not necessarily have to use intraverbal naming as a problem solving strategy continuously throughout MTS posttests. These results extended previous research by showing that verbal behavior training of baseline relations (A'B' and B'C') is sufficient to establish novel conditional relations consistent with equivalence class formation.

10.
Spatial information processing takes place in different brain regions that receive converging inputs from several sensory modalities. Because of our own movements—for example, changes in eye position, head rotations, and so forth—unimodal sensory representations move continuously relative to one another. It is generally assumed that for multisensory integration to be an orderly process, it should take place between stimuli at congruent spatial locations. In the monkey posterior parietal cortex, the ventral intraparietal (VIP) area is specialized for the analysis of movement information using visual, somatosensory, vestibular, and auditory signals. Focusing on the visual and tactile modalities, we found that in area VIP, like in the superior colliculus, multisensory signals interact at the single neuron level, suggesting that this area participates in multisensory integration. Curiously, VIP does not use a single, invariant coordinate system to encode locations within and across sensory modalities. Visual stimuli can be encoded with respect to the eye, the head, or halfway between the two reference frames, whereas tactile stimuli seem to be prevalently encoded relative to the body. Hence, while some multisensory neurons in VIP could encode spatially congruent tactile and visual stimuli independently of current posture, in other neurons this would not be the case. Future work will need to evaluate the implications of these observations for theories of optimal multisensory integration.
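To make the reference-frame terminology concrete, the short sketch below expresses a visual stimulus in eye-centered, head-centered, or intermediate coordinates using the relation head-centered position = eye-centered position + eye-in-head position; the linear weighting used for the "intermediate" frame is an illustrative simplification, not the analysis applied to the VIP data.

```python
def stimulus_in_frame(eye_centered_deg, eye_in_head_deg, frame_weight):
    """Stimulus location in a reference frame mixing eye-centered
    (frame_weight = 0) and head-centered (frame_weight = 1) coordinates;
    intermediate weights correspond to the 'halfway' frames reported
    for some VIP neurons."""
    head_centered = eye_centered_deg + eye_in_head_deg
    return (1 - frame_weight) * eye_centered_deg + frame_weight * head_centered

# Stimulus 10 deg right of fixation while the eyes are rotated 20 deg left in the head.
for w in (0.0, 0.5, 1.0):
    print(f"frame weight {w}: {stimulus_in_frame(10, -20, w):+.0f} deg")
```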

11.
Three experiments are reported that address the issue of awareness in evaluative learning in two different sensory modalities: visual and haptic. Attempts were made to manipulate the degree of awareness through a reduction technique (by use of a distractor task in Experiments 1 and 2 and by subliminally presenting affective stimuli in Experiment 3) and an induction technique (by unveiling the evaluative learning effect and requiring participants to try to discount the influence of the affective stimuli). The results indicate overall that evaluative learning was successful in the awareness-reduction groups but not in the awareness-induction groups. Moreover, an effect in the opposite direction to that normally observed in evaluative learning emerged in participants aware of the stimulus contingencies. In addition, individual differences in psychological reactance were found to be implicated in the strength and direction of the effect. It is argued that these results pose serious problems for the contention that awareness is necessary for evaluative learning.

12.
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.

13.
Human contingency learning was used to compare the predictions of configural and elemental theories. In three experiments, participants were required to learn which indicators were associated with an increase in core temperature of a fictitious nuclear plant. Experiments 1 and 2 investigated the rate at which a triple-element stimulus (ABC) could be discriminated from either single-element stimuli (A, B, and C) or double-element stimuli (AB, BC, and AC). Experiment 1 used visual stimuli, whilst Experiment 2 used visual, auditory, and tactile stimuli. In both experiments the participants took longer to discriminate the triple-element stimulus from the more similar double-element stimuli than from the less similar single-element stimuli. Experiment 3 tested for summation with stimuli from either a single or multiple modalities, and summation was found only in the latter case. Thus, the pattern of results seen in Experiments 1 and 2 was not dependent on whether the stimuli were single modal or multimodal, nor was it dependent on whether the stimuli could elicit summation. This pattern of results is consistent with predictions derived from Pearce's (1987, 1994) configural theory.
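The discrimination results above are typically explained with Pearce's similarity rule, under which generalization between two configurations grows with the proportion of elements they share. The sketch below computes that similarity for the stimulus compounds used in Experiments 1 and 2; Pearce's full model is richer than this, so treat the calculation as a simplified illustration.

```python
def pearce_similarity(compound_a, compound_b):
    """Pearce-style similarity between two configural stimuli: the product
    of the proportions of shared elements in each compound."""
    a, b = set(compound_a), set(compound_b)
    common = len(a & b)
    return (common / len(a)) * (common / len(b))

target = "ABC"
for comparison in ("A", "B", "C", "AB", "BC", "AC"):
    print(f"similarity(ABC, {comparison}) = {pearce_similarity(target, comparison):.2f}")

# ABC shares more of its elements with the double-element compounds
# (similarity 0.67) than with the single-element stimuli (similarity 0.33),
# so the configural account predicts the slower discrimination from AB, BC,
# and AC that the experiments report.
```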

14.
We examined the developmental differences in motor control and learning of a two‐segment movement. One hundred and five participants (53 female) were divided into three age groups (7–8 years, 9–10 years and 19–27 years). They performed a two‐segment movement task in four conditions (full vision, fully disturbed vision, disturbed vision in the first movement segment and disturbed vision in the second movement segment). The results for movement accuracy and overall movement time show that children, especially younger children, are more susceptible to visual perturbations than adults. The adults’ movement time in one of the movement segments could be increased by disturbing the vision of the other movement segment. The children's movement time for the second movement segment increased when their vision of the first movement segment was disturbed. Disturbing the vision of the first movement segment decreased the percentage of central control of the second movement in younger children, but not in the other two age groups. The children's normalized jerk was more easily increased by visual perturbations. The children showed greater improvement after practice in the conditions of partial vision disturbance. As the participants’ age increased, practice tended to improve their feedforward motor control rather than their feedback motor control. These results suggest that children's central movement control improves with age and practice. We discuss the theoretical implications and practical significance of the differential effects of visual perturbation and movement segmentation upon motor control and learning from a developmental viewpoint.
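For reference, the normalized (dimensionless) jerk score mentioned above is commonly computed by scaling the integrated squared jerk by movement duration and amplitude, so that scores can be compared across movements of different speeds and sizes. The sketch below uses that generic formulation with fabricated trajectories; it is not necessarily the exact variant used in this study.

```python
import numpy as np

def normalized_jerk(position, dt):
    """Dimensionless jerk of a 1-D position trace sampled at interval dt:
    sqrt(0.5 * integral(jerk^2) * duration^5 / amplitude^2).
    Smoother movements produce smaller values."""
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    duration = dt * (len(position) - 1)
    amplitude = np.sum(np.abs(np.diff(position)))  # total path length
    return np.sqrt(0.5 * np.sum(jerk**2) * dt * duration**5 / amplitude**2)

# Fabricated 10 cm reaches lasting 1 s: a smooth minimum-jerk profile
# versus the same movement with 12 Hz tremor added.
t = np.linspace(0.0, 1.0, 200)
smooth = 10 * (10 * t**3 - 15 * t**4 + 6 * t**5)
jittery = smooth + 0.3 * np.sin(2 * np.pi * 12 * t)
dt = t[1] - t[0]
print(f"smooth : {normalized_jerk(smooth, dt):.1f}")
print(f"jittery: {normalized_jerk(jittery, dt):.1f}")
```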

15.
This paper examines the use of sensory stimuli in the creation of store atmosphere in the online context. Parsons (2002) shows that online shoppers are motivated by many of the same non‐functional aspects of shopping as physical store shoppers (e.g. Tauber, 1972; Sheth, 1983), including sensory stimulation from aural and visual stimuli. This study investigates what customers desire from a virtual store atmosphere, and conducts an audit of the 15 top e‐tailers as ranked by Nielsen/NetRatings (2001) using the extant set of stimuli/responses established from the physical store literature. Findings suggest a strong desire from customers for sensory stimuli, with only partial matching by e‐tailers, and a surprising lack of differentiation among competitors. Examination of purchase responses to actual stimuli suggests a need for e‐tailers to match consumers' desires.

16.
We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
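The benchmark described above (parallel, independent, self-terminating processing of the single-modality inputs) is the same one used in workload-capacity analysis. The sketch below computes a basic RT-only capacity coefficient from cumulative hazard functions; the accuracy-weighted measure proposed in the paper is more elaborate, and the data here are fabricated, so this is only an illustrative approximation.

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """H(t) = -log(1 - F(t)) estimated from an empirical RT distribution."""
    rts = np.sort(np.asarray(rts))
    F = np.searchsorted(rts, t_grid, side="right") / len(rts)
    return -np.log(np.clip(1.0 - F, 1e-6, 1.0))

def capacity_or(rt_a, rt_v, rt_av, t_grid):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    Values above 1 indicate faster-than-independent (super-capacity)
    processing; values below 1 indicate limited capacity."""
    denom = cumulative_hazard(rt_a, t_grid) + cumulative_hazard(rt_v, t_grid)
    return np.divide(cumulative_hazard(rt_av, t_grid), denom,
                     out=np.full_like(denom, np.nan), where=denom > 0)

# Fabricated auditory, visual, and audiovisual response times (ms).
rng = np.random.default_rng(1)
c_t = capacity_or(rng.normal(430, 60, 300), rng.normal(410, 55, 300),
                  rng.normal(360, 50, 300), np.linspace(250, 550, 61))
print(f"median capacity over the analysis window: {np.nanmedian(c_t):.2f}")
```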

17.
When people synchronize taps with isochronously presented stimuli, taps usually precede the pacing stimuli [negative mean asynchrony (NMA)]. One explanation of NMA [sensory accumulation model (SAM), Aschersleben in Brain Cogn 48:66–79, 2002] is that more time is needed to generate a central code for kinesthetic-tactile information than for auditory or visual stimuli. The SAM predicts that raising the intensity of the pacing stimuli shortens the time for their sensory accumulation, thereby increasing NMA. This prediction was tested by asking participants to synchronize finger force pulses with target isochronous stimuli with various intensities. In addition, participants performed a simple reaction-time task, for comparison. Higher intensity led to shorter reaction times. However, intensity manipulation did not affect NMA in the synchronization task. This finding is not consistent with the predictions based on the SAM. Discrepancies in sensitivity to stimulus intensity between sensorimotor synchronization and reaction-time tasks point to the involvement of different timing mechanisms in these two tasks.

18.
Previous research has shown resistance to extinction of fear conditioned to racial out-group faces, suggesting that these stimuli may be subject to prepared fear learning. The current study replicated and extended previous research by using a different racial out-group, and testing the prediction that prepared fear learning is unaffected by verbal instructions. Four groups of Caucasian participants were trained with male in-group (Caucasian) or out-group (Chinese) faces as conditional stimuli; one paired with an electro-tactile shock (CS+) and one presented alone (CS−). Before extinction, half the participants were instructed that no more shocks would be presented. Fear conditioning, indexed by larger electrodermal responses to, and blink startle modulation during the CS+, occurred during acquisition in all groups. Resistance to extinction of fear learning was found only in the racial out-group, no instruction condition. Fear conditioned to a racial out-group face was reduced following verbal instructions, contrary to predictions for the nature of prepared fear learning.

19.
Studies on teaching tacts to individuals with autism spectrum disorder (ASD) have primarily focused on visual stimuli, despite published clinical recommendations to teach tacts of stimuli in other sensory domains as well. In the current study, two children with ASD were taught to tact auditory stimuli under two stimulus‐presentation arrangements: isolated (auditory stimuli presented without visual cues) and compound (auditory stimuli presented with visual cues). Results indicate that compound stimulus presentation was a more effective teaching procedure, but that it interfered with prior object‐name tacts. A modified compound arrangement in which object‐name tact trials were interspersed with auditory‐stimulus trials mitigated this interference.

20.
The authors investigated the retention of implicit sequence learning in 14 persons with Parkinson's disease (PPD), 14 persons who stutter (PWS), and 14 control participants. Participants completed a nonsense syllable serial reaction time task in a 120-min session. Participants named aloud 4 syllables in response to 4 visual stimuli. The syllables formed a repeating 8-item sequence not made known to participants. After 1 week, participants completed a 60-min retention session that included an explicit learning questionnaire and a sequence generation task. PPD showed retention of general learning equivalent to that of controls, but PWS's reaction times were significantly slower on early trials of the retention test relative to the other groups. Controls showed implicit learning during the initial session that was retained on the retention test. In contrast, PPD and PWS did not demonstrate significant implicit learning until the retention test, suggesting intact, but delayed, learning and retention of implicit sequencing skills. All groups demonstrated similar, limited explicit sequence knowledge. Performance differences between PWS and PPD relative to controls during the initial session and on early retention trials indicated possible dysfunction of the cortico-striato-thalamo-cortical loop. The etiological implications of this dysfunction for stuttering, and its clinical implications for both populations, are discussed.
