Similar Articles
19 similar articles found.
1.
Individual performance was compared across three different tasks that tap into the binding of stimulus features in perception, the binding of action features in action planning, and the emergence of stimulus–response bindings ("event files"). Within a task, correlations between the sizes of binding effects were found within visual perception (e.g., the strength of shape–location binding correlated positively with the strength of shape–colour binding) but not between perception and action planning, suggesting different, domain-specific binding mechanisms. To some degree, binding strength was predicted by priming effects of the respective features, especially if these features varied on a dimension that matched the current attentional set.

2.
Actions have been assumed to be cognitively represented by codes of relevant action features. Six experiments investigated whether irrelevant action features (conditioned, response-contingent auditory events) are also coded and integrated into action codes. Subjects responded to visual stimuli by pressing a left- versus right-hand button or by touching a single key once versus twice. Responses produced certain action effects: tones on the left versus the right, or tones of low versus high pitch. After subjects had some practice, an inducing stimulus was presented together with the reaction stimulus; this inducing stimulus shared features with the action effect of the correct or incorrect response. If action effects were integrated into action codes, inducing stimuli should activate or prime the associated response. Indeed, substantial effects of correspondence or compatibility between inducing stimuli and irrelevant action effects were found in a variety of tasks. The results are interpreted as evidence for an automatic integration of information about action effects and taken as support for an action-concept model of action-effect integration and stimulus-response compatibility.

3.
In selection tasks, target and distractor features can be encoded together with the response into the same short-lived memory trace, or event file (see Hommel, 2004), leading to bindings between stimulus and response features. The repetition of a stored target or distractor feature can lead to the retrieval of the entire episode, including the response (so-called "binding effects"). Binding effects due to distractor repetition are stronger for grouped than for nongrouped target and distractor stimulus configurations. Either of the two mechanisms that lead to the observed binding effects might be modulated here: grouping may influence either stimulus–response integration or stimulus–response retrieval. In the present study we investigated the influence of grouping on both mechanisms independently. In two experiments, target and distractor letters were grouped (or nongrouped) via color (dis)similarity, separately during integration and retrieval. Grouping by color similarity affected integration and retrieval mechanisms independently and in different ways: color dissimilarity enhanced distractor-based retrieval, whereas color similarity enhanced distractor integration. We concluded that stimulus grouping is relevant for binding effects, but that the mechanisms contributing to binding effects should be carefully separated.

4.
Understanding how the human brain integrates features of perceived events calls for the examination of binding processes within and across different modalities and domains. Recent studies of feature-repetition effects have demonstrated interactions between shape, color, and location in the visual modality, and between pitch, loudness, and location in the auditory modality: repeating one feature is beneficial if other features are also repeated, but detrimental if not. These partial-repetition costs suggest that co-occurring features are spontaneously bound into temporary event files. Here, we investigated whether these observations extend to features from different sensory modalities, combining visual and auditory features in Experiment 1 and auditory and tactile features in Experiment 2. The same types of interactions were obtained as for unimodal feature combinations, including interactions between stimulus and response features. However, the size of the interactions varied with the particular combination of features, suggesting that the salience of features and the temporal overlap between feature-code activations play a mediating role.
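The partial-repetition cost referred to above reduces to an interaction contrast over the four repetition/alternation cells of two features. A minimal sketch of that contrast, using hypothetical RT means rather than data from the study:

```python
# Partial-repetition cost for two features A (e.g., pitch) and B (e.g.,
# location), coded as repeated vs. switched from trial n-1 to trial n.
# The RT means below are invented, for illustration only.
rt = {
    ("A-rep", "B-rep"): 420,  # both features repeat
    ("A-rep", "B-sw"):  465,  # A repeats, B switches (partial repetition)
    ("A-sw",  "B-rep"): 460,  # A switches, B repeats (partial repetition)
    ("A-sw",  "B-sw"):  430,  # both features switch
}

# Interaction contrast: cost of partial repetitions relative to complete
# repetitions and complete alternations (halved to express it per cell).
cost = (rt[("A-rep", "B-sw")] + rt[("A-sw", "B-rep")]
        - rt[("A-rep", "B-rep")] - rt[("A-sw", "B-sw")]) / 2
print(f"partial-repetition cost: {cost} ms")  # 37.5 ms
```

A positive cost indicates binding: repeating only part of the previous feature conjunction is worse than repeating or changing it wholesale.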

5.
Individuals with autism spectrum disorders (ASDs) often present atypical auditory perception. Previous work has reported both enhanced low-level pitch discrimination and superior abilities to detect local pitch structure in higher-level melodic tasks in ASD. However, it is unclear how low and high levels of auditory perception are related in ASD or typical development (TD), or how this relationship might change across development and stimulus presentation rates. To address these questions, in the present study, children with ASD and TD were tested on a low-level pitch direction discrimination task and a high-level melodic global-local task. The groups performed similarly on both auditory tasks. Moreover, individual differences in low-level pitch direction ability predicted performance on the higher-level global-local task, with a stronger relationship in ASD. Age did not affect the relationship between low-level and high-level pitch performance in either ASD or TD. However, age had a more positive effect on high-level global-local task performance in TD than in ASD. Finally, there was no effect of stimulus rate on the relationship between low-level and high-level pitch performance in either group. These findings provide a better understanding of how perception is associated across levels of processing in ASD versus TD. This work helps to characterize individual differences in auditory perception and to refine ASD phenotypes.

6.
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds, most notably speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or a combination of more than one dimension (information integration), and it provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information-integration strategy, while in Experiment 2, optimal categorization was possible only with an information-integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information-integration strategy if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions than for distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
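The two strategy classes that decision bound theory distinguishes can be sketched in a few lines; the stimulus distributions, the attended dimension, and the linear bound below are all hypothetical, chosen only to contrast a unidimensional rule with predecisional integration of two dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D auditory stimuli: dim 0 ~ pitch-related, dim 1 ~
# timbre-related. Category A clusters low on both dimensions, B high.
a = rng.normal([-1.0, -1.0], 0.8, size=(100, 2))
b = rng.normal([+1.0, +1.0], 0.8, size=(100, 2))
stimuli = np.vstack([a, b])
labels = np.array([0] * 100 + [1] * 100)

def rule_based(x, criterion=0.0):
    """Unidimensional rule: attend only to the pitch-like dimension."""
    return (x[:, 0] > criterion).astype(int)

def information_integration(x, w=np.array([1.0, 1.0]), c=0.0):
    """Linear decision bound combining both dimensions predecisionally."""
    return (x @ w > c).astype(int)

for name, strategy in [("rule-based", rule_based),
                       ("information-integration", information_integration)]:
    print(f"{name}: {np.mean(strategy(stimuli) == labels):.2f} accuracy")
```

With this category structure both strategies perform above chance, but only the diagonal information-integration bound uses all of the available evidence, which is the kind of advantage the Experiment 2 distributions were built to create.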

7.
Several lines of evidence suggest that during the processing of events, the features of these events become connected via episodic bindings. Such bindings have been demonstrated for a large number of visual and auditory stimulus features, like color and orientation, or pitch and loudness. Importantly, most visual and auditory events also involve temporal features, like onset time or duration. So far, however, whether temporal stimulus features are also bound into event representations has never been tested directly. The aim of the present study was to investigate possible binding between stimulus duration and other features of auditory events. In Experiment 1, participants responded with one of two keys to a low- or high-pitched sine tone. Critically, the tones were presented with two different durations. Sequential analysis of RT data indicated binding of stimulus duration into the event representation: on pitch repetitions, performance was better when both pitch and duration repeated than when only pitch repeated and duration switched. This finding was replicated with loudness as the relevant stimulus feature in Experiment 2. In sum, the results demonstrate that temporal features are bound into auditory event representations. This finding is an important advancement for binding theory in general and raises several new questions for future research.
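The sequential analysis described here codes each trial n against trial n-1 on the relevant feature (pitch) and the irrelevant duration, then compares mean RTs per transition cell. A minimal sketch over an invented trial log:

```python
import numpy as np

# Invented trial log (not the study's data): relevant pitch, irrelevant
# tone duration (ms), and response time (ms) on each trial.
pitches   = ["low", "low", "low", "high", "low", "low", "low", "high", "low"]
durations = [200,   200,   800,   800,    200,   200,   800,   800,    200]
rts       = [450,   420,   465,   455,    445,   415,   470,   450,    440]

# Code every trial n against trial n-1 and collect RTs per transition cell.
cells = {}
for n in range(1, len(rts)):
    pitch_code = "pitch rep" if pitches[n] == pitches[n - 1] else "pitch sw"
    dur_code = "dur rep" if durations[n] == durations[n - 1] else "dur sw"
    cells.setdefault((pitch_code, dur_code), []).append(rts[n])

for cell, values in sorted(cells.items()):
    print(cell, f"mean RT = {np.mean(values):.1f} ms")
# Duration binding shows up as faster responses when pitch AND duration
# repeat than when pitch repeats but duration switches.
```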

8.
Three experiments examine multimodal integration of tone pitch (high/low), facial expression stimuli (happy/angry), and responses (happy/angry) in a compatibility paradigm. When the participants' task is to imitate facial expressions (Experiment 1), smiles are facilitated by high tones whereas frowns are facilitated by low tones. Experiments 2 and 3 further analyse this effect and show that there is integration both between the tone stimulus and the facial stimulus and between the tone stimulus and the facial response. The results suggest that pitch height is associated with emotion. An interpretation in terms of an embodied cognition approach that emphasizes an interweaving of perception and action is discussed.

9.
Five experiments investigated the spontaneous integration of stimulus and response features. Participants performed simple, prepared responses (R1) to the mere presence of Go signals (S1) before carrying out another, freely chosen response (R2) to another stimulus (S2); the main question was whether the likelihood of repeating a response depends on whether the stimulus, or some of its features, is repeated. Indeed, participants were more likely to repeat the previous response if stimulus form or color was repeated than if it was alternated. The same was true for stimulus location, but only if location was made task-relevant, whether by defining the response set in terms of location, by requiring a report of S2's location, or by having S1 be selected against a distractor. These findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus feature to one response feature. Upon reactivation of one member of such a binary link, activation spreads to the other, thereby increasing the likelihood of repeating a response if one or more stimulus features are repeated. These findings support the idea that both perceptual events and action plans are cognitively represented in terms of their features, and that feature-integration processes cross the borders between perception and action.

10.
In this article, we extend Garner's speeded classification procedure to investigate the processes underlying the interaction of the auditory dimensions pitch, loudness, and timbre. In the experiments reported here, subjects classified attributes on these three auditory dimensions. Our extended procedure, called multiclass, is based conceptually on our model of how such dimensions interact; the model explains the perception of attributes from an attended dimension through the action of contextual constraints created by variation along an unattended dimension. Two forms of context are present simultaneously in each multiclass task: intraclass context, variation along the unattended dimension that interferes with the classification of attributes, and redundant context, variation along the unattended dimension that enhances classification. We find that such dual-context situations reliably distinguish two kinds of interacting dimensions. Subjects classifying HARD dimensions, here pitch and timbre, resist the ill effects of intraclass context and reap gains from redundant context. Subjects classifying SOFT dimensions, here loudness, show interference because the attributes are veiled perceptually in dual context. These findings, we argue, demonstrate the power of the multiclass procedure and fit well with our view that dimensional interaction entails processing both at the level of the stimulus whole and at the level of stimulus attributes.

11.
The attentional requirements for the spontaneous integration of stimulus and response features were analyzed. In line with previous findings, carrying out a prepared response to the onset of a stimulus created bindings between the response and the features of that stimulus, thereby impairing subsequent performance on mismatching stimulus-response combinations. The findings demonstrate that a stimulus becomes bound to a response even if its presence is neither necessary nor useful for the task at hand, even if it follows rather than precedes the response in time, even if it competes with a task-relevant stimulus, and even if the response is suppressed, but only if the stimulus appears close to the response's eventual execution or abandonment. A multiple-integration model is suggested, which assumes that the integration of stimulus features in perception and of response features in action planning are local processes that are independent of stimulus-response integration, which presumably is triggered by the success of the perception-action episode.

12.
Spatial representation of pitch plays a central role in auditory processing. However, it is unknown whether impaired auditory processing is associated with impaired pitch–space mapping. Experiment 1 examined the spatial representation of pitch in individuals with congenital amusia using a stimulus–response compatibility (SRC) task. For both amusic and non-amusic participants, pitch classification was faster and more accurate when correct responses involved a physical action that was spatially congruent with the pitch height of the stimulus than when it was incongruent. However, this spatial representation of pitch was not as stable in amusic individuals, as revealed by their slower response times compared with control individuals. One explanation is that the SRC effect in amusics reflects a linguistic association, requiring additional time to link pitch height and spatial location. To test this possibility, Experiment 2 employed a colour-classification task. Participants judged colour while ignoring a concurrent pitch by pressing one of two response keys positioned vertically so as to be congruent or incongruent with the pitch. The association between pitch and space was found in both groups, with comparable response times, suggesting that amusic individuals are only slower to respond in tasks involving explicit judgments of pitch.
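The SRC effect itself is simply the RT difference between spatially congruent and incongruent pitch-response pairings. A toy computation with invented trials, where high pitch is treated as congruent with the upper key:

```python
import numpy as np

# Invented SRC trials: pitch height, vertical response position, RT (ms).
trials = [
    ("high", "up", 410), ("high", "down", 470), ("low", "down", 420),
    ("low", "up", 480), ("high", "up", 400), ("low", "down", 430),
]

congruent = [rt for pitch, resp, rt in trials
             if (pitch == "high") == (resp == "up")]
incongruent = [rt for pitch, resp, rt in trials
               if (pitch == "high") != (resp == "up")]

src_effect = np.mean(incongruent) - np.mean(congruent)
print(f"SRC effect: {src_effect:.0f} ms")  # 60 ms with these toy numbers
```

On this measure, the finding above amounts to a positive effect in both groups, with the amusic group differing mainly in overall response speed.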

13.
Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: the larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB … sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.
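The PCR mentioned here is commonly estimated as the fraction of an induced phase perturbation that is corrected by the next tap. A minimal sketch with invented asynchronies; the exact formula and sign conventions vary across the tapping literature, so treat this as one plausible operationalization:

```python
# Invented asynchronies (tap time minus tone time, ms) around a +50 ms
# phase shift of the pacing sequence.
baseline = -20.0         # pre-shift mean asynchrony
asyn_perturbed = -70.0   # asynchrony on the phase-shifted event
asyn_next = -45.0        # asynchrony on the following tap

# Fraction of the shift-induced asynchrony corrected by the next tap.
pcr = (asyn_next - asyn_perturbed) / (baseline - asyn_perturbed)
print(f"PCR = {pcr:.2f}")  # 0.50 -> half of the perturbation corrected
```

A reduced PCR in the presence of B tones, as reported above, would show up as a smaller fraction on this measure.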

14.
An influential theoretical perspective describes an implicit category-learning system that associates regions of perceptual space with response outputs by integrating information preattentionally and predecisionally across multiple stimulus dimensions. In this study, we tested whether this kind of implicit, information-integration category learning is possible across stimulus dimensions lying in different sensory modalities. Humans learned categories composed of conjoint visual–auditory category exemplars comprising a visual component (rectangles varying in the density of lit pixels) and an auditory component (in Experiment 1, auditory sequences varying in duration; in Experiment 2, pure tones varying in pitch). The categories had either a one-dimensional, rule-based solution or a two-dimensional, information-integration solution. Humans could solve the information-integration category tasks by integrating information across the two stimulus modalities. The results demonstrate an important cross-modal form of sensory integration in the service of category learning, and they advance the field's knowledge about the sensory organization of systems for categorization.

15.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous-versus-successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions, using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when the stimuli originated from the same spatial position than when they came from different positions, demonstrating that the perception of multisensory simultaneity depends on the relative spatial position from which stimuli are presented.
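With the method of constant stimuli, the proportion of "simultaneous" responses at each stimulus onset asynchrony (SOA) is typically summarized by fitting a bell-shaped curve whose peak location gives the point of subjective simultaneity (PSS). A sketch with invented data; the Gaussian form and parameter values are assumptions, not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented data: SOA in ms (negative = auditory stimulus first) and the
# proportion of "simultaneous" responses at each SOA.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.05, 0.25, 0.70, 0.95, 0.80, 0.35, 0.10])

def gaussian(soa, peak, pss, width):
    """Bell-shaped simultaneity curve; pss is its center."""
    return peak * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

(peak, pss, width), _ = curve_fit(gaussian, soas, p_simultaneous,
                                  p0=[1.0, 0.0, 100.0])
print(f"PSS = {pss:.0f} ms, width (SD) = {width:.0f} ms")
```

A spatial effect like the one reported above would appear as a higher or wider curve when the auditory and visual stimuli share a position.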

16.
Early holistic models of perception presume that stimuli composed of interacting dimensions can be experienced initially as undifferentiated. This view, formalized through recourse to a Euclidean geometry of perceptual space, predicts that the orientation of the axes used to create stimulus sets is unimportant to performance in speeded classification. We tested this idea using the interacting vibrotactile dimensions of pitch and loudness. Despite perceivers' relatively limited experience with these dimensions, we showed that the orientation corresponding to pitch and loudness is unique in vibrotactile perceptual space; subjects classified stimuli more efficiently at this orientation than at other orientations. Certain holistic models also claim that when stimulus differences are small, perceivers can recognize change without distinguishing the kind of change. We tested this idea using a signal detection analysis of unspeeded same–different decisions. We found that subjects' ability to notice the kind of change equaled their ability to notice the change alone. In view of these results, which indicate that pitch and loudness are primary in vibrotactile perception, we detail a new conception of dimensional interaction.
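The signal detection analysis of same–different decisions compares "different" responses to physically different pairs (hits) against "different" responses to identical pairs (false alarms). A minimal sketch using the simple yes-no z-difference; the counts are invented, and the full same–different model involves additional assumptions:

```python
from scipy.stats import norm

# Invented counts: hits are "different" responses to different pairs,
# false alarms are "different" responses to same pairs.
hits, n_different = 70, 100
false_alarms, n_same = 20, 100

# Simplest sensitivity index: d' as the z-transformed hit rate minus the
# z-transformed false-alarm rate.
d_prime = norm.ppf(hits / n_different) - norm.ppf(false_alarms / n_same)
print(f"d' = {d_prime:.2f}")  # about 1.37 with these counts
```

Comparing such sensitivity indices for "a change occurred" versus "which kind of change occurred" is one way to cash out the claim that the two abilities were equal.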

17.
Xiao M, Wong M, Umali M, Pomplun M. Perception, 2007, 36(9): 1391-1395
Perceptual integration of audio-visual stimuli is fundamental to our everyday conscious experience. Eye-movement analysis may be a suitable tool for studying such integration, since eye movements respond to auditory as well as visual input. Previous studies have shown that additional auditory cues in visual-search tasks can guide eye movements more efficiently and reduce their latency. However, these auditory cues were task-relevant, since they indicated the target position and onset time; the observed effects may therefore have been due to subjects using the cues as additional information to maximize their performance, without perceptually integrating them with the visual displays. Here, we combine a visual-tracking task with a continuous, task-irrelevant sound from a stationary source to demonstrate that audio-visual perceptual integration affects low-level oculomotor mechanisms. Auditory stimuli of constant, increasing, or decreasing pitch were presented. All sound categories induced more smooth-pursuit eye movement than silence, with the greatest effect occurring with stimuli of increasing pitch. A possible explanation is that the integration of the visual scene with continuous sound creates the perception of continuous visual motion; increasing pitch may amplify this effect through its common association with accelerating motion.

18.
Reaction time experiments are reported on the relation between the perception and production of phonetic features in speech. Subjects had to produce spoken consonant-vowel syllables rapidly in response to other consonant-vowel stimulus syllables. The stimulus syllables were presented auditorily in one condition and visually in another. Reaction time was measured as a function of the phonetic features shared by the consonants of the stimulus and response syllables. Responses to auditory stimulus syllables were faster when the response syllables started with consonants that had the same voicing feature as those of the stimulus syllables. A shared place-of-articulation feature did not affect the speed of responses to auditory stimulus syllables, even though the place feature was highly salient. For visual stimulus syllables, performance was independent of whether the consonants of the response syllables shared voicing, shared place of articulation, or shared no features. This pattern of results occurred where the syllables contained stop consonants and where they contained fricatives, and it held for natural auditory stimuli as well as artificially synthesized ones. The overall data reveal a close relation between the perception and production of voicing features in speech; no such relation appears to exist between perceiving and producing places of articulation. The experiments are relevant to the motor theory of speech perception and to other models of perceptual-motor interactions.

19.
Contrary to the predictions of established theory, Schutz and Lipscomb (2007) showed that visual information can influence the perceived duration of concurrent sounds. In the present study, we deconstruct the visual component of their illusion, showing that (1) the cross-modal influence depends on visible cues signaling an impact event (namely, a sudden change of direction concurrent with tone onset) and (2) the illusion is controlled primarily by the duration of post-impact motion. Other aspects of the post-impact motion (distance traveled, velocity, acceleration, and the rate of change of acceleration, i.e., jerk) play a minor role, if any. Together, these results demonstrate that visual event duration can influence the perception of auditory event duration, but only when stimulus cues are sufficient to give rise to the perception of a causal cross-modal relationship. This refined understanding of the illusion's visual aspects helps explain why it contrasts so markedly with previous research on cross-modal integration, which demonstrated that vision does not appreciably influence auditory judgments of event duration (Walker & Scott, 1981).
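The motion descriptors listed above (velocity, acceleration, jerk) are successive time derivatives of the moving object's position; for a sampled trajectory they are typically approximated by finite differences. A sketch over an invented trajectory:

```python
import numpy as np

# Invented vertical position of a post-impact motion (arbitrary units),
# sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 0.5, dt)
position = 0.5 * np.exp(-4.0 * t) * np.cos(20.0 * t)  # made-up damped bounce

velocity = np.gradient(position, dt)       # first derivative of position
acceleration = np.gradient(velocity, dt)   # second derivative
jerk = np.gradient(acceleration, dt)       # third derivative: the rate of
                                           # change of acceleration
print(f"peak |jerk| = {np.max(np.abs(jerk)):.1f} units/s^3")
```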
