Similar Articles
20 similar articles found (search time: 15 ms)
1.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

2.
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory–visual interactions, which rapidly increase the denoted target object’s salience. This would apply, in particular, to complex visual scenes.
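The sensitivity measure d' used in abstract 2's Experiment 3 can be made concrete. The sketch below is an illustration, not the authors' analysis code, and the hit and false-alarm rates are hypothetical:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: congruent primes raise the hit rate at a fixed
# false-alarm rate, so d' is higher for congruent than incongruent priming.
d_congruent = d_prime(0.80, 0.20)    # ≈ 1.68
d_incongruent = d_prime(0.65, 0.20)  # ≈ 1.23
```

A congruency effect on d' (rather than on response times) is what rules out response priming, since d' reflects detectability rather than response speed.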

3.
Recent investigations of loudness change within stimuli have identified differences as a function of direction of change and power range (e.g., Canévet, Acustica, 62, 2136–2142, 1986; Neuhoff, Nature, 395, 123–124, 1998), with claims of differences between dynamic and static stimuli. Experiment 1 provides the needed direct empirical evaluation of loudness change across static, dynamic, and hybrid stimuli. Consistent with recent findings for dynamic stimuli, quantitative and qualitative differences in the pattern of loudness change were found as a function of power change direction. With identical patterns of loudness change, only quantitative differences were found across stimulus type. In Experiment 2, points of subjective loudness equality (PSEs) provided additional information about loudness judgments for the static and dynamic stimuli. Because the quantitative differences across stimulus type exceed the magnitude that could be expected based upon temporal integration by the auditory system, other factors need to be, and are, considered.

4.
The present study examined if and how the direction of planned hand movements affects the perceived direction of visual stimuli. In three experiments participants prepared hand movements that deviated regarding direction (“Experiment 1” and “2”) or distance relative to a visual target position (“Experiment 3”). Before actual execution of the movement, the direction of the visual stimulus had to be estimated by means of a method of adjustment. The perception of stimulus direction was biased away from planned movement direction, such that with leftward movements stimuli appeared somewhat more rightward than with rightward movements. Control conditions revealed that this effect was neither a mere response bias, nor a result of processing or memorizing movement cues. Also, shifting the focus of attention toward a cued location in space was not sufficient to induce the perceptual bias observed under conditions of movement preparation (“Experiment 4”). These results confirm that characteristics of planned actions bias visual perception, with the direction of bias (contrast or assimilation) possibly depending on the type of the representations (categorical or metric) involved.

5.
A major issue in visual scene recognition involves the extraction of recurring chunks from a sequence of complex scenes. Previous studies have suggested that this kind of learning is accomplished according to Bayesian principles that constrain the types of extracted chunks. Here we show that perceptual grouping cues are also incorporated in this Bayesian model, providing additional evidence for the possible span of chunks. Experiment 1 replicates previous results showing that observers can learn three-element chunks without learning smaller, two-element chunks embedded within them. Experiment 2 shows that the very same embedded chunks are learned if they are grouped by perceptual cues, suggesting that perceptual grouping cues play an important role in chunk extraction from complex scenes.

6.
The cognitive system adapts to disturbances caused by task-irrelevant information. For example, interference due to irrelevant spatial stimulation (e.g., the spatial Simon effect) typically diminishes right after a spatially incongruent event. These adaptation effects reflect processes that help to overcome the impact of task-irrelevant information. Interference with (or interruption of) task processing can also result from valent (i.e., positive or negative) stimuli, such as in the "affective Simon" task. In the present study, we tested whether the resolution of valence-based task disturbances generalizes to the resolution of other cognitive (spatial) types of interference, and vice versa. Experiments 1 and 2 explored the interplay of adaptation effects triggered by spatial and affective interference. Incongruent spatial information modified the spatial Simon effect but not affective interference effects, whereas incongruent affective information modified affective interference effects to some extent, but not spatial Simon effects. In Experiment 3, we investigated the interplay of adaptation effects triggered by spatial interference and by the interruption of task processing from valent information that did not overlap with the main task (the "emotional Stroop" effect). Again we observed domain-specific adaptation for the spatial Simon effect but found no evidence for cross-domain modulations. We assume that the processes used to resolve task disturbance from irrelevant affective and spatial information operate in largely independent manners.

7.
Similarities have been observed in the localization of the final position of moving visual and moving auditory stimuli: Perceived endpoints that are judged to be farther in the direction of motion in both modalities likely reflect extrapolation of the trajectory, mediated by predictive mechanisms at higher cognitive levels. However, actual comparisons of the magnitudes of displacement between visual tasks and auditory tasks using the same experimental setup are rare. As such, the purpose of the present free-field study was to investigate the influences of the spatial location of motion offset, stimulus velocity, and motion direction on the localization of the final positions of moving auditory stimuli (Experiments 1 and 2) and moving visual stimuli (Experiment 3). To assess whether auditory performance is affected by dynamically changing binaural cues that are used for the localization of moving auditory stimuli (interaural time differences for low-frequency sounds and interaural intensity differences for high-frequency sounds), two distinct noise bands were employed in Experiments 1 and 2. In all three experiments, less precise encoding of spatial coordinates in paralateral space resulted in larger forward displacements, but this effect was drowned out by the underestimation of target eccentricity in the extreme periphery. Furthermore, our results revealed clear differences between visual and auditory tasks. Displacements in the visual task were dependent on velocity and the spatial location of the final position, but an additional influence of motion direction was observed in the auditory tasks. Together, these findings indicate that the modality-specific processing of motion parameters affects the extrapolation of the trajectory.

8.
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and “checkerboards” in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target to distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches for targets where target items contained fewer dots than distractor items were harder. Here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance. Efficient search now required ratios of above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli.
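The ratio cutoffs reported in abstract 8 can be restated as a small predicate. This is only an illustrative summary of the reported thresholds; the function name and the density_cues flag are my own:

```python
def search_is_efficient(target_dots: int, distractor_dots: int,
                        density_cues: bool = True) -> bool:
    """Predict efficient visual search from the target:distractor dot ratio,
    using the approximate cutoffs reported above: over 1.5:1 or under 1:2 when
    density and total dot area covary with number (Exps. 1 and 2), but over
    3:1 or under 1:3 once those cues are removed (Exp. 3)."""
    ratio = target_dots / distractor_dots
    if density_cues:
        return ratio > 1.5 or ratio < 0.5
    return ratio > 3.0 or ratio < 1.0 / 3.0
```

For example, a 2:1 ratio supports efficient search when density and area cues are available, but not when number alone must guide attention.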

9.
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds—namely, speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy, if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions, as compared with distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
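The two strategy classes that decision bound theory distinguishes in abstract 9 can be sketched as classifiers over a two-dimensional stimulus space. The dimension names and unit-square coordinates below are illustrative assumptions, not the study's actual stimulus values:

```python
def rule_based(pitch: float, timbre: float, criterion: float = 0.5) -> int:
    """Unidimensional rule: the category depends on pitch alone;
    the timbre dimension is ignored entirely."""
    return int(pitch > criterion)

def information_integration(pitch: float, timbre: float) -> int:
    """Information integration: the category depends on a combination of
    both dimensions (here, an equal-weight linear decision bound)."""
    return int(pitch + timbre > 1.0)

# The strategies disagree on stimuli such as (pitch=0.7, timbre=0.1):
# the rule-based classifier assigns category 1, the integration
# classifier assigns category 0.
```

Fitting both kinds of bound to a listener's responses and comparing model fit is how decision bound analyses infer which strategy was used.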

10.
In three experiments, we considered the relative contribution of frequency change (Δf) and time change (Δt) to perceived velocity (Δf/Δt) for sounds that moved either continuously in frequency space (Experiment 1) or in discrete steps (Experiments 2 and 3). In all the experiments, participants estimated “how quickly stimuli changed in pitch” on a scale ranging from 0 (not changing at all) to 100 (changing very quickly). Objective frequency velocity was specified in terms of semitones per second (ST/s), with ascending and descending stimuli presented on each trial at one of seven velocities (2, 4, 6, 8, 10, 12, and 14 ST/s). Separate contributions of frequency change (Δf) and time change (Δt) to perceived velocity were assessed by holding total Δt constant and varying Δf or vice versa. For tone glides that moved continuously in frequency space, both Δf and Δt cues contributed approximately equally to perceived velocity. For tone sequences, in contrast, perceived velocity was based almost entirely on Δt, with surprisingly little contribution from Δf. Experiment 3 considered separate judgments about Δf and Δt in order to rule out the possibility that the results of Experiment 2 were due to the inability to judge frequency change in tone sequences.
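The velocity unit in abstract 10 is straightforward to compute: one octave spans 12 semitones, so objective velocity is 12·log2(f_end/f_start) divided by the duration. A minimal sketch (the function names are mine):

```python
import math

def semitones(f_start_hz: float, f_end_hz: float) -> float:
    """Frequency change in semitones; 12 semitones span one octave."""
    return 12.0 * math.log2(f_end_hz / f_start_hz)

def velocity_st_per_s(f_start_hz: float, f_end_hz: float,
                      duration_s: float) -> float:
    """Objective frequency velocity (Δf/Δt) in semitones per second."""
    return semitones(f_start_hz, f_end_hz) / duration_s

# A glide from 440 Hz to 880 Hz (one octave) over 2 s moves at 6 ST/s,
# within the 2-14 ST/s range used in the experiments.
```

Holding Δt constant while varying Δf (or vice versa), as the experiments did, simply means fixing duration_s and varying the semitone span, or the reverse.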

11.
A coherent discourse exhibits structure: its subunits are related to one another in various ways, and subunits that contribute to the same discourse purpose are joined to create a larger unit so as to produce an effect on the reader. To date, this crucial aspect of discourse has been largely neglected in the psycholinguistic literature. In two experiments, we examined whether semantic integration in discourse context was influenced by the difference of discourse structure. Readers read discourses in which the last sentence was locally congruent but either semantically congruent or incongruent when interpreted with the preceding sentence. Furthermore, the last sentence was either in the same discourse unit or not in the same discourse unit as the preceding sentence, depending on whether they shared the same discourse purpose. Results from self-paced reading (Experiment 1) and eye tracking (Experiment 2) showed that discourse-incongruous words were read longer than discourse-congruous words only when the critical sentence and the preceding sentence were in the same discourse unit, but not when they belonged to different discourse units. These results establish discourse structure as a new factor in semantic integration and suggest that discourse effects depend both on the content of what is being said and on the way that the contents are organized.

12.
Attention operates perceptually on items in the environment, and internally on objects in visuospatial working memory. In the present study, we investigated whether spatial and temporal constraints affecting endogenous perceptual attention extend to internal attention. A retro-cue paradigm in which a cue is presented beyond the range of iconic memory and after stimulus encoding was used to manipulate shifts of internal attention. Participants' memories were tested for colored circles (Experiments 1, 2, 3a, 4) or for novel shapes (Experiment 3b) and their locations within an array. In these experiments, the time to shift internal attention (Experiments 1 and 3) and the eccentricity of encoded objects (Experiments 2–4) were manipulated. Our data showed that, unlike endogenous perceptual attention, internal shifts of attention are not modulated by stimulus eccentricity. Across several timing parameters and stimuli, we found that shifts of internal attention require a minimum quantal amount of time regardless of the object eccentricity at encoding. Our findings are consistent with the view that internal attention operates on objects whose spatial information is represented in relative terms. Although endogenous perceptual attention abides by the laws of space and time, internal attention can shift across spatial representations without regard for physical distance.

13.
Recognition without identification is the finding that, among recognition test items that go unidentified (as when a word is unidentified from a fragment), participants can discriminate those that were studied from those that were unstudied. In the present study, we extended this phenomenon to the more life-like situation of discriminating known from novel stimuli. Pictures of famous and nonfamous faces (Exp. 1), famous and nonfamous scenes (Exp. 2), and threatening and nonthreatening images (Exp. 3) were filtered in order to impede identification. As in list-learning recognition-without-identification paradigms, participants attempted to identify each image (e.g., whose face it was, what scene it was, or what was in the picture) and rated how familiar the image seemed on a scale of 0 (very unfamiliar) to 10 (very familiar). Among the unidentified stimuli, higher familiarity ratings were given to famous than to nonfamous faces (Exp. 1) and scenes (Exp. 2), and to threatening than to nonthreatening living/animate (but not to nonliving/nonanimate) images (Exp. 3). These findings suggest that even when a stimulus is too occluded to allow for conscious identification, enough information can be processed to allow a sense of familiarity or novelty with it, which appears also to be related to the sense of whether or not a living creature is a threat. That the sense of familiarity for unidentified stimuli may be related to threat detection for living or animate things suggests that it may be an adaptive aspect of human memory.

14.
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime–target pairs were phonologically onset related (e.g., pijl–pijn, “arrow”–“pain”), were from the same semantic category (e.g., pijl–zwaard, “arrow”–“sword”), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

15.
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces would affect the perception of a target face's race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how the face inversion effect would interact with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context would affect the perception of the target face by manipulating flanker faces' shape and pigmentation, as well as their orientation. The results showed that flanker faces' shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces' shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation.

16.
The functionalist memory perspective predicts that information of adaptive value may trigger specific processing modes. It was recently demonstrated that women’s memory is sensitive to cues of male sexual dimorphism (i.e., masculinity) that convey information of adaptive value for mate choice because they signal health and genetic quality, as well as personality traits important in relationship contexts. Here, we show that individual differences in women’s mating strategies predict the effect of facial masculinity cues upon memory, strengthening the case for functional design within memory. Using the revised socio-sexual orientation inventory, Experiment 1 demonstrates that women pursuing a short-term, uncommitted mating strategy have enhanced source memory for men with exaggerated versus reduced masculine facial features, an effect that reverses in women who favor long-term committed relationships. The reversal in the direction of the effect indicates that it does not reflect the sex typicality of male faces per se. The same pattern occurred within women’s source memory for women’s faces, implying that the memory bias does not reflect the perceived attractiveness of faces per se. In Experiment 2, we reran the experiment using men’s faces to establish the reliability of the core finding and replicated Experiment 1’s results. Masculinity cues may therefore trigger a specific mode within women’s episodic memory. We discuss why this mode may be triggered by female faces and its possible role in mate choice. In so doing, we draw upon the encoding specificity principle and the idea that episodic memory limits the scope of stereotypical inferences about male behavior.

17.
Deadlines (DLs) and response signals (RSs) are two well-established techniques for investigating speed–accuracy trade-offs (SATs). Methodological differences imply, however, that corresponding data do not necessarily reflect equivalent processes. Specifically, the DL procedure grants knowledge about trial-specific time demands and requires responses before a prespecified period has elapsed. In contrast, RS intervals often vary unpredictably between trials, and responses must be given after an explicit signal. Here, we investigated the effects of these differences in a flanker task. While all conditions yielded robust SAT functions, a right-shift of the curves pointed to reduced performance in RS conditions (Experiment 1, blocked; Experiments 2 and 3, randomized), as compared with DL conditions (Experiments 1–3, blocked), indicating that the detection of the RS imposes additional task demands. Moreover, the flanker effect vanished at long intervals in RS settings, suggesting that stimulus-related effects are absorbed in a slack when decisions are completed prior to the signal. In turn, effects of a flat (Experiment 2) versus a performance-contingent payment (Experiment 3) indicated that susceptibility to response strategies is higher in the DL than in the RS method. Finally, the RS procedure led to a broad range of slow responses and high accuracies, whereas DL conditions resulted in smaller variations in the upper data range (Experiments 1 and 2); with performance-contingent payment (Experiment 3), though, data ranges became similar. Together, the results uncover characteristic procedure-related effects and should help in selecting the appropriate technique.

18.
Many studies have shown that students learn better when they are given repeated exposures to different concepts in a way that is shuffled or interleaved, rather than blocked (e.g., Rohrer, Educational Psychology Review, 24, 355–367, 2012). The present study explored the effects of interleaving versus blocking on learning French pronunciations. Native English speakers learned several French words that conformed to specific pronunciation rules (e.g., the long “o” sound formed by the letter combination “eau,” as in bateau), and these rules were presented either in blocked fashion (bateau, carreau, fardeau . . . mouton, genou, verrou . . . tandis, verglas, admis) or in interleaved fashion (bateau, mouton, tandis, carreau, genou, verglas . . .). Blocking versus interleaving was manipulated within subjects (Experiments 1–3) or between subjects (Experiment 4), and participants’ pronunciation proficiency was later tested through multiple-choice tests (Experiments 1, 2, and 4) or a recall test (Experiment 3). In all experiments, blocking benefited the learning of pronunciations more than did interleaving, and this was true whether participants learned only 4 words per rule (Experiments 1–3) or 15 words per rule (Experiment 4). Theoretical implications of these findings are discussed.

19.
Harmful events often have a strong physical component: for instance, car accidents, plane crashes, fist fights, and military interventions. Yet there has been very little systematic work on the degree to which physical factors influence our moral judgments about harm. Since physical factors are related to our perception of causality, they should also influence our subsequent moral judgments. In three experiments, we tested this prediction, focusing in particular on the roles of motion and contact. In Experiment 1, we used abstract video stimuli and found that intervening on a harmful object was judged as being less bad than intervening directly on the victim, and that setting an object in motion was judged as being worse than redirecting an already moving object. Experiment 2 showed that participants were sensitive not only to the presence or absence of motion and contact, but also to the magnitudes and frequencies associated with these dimensions. Experiment 3 extended the findings from Experiment 1 to verbally presented moral dilemmas. These results suggest that domain-general processes play a larger role in moral cognition than is currently assumed.

20.
This study aimed to evaluate how well fluid reasoning can be predicted by a task that involves the monitoring of patterns of stimuli. This task is believed to measure the effectiveness of relational integration—the process that binds mental representations into more complex relational structures. In Experiments 1 and 2, the task was indeed validated as a proper measure of relational integration, since participants’ performance depended on the number of bindings that had to be constructed in the diverse conditions of the task, whereas neither the number of objects to be bound nor the amount of elicited interference could affect this performance. In Experiment 3, by means of structural equation modeling and variance partitioning, the relational integration task was found to be the strongest predictor of fluid reasoning, explaining variance above and beyond the amounts accounted for by four other kinds of well-established working memory tasks.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号