Similar Articles
20 similar articles found (search time: 62 ms)
1.
In three experiments, we investigated the spatial allocation of attention in response to central gaze cues. In particular, we examined whether the allocation of attentional resources is influenced by context information—that is, the presence or absence of reference objects (i.e., placeholders) in the periphery. On each trial, gaze cues were followed by a target stimulus to which participants had to respond by keypress or by performing a target-directed saccade. Targets were presented either in an empty visual field (Exps. 1 and 2) or in previewed location placeholders (Exp. 3) and appeared at one of either 18 (Exp. 1) or six (Exps. 2 and 3) possible positions. The spatial distribution of attention was determined by comparing response times as a function of the distance between the cued and target positions. Gaze cueing was not specific to the exact cued position, but instead generalized equally to all positions in the cued hemifield, when no context information was provided. However, gaze direction induced a facilitation effect specific to the exact gazed-at position when reference objects were presented. We concluded that the presence of possible objects in the periphery to which gaze cues could refer is a prerequisite for attention shifts being specific to the gazed-at position.

2.
Discriminating personally significant from nonsignificant sounds is of high behavioral relevance and appears to be performed effortlessly outside of the focus of attention. Although there is no doubt that we automatically monitor our auditory environment for unexpected, and hence potentially significant, events, the characteristics of detection mechanisms based on individual memory schemata have been far less explored. The experiments in the present study were designed to measure event-related potentials (ERPs) sensitive to the discrimination of personally significant and nonsignificant nonlinguistic sounds. Participants were presented with random sequences of acoustically variable sounds, one of which was associated with personal significance for each of the participants. In Experiment 1, each participant’s own mobile SMS ringtone served as his or her significant sound. In Experiment 2, a nonsignificant sound was instead trained to become personally significant to each participant over a period of one month. ERPs revealed differential processing of personally significant and nonsignificant sounds from about 200 ms after stimulus onset, even when the sounds were task-irrelevant. We propose the existence of a mechanism for the detection of significant sounds that does not rely on the detection of acoustic deviation. From a comparison of the results from our active- and passive-listening conditions, this discriminative process based on individual memory schemata seems to be obligatory, whereas the impact of individual memory schemata on further stages of auditory processing may require top-down guidance.

3.
Attention operates perceptually on items in the environment, and internally on objects in visuospatial working memory. In the present study, we investigated whether spatial and temporal constraints affecting endogenous perceptual attention extend to internal attention. A retro-cue paradigm in which a cue is presented beyond the range of iconic memory and after stimulus encoding was used to manipulate shifts of internal attention. Participants’ memories were tested for colored circles (Experiments 1, 2, 3a, 4) or for novel shapes (Experiment 3b) and their locations within an array. In these experiments, the time to shift internal attention (Experiments 1 and 3) and the eccentricity of encoded objects (Experiments 2–4) were manipulated. Our data showed that, unlike endogenous perceptual attention, internal shifts of attention are not modulated by stimulus eccentricity. Across several timing parameters and stimuli, we found that shifts of internal attention require a minimum quantal amount of time regardless of the object eccentricity at encoding. Our findings are consistent with the view that internal attention operates on objects whose spatial information is represented in relative terms. Although endogenous perceptual attention abides by the laws of space and time, internal attention can shift across spatial representations without regard for physical distance.

4.
In seven experiments, we explored the potential for strength-based, within-list criterion shifts in recognition memory. People studied a mix of target words, some presented four times (strong) and others studied once (weak). In Experiments 1, 2, 4A, and 4B, the test was organized into alternating blocks of 10, 20, or 40 trials. Each block contained lures intermixed with strong targets only or weak targets only. In strength-cued conditions, test probes appeared in a unique font color for strong and weak blocks. In the uncued conditions of Experiments 1 and 2, similar strength blocks were tested, but strength was not cued with font color. False alarms to lures were lower in blocks containing strong target words, as compared with lures in blocks containing weak targets, but only when strength was cued with font color. Providing test feedback in Experiment 2 did not alter these results. In Experiments 3A–3C, test items were presented in a random order (i.e., not blocked by strength). Of these three experiments, only one demonstrated a significant shift even though strength cues were provided. Overall, the criterion shift was larger and more reliable as block size increased, and the shift occurred only when strength was cued with font color. These results clarify the factors that affect participants’ willingness to change their response criterion within a test list.
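In signal detection terms, a criterion shift of the kind described above can be quantified from hit and false-alarm rates. The following sketch is illustrative only (the rates are hypothetical, not taken from the study); it uses the standard criterion formula c = −(z(H) + z(F))/2.

```python
from statistics import NormalDist

def criterion_and_dprime(hit_rate, fa_rate):
    """Compute signal-detection criterion c and sensitivity d'.

    c > 0 indicates a conservative criterion (fewer "old" responses);
    a strength-based criterion shift appears as a higher c in
    strong-target blocks than in weak-target blocks.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    c = -(zh + zf) / 2
    d_prime = zh - zf
    return c, d_prime

# Hypothetical rates: fewer false alarms in strong blocks implies
# a more conservative (higher) criterion there.
c_strong, _ = criterion_and_dprime(hit_rate=0.85, fa_rate=0.10)
c_weak, _ = criterion_and_dprime(hit_rate=0.70, fa_rate=0.25)
print(c_strong > c_weak)  # True: criterion shifted upward for strong blocks
```

This is the conventional way such shifts are measured; the abstract's finding corresponds to c differing between strength-cued block types.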

5.
Harmful events often have a strong physical component—for instance, car accidents, plane crashes, fist fights, and military interventions. Yet there has been very little systematic work on the degree to which physical factors influence our moral judgments about harm. Since physical factors are related to our perception of causality, they should also influence our subsequent moral judgments. In three experiments, we tested this prediction, focusing in particular on the roles of motion and contact. In Experiment 1, we used abstract video stimuli and found that intervening on a harmful object was judged as being less bad than intervening directly on the victim, and that setting an object in motion was judged as being worse than redirecting an already moving object. Experiment 2 showed that participants were sensitive not only to the presence or absence of motion and contact, but also to the magnitudes and frequencies associated with these dimensions. Experiment 3 extended the findings from Experiment 1 to verbally presented moral dilemmas. These results suggest that domain-general processes play a larger role in moral cognition than is currently assumed.

6.
For preference comparisons of paired successive musical excerpts, Koh (American Journal of Psychology, 80, 171–185, 1967) found time-order effects (TOEs) that correlated negatively with stimulus valence—the first (vs. the second) of two unpleasant (vs. two pleasant) excerpts tended to be preferred. We present three experiments designed to investigate whether valence-level-dependent order effects for aesthetic preference (a) can be accounted for using Hellström's (e.g., Journal of Experimental Psychology: Human Perception and Performance, 5, 460–477, 1979) sensation-weighting (SW) model, (b) can be generalized to successive and to simultaneous visual stimuli, and (c) vary, in accordance with the stimulus weighting, with interstimulus interval (ISI; for successive stimuli) or stimulus duration (for simultaneous stimuli). Participants compared paired successive jingles (Exp. 1), successive color patterns (Exp. 2), and simultaneous color patterns (Exp. 3), selecting the preferred stimulus. The results were well described by the SW model, which provided a better fit than did two extended versions of the Bradley–Terry–Luce model. Experiments 1 and 2 revealed higher weights for the second stimulus than for the first, and negatively valence-level-dependent TOEs. In Experiment 3, there was no laterality effect on the stimulus weighting and no valence-level-dependent space-order effects (SOEs). In terms of the SW model, the valence-level-dependent TOEs can be explained as a consequence of differential stimulus weighting in combination with stimulus valence varying from low to high, and the absence of valence-level-dependent SOEs as a consequence of the absence of differential weighting. For successive stimuli, there were no important effects of ISI on weightings and TOEs, and, for simultaneous stimuli, duration had only a small effect on the weighting.
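The core of the sensation-weighting account can be sketched numerically. In one common statement of Hellström's model, each stimulus enters the comparison as a weighted mix of its sensation magnitude and a reference level; unequal weights then produce a TOE whose sign flips with valence. The weights and magnitudes below are hypothetical, chosen only to illustrate the mechanism:

```python
def sw_difference(psi1, psi2, w1=0.4, w2=0.6, ref1=0.0, ref2=0.0):
    """Sensation-weighting sketch: subjective difference between two
    successive stimuli, each a weighted mix of its sensation magnitude
    (psi) and a reference level (ref). d > 0 favors the first stimulus,
    d < 0 the second."""
    return (w1 * psi1 + (1 - w1) * ref1) - (w2 * psi2 + (1 - w2) * ref2)

# With the second stimulus weighted more heavily (w2 > w1), two equally
# pleasant stimuli (positive valence) give d < 0: the second is preferred.
print(sw_difference(5.0, 5.0))    # -1.0
# Two equally unpleasant stimuli (negative valence) give d > 0: the first
# is preferred -- the negatively valence-level-dependent TOE.
print(sw_difference(-5.0, -5.0))  # 1.0
```

With equal weights (w1 = w2), the difference for identical stimuli is zero regardless of valence, which mirrors the reported absence of valence-level-dependent SOEs when no differential weighting occurred.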

7.
When people view images, their saccades are predominantly horizontal and show a positively skewed distribution of amplitudes. How are these patterns affected by the information close to fixation and the features in the periphery? We recorded saccades while observers encoded a set of scenes with a gaze-contingent window at fixation: Features inside a rectangular (Experiment 1) or elliptical (Experiment 2) window were intact; peripheral background was masked completely or blurred. When the window was asymmetric, with more information preserved either horizontally or vertically, saccades tended to follow the information within the window, rather than exploring unseen regions, which runs counter to the idea that saccades function to maximize information gain on each fixation. Window shape also affected fixation and amplitude distributions, but horizontal windows had less of an impact. The findings suggest that saccades follow the features currently being processed and that normal vision samples these features from a horizontally elongated region.

8.
Cognitive and neural models have proposed the existence of a single inhibitory process that regulates behavior and depends on the right frontal operculum (rFO). The aim of this study was to make a contribution to the ongoing debate as to whether inhibition is a single process or is composed of multiple, independent processes. Here, within a single paradigm, we assessed the links between two inhibitory phenomena—namely, resistance to involuntary visual capture by abrupt onsets and resolving of spatial stimulus–response conflict. We did so by conducting three experiments, two involving healthy volunteers (Exps. 1 and 3), and one with the help of a well-documented patient, R.J., with selectively weakened inhibition following a lesion of the rFO. The results suggest that resistance to capture and stimulus–response conflict are independent, because (a) additive effects were found (Exps. 1 and 3), (b) capture did not correlate with compatibility effects (Exp. 1), (c) dual tasking affected the two phenomena differently (Exp. 3), and (d) a dissociation was found between the two in patient R.J. (Exp. 2). However, the results also show that these two phenomena may share some processing components, given that (a) both were affected in patient R.J., but to different degrees (Exp. 2), and (b) increasing the difficulty of dual tasking produced an increasingly negative correlation between capture and compatibility (Exp. 3), which suggests that when resources are withdrawn from the control of the former, they are used to control the latter.

9.
In a series of preferential-looking experiments, infants 5 to 6 months of age were tested for their responsiveness to crossed and uncrossed horizontal disparity. In Experiments 1 and 2, infants were presented with dynamic random dot stereograms displaying a square target defined by either a 0.5° crossed or a 0.5° uncrossed horizontal disparity and a square control target defined by a 0.5° vertical disparity. In Experiment 3, infants were presented with the crossed and the uncrossed horizontal disparity targets used in Experiments 1 and 2. According to the results, the participants looked more often at the crossed (Experiment 1), as well as the uncrossed (Experiment 2), horizontal disparity targets than at the vertical disparity target. These results suggest that the infants were sensitive to both crossed and uncrossed horizontal disparity information. Moreover, the participants exhibited a natural visual preference for the crossed over the uncrossed horizontal disparity (Experiment 3). Since prior research established natural looking and reaching preferences for the (apparently) nearer of two objects, this finding is consistent with the hypothesis that the infants were able to extract the depth relations specified by crossed (near) and uncrossed (far) horizontal disparity.

10.
Previous research has suggested that two color patches can be consolidated into visual short-term memory (VSTM) via an unlimited parallel process. Here we examined whether the same unlimited-capacity parallel process occurs for two oriented grating patches. Participants viewed two gratings that were presented briefly and masked. In blocks of trials, the gratings were presented either simultaneously or sequentially. In Experiments 1 and 2, the presentation of the stimuli was followed by a location cue that indicated the grating on which to base one’s response. In Experiment 1, participants responded whether the target grating was oriented clockwise or counterclockwise with respect to vertical. In Experiment 2, participants indicated whether the target grating was oriented along one of the cardinal directions (vertical or horizontal) or was obliquely oriented. Finally, in Experiment 3, the location cue was replaced with a third grating that appeared at fixation, and participants indicated whether either of the two test gratings matched this probe. Despite the fact that these responses required fairly coarse coding of the orientation information, across all methods of responding we found superior performance for sequential over simultaneous presentations. These findings suggest that the consolidation of oriented gratings into VSTM is severely limited in capacity and differs from the consolidation of color information.

11.
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds—namely, speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy, if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions, as compared with distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
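The distinction decision bound theory draws between the two strategies can be made concrete with a toy sketch. Everything here is hypothetical (dimension names, criterion, and boundary are arbitrary illustrations, not the study's stimuli): a rule-based observer places a boundary on one dimension alone, while an information-integration observer combines both dimensions before deciding.

```python
def rule_based(pitch, timbre, pitch_criterion=0.5):
    """Unidimensional rule: categorize on pitch alone; timbre is ignored."""
    return "A" if pitch < pitch_criterion else "B"

def information_integration(pitch, timbre):
    """Linear integration: combine both dimensions before deciding.
    The boundary pitch + timbre = 1 is an arbitrary illustration."""
    return "A" if pitch + timbre < 1.0 else "B"

# A stimulus the two strategies classify differently:
print(rule_based(0.4, 0.9))               # "A" -- timbre ignored
print(information_integration(0.4, 0.9))  # "B" -- both dimensions count
```

Fitting both kinds of boundary to a listener's responses and comparing fit is, in essence, how decision bound analyses diagnose which strategy was used.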

12.
Similarities have been observed in the localization of the final position of moving visual and moving auditory stimuli: Perceived endpoints that are judged to be farther in the direction of motion in both modalities likely reflect extrapolation of the trajectory, mediated by predictive mechanisms at higher cognitive levels. However, actual comparisons of the magnitudes of displacement between visual tasks and auditory tasks using the same experimental setup are rare. As such, the purpose of the present free-field study was to investigate the influences of the spatial location of motion offset, stimulus velocity, and motion direction on the localization of the final positions of moving auditory stimuli (Experiments 1 and 2) and moving visual stimuli (Experiment 3). To assess whether auditory performance is affected by dynamically changing binaural cues that are used for the localization of moving auditory stimuli (interaural time differences for low-frequency sounds and interaural intensity differences for high-frequency sounds), two distinct noise bands were employed in Experiments 1 and 2. In all three experiments, less precise encoding of spatial coordinates in paralateral space resulted in larger forward displacements, but this effect was drowned out by the underestimation of target eccentricity in the extreme periphery. Furthermore, our results revealed clear differences between visual and auditory tasks. Displacements in the visual task were dependent on velocity and the spatial location of the final position, but an additional influence of motion direction was observed in the auditory tasks. Together, these findings indicate that the modality-specific processing of motion parameters affects the extrapolation of the trajectory.

13.
Recognition without identification is the finding that, among recognition test items that go unidentified (as when a word is unidentified from a fragment), participants can discriminate those that were studied from those that were unstudied. In the present study, we extended this phenomenon to the more life-like situation of discriminating known from novel stimuli. Pictures of famous and nonfamous faces (Exp. 1), famous and nonfamous scenes (Exp. 2), and threatening and nonthreatening images (Exp. 3) were filtered in order to impede identification. As in list-learning recognition-without-identification paradigms, participants attempted to identify each image (e.g., whose face it was, what scene it was, or what was in the picture) and rated how familiar the image seemed on a scale of 0 (very unfamiliar) to 10 (very familiar). Among the unidentified stimuli, higher familiarity ratings were given to famous than to nonfamous faces (Exp. 1) and scenes (Exp. 2), and to threatening than to nonthreatening living/animate (but not to nonliving/nonanimate) images (Exp. 3). These findings suggest that even when a stimulus is too occluded to allow for conscious identification, enough information can be processed to allow a sense of familiarity or novelty with it, which appears also to be related to the sense of whether or not a living creature is a threat. That the sense of familiarity for unidentified stimuli may be related to threat detection for living or animate things suggests that it may be an adaptive aspect of human memory.

14.
Face perception is widely believed to involve integration of facial features into a holistic perceptual unit, but the mechanisms underlying this integration are relatively unknown. We examined whether perceptual grouping cues influence a classic marker of holistic face perception, the “composite-face effect.” Participants made same–different judgments about a cued part of sequentially presented chimeric faces, and holistic processing was indexed as the degree to which the task-irrelevant face halves impacted performance. Grouping was encouraged or discouraged by adjusting the backgrounds behind the face halves: Although the face halves were always aligned, their respective backgrounds could be misaligned and of different colors. Holistic processing of face, but not of nonface, stimuli was significantly reduced when the backgrounds were misaligned and of different colors, cues that discouraged grouping of the face halves into a cohesive unit (Exp. 1). This effect was sensitive to stimulus orientation at short (200 ms) but not at long (2,500 ms) encoding durations, consistent with the previously documented temporal properties of the holistic processing of upright and inverted faces (Exps. 2 and 3). These results suggest that grouping mechanisms, typically involved in the perception of objecthood more generally, might contribute in important ways to the holistic perception of faces.

15.
Although the benefits of spaced retrieval for long-term retention are well established, the majority of this work has involved spacing over relatively short intervals (on the order of seconds or minutes). In the present experiments, we evaluated the effectiveness of spaced retrieval across relatively short intervals (within a single session), as compared to longer intervals (between sessions spaced a day apart), for long-term retention (i.e., one day or one week). Across a series of seven experiments, participants (N = 536) learned paired associates to a criterion of 70 % accuracy and then received one test–feedback trial for each item. The test–feedback trial occurred within 10 min of reaching criterion (short lag) or one day later (long lag). Then, a final test occurred one day (Exps. 1–3) or one week (Exps. 4 and 5) after the test–feedback trial. Across the different materials and methods in Experiments 1–3, we found little benefit for the long-lag relative to the short-lag schedule in final recall performance—that is, no lag effect—but large effects on the retention of information from the test–feedback to the final test phase. The results from the experiments with the one-week retention interval (Exps. 4 and 5) indicated a benefit of the long-lag schedule on final recall performance (a lag effect), as well as on retention. This research shows that even when the benefits of lag are eliminated at a (relatively long) one-day retention interval, the lag effect reemerges after a one-week retention interval. The results are interpreted within an extension of the bifurcation model to the spacing effect.

16.
17.
Sunny and von Mühlenen (Psychonomic Bulletin & Review, 18, 1050–1056, 2011) showed that an onset of motion captured attention only when the motion was jerky (refreshed at 8 or 17 Hz), but not when it was smooth (33 or 100 Hz). However, it remained unclear why the onset of jerky motion captures attention. In the present study, we systematically tested the role of different aspects of jerky motion in capturing attention. Simple flicker without motion did not capture attention in the same way as jerky motion (Exp. 1). An abrupt displacement between 0.26° and 1.05° captured attention, irrespective of whether the stimulus subsequently continued to move smoothly (Exp. 2) or whether it remained stationary (Exps. 3 and 4). A displaced stimulus that was preceded briefly at the new location by a figure-8 placeholder did not capture attention (Exp. 5). These results are explained within a masking account, according to which abrupt onsets and abrupt displacements receive a processing advantage because they escape forward masking by the preceding figure-8 placeholders.

18.
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime–target pairs were phonologically onset related (e.g., pijl–pijn, arrow–pain), were from the same semantic category (e.g., pijl–zwaard, arrow–sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

19.
Proactive interference occurs when information from the past disrupts current processing and is a major source of confusion and errors in short-term memory (STM; Wickens, Born, & Allen, Journal of Verbal Learning and Verbal Behavior, 2:440–445, 1963). The present investigation examines potential boundary conditions for interference, testing the hypothesis that potential competitors must be similar along task-relevant dimensions to influence proactive interference effects. We manipulated both the type of task being completed (Experiments 1, 2, and 3) and dimensions of similarity irrelevant to the current task (Experiments 4 and 5) to determine how the recent presentation of a probe item would affect the speed with which participants could reject that item. Experiments 1, 2, and 3 contrasted STM judgments, which require temporal information, with semantic and perceptual judgments, for which temporal information is irrelevant. In Experiments 4 and 5, task-irrelevant information (perceptual similarity) was manipulated within the recent probes task. We found that interference from past items affected STM task performance but did not affect performance in semantic or perceptual judgment tasks. Conversely, similarity along a nominally irrelevant perceptual dimension did not affect the magnitude of interference in STM tasks. Results are consistent with the view that items in STM are represented by noisy codes consisting of multiple dimensions and that interference occurs when items are similar to each other and, thus, compete along the dimensions relevant to target selection.

20.
Can recognition memory be constrained “at the front end,” such that people are more likely to retrieve information about studying a recognition-test probe from a specified target source than they are to retrieve such information about a probe from a nontarget source? We adapted a procedure developed by Jacoby, Shimizu, Daniels, and Rhodes (Psychonomic Bulletin & Review 12:852–857, 2005) to address this question. Experiment 1 yielded evidence of source-constrained retrieval, but that pattern was not significant in Experiments 2, 3, and 4 (nor in several unpublished pilot experiments). In Experiment 5, in which items from the two studied sources were perceptibly different, a pattern consistent with front-end constraint of recognition emerged, but this constraint was likely exercised via visual attention rather than memory. Experiment 6 replicated both the absence of a significant constrained-retrieval pattern when the sources did not differ perceptibly (as in Exps. 2, 3 and 4) and the presence of that pattern when they did differ perceptibly (as in Exp. 5). Our results suggest that people can easily constrain recognition when items from the to-be-recognized source differ perceptibly from items from other sources (presumably via visual attention), but that it is difficult to constrain retrieval solely on the basis of source memory.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号