Similar articles
20 similar records found (search time: 31 ms)
1.
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.

2.
Our brain constantly tries to anticipate the future by using a variety of memory mechanisms. Interestingly, studies using the intermittent presentation of multistable displays have shown little perceptual persistence for interruptions longer than a few hundred milliseconds. Here we examined whether we can facilitate the perceptual stability of bistable displays following a period of invisibility by employing a physically plausible and ecologically valid occlusion event sequence, as opposed to the typical intermittent presentation, with sudden onsets and offsets. To this end, we presented a bistable rotating structure-from-motion display that was moving along a linear horizontal trajectory on the screen and either was temporarily occluded by another object (a cardboard strip in Exp. 1, a computer-generated image in Exp. 2) or became invisible due to eye closure (Exp. 3). We report that a bistable rotation direction reliably persisted following occlusion or interruption only (1) if the pre- and postinterruption locations overlapped spatially (an occluder with apertures in Exp. 2 or brief, spontaneous blinks in Exp. 3) or (2) if an object’s size allowed for the efficient grouping of dots on both sides of the occluding object (large objects in Exp. 1). In contrast, we observed no persistence whenever the pre- and postinterruption locations were nonoverlapping (large solid occluding objects in Exps. 1 and 2 and long, prompted blinks in Exp. 3). In sum, the bistable rotation direction of a moving object persisted only when its pre- and postinterruption neural representations overlapped spatially; persistence was not facilitated by a physically plausible and ecologically valid occlusion event per se.

3.
Learning is often specific to the conditions of training, making it important to identify which aspects of the testing environment are crucial to be matched in the training environment. In the present study, we examined training specificity in time and distance estimation tasks that differed only in the focus of processing (FOP). External spatial cues were provided for the distance estimation task and for the time estimation task in one condition, but not in another. The presence of a concurrent alphabet secondary task was manipulated during training and testing in all estimation conditions in Experiment 1. For distance as well as for time estimation in both conditions, training of the primary estimation task was found to be specific to the presence of the secondary task. In Experiments 2 and 3, we examined transfer between one estimation task and another, with no secondary task in either case. When all conditions were equal aside from the FOP instructions, including the presence of external spatial cues, Experiment 2 showed “transfer” between tasks, suggesting that training might not be specific to the FOP. When the external spatial cues were removed from the time estimation task, Experiment 3 showed no transfer between time and distance estimations, suggesting that external task cues influenced the procedures used in the estimation tasks.

4.
When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge.

5.
The extrastriate body area (EBA) is involved in perception of human bodies and nonfacial body parts, but its role in representing body identity is not clear. Here, we used on-line high-frequency repetitive transcranial magnetic stimulation (rTMS) to test the role of EBA in self–other distinction. In Experiments 1 and 2 we compared rTMS of right EBA with stimulation of left ventral premotor cortex (vPM), whereas in Experiment 3 we compared stimulation of right and left EBA. rTMS was applied during a hand laterality task in which images of one’s own or others’ hands were presented in first- versus third-person view (Experiments 1 and 3), or while participants had to explicitly recognize their own hands (Experiment 2), presented in first- versus third-person view. Experiment 1 showed that right EBA stimulation selectively speeded judgments on others’ hands, whereas no effect of left vPM stimulation was found. Experiment 2 did not reveal any effect of rTMS. Experiment 3 confirmed faster responses on others’ hands while stimulating right EBA and also showed an advantage for judging one’s own hands relative to others’ hands during stimulation of left EBA. These results suggest that the EBA responds to morphological features of the human body, contributing to identity processing.

6.
Previous studies on how people set and modify decision criteria in old–new recognition tasks (in which they have to decide whether or not a stimulus was seen in a study phase) have almost exclusively focused on properties of the study items, such as presentation frequency or study list length. In contrast, in the three studies reported here, we manipulated the quality of the test cues in a scene-recognition task, either by degrading them through Gaussian blurring (Experiment 1) or by limiting presentation duration (Experiments 2 and 3). In Experiments 1 and 2, degradation of the test cue led to worse old–new discrimination. Most importantly, however, participants were more liberal in their responses to degraded cues (i.e., more likely to call the cue “old”), demonstrating strong within-list, item-by-item criterion shifts. This liberal response bias toward degraded stimuli came at the cost of an increased false alarm rate while the hit rate remained constant. Experiment 3 replicated Experiment 2 with additional stimulus types (words and faces) but did not provide accuracy feedback to participants. The criterion shifts in Experiment 3 were smaller in magnitude than in Experiments 1 and 2 and varied in consistency across stimulus types, suggesting, in line with previous studies, that feedback is important for participants to shift their criteria.
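The liberal-criterion result above maps directly onto standard signal detection theory measures. The sketch below (with hypothetical counts, not the study’s data) computes sensitivity d′ and criterion c from hit and false-alarm counts, illustrating how a constant hit rate paired with a higher false-alarm rate yields both a lower d′ and a more liberal (negative) criterion:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, fas, crs):
    """Return (d-prime, criterion c) from raw counts, using a log-linear
    correction so that rates of exactly 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)  # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)       # corrected false-alarm rate
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# Hypothetical counts: hit rate held constant, more false alarms to degraded cues
d_clear, c_clear = sdt_measures(hits=80, misses=20, fas=20, crs=80)
d_blur, c_blur = sdt_measures(hits=80, misses=20, fas=40, crs=60)
# d_blur < d_clear (worse discrimination); c_blur < 0 (liberal criterion shift)
```

A negative c marks a bias toward responding “old”, which is exactly the pattern described for degraded test cues.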

7.
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

8.
The goal of this research was to examine memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, subjects were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied depending on the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants’ only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once. Subjects still displayed a tendency to underestimate, indicating that the underestimation effects seen in Experiments 1A–1C were not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient search tasks. This suggests that the lower attentional demands of very efficient searches lead to less encoding of the numerosity of the distractor set.

9.
Feature–reward association elicits value-driven attentional capture (VDAC) regardless of the task relevance of the associated features. What are the necessary conditions for feature–reward associations in VDAC? Recent studies claim that VDAC is based on Pavlovian conditioning. In this study, we manipulated the temporal relationships among feature, response, and reward in reward learning to elucidate the necessary components of VDAC. We presented reward-associated features in a variety of locations in a flanker task to form a color–reward association (training phase) and then tested VDAC in a subsequent visual search task (test phase). In Experiment 1, we showed reward-associated features in a task display requiring response selection and observed VDAC, consistent with most previous studies. In Experiment 2, features presented in a fixation display before the task display also induced VDAC. Moreover, in Experiment 3, we reduced the time interval between features and rewards so that features appeared after the task display, and we obtained marginally significant VDAC. However, no VDAC was observed when features and rewards were presented simultaneously in a feedback display in Experiments 4 and 5, suggesting that a direct association between feature and reward is not sufficient for VDAC. These results favor the idea that response selection does not mediate feature–reward association in VDAC. Moreover, the evidence suggests that the time interval between feature and reward is flexible, within some limits, in the learning of feature–reward associations. The present study supports the hypothesis that theories of Pavlovian conditioning can account for feature–reward association in VDAC.

10.
Researchers have often determined how cues influence judgments of learning (JOLs; e.g., concrete words are assigned higher JOLs than are abstract words), and recently there has been an emphasis on understanding why cues influence JOLs (i.e., the mechanisms that underlie cue effects on JOLs). The analytic-processing (AP) theory posits that JOLs are constructed in accordance with participants’ beliefs about how a cue will influence memory. Even so, some evidence suggests that fluency is also important to cue effects on JOLs. In the present experiments, we investigated the contributions of participants’ beliefs and processing fluency to the concreteness effect on JOLs. To evaluate beliefs, participants estimated memory performance in a hypothetical experiment (Experiment 1), and studied concrete and abstract words and made a pre-study JOL for each (Experiments 2 and 3). Participants’ predictions demonstrated the belief that concrete words are more likely to be remembered than are abstract words, consistent with the AP theory. To evaluate fluency, response latencies were measured during lexical decision (Experiment 4), self-paced study (Experiment 5), and mental imagery (Experiment 7). Number of trials to acquisition was also evaluated (Experiment 6). Fluency did not differ between concrete and abstract words in Experiments 5 and 6, and it did not mediate the concreteness effect on JOLs in Experiments 4 and 7. Taken together, these results demonstrate that beliefs are a primary mechanism driving the concreteness effect on JOLs.

11.
12.
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1–3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks, observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a “depth-of-processing” account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

13.
Repeatedly searching through invariant spatial arrangements in visual search displays leads to the buildup of memory about these displays (contextual-cueing effect). In the present study, we investigate (1) whether contextual cueing is influenced by global statistical properties of the task and, if so, (2) whether these properties increase the overall strength (asymptotic level) or the temporal development (speed) of learning. Experiment 1a served as a baseline against which we tested the effects of increased or decreased proportions of repeated relative to nonrepeated displays (Experiments 1b and 1c, respectively), thus manipulating the global statistical properties of search environments. Importantly, probability variations were achieved by manipulating the number of nonrepeated (baseline) displays so as to equate the total number of repeated displays across experiments. In Experiment 1d, repeated and nonrepeated displays were presented in longer streaks of trials, thus establishing a stable environment of sequences of repeated displays. Our results showed that the buildup of contextual cueing was expedited in the statistically rich Experiments 1b and 1d, relative to the baseline Experiment 1a. Further, contextual cueing was entirely absent when repeated displays occurred in the minority of trials (Experiment 1c). Together, these findings suggest that contextual cueing is modulated by observers’ assumptions about the reliability of search environments.

14.
Humans have developed a specific capacity to rapidly perceive and anticipate other people’s facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people’s levels of pain based on the perception of various dynamic facial expressions, which differed in both the number and the intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of the memory bias (in Experiment 2): As perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an “immediate perceptual history” in the perceiver before leading to an emotional anticipation of the agent’s upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process: one through which we can swiftly and involuntarily detect other people’s pain.

15.
This paper does two things. Firstly, it clarifies the way that phenomenological data are meant to constrain cognitive science according to enactivist thinkers. Secondly, it points to inconsistencies in the ‘Radical Enactivist’ handling of this issue, so as to explicate the commitments that enactivists need to make in order to tackle the explanatory gap. I begin by sketching the basic features of enactivism in sections 1–2, focusing upon enactive accounts of perception. I suggest that enactivist ideas here rely heavily upon the endorsement of a particular explanatory constraint that I call the structural resemblance constraint (SRC), according to which the structure of our phenomenology ought to be mirrored in our cognitive science. Sections 3–5 delineate the nature of, and commitment to, SRC amongst enactivists, showing SRC’s warrant and implications. The paper then turns to Hutto and Myin’s (2013) handling of SRC in sections 6–7, highlighting irregularities within their programme for Radical Enactivism on this issue. Despite seeming to favour SRC, I argue that Radical Enactivism’s purported compatibility with the narrow (brain-bound) supervenience of perceptual experience is in fact inconsistent with SRC, given Hutto and Myin’s phenomenological commitments. I argue that enactivists more broadly ought to resist such a concessionary position if they wish to tackle the explanatory gap, for it is primarily adherence to SRC that ensures progress is made here. Section 8 then concludes the paper with a series of open questions to enactivists, inviting further justification of the manner in which they apply SRC.

16.
Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, with a focus on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM requires more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested four new bindings that had been suggested to require no more attention than their constituent features during the WM maintenance phase: the two constituent features of the binding were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4–6) separated. In the critical condition, we added a secondary object-feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree than constituent-feature performance. Indeed, Experiments 1–6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

17.
Franco, Gaillard, Cleeremans, and Destrebecqz (Behavior Research Methods, 47, 1393–1403, 2015), in a study on statistical learning employing the click-detection paradigm, conclude that more needs to be known about how this paradigm interacts with statistical learning and speech perception. Past results with this monitoring technique have pointed to an end-of-clause effect in parsing — a structural effect — but we here show that the issues are a bit more nuanced. Firstly, we report two experiments (1a and 1b), which show that reaction times (RTs) are affected by two factors: (a) processing load, resulting in a tendency for RTs to decrease across a sentence, and (b) a perceptual effect which adds to this tendency and moreover helps neutralize differences between sentences with slightly different structures. These two factors are then successfully discriminated by registering event-related brain potentials (ERPs) during a monitoring task, with Experiment 2 establishing that the amplitudes of the N1 and P3 components — the first associated with temporal uncertainty, the second with processing load in dual tasks — correlate with RTs. Finally, Experiment 3 behaviorally segregates the two factors by placing the last tone at the end of sentences, activating a wrap-up operation and thereby both disrupting the decreasing tendency and highlighting structural effects. Our overall results suggest that much care needs to be employed in designing click-detection tasks if structural effects are sought, and some of the now-classic data need to be reconsidered.

18.
According to the documents model framework (Britt, Perfetti, Sandak, & Rouet, 1999), readers’ detection of contradictions within texts increases their integration of source–content links (i.e., who says what). This study examines whether conflict may also strengthen the relationship between the respective sources. In two experiments, participants read brief news reports containing two critical statements attributed to different sources. In half of the reports, the statements were consistent with each other, whereas in the other half they were discrepant. Participants were tested for source memory and source integration in an immediate item-recognition task (Experiment 1) and a cued recall task (Experiments 1 and 2). In both experiments, discrepancies increased readers’ memory for sources. We found that discrepant sources enhanced retrieval of the other source compared to consistent sources (using a delayed recall measure; Experiments 1 and 2). However, discrepant sources failed to prime the other source as evidenced in an online recognition measure (Experiment 1). We argue that discrepancies promoted the construction of links between sources, but that integration did not take place during reading.

19.
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analyses and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM, using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that an experimental manipulation of mnemonic precision using white-noise masking and an experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and of the number of encoded STM representations, respectively, in both the change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
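The mixture model of continuous-recall performance mentioned above decomposes recall errors into a uniform guessing component (rate g) and a precise von Mises memory component (concentration κ). A minimal maximum-likelihood grid search on simulated errors is sketched below; the parameter values, sample size, and grids are illustrative assumptions, not taken from the study:

```python
import math
import random

def i0(x):
    # Modified Bessel function I0 via its power series (adequate for kappa <= ~20)
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(40))

def fit_mixture(errors, g_grid, kappa_grid):
    """Grid-search maximum likelihood for guess rate g and precision kappa."""
    best = (float("inf"), None, None)
    for kappa in kappa_grid:
        norm = 2 * math.pi * i0(kappa)
        vm = [math.exp(kappa * math.cos(e)) / norm for e in errors]  # von Mises density
        for g in g_grid:
            nll = -sum(math.log(g / (2 * math.pi) + (1 - g) * d) for d in vm)
            if nll < best[0]:
                best = (nll, g, kappa)
    return best[1], best[2]

# Simulate recall errors: uniform guesses with probability g, precise recall otherwise
random.seed(1)
true_g, true_kappa = 0.3, 8.0
errors = []
for _ in range(300):
    if random.random() < true_g:
        errors.append(random.uniform(-math.pi, math.pi))
    else:
        e = random.vonmisesvariate(0.0, true_kappa)
        errors.append((e + math.pi) % (2 * math.pi) - math.pi)  # wrap to (-pi, pi]

g_hat, kappa_hat = fit_mixture(errors,
                               g_grid=[i / 50 for i in range(50)],
                               kappa_grid=[0.5 * j for j in range(1, 41)])
```

The fitted g estimates how many items were effectively not encoded, while κ indexes the precision of the items that were; the abstract’s "number" and "precision" measures play analogous roles on the recognition side.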

20.
This study examined the effects of cues to motion in depth – namely, stereoscopic cues (changing-disparity cues and interocular velocity differences) and changing-size cues – on forward and backward vection. We conducted four experiments in which participants viewed expanding or contracting optical flows with either or both cues added. In Experiment 1, participants reported vection by pressing a button whenever they felt it; after each trial, they also rated the magnitude of the vection (from 0 to 100). In Experiments 2 and 3, the participants rated the perceived velocity and the motion-in-depth impression of the flows, respectively, relative to standard stimuli. In Experiment 4, the participants rated the perceived depth and distance of the display. We observed enhancements in vection, motion-in-depth impression, and perceived depth and distance when either or both types of cues indicated motion in depth, compared to when they did not (Experiments 1, 3, and 4). Perceived velocity changed with cue condition only in the high-velocity condition (Experiment 2). Correlational analyses showed that vection was best explained by the motion-in-depth impression, a result partially supported by multiple regression analyses. These results indicate that the enhancement of vection caused by these cues is related to the impression of motion in depth rather than to perceived velocity or perceived three-dimensionality.
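The kind of correlational comparison described above can be illustrated with a toy analysis. The sketch below simulates hypothetical per-trial ratings (not the study’s data; the weights and noise level are invented) in which vection magnitude is driven mainly by the motion-in-depth impression, then checks that its Pearson correlation with vection exceeds that of perceived velocity:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 60
# Hypothetical 0-100 ratings; the generative weights are illustrative assumptions
mid_impression = [random.uniform(0, 100) for _ in range(n)]
velocity = [random.uniform(0, 100) for _ in range(n)]
vection = [0.8 * m + 0.1 * v + random.gauss(0, 10)
           for m, v in zip(mid_impression, velocity)]

r_mid = pearson(vection, mid_impression)
r_vel = pearson(vection, velocity)
# Under these assumptions, vection tracks motion-in-depth far more than velocity
```

Comparing the two coefficients is the simple-correlation analogue of the abstract’s claim; a multiple regression would additionally partial each predictor out of the other.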


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号