Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
The extrastriate body area (EBA) is involved in the perception of human bodies and nonfacial body parts, but its role in representing body identity is not clear. Here, we used online high-frequency repetitive transcranial magnetic stimulation (rTMS) to test the role of EBA in self–other distinction. In Experiments 1 and 2 we compared rTMS of right EBA with stimulation of left ventral premotor cortex (vPM), whereas in Experiment 3 we compared stimulation of right and left EBA. rTMS was applied during a hand laterality task in which images of the participant's own or others' hands were presented in first- versus third-person view (Experiments 1 and 3), or while participants had to explicitly recognize their own hands, again presented in first- versus third-person view (Experiment 2). Experiment 1 showed that right EBA stimulation selectively speeded judgments on others' hands, whereas no effect of left vPM stimulation was found. Experiment 2 did not reveal any effect of rTMS. Experiment 3 confirmed faster responses on others' hands while stimulating right EBA and also showed an advantage for judging one's own relative to others' hands during stimulation of left EBA. These results indicate that EBA responds to morphological features of the human body, contributing to identity processing.

2.
Despite being able to rapidly and accurately infer their own and other people's visual perspectives, healthy adults experience difficulty ignoring the irrelevant perspective when the two perspectives are in conflict; they experience egocentric and altercentric interference. We examine for the first time how the age of an observed person (adult vs. child avatar) influences adults' visual perspective-taking, particularly the degree to which they experience interference from their own or the other person's perspective. Participants completed the avatar visual perspective-taking task, in which they verified the number of discs in a visual scene according to either their own or an on-screen avatar's perspective (Experiments 1 and 2) or only from their own perspective (Experiment 3), where the two perspectives could be consistent or in conflict. Age of avatar was manipulated between (Experiment 1) or within (Experiments 2 and 3) participants, and interference was assessed using behavioral (Experiments 1–3) and ERP (Experiment 1) measures. Results revealed that altercentric interference was reduced or eliminated when a child avatar was present, suggesting that adults do not automatically compute a child avatar's perspective. We attribute this pattern either to enhanced visual processing of own-age others or to an inference of reduced mental awareness in younger children. The findings argue against a purely attentional basis for the altercentric effect, and instead support an account in which both mentalising and directional processes modulate automatic visual perspective-taking, and perspective-taking effects are strongly influenced by experimental context.

3.
This study examined the effects of cues to motion in depth – namely, stereoscopic cues (i.e., changing-disparity cues and interocular velocity differences) and changing-size cues – on forward and backward vection. We conducted four experiments in which participants viewed expanding or contracting optical flow with either or both cues added. In Experiment 1, participants reported vection by pressing a button whenever they felt it; after each trial, they also rated the magnitude of the vection (from 0 to 100). In Experiments 2 and 3, participants rated the perceived velocity and the motion-in-depth impression of the flows relative to standard stimuli, respectively. In Experiment 4, participants rated the perceived depth and distance of the display. We observed enhancements in vection, motion-in-depth impression, and perceived depth and distance when either or both types of cues indicated motion in depth, compared to when the cues did not (Experiments 1, 3, and 4). Perceived velocity changed with cue condition only in the high-velocity condition (Experiment 2). Correlational analyses showed that vection was best explained by the motion-in-depth impression, a result partially supported by multiple regression analyses. These results indicate that the cue-driven enhancement of vection is related to the impression of motion in depth rather than to perceived velocity or perceived three-dimensionality.
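To make the correlational and regression logic above concrete, here is a minimal illustrative sketch in Python (not the authors' analysis code) that regresses vection magnitude on motion-in-depth impression and perceived velocity. All arrays and variable names (vection, mid_impression, perceived_velocity) are hypothetical placeholders.

import numpy as np

# Hypothetical per-condition mean ratings (placeholders, not the study's data).
vection            = np.array([62., 71., 45., 80., 55., 77., 49., 68.])   # vection magnitude, 0-100
mid_impression     = np.array([1.2, 1.5, 0.8, 1.9, 1.0, 1.7, 0.9, 1.4])   # motion-in-depth impression
perceived_velocity = np.array([1.1, 1.0, 0.9, 1.2, 1.0, 1.1, 0.9, 1.0])   # relative perceived velocity

# Zero-order correlations, analogous to the correlational analyses described above.
print("r(vection, motion-in-depth):", np.corrcoef(vection, mid_impression)[0, 1])
print("r(vection, velocity):       ", np.corrcoef(vection, perceived_velocity)[0, 1])

# Multiple regression: vection ~ intercept + motion-in-depth impression + perceived velocity.
X = np.column_stack([np.ones_like(vection), mid_impression, perceived_velocity])
coef, _, _, _ = np.linalg.lstsq(X, vection, rcond=None)
print("intercept, b_motion_in_depth, b_velocity:", coef)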

4.
The visual system is remarkably efficient at extracting summary statistics from the environment. Yet at any given time, the environment consists of many groups of objects distributed over space, so the challenge for the visual system is to summarize over multiple groups. The current study investigates the capacity and computational efficiency of ensemble perception, in the context of perceiving the mean sizes of multiple spatially intermixed groups of circles. In a series of experiments, participants viewed an array of one to eight sets of circles. Each set contained four circles of the same color but of different sizes. Participants estimated the mean size of a probed set. The set to be probed was either known before onset of the array (pre-cue condition) or only afterwards (post-cue condition). By comparing estimation error in the pre-cue and post-cue conditions, we found that participants could reliably estimate mean sizes for approximately two sets (Experiment 1). Importantly, this capacity was robust against attentional bias toward individual objects in the sets (Experiment 2). Varying the exposure time of the stimulus arrays did not increase the capacity limit, suggesting that ensemble perception is limited by an internal resource constraint rather than by the speed of information encoding (Experiment 3). Moreover, we found that the visual system could not encode and hold more individual items than ensemble representations (Experiment 4). Taken together, these results suggest that ensemble perception provides an efficient, but constrained, way of processing information.

5.
The aim of this study was to investigate the extent to which phonological information mediates the shift of visual attention to printed Chinese words during spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen: a target word, a phonological competitor, and two distractors. Participants are required to select the target word with a computer mouse while their eye movements are recorded. In Experiment 1, phonological information was manipulated at full phonological overlap; in Experiment 2, it was manipulated at partial phonological overlap; and in Experiment 3, phonological competitors sharing either full or partial overlap with the targets were compared directly. Across the three experiments, phonological competitor effects were observed in both the full-overlap and partial-overlap conditions: phonological competitors attracted more fixations than distractors, suggesting that phonological information mediates the shift of visual attention during spoken word recognition. More importantly, the mediating role of phonological information varied as a function of the phonological similarity between target words and phonological competitors.

6.
The goal of this research was to examine the memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, participants were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied with the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants' only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once; they still tended to underestimate, indicating that the underestimation seen in Experiments 1A–1C was not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient searches. This suggests that the lower attentional demands of very efficient searches lead to less encoding of the numerosity of the distractor set.

7.
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analysis and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM, using a change-detection task and a continuous color-wheel recall task, respectively. Experiment 1 demonstrated that estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that manipulating mnemonic precision with white-noise masking and manipulating the number of encoded STM representations with consolidation masking produced selective effects on the corresponding measures of precision and number, respectively, in both the change-detection and continuous-recall tasks. Altogether, using individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrates the some-or-none nature of visual STM representations across recall and recognition.
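The mixture model of continuous-recall performance mentioned above is typically formalized as a two-component mixture of a von Mises (circular normal) distribution centered on the target color and a uniform guessing distribution; the fitted mixture weight relates to the number of items in memory and the von Mises concentration to precision. Below is a minimal illustrative fit in Python (a standard formulation, not the authors' code); the error data are simulated placeholders.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Simulated recall errors in radians (placeholder data, not from the study):
# most responses cluster around the target, some are random guesses.
rng = np.random.default_rng(0)
errors = np.concatenate([vonmises.rvs(8.0, size=150, random_state=rng),
                         rng.uniform(-np.pi, np.pi, size=50)])

def neg_log_lik(params, err):
    # p_mem: probability the probed item was in memory; kappa: precision of memory responses.
    p_mem, kappa = params
    likelihood = p_mem * vonmises.pdf(err, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(likelihood))

fit = minimize(neg_log_lik, x0=[0.7, 5.0], args=(errors,),
               bounds=[(0.01, 0.99), (0.1, 100.0)])
p_mem, kappa = fit.x
print(f"estimated P(in memory) = {p_mem:.2f}  (quantity-related)")
print(f"estimated kappa        = {kappa:.1f}  (precision-related)")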

8.
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, because spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly affect contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on performance in three different early visual tasks. The results of Experiment 1 suggest that cognitive load can modulate early visual processing; no effects of cognitive load were seen in Experiment 2 or 3. Together, the findings provide evidence that, under some circumstances, cognitive load effects can penetrate the early stages of visual processing, and that higher cognitive function and early perceptual processing may not be as independent as once thought.

9.
The mere exposure effect refers to the phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and pairs of dots corresponding to neighboring vertices of an invisible polygon were sequentially flashed in red, tracing out the polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized the positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from the placement of numerical characters on its vertices and then rated their preference for the invisible polygons (Experiments 1, 2, and 3); in Experiment 4, by contrast, participants rated their preference for visible polygons. The mere exposure effect appeared only when participants visualized the shape of the invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the effect occurred for internalized visual images and that sensory input from the repeated stimuli plays a minor role. The absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between the exposure and rating phases also plays an important role in the mere exposure effect.

10.
“Vast” is a word often applied to environmental terrain that is perceived to have large spatial extent, a judgment made even at viewing distances where traditional metric depth cues are not useful. This paper explores the perceptual basis of the experience of vastness, including its reliability and its visual precursors. Experiment 1 demonstrated strong agreement in ratings of the spatial extent of two-dimensional (2D) scene images by participants in two countries under very different viewing conditions. Image categories labeled “vast” often exemplified the scene attributes of ruggedness and openness (Oliva & Torralba, 2001). Experiment 2 quantitatively assessed whether these properties predict vastness: high vastness ratings were associated with highly open, or moderately open but rugged, scenes. Experiment 3 provided evidence, consistent with theory, that metric distance perception does not directly mediate the observed vastness ratings. The question remains how people perceive vast space when information about environmental scale is unavailable from metric depth cues or associated scene properties; we consider possible answers, including a contribution from strong cues to relative depth.

11.
Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining their constituent features, with the debate focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM requires more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested four new types of binding that had been suggested to require no more attention than their constituent features during the WM maintenance phase: the two constituent features were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were separated temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4–6). In the critical condition, we added a secondary object-feature-report task during the delay interval of the change-detection task, so that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance more than constituent-feature performance. Indeed, Experiments 1–6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

12.
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions, so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain from various dynamic facial expressions, which differed in both the number and the intensity of activated action units. The second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlight the relationship between the level of perceived pain (Experiment 1) and the direction and magnitude of the memory bias (Experiment 2): as perceived pain increases, the memory bias is reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an “immediate perceptual history” in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a slight contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.

13.
How do language and vision interact? Specifically, what impact can language have on visual processing, especially on spatial memory? What are typically considered errors in visual processing, such as remembering the location of a moving object as farther along its trajectory than it actually was, can instead be explained as perceptual achievements driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced spatial memory judgments for an object beyond the known effects of implied motion present in the image itself; Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

14.
Previous studies of how people set and modify decision criteria in old-new recognition tasks (in which they must decide whether or not a stimulus was seen in a study phase) have almost exclusively focused on properties of the study items, such as presentation frequency or study-list length. In contrast, in the three studies reported here, we manipulated the quality of the test cues in a scene-recognition task, either by degrading them with Gaussian blurring (Experiment 1) or by limiting presentation duration (Experiments 2 and 3). In Experiments 1 and 2, degradation of the test cue led to worse old-new discrimination. Most importantly, participants were more liberal in their responses to degraded cues (i.e., more likely to call the cue “old”), demonstrating strong within-list, item-by-item criterion shifts. This liberal response bias toward degraded stimuli came at the cost of an increased false-alarm rate while the hit rate remained constant. Experiment 3 replicated Experiment 2 with additional stimulus types (words and faces) but did not provide accuracy feedback to participants. The criterion shifts in Experiment 3 were smaller in magnitude than in Experiments 1 and 2 and varied in consistency across stimulus types, suggesting, in line with previous studies, that feedback is important for participants to shift their criteria.
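The criterion shifts described above can be expressed with standard equal-variance signal-detection measures: sensitivity d' = z(hit rate) - z(false-alarm rate) and criterion c = -0.5 * [z(hit rate) + z(false-alarm rate)], where a more negative c indicates a more liberal bias toward responding “old”. The Python sketch below illustrates the computation with made-up hit and false-alarm rates (placeholders, not the reported data).

from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    # Equal-variance signal detection theory: d' (sensitivity) and c (criterion).
    # More negative c = more liberal, i.e., more "old" responses overall.
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical rates: constant hit rate, higher false-alarm rate for degraded cues.
for label, hits, fas in [("intact cue  ", 0.80, 0.20),
                         ("degraded cue", 0.80, 0.35)]:
    d_prime, criterion = sdt_measures(hits, fas)
    print(f"{label}: d' = {d_prime:.2f}, c = {criterion:+.2f}")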

15.
Researchers have often determined how cues influence judgments of learning (JOLs; e.g., concrete words are assigned higher JOLs than abstract words), and recently there has been an emphasis on understanding why cues influence JOLs (i.e., the mechanisms underlying cue effects on JOLs). The analytic-processing (AP) theory posits that JOLs are constructed in accordance with participants' beliefs about how a cue will influence memory. Even so, some evidence suggests that fluency is also important to cue effects on JOLs. In the present experiments, we investigated the contributions of participants' beliefs and of processing fluency to the concreteness effect on JOLs. To evaluate beliefs, participants either estimated memory performance in a hypothetical experiment (Experiment 1) or studied concrete and abstract words and made a pre-study JOL for each (Experiments 2 and 3). Participants' predictions demonstrated the belief that concrete words are more likely to be remembered than abstract words, consistent with the AP theory. To evaluate fluency, response latencies were measured during lexical decision (Experiment 4), self-paced study (Experiment 5), and mental imagery (Experiment 7); the number of trials to acquisition was also evaluated (Experiment 6). Fluency did not differ between concrete and abstract words in Experiments 5 and 6, and it did not mediate the concreteness effect on JOLs in Experiments 4 and 7. Taken together, these results demonstrate that beliefs are a primary mechanism driving the concreteness effect on JOLs.

16.
Observers perceive objects in the world as stable over space and time, even though the visual experience of those objects is often discontinuous and distorted by masking, occlusion, camouflage, or noise. How are we able to achieve stable perception easily and quickly in spite of this constantly changing visual input? It was previously shown that observers experience serial dependence in the perception of features and objects, an effect that extends up to 15 seconds back in time. Here, we asked whether the visual system utilizes an object's prior physical location to inform future position assignments in order to maximize the location stability of an object over time. To test this, we presented subjects with small targets at random angular locations relative to central fixation in the peripheral visual field. Subjects reported the perceived location of the target on each trial by adjusting a cursor's position to match it. Subjects made consistent errors when reporting the perceived position of the target on the current trial, mislocalizing it toward the position of the target in the preceding two trials (Experiment 1). This pull in position perception occurred even when a response was not required on the previous trial (Experiment 2). In addition, we show that serial dependence in perceived position occurs immediately after stimulus presentation and that it is a fast stabilization mechanism that does not require a delay (Experiment 3). This indicates that serial dependence occurs for position representations and facilitates the stable perception of objects in space. Taken together with previous work, our results show that serial dependence occurs at many stages of visual processing, from initial position assignment to object categorization.
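Serial dependence of this kind is commonly quantified by relating each trial's response error to the angular offset of the previous trial's target; a positive slope indicates a pull toward the previously seen location. The following Python sketch illustrates the idea on simulated placeholder data (with an attractive pull built in); it is not the authors' analysis code.

import numpy as np

rng = np.random.default_rng(1)
targets = rng.uniform(0, 360, size=500)                    # angular target positions (deg)
prev = np.roll(targets, 1)                                 # previous trial's target
delta_prev = (prev - targets + 180) % 360 - 180            # previous minus current, wrapped to [-180, 180)

# Simulate responses with a small attractive pull toward the previous target plus noise.
responses = targets + 0.1 * delta_prev + rng.normal(0, 5, size=targets.size)
errors = (responses - targets + 180) % 360 - 180           # signed response error, wrapped

# Slope of error on delta_prev (first trial excluded: it has no valid predecessor).
slope = np.polyfit(delta_prev[1:], errors[1:], 1)[0]
print(f"error vs. previous-target-offset slope: {slope:.3f} (positive = pull toward previous location)")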

17.
Repeatedly searching through invariant spatial arrangements in visual search displays leads to the buildup of memory for these displays (the contextual-cueing effect). In the present study, we investigated (1) whether contextual cueing is influenced by the global statistical properties of the task and, if so, (2) whether these properties increase the overall strength (asymptotic level) or the temporal development (speed) of learning. Experiment 1a served as a baseline against which we tested the effects of increased or decreased proportions of repeated relative to nonrepeated displays (Experiments 1b and 1c, respectively), thus manipulating the global statistical properties of the search environment. Importantly, the probability variations were achieved by manipulating the number of nonrepeated (baseline) displays so as to equate the total number of repeated displays across experiments. In Experiment 1d, repeated and nonrepeated displays were presented in longer streaks of trials, thus establishing a stable environment of sequences of repeated displays. Our results showed that the buildup of contextual cueing was expedited in the statistically rich Experiments 1b and 1d relative to the baseline Experiment 1a. Further, contextual cueing was entirely absent when repeated displays occurred in a minority of trials (Experiment 1c). Together, these findings suggest that contextual cueing is modulated by observers' assumptions about the reliability of search environments.

18.
In many daily activities, we need to form and retain temporary representations of an object's size. Typically, such visual short-term memory (VSTM) representations follow perception and are considered reliable. Here, participants were asked to hold in mind a single simple object for a short duration and to reproduce its size by adjusting the length and width of a test probe. Experiment 1 revealed two striking findings. First, similar to a recently reported perceptual illusion, participants greatly overestimated the size of open objects – ones with missing boundaries – relative to same-size, fully closed objects, confirming that object boundaries are critical for size perception and memory. Second, and in contrast to perception, even the size of the closed objects was largely overestimated. Both inflation effects were substantial and were replicated and extended in Experiments 2–5. Experiments 6–8 used a different testing procedure to examine whether the overestimation effects are due to inflation of size in VSTM representations or to biases introduced during the reproduction phase. These data showed that while the overestimation of the open objects was replicated, the overestimation of the closed objects was not. Taken together, these findings suggest that, as in perception, only the size representation of open objects is inflated in VSTM. Importantly, they demonstrate the considerable impact of the testing procedure on VSTM tasks and further question the use of reproduction procedures for measuring VSTM.

19.
We investigated the role of two kinds of attention, visual and central, in the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which the items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, is involved in WM maintenance: we assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central-attention task yielded substantial dual-task costs, implying that central attention contributes substantially to the maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention: when we combined the visual-attention and central-attention distractor tasks with a multiple object tracking (MOT) task, distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

20.
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1–3, scenes were learned incidentally during visual search for change; in Experiment 4, observers explicitly memorized the scenes. At test, two weeks later, observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or, in the memorization experiment, to detect a newly introduced change. Next, they performed a change-detection task, usually looking for the same change as in the study phase. Scene recognition memory was similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory, and Experiments 2 and 3 supported a “depth-of-processing” account of the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when observers had no explicit memory of having found that change previously; this result was replicated in two of our three change-detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
