Retrieved 20 similar references.
1.
Visual search is speeded when the target is repeated from trial to trial compared to when it changes, suggesting that selective attention learns from previous events. Such intertrial effects are stronger when there is more competition for selection, for example in ambiguous displays where the target is accompanied by a salient distractor. Here we investigate whether this is because the competition strengthens the learning itself, or because it allows for a learned representation to exert a greater effect. The results point to the latter. Observers looked for a colour-defined target that could repeat or change from trial to trial. A salient distractor could be present on the current trial, the previous trial, both, or neither. Intertrial effects were greater when a distractor was present on the current trial, suggesting that a primed target representation is more beneficial under conditions of competition. In contrast, distractor presence on the previous trial had no effects whatsoever, indicating that the learning process itself is not affected by competition. This suggests that the source of the learning resides at postselection stages, whereas the effects may occur at the perceptual level.
2.
In visual search, detection of a target in a repeated layout is faster than search within a novel arrangement, demonstrating that contextual invariances can implicitly guide attention to the target location ("contextual cueing"; Chun & Jiang, 1998). Here, we investigated how display segmentation processes influence contextual cueing. Seven experiments showed that grouping by colour and by size can considerably reduce contextual cueing. However, selectively attending to a relevant subgroup of items (that contains the target) preserved context-based learning effects. Finally, the reduction of contextual cueing by means of grouping affected both the latent learning and the recall of display layouts. In sum, all experiments show an influence of grouping on contextual cueing. This influence is larger for variations of spatial (as compared to surface) features and is consistent with the view that learning of contextual relations critically interferes with processes that segment a display into segregated groups of items.
3.
4.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(4), 689–706
Humans can perceive affordances both for themselves and for others, and affordance perception is a function of the perceptual–motor experience involved in playing a sport. Two experiments investigated the enhanced affordance perception of athletes. In Experiment 1, basketball players and nonbasketball players provided perceptual reports for sports-relevant (maximum standing-reach and reach-with-jump heights) and non-sports-relevant (maximum sitting height) affordances for self and other. Basketball players were more accurate at perceiving maximum reach-with-jump for another person than were nonbasketball players, but were no better at perceiving maximum reach or sitting heights. Experiment 2 investigated the informational basis for this enhanced perceptual ability of basketball players by evaluating whether kinematics inform perceivers about action-scaled (e.g., force-production dependent), but not body-scaled (i.e., geometrically determined), affordances for others, and whether basketball experience enhances sensitivity to kinematic information. With exposure to kinematic information, only basketball players improved at perceiving an action-scaled affordance (maximum reach-with-jump), but not body-scaled affordances (maximum standing reach and sitting height), suggesting that action-scaled affordances may be specified by kinematic information to which athletes are already attuned by virtue of their sport experience.
5.
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly improved their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes the decision criteria used to convey choice and confidence.
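The criterion-shift idea in this abstract can be made concrete with a toy signal-detection simulation. The sketch below is only an illustration of that general idea, not the authors' model: a single decision variable is compared against a choice criterion and a separate confidence criterion, so "learning" modelled as a shift of the confidence criterion raises reported confidence without any change in sensitivity. The distributions, criteria, and parameter values are all assumptions made up for the example.

    # Toy signal-detection sketch (illustrative only; not the model used in the paper).
    # Choice and confidence are read from the same decision variable via two criteria;
    # shifting the confidence criterion changes reported confidence while accuracy stays put.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(d_prime, choice_c, conf_c, n_trials=50_000):
        present = rng.integers(0, 2, n_trials)            # 1 = target present, 0 = absent
        evidence = rng.normal(present * d_prime, 1.0)     # internal response on each trial
        report_yes = evidence > choice_c                  # detection decision
        high_conf = np.abs(evidence - choice_c) > conf_c  # confidence = distance from criterion
        accuracy = np.mean(report_yes == (present == 1))
        return accuracy, np.mean(high_conf)

    pre = simulate(d_prime=1.5, choice_c=0.75, conf_c=1.0)   # hypothetical pre-training criteria
    post = simulate(d_prime=1.5, choice_c=0.75, conf_c=0.5)  # post-training: lower confidence criterion
    print(f"pre : accuracy={pre[0]:.3f}, p(high confidence)={pre[1]:.3f}")
    print(f"post: accuracy={post[0]:.3f}, p(high confidence)={post[1]:.3f}")

Running the sketch shows identical accuracy before and after the criterion shift, while the proportion of high-confidence reports rises, which is the dissociation the abstract attributes to criterion learning.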
6.
Previous research has demonstrated that visual properties of objects can affect shape-based categorization in a novel-name extension task; however, it remains unclear how relationships between the visual properties of objects affect judgments in such a task. We examined the effects of increased visual similarity among the target and test objects in a shape-bias task in young children and adults. Experiment 1 assessed college students with sets of objects in which the similarity between target and test objects was either low or high. Adults preferred shape when the similarity among objects was minimized. Experiment 2 tested 24-month-olds on their use of the shape bias using the Intermodal Preferential Looking Paradigm. Children showed a shape bias only with items whose similarity to each other was low. These findings suggest that the visual properties of objects affect shape-bias performance.
7.
Serge Caparos. Visual Cognition, 2013, 21(8), 1218–1227
In the study of visual attention, two major determinants of our ability to ignore distracting information have been isolated: (1) the spatial separation from the focus of attention and (2) perceptual load. This study manipulated both factors using a dual-task adaptation of the flanker paradigm (Eriksen & Hoffman, 1973). It showed that (1) although attention followed a gradient profile under low perceptual load, it followed a Mexican-hat profile under high perceptual load, consistent with the idea that increasing load focuses spatial attention; and (2) increasing perceptual load did not improve overall selectivity: though selectivity improved at near separations, it was impaired at far ones. Load and spatial separation thus exert interacting effects.
8.
Contour adaptation (CA) is a recently described paradigm that renders otherwise salient visual stimuli temporarily perceptually invisible. Here we investigate whether this illusion can be exploited to study visual awareness. We found that CA can induce seconds of sustained invisibility following similarly long periods of uninterrupted adaptation. Furthermore, even fragmented adaptors are capable of producing CA, with the strength of CA increasing monotonically as the adaptors encompass a greater fraction of the stimulus outline. However, different types of adaptor patterns, such as distinctive shapes or illusory contours, produce equivalent levels of CA, suggesting that the main determinants of CA are low-level stimulus characteristics, with minimal modulation by higher-order visual processes. Taken together, our results indicate that CA has desirable properties for studying visual awareness, including the production of prolonged periods of perceptual dissociation from stimulation as well as parametric dependencies of that dissociation on a host of stimulus parameters.
9.
Journal of Cognitive Psychology, 2013, 25(6), 685–691
Language switching studies typically use visual stimuli and visual language cues to trigger a concept and a language response, respectively. In the present study we set out to generalise this to another stimulus modality by investigating language switching with auditory stimuli alongside visual stimuli. The results showed that switch costs can be obtained with both auditory and visual stimuli. Yet switch costs were larger with visual stimuli than with auditory stimuli. Both methodological and theoretical implications of these findings are discussed.
10.
Hsiao JH. Brain and Language, 2011, 119(2), 89–98
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority of characters have the opposite arrangement (PS characters). Because there are many more phonetic radical types than semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. By training a computational model of SP and PS character recognition that takes into account the locations at which characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the structural differences in information between SP and PS characters, rather than from fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of the word stimuli to which readers have long been exposed, is one of the factors that account for hemispheric asymmetry effects in visual word recognition.
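The information-skew argument can be illustrated with a small synthetic example. The sketch below is a toy illustration of the point that the two halves of an SP character carry very different amounts of information about its pronunciation; it is not Hsiao's connectionist model, and the radical counts and the deterministic radical-to-pronunciation mapping are assumptions made up for the example.

    # Toy illustration (not the model from the paper): with many phonetic-radical types and
    # few semantic-radical types, a character's pronunciation is far more predictable from
    # its phonetic (right) half than from its semantic (left) half, so the information in an
    # SP character is skewed to the right purely because of its structure.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)
    n_semantic, n_phonetic, n_chars = 20, 200, 1000     # assumed counts for the example

    semantic = rng.integers(0, n_semantic, n_chars)     # left radical of each synthetic SP character
    phonetic = rng.integers(0, n_phonetic, n_chars)     # right radical of each synthetic SP character
    pronunciation = phonetic                            # toy assumption: pronunciation follows the phonetic radical

    def best_guess_accuracy(feature, target):
        """Accuracy when guessing the most frequent pronunciation for each radical value."""
        correct = 0
        for value in np.unique(feature):
            correct += Counter(target[feature == value].tolist()).most_common(1)[0][1]
        return correct / len(target)

    print(f"pronunciation predictable from left  (semantic) half: {best_guess_accuracy(semantic, pronunciation):.2f}")
    print(f"pronunciation predictable from right (phonetic) half: {best_guess_accuracy(phonetic, pronunciation):.2f}")

Any learner whose exposure or input quality differs across visual-field positions will therefore show position-dependent performance for such characters, without any built-in hemispheric processing difference.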
11.
We examined whether configuring, which determines the appearance of grouped elements as a global shape, requires visual awareness, using a priming paradigm and two invisibility-inducing methods: continuous flash suppression (CFS) and sandwich masking. The primes were organized into configurations based on closure, collinearity, and symmetry (collinear primes), or on closure and symmetry (noncollinear primes). The prime-target congruency could be in configuration or in elements. During CFS, no significant response priming was observed for invisible primes. When masking induced invisibility, a significant configuration response priming was found for collinear and noncollinear primes, visible and invisible, with a larger magnitude for the former. An element response priming of equal magnitude was evident for visible and invisible noncollinear primes. Our results suggest that configuring can be accomplished in the absence of visual awareness when stimuli are rendered invisible by sandwich masking, but that it benefits from visual awareness. Our results also suggest sensitivity to the available grouping cues in unconscious processing.
12.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(5), 946–966
Older and younger adults searched arrays of 12 unique real-world photographs for a specified object (e.g., a yellow drill) among distractors (e.g., a yellow telephone, a red drill, and a green door). Eye-tracking data from 24 of the 48 participants in each age group showed generally similar search patterns for the younger and older adults, but there were some interesting differences. Older adults processed all the items in the arrays more slowly than the younger adults (e.g., they had longer fixation durations, gaze durations, and total times), but this difference was exaggerated for target items. We also found that older and younger adults differed in the sequence in which objects were searched, with younger adults fixating the target objects earlier in the trial than older adults. Despite the relatively longer fixation times on the targets (in comparison to the distractors) for older adults, a surprise visual recognition test revealed a sizeable age deficit for target memory but, importantly, no age differences for distractor memory.
13.
Listeners rapidly adjust to talkers' pronunciations, accommodating those pronunciations into the relevant phonemic category to improve subsequent perception. Previous work has suggested that such learning is restricted to pronunciations that are representative of how the speaker talks (Kraljic, Samuel, & Brennan, 2008). If an ambiguous pronunciation, for example, can be attributed to an external source (such as a pen in the speaker's mouth), or if it is preceded by normal pronunciations of the same sound, learning is blocked. In three experiments, we explore this blocking effect in more detail. Our aim is to better understand the nature of the representations underlying the perceptual learning process. Experiment 1 replicates the blocking effect. Experiments 2 and 3 demonstrate that it can be eliminated when certain visual information occurs simultaneously with the auditory signal. The pattern of learning and non-learning is best accounted for by the view that speech perception is mediated by episodic representations that include potentially relevant visual information.
14.
Two experiments investigated the validity of the Autism Quotient (AQ) scale for measuring traits associated with Autism Spectrum Disorders (ASD) in a population of male university students. Both studies found evidence that individuals who scored higher on the AQ questionnaire performed in a similar way to individuals with ASD on tasks with a perceptual component. Experiment 1 demonstrated a difference in how higher-scoring AQ individuals performed on a perceptual learning task, with higher AQ scorers showing less of an advantage for familiar over novel items. In Experiment 2, higher scorers made fewer errors when judging the elements, as opposed to the global configuration, of a Navon letter. Both of these patterns of results had previously been noted in individuals with ASD. These results suggest that the AQ may have some validity in identifying individuals in the nonclinical population with performance profiles similar to those of individuals with ASD.
15.
Recent results suggest that observers can learn, unsupervised, the co-occurrence of independent shape features in viewed patterns (e.g., Fiser & Aslin, 2001). A critical question with regard to these findings is whether learning is driven by a structural, rule-based encoding of spatial relations between distinct features or by a pictorial, template-like encoding, in which spatial configurations of features are embedded in a "holistic" fashion. In two experiments, we test whether observers can learn combinations of features when the paired features are separated by an intervening spatial "gap", in which other, unrelated features can appear. This manipulation both increases task difficulty and makes it less likely that the feature combinations are encoded simply as larger unitary features. Observers exhibited learning consistent with earlier studies, suggesting that unsupervised learning of compositional structure is based on the explicit encoding of spatial relations between separable visual features. More generally, these results provide support for compositional structure in visual representation.
16.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. The most common methods to assess students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine learning technique that detects the visual features that account for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures, and in Experiment 2, 489 students judged the similarity of ball-and-stick models. Our results showed that our method can detect visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Furthermore, Experiment 2 tested whether we can improve the efficiency of the method with active sampling. Results showed that random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method to assess students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
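The core of triplet-based similarity learning can be sketched in a few lines of code. The example below is a generic illustration of the technique named in this abstract, not the authors' analysis pipeline: items are embedded in a low-dimensional space by gradient descent on a hinge loss so that, for each judgment "a is more similar to b than to c", item a ends up closer to b than to c. The hidden coordinates, the number of triplets, and the learning-rate and margin values are all assumptions made up for the demonstration.

    # Generic sketch of similarity learning from triplet judgments (illustration only;
    # not the analysis pipeline used in the paper). Each triplet (a, b, c) encodes the
    # judgment "a is more similar to b than to c"; we learn an embedding whose distances
    # respect as many judgments as possible via a hinge loss and gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    n_items, n_dims = 12, 2
    hidden = rng.normal(size=(n_items, n_dims))        # hypothetical "perceptual" coordinates

    def make_triplet():
        a, b, c = rng.choice(n_items, size=3, replace=False)
        # the simulated respondent picks whichever of b, c is closer to a in the hidden space
        if np.linalg.norm(hidden[a] - hidden[b]) > np.linalg.norm(hidden[a] - hidden[c]):
            b, c = c, b
        return a, b, c

    triplets = [make_triplet() for _ in range(2000)]

    X = rng.normal(scale=0.1, size=(n_items, n_dims))  # embedding to be learned
    lr, margin = 0.05, 0.5
    for _ in range(50):                                # a few passes over the judgments
        for a, b, c in triplets:
            d_ab, d_ac = X[a] - X[b], X[a] - X[c]
            if d_ab @ d_ab + margin > d_ac @ d_ac:     # hinge loss is active: move the points
                X[a] -= lr * 2 * (d_ab - d_ac)
                X[b] += lr * 2 * d_ab
                X[c] -= lr * 2 * d_ac

    satisfied = np.mean([np.linalg.norm(X[a] - X[b]) < np.linalg.norm(X[a] - X[c])
                         for a, b, c in triplets])
    print(f"fraction of triplet judgments respected by the learned embedding: {satisfied:.2f}")

Inspecting which stimulus properties vary along the learned embedding dimensions is what lets this approach reveal the visual features driving students' similarity judgments, without asking them to verbalize anything.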
17.
Is the visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term memory, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure in which the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when it is presented with identical objects, revealing redundancy effects in visual processing.
18.
We investigated the automaticity of implicit sequence learning by varying perceptual load in a pure perceptual sequence learning paradigm. Participants responded to the randomly changing identity of a target, while the irrelevant target location followed a structured sequence. In Experiment 1, the target was presented under low or high perceptual load during training, whereas testing occurred without load. Unexpectedly, no sequence learning was observed. In Experiment 2, perceptual load was introduced during the test phase to determine whether load is required to express perceptual knowledge. Learning itself was unaffected by visuospatial demands, but more learning was expressed under high-load test conditions. In Experiment 3, we demonstrated that perceptual load is not required for the acquisition of perceptual sequence knowledge. These findings suggest that perceptual load does not mediate the perceptual sequence learning process itself, supporting the automaticity of implicit learning, but is mandatory for the expression of pure perceptual sequence knowledge.
19.
Multiple-target visual searches are susceptible to Subsequent Search Miss (SSM) errors: reduced accuracy for detecting a target after a previous target has already been detected. SSM errors occur in critical searches (e.g., evaluations of radiographs and airport luggage x-rays) and have proven to be a stubborn problem. A few SSM theories have been offered, and here we investigate the "satisfaction" account: failing to completely finish a search after having found a first target. Accuracy on a multiple-target search task was compared to both how long participants spent searching after finding a first target and their target sensitivity in a separate vigilance task. Less time spent searching and poor vigilance predicted higher SSM error rates. These results suggest that observers who are more likely to miss a second target are less likely to search thoroughly after finding a first target, thus offering some of the first evidence for the "satisfaction" account.
20.
Gibson argued that illusory pictorial displays contain "inadequate" information (1966, p. 288) but also that a "very special kind of selective attention" (p. 313) can dispel the illusion, suggesting that adequate perceptual information could in fact be available to observers. The present paper describes Gibson's treatment of geometrical illusions and reviews pertinent empirical evidence. Interestingly, Gibson's insights have been corroborated by recent findings of inter- and intra-observer variability in susceptibility to visual illusions as a function of culture, learning, and task. It is argued that these findings require a modification of the general Gibsonian principle of perception as the detection of specifying information. Withagen and Chemero's (2009) evolutionarily motivated reconceptualization of perception predicts observers' use of both specifying and non-specifying information and inter- and intra-observer variability therein. Based on this reconceptualization, we develop an ecological approach to visual illusions that explains differential illusion effects in terms of the optical variable(s) detected.