20 similar documents found (search time: 0 ms)
1.
The rapidity with which infants come to understand language and events in their surroundings has prompted speculation concerning innate knowledge structures that guide language acquisition and object knowledge. Recently, however, evidence has emerged that by 8 months, infants can extract statistical patterns in auditory input that are based on transitional probabilities defining the sequencing of the input's components (Science 274 (1996) 1926). This finding suggests powerful learning mechanisms that are functional in infancy, and raises questions about the domain generality of such mechanisms. We habituated 2-, 5-, and 8-month-old infants to sequences of discrete visual stimuli whose ordering followed a statistically predictable pattern. The infants subsequently viewed the familiar pattern alternating with a novel sequence of identical stimulus components, and exhibited significantly greater interest in the novel sequence at all ages. These results provide support for the likelihood of domain general statistical learning in infancy, and imply that mechanisms designed to detect structure inherent in the environment may play an important role in cognitive development.
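The transitional probabilities this paradigm relies on can be illustrated with a minimal sketch. The sequence below is a made-up example, not the authors' stimuli: a stream built from two "words" (AB and CD), so within-word transitions are perfectly predictable while between-word transitions are not.

```python
from collections import Counter

def transitional_probabilities(seq):
    """P(B | A) for adjacent items: count(A followed by B) / count(A as a pair onset)."""
    pair_counts = Counter(zip(seq, seq[1:]))   # counts of each adjacent pair (A, B)
    onset_counts = Counter(seq[:-1])           # how often each item starts a pair
    return {(a, b): n / onset_counts[a] for (a, b), n in pair_counts.items()}

stream = list("ABCDABABCDCDAB")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # within-word transition: 1.0
print(tps[("B", "C")])  # between-word transition: lower than 1.0
```

A learner sensitive to these statistics can segment the stream at the low-probability transitions, which is the computation the habituation studies probe.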
2.
Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli and any one measure of learning may conflate these processes; to date, little research has focused on differentiating these processes. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes. In Experiment 1, participants were visually exposed to temporal regularities embedded in a stream of shapes. Their task was to periodically detect whether a shape, whose contrast was staircased to a threshold level, was present or absent. Afterwards, they completed a search task, where statistically predictable shapes were found more quickly. We used the search task to label shape pairs as “learned” or “non-learned”, and then used these labels to analyse the detection task. We found a dissociation between learning on the search task and the detection task where only non-learned pairs showed learning effects in the detection task. This finding was replicated in further experiments with recognition memory (Experiment 2) and associative learning tasks (Experiment 3). Taken together, these findings are consistent with the view that statistical learning may comprise a family of processes that can produce dissociable effects on different aspects of behaviour.
3.
To investigate properties of object representations constructed during a visual search task, we manipulated the proportion of trials/task within a block: In a search-frequent block, 80% of trials were search tasks; remaining trials presented a memory task; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shape object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that lower performance in the search-frequent block was not due to eye-movement behaviour. Results suggest that object representations constructed during visual search are different from those constructed during memorization and they are modulated by type of target.
4.
A visual search task was used to test the idea that shaded images and their line-drawn analogues are treated identically from an early stage onwards in human vision. Reaction times and error rates were measured to locate the presence or absence of a target in an array of a variable number of distractors. The target was a cube in one orientation and the distractors cubes in a different orientation. The stimuli were defined by lines alone, shading alone, or lines plus shading. Both the slopes and the intercepts of the search functions (graphs of search time against number of displayed items) were higher for the line drawings than for the stimuli defined by shading. Over six experimental sessions, both the slopes and the intercepts fell for all stimuli, but the relative differences between them were maintained. The data suggest that, at an equivalent stage of practice, line-drawn stimuli are processed more slowly than shaded stimuli in early vision.
5.
Statistical properties in the visual environment can be used to improve performance on visual working memory (VWM) tasks. The current study examined the ability to incidentally learn that a change is more likely to occur to a particular feature dimension (shape, color, or location) and use this information to improve change detection performance for that dimension (the change probability effect). Participants completed a change detection task in which one change type was more probable than others. Change probability effects were found for color and shape changes, but not location changes, and intentional strategies did not improve the effect. Furthermore, the change probability effect developed and adapted to new probability information quickly. Finally, in some conditions, an improvement in change detection performance for a probable change led to an impairment in change detection for improbable changes.
6.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.
7.
Thiessen ED. Cognitive Science, 2010, 34(6): 1093–1106
Infant and adult learners are able to identify word boundaries in fluent speech using statistical information. Similarly, learners are able to use statistical information to identify word-object associations. Successful language learning requires both feats. In this series of experiments, we presented adults and infants with audio-visual input from which it was possible to identify both word boundaries and word-object relations. Adult learners were able to identify both kinds of statistical relations from the same input. Moreover, their learning was actually facilitated by the presence of two simultaneously present relations. Eight-month-old infants, however, do not appear to benefit from the presence of regular relations between words and objects. Adults, like 8-month-olds, did not benefit from regular audio-visual correspondences when they were tested with tones, rather than linguistic input. These differences in learning outcomes across age and input suggest that both developmental and stimulus-based constraints affect statistical learning.
8.
Hsiao JH. Brain and Language, 2011, 119(2): 89–98
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to the fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which the readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition.
9.
Beck MR, Angelone BL, Levin DT, Peterson MS, Varakin DA. Consciousness and Cognition, 2008, 17(4): 1192–1208
Previous research demonstrates that implicitly learned probability information can guide visual attention. We examined whether the probability of an object changing can be implicitly learned and then used to improve change detection performance. In a series of six experiments, participants completed 120–130 training change detection trials. In four of the experiments the object that changed color was the same shape (trained shape) on every trial. Participants were not explicitly aware of this change probability manipulation and change detection performance was not improved for the trained shape versus untrained shapes. In two of the experiments, the object that changed color was always in the same general location (trained location). Although participants were not explicitly aware of the change probability, implicit knowledge of it did improve change detection performance in the trained location. These results indicate that improved change detection performance through implicitly learned change probability occurs for location but not shape.
10.
Jessica A. Collins. Visual Cognition, 2013, 21(8): 945–960
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.
11.
Aaron Kozbelt. Visual Cognition, 2013, 21(6): 705–723
This study addressed the question of how artists differ from non-artists in visual cognition. Four perception and twelve drawing tasks were used. Artists outperformed non-artists on both kinds of tasks. Regression analyses revealed common visual processes in the two kinds of tasks and unique variance in the drawing tasks. The artists' advantage apparently lay both in how they perceptually analysed and in how they drew. The perceptual advantage seems to be closely linked to the activity of drawing and is discussed with reference to artists' extensive experience in visual interaction with objects and images during drawing. Artists appear to be more proficient at using visual analytic procedures that are qualitatively similar to those of novices, unlike experts in many other domains.
12.
Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237–247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347–357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204–220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, and not in the latter, group discriminated the /ba/–/da/ contrast. 
These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.
13.
To investigate whether working memory and visual processing have the same role or different roles in A/B and A/not A prototype category learning, the present study adopted an A/B or A/not A category learning task in control and dual conditions. The results of Experiment 1 showed that an additional dual visual working memory task rather than a dual verbal working memory task reduced accuracy of the A/B task, whereas no dual tasks influenced accuracy of the A/not A task. The results of Experiment 2 revealed that an additional dual visual processing task impaired accuracy of the A/B task, whereas the dual visual processing task did not influence accuracy of the A/not A task. These results indicate that visual working memory and visual processing play different roles in A/B and A/not A prototype category learning, and support the view that these two types of prototype category learning are mediated by different memory systems.
14.
Recent research has found visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.
15.
16.
Dale R, Duran ND, Morehead JR. Advances in Cognitive Psychology, 2012, 8(2): 196–209
Accounts of statistical learning, both implicit and explicit, often invoke predictive processes as central to learning, yet practically all experiments employ non-predictive measures during training. We argue that the common theoretical assumption of anticipation and prediction needs clearer, more direct evidence for it during learning. We offer a novel experimental context to explore prediction, and report results from a simple sequential learning task designed to promote predictive behaviors in participants as they responded to a short sequence of simple stimulus events. Predictive tendencies in participants were measured using their computer mouse, the trajectories of which served as a means of tapping into predictive behavior while participants were exposed to very short and simple sequences of events. A total of 143 participants were randomly assigned to stimulus sequences along a continuum of regularity. Analysis of computer-mouse trajectories revealed that (a) participants almost always anticipate events in some manner, (b) participants exhibit two stable patterns of behavior, either reacting to or predicting future events, (c) the extent to which participants predict relates to performance on a recall test, and (d) explicit reports of perceiving patterns in the brief sequence correlate with extent of prediction. We end with a discussion of implicit and explicit statistical learning and of the role prediction may play in both kinds of learning.
17.
This study advances the hypothesis that, in the course of object recognition, attention is directed to distinguishing features: visual information that is diagnostic of object identity in a specific context. In five experiments, observers performed an object categorization task involving drawings of fish (Experiments 1–4) and photographs of natural sea animals (Experiment 5). Allocation of attention to distinguishing and non-distinguishing features was examined using primed-matching (Experiment 1) and visual probe (Experiments 2, 4, 5) methods, and manipulated by spatial precuing (Experiment 3). Converging results indicated that in performing the object categorization task, attention was allocated to the distinguishing features in a context-dependent manner, and that such allocation facilitated performance. Based on the view that object recognition, like categorization, is essentially a process of discrimination between probable alternatives, the implications of the findings for the role of attention to distinguishing features in object recognition are discussed.
18.
Statistical learning – implicit learning of statistical regularities within sensory input – is a way of acquiring structure within continuous sensory environments. Statistics computation, initially shown to be involved in word segmentation, has been demonstrated to be a general mechanism that operates across domains, across time and space, and across species. Recently, statistical learning has been reported to be present even at birth when newborns were tested with a speech stream. The aim of the present study was to extend this finding, by investigating whether newborns’ ability to extract statistics operates in multiple modalities, as found for older infants and adults. Using the habituation procedure, two experiments were carried out in which visual sequences were presented. Results demonstrate that statistical learning is a general mechanism that extracts statistics across domains from the onset of sensory experience. Intriguingly, the present data reveal that the newborn learner’s limited cognitive resources constrain the functioning of statistical learning, narrowing the range of what can be learned.
19.
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless, we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions in an incremental fashion based on a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance of the Contextual Guidance Model (CGM; Torralba, Oliva, Castelhano, & Henderson, 2006; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009), even though our model does not perform scene recognition or compute global image statistics. The search performance of our model can be further improved by combining it with the CGM.
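The core idea of incremental, memory-based position learning can be sketched in a few lines. This is a toy illustration under our own assumptions, not the authors' model: past target locations are accumulated on a coarse spatial grid, and the most frequently rewarded cell is predicted as the likely target position for the next search.

```python
import numpy as np

class FixationMemoryModel:
    """Toy sketch (hypothetical, not the paper's implementation): accumulate
    past target positions on a coarse grid; no separate training phase is
    needed because the counts update incrementally with each search."""

    def __init__(self, grid=(8, 8)):
        # Start from a uniform prior (Laplace smoothing) over grid cells.
        self.counts = np.ones(grid)

    def update(self, cell):
        # Remember where the target was actually found on this trial.
        self.counts[cell] += 1.0

    def predict(self):
        # Predict the cell with the highest accumulated evidence.
        return np.unravel_index(np.argmax(self.counts), self.counts.shape)

model = FixationMemoryModel()
for _ in range(5):
    model.update((2, 3))  # target repeatedly found in cell (2, 3)
print(model.predict())    # the model now favors that cell
```

A model of this shape needs no scene recognition or global image statistics, which is the contrast with the CGM that the abstract highlights.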
20.
Lauren L. Emberson, Jennifer B. Misyak, Jennifer A. Schwade, Morten H. Christiansen, Michael H. Goldstein. Developmental Science, 2019, 22(6)
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that there is weaker learning in the visual modality than the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change over this age range. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, we find that auditory SL precedes the development of visual SL, consistent with recent work comparing SL across modalities in older children.