20 similar records found; search time: 171 ms
1.
An fMRI study of the neural representation of semantic knowledge: modality-specific or category-specific?
By manipulating category and modality within a single experiment, and using a typical semantic-retrieval task with functional magnetic resonance imaging, this study examined the neural basis of semantic knowledge. The activation patterns for different types of semantic knowledge (including activation of the left fusiform gyrus) were highly similar in both extent and intensity, and under a strict statistical threshold no brain region showed modality-specific or category-specific activation. The results suggest that the neural representation of semantic knowledge is distributed across the entire cerebral cortex, and that retrieving semantic knowledge about an object also activates a visual image of that object. In addition, the activation of area BA9 observed during semantic retrieval further confirms and extends the view that Chinese and alphabetic scripts differ in important ways in their cortical organization.
2.
Based on the hierarchical structure of music processing, this study conducted a meta-analysis of existing neuroimaging studies to explore the neural basis of music perception. Specifically, we analyzed the neural bases of two levels specific to music perception, interval analysis and structural analysis, and then compared the brain regions involved in the two levels. Interval analysis mainly activated the bilateral superior temporal gyrus and the right inferior frontal gyrus, with additional activation in the precentral gyrus, angular gyrus, and insula. The superior temporal gyrus showed the most activation for interval analysis, suggesting that it is the core region for this level. Structural analysis activated a wider set of regions, mainly the superior temporal gyrus, transverse temporal gyrus, and prefrontal cortex, as well as parieto-occipital regions including the inferior parietal lobule, supramarginal gyrus, and lingual gyrus. The prefrontal cortex showed the most activation for structural analysis, suggesting that it is the core region for this level. Finally, a comparison of the two levels revealed overlap only in the posterior superior temporal gyrus, with dissociation in most other regions, suggesting that interval analysis and structural analysis communicate through the superior temporal gyrus while supporting different levels of music processing.
3.
4.
Using a distance-priming paradigm, this study examined how different finger-counting representations affect numerical representation in a Chinese cultural context. Experiment 1 verified the effect of different single-hand finger-counting habits on the cognitive representation of small numbers (1-5); Experiment 2 then used the single-hand finger counting distinctive to Chinese culture to examine its effect on the cognitive representation of large numbers (5-9). For small numbers, the standard finger-counting habit activated semantic-level place coding, whereas the non-standard habit activated perceptual-level summation coding; for large numbers, both habits activated semantic-level place coding. These results are consistent with the computational model and suggest that, as the number of fingers increases, the standard finger-counting habit serves as a semantic, symbolic numerical representation, whereas the non-standard habit shifts from a perceptual, non-symbolic representation to a semantic, symbolic one.
5.
6.
To investigate modality differences and the encoding characteristics of mental subtraction and multiplication, 100 arithmetic problems were presented in visual and auditory input modalities while event-related potentials (ERPs) were recorded from 14 normal adults. The visual and auditory modalities evoked early and middle components with opposite trends, which diverged significantly over frontal and central regions. In the slow-wave stage, the auditory modality produced more activation over temporal regions, whereas the visual modality produced more activation over parieto-occipital regions, and the slow-potential ERP features also showed modality differences. Both subtraction and multiplication were dominated by left-hemisphere activation, but under auditory input a significant right-hemisphere advantage appeared over parietal and temporal regions. Subtraction showed a larger advantage under visual input, indicating that it relies mainly on visuospatial representations, whereas multiplication was better under auditory input, mainly because it relies on auditory verbal representations.
7.
Mental arithmetic comprises three mutually interacting stages: encoding (representation), calculation (or retrieval), and response. Numbers presented in different input formats are represented in different regions of the parietal lobe. Retrieval of arithmetic facts mainly involves the left intraparietal sulcus, but when mental arithmetic becomes more complex and requires actual computation, the left inferior frontal lobe shows clear activation. All brain regions related to mental arithmetic reflect the combined action of prefrontal cortex and temporo-parieto-occipital association cortex, with an overall left-hemisphere advantage; however, estimation, abacus-based mental calculation, and the mental arithmetic of some individuals with exceptional calculation abilities also depend on visuospatial representations, which are associated with activity in right fronto-parietal regions and the precuneus.
8.
Category learning is the process by which humans assign items to different categories. How category information is represented, and how classification strategies are used, have long been central questions in category-learning research. Unsupervised category learning can be divided into direct and indirect unsupervised category learning. In direct unsupervised category learning (unconstrained and constrained tasks), participants' classification strategies show a unidimensional bias, and the degree of within-category variability affects category representation; indirect unsupervised category learning tends to form similarity-based representations, whereas direct unsupervised category learning tends to form rule-based representations. Existing theories of unsupervised category learning still offer only weak accounts of classification strategies and representations, and research on category transfer and knowledge effects across different learning tasks remains insufficient. Future research should further test how knowledge effects influence the cognitive processing underlying unsupervised category learning and explore the factors that shape the formation of category representations.
9.
10.
11.
There is a long history of research into how people learn new categories of objects or events. Recent advances have led to new insights about the neuropsychological basis of this critically important cognitive process. In particular, there is now good evidence that the frontal cortex and basal ganglia contribute to category learning, that medial temporal lobe structures make a more minor contribution, and that categorization rules are not represented in visual cortex. There is also strong evidence that normal category learning is mediated by at least two separate systems. A recent neuropsychological theory of category learning that is consistent with these data is described.
12.
Effects of category learning on the stimulus selectivity of macaque inferior temporal neurons
De Baene W, Ons B, Wagemans J, Vogels R. Learning & Memory (Cold Spring Harbor, N.Y.), 2008, 15(9): 717-727
Primates can learn to categorize complex shapes, but as yet it is unclear how this categorization learning affects the representation of shape in visual cortex. Previous studies that have examined the effect of categorization learning on shape representation in the macaque inferior temporal (IT) cortex have produced diverse and conflicting results that are difficult to interpret owing to inadequacies in design. The present study overcomes these issues by recording IT responses before and after categorization learning. We used parameterized shapes that varied along two shape dimensions. Monkeys were extensively trained to categorize the shapes along one of the two dimensions. Unlike previous studies, our paradigm counterbalanced the relevant categorization dimension across animals. We found that categorization learning increased selectivity specifically for the category-relevant stimulus dimension (i.e., an expanded representation of the trained dimension), and that the ratio of within-category response similarities to between-category response similarities increased for the relevant dimension (i.e., category tuning). These small effects were only evident when the learned category-related effects were disentangled from the prelearned stimulus selectivity. These results suggest that shape-categorization learning can induce minor category-related changes in the shape tuning of IT neurons in adults, suggesting that learned, category-related changes in neuronal response mainly occur downstream from IT.
13.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
14.
Are there representational shifts during category learning?
Early theories of categorization assumed that either rules, or prototypes, or exemplars were exclusively used to mentally represent categories of objects. More recently, hybrid theories of categorization have been proposed that variously combine these different forms of category representation. Our research addressed the question of whether there are representational shifts during category learning. We report a series of experiments that tracked how individual subjects generalized their acquired category knowledge to classifying new critical transfer items as a function of learning. Individual differences were observed in the generalization patterns exhibited by subjects, and those generalizations changed systematically with experience. Early in learning, subjects generalized on the basis of single diagnostic dimensions, consistent with the use of simple categorization rules. Later in learning, subjects generalized in a manner consistent with the use of similarity-based exemplar retrieval, attending to multiple stimulus dimensions. Theoretical modeling was used to formally corroborate these empirical observations by comparing fits of rule, prototype, and exemplar models to the observed categorization data. Although we provide strong evidence for shifts in the kind of information used to classify objects as a function of categorization experience, interpreting these results in terms of shifts in representational systems underlying perceptual categorization is a far thornier issue. We provide a discussion of the challenges of making claims about category representation, making reference to a wide body of literature suggesting different kinds of representational systems in perceptual categorization and related domains of human cognition.
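The rule-versus-exemplar contrast described in this abstract can be sketched computationally. The snippet below is a minimal illustration, not the authors' actual models or fitted parameters: it contrasts a single-dimension rule classifier with a similarity-based exemplar classifier in the spirit of the Generalized Context Model; the stimuli, attention weights, and sensitivity parameter are invented for illustration.

```python
import numpy as np

def exemplar_choice_prob(probe, exemplars, labels, c=2.0, w=(0.5, 0.5)):
    """GCM-style exemplar model: probability of category A for a probe, based on
    summed similarity to stored exemplars of each category. Similarity decays
    exponentially with attention-weighted city-block distance."""
    w = np.asarray(w)
    d = np.abs(exemplars - probe) @ w           # weighted city-block distances
    s = np.exp(-c * d)                          # similarity to each exemplar
    sim_a = s[labels == 0].sum()
    sim_b = s[labels == 1].sum()
    return sim_a / (sim_a + sim_b)

def rule_choice_prob(probe, boundary=0.5, dim=0, slope=10.0):
    """Single-dimension rule: logistic choice around a boundary on one dimension,
    ignoring all other stimulus dimensions."""
    return 1.0 / (1.0 + np.exp(slope * (probe[dim] - boundary)))

# Invented two-dimensional stimuli; category A (label 0) sits low on dimension 0.
exemplars = np.array([[0.2, 0.3], [0.3, 0.7], [0.8, 0.2], [0.7, 0.8]])
labels = np.array([0, 0, 1, 1])
probe = np.array([0.25, 0.5])

p_exemplar = exemplar_choice_prob(probe, exemplars, labels)
p_rule = rule_choice_prob(probe)
```

Fitting both models to a subject's classifications of transfer items and comparing fit quality, as the abstract describes, is what licenses the claim that early responding looks rule-like and later responding looks exemplar-like.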
15.
Jianhong Shen. Visual Cognition, 2016, 24(3): 260-283
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modelling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, moving to those that captured group differences, then to those that predict true individual differences, and finally to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.
16.
17.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks, and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
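The core analysis in this abstract, recovering a low-dimensional perceptual space from pairwise similarity ratings, can be sketched with classical multidimensional scaling. The dissimilarity matrix below is invented for illustration (it stands in for averaged ratings, not the study's data), and the implementation is the textbook Torgerson procedure rather than whatever software the authors used.

```python
import numpy as np

# Hypothetical pairwise dissimilarity matrix for 6 objects forming two putative
# groups of 3: low values within a group, high values across groups, as one
# might obtain by inverting averaged similarity ratings.
D = np.array([
    [0.0, 0.20, 0.30, 0.90, 1.00, 0.80],
    [0.20, 0.0, 0.25, 0.95, 0.90, 0.85],
    [0.30, 0.25, 0.0, 0.80, 0.85, 0.90],
    [0.90, 0.95, 0.80, 0.0, 0.20, 0.30],
    [1.00, 0.90, 0.85, 0.20, 0.0, 0.25],
    [0.80, 0.85, 0.90, 0.30, 0.25, 0.0],
])

def classical_mds(D, n_components=2):
    """Classical (Torgerson) multidimensional scaling: double-center the squared
    dissimilarities to form a Gram matrix, then embed objects via the top
    eigenvectors scaled by the square roots of their eigenvalues."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:n_components]
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

coords = classical_mds(D)                      # recovered 2-D perceptual space

within = np.linalg.norm(coords[0] - coords[1])   # same putative group
across = np.linalg.norm(coords[0] - coords[3])   # different groups
```

Because within-category dissimilarities are smaller than across-category ones, group members land closer together in the recovered space, which is the pattern the abstract uses to link perceptual spaces to categorization behavior.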
18.
The ways in which visual categories are learned, and in which well-established categories are represented and retrieved, are fundamental issues of cognitive neuroscience. Researchers have typically studied these issues separately, and the transition from the initial phase of category learning to expertise is poorly characterized. The acquisition of novel categories has been shown to depend on the striatum, hippocampus, and prefrontal cortex, whereas visual category expertise has been shown to involve changes in inferior temporal cortex. The goal of the present experiment is to understand the respective roles of these brain regions in the transition from initial learning to expertise when category judgments are being made. Subjects were explicitly trained, over 2 days, to classify realistic faces. Subjects then performed the categorization task during fMRI scanning, as well as a perceptual matching task, in order to characterize how brain regions respond to these faces when not explicitly categorizing them. We found that, during face categorization, face-selective inferotemporal cortex, lateral prefrontal cortex, and dorsal striatum are more responsive to faces near the category boundary, which are most difficult to categorize. In contrast, the hippocampus and left superior frontal sulcus responded most to faces farthest from the category boundary. These dissociable effects suggest that there are several distinct neural mechanisms involved in categorization, and provide a framework for understanding the contribution of each of these brain regions in categorization.
19.
Stephen Grossberg. Attention, Perception & Psychophysics, 1994, 55(1): 48-121
A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or facade, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream. Reciprocal interactions between these visual, What, and Where mechanisms are used to discuss data about visual search and saccadic eye movements, including fast search of conjunctive targets, search of 3-D surfaces, selective search of like-colored targets, attentive tracking of multielement groupings, and recursive search of simultaneously presented targets.
20.
The visual system is remarkably efficient at extracting regularities from the environment through statistical learning. While such extraction has extensive consequences on cognition, it is unclear how statistical learning shapes the representations of the individual objects that comprise the regularities. Here we examine how statistical learning alters object representations. In three experiments, participants were exposed to either random arrays containing objects in a random order, or structured arrays containing object pairs where two objects appeared next to each other in fixed spatial or temporal configurations. After exposure, one object in each pair was briefly presented and participants judged the location or the orientation of the object without seeing the other object in the pair. We found that when an object reliably appeared next to another object in space, it was judged as being closer to the other object in space even though the other object was never presented (Experiments 1 and 2). Likewise, when an object reliably preceded another object in time, its orientation was biased toward the orientation of the other object even though the other object was never presented (Experiment 3). These results demonstrated that statistical learning fundamentally shapes how individual objects are represented in visual memory, by biasing the representation of one object toward its co-occurring partner. Importantly, participants in all experiments were not explicitly aware of the regularities. Thus, the bias in object representations was implicit. The current study reveals a novel impact of statistical learning on object representation: spatially co-occurring objects are represented as being closer in space, and temporally co-occurring objects are represented as having more similar features.