51.
The binding problem is considered in terms of how a brain-inspired cognitive system can recognize multiple sensory features from an object that may be among many objects, process those features individually, and then bind the multiple features to the object they belong to. The Causal Cognitive Architecture 3 (CCA3) is a brain-inspired cognitive architecture that uses a multi-dimensional navigation map as its basic store of information and is capable of pre-causal as well as fully causal behavior. Objects within an input sensory scene are segmented, and the sensory features (e.g., visual, auditory) of each segmented object are spatially mapped onto a variety of navigation maps. It is shown that to provide efficient, flexible, causal solutions to real-world problems, it is not sufficient to bind space (i.e., objects spatially); it is also necessary to bind time (i.e., the change and rate of change of objects within a sensory scene). The CCA3 binds both space and time onto a navigation map as physical features, and is thereby better able to function in real-world environments. Because the CCA3 is brain-inspired, the Causal Cognitive Architecture can help to hypothesize about and better understand biological mammalian brain function, including solutions to the binding problem. The CCA3 architecture can operate across different knowledge domains, supports continual lifelong learning, and demonstrates reasonable explainability.
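As an illustration of the binding scheme the abstract describes, the sketch below models a navigation map as a small grid whose cells bind multi-modal features to one object location, with change and rate of change stored as ordinary temporal features on the same map. This is a minimal exposition aid only: the class, method names, and grid representation are assumptions for illustration, not the CCA3's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only; all names and structures here are
# assumptions, not the actual CCA3 navigation-map implementation.

@dataclass
class NavigationMap:
    """A small 2-D grid; each cell binds the features of one object."""
    cells: dict = field(default_factory=dict)  # (x, y) -> feature dict

    def bind_spatial(self, pos, modality, value):
        # Spatial binding: features from different sensory modalities
        # mapped to the same cell are attributed to the same object.
        self.cells.setdefault(pos, {})[modality] = value

    def bind_temporal(self, pos, prev_pos, dt):
        # Temporal binding: store change (displacement) and rate of
        # change (velocity) as ordinary features on the map.
        dx = pos[0] - prev_pos[0]
        dy = pos[1] - prev_pos[1]
        cell = self.cells.setdefault(pos, {})
        cell["change"] = (dx, dy)
        cell["rate"] = (dx / dt, dy / dt)

nav = NavigationMap()
nav.bind_spatial((1, 2), "visual", "red sphere")
nav.bind_spatial((1, 2), "auditory", "rolling sound")  # same object, same cell
nav.bind_temporal((1, 2), prev_pos=(0, 2), dt=0.5)     # moved 1 cell in 0.5 s
print(nav.cells[(1, 2)]["rate"])  # (2.0, 0.0)
```

The design point being illustrated is that time is not handled by a separate mechanism: displacement and velocity sit in the same cell as the visual and auditory features, so one lookup retrieves the fully bound object.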
52.
Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes, differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished either through direct connectivity or via a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models, using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.