A computational model of spatial visualization capacity
Authors: Don R. Lyon, Glenn Gunzelmann, Kevin A. Gluck
Affiliations: a) L3 Communications at Air Force Research Laboratory, 6030 South Kent Street, Mesa, Arizona 85212-6061, USA
b) Air Force Research Laboratory, Mesa, Arizona, USA
Abstract: Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
Keywords: Spatial visualization; Visuospatial working memory; Mental imagery; ACT-R; Computational model
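The abstract attributes the capacity limit to ACT-R's declarative-memory decay and associative interference, plus a new spatial-interference mechanism. As a rough illustration only, the sketch below combines ACT-R's standard base-level learning equation (decay) and fan-effect spreading activation (associative interference) with a purely hypothetical spatial-proximity penalty. The penalty's functional form, the parameter values, and all function names here are illustrative assumptions, not the model reported in the paper.

```python
import math

def base_level_activation(presentation_times, now, d=0.5):
    """Base-level learning (decay): B_i = ln(sum_j (now - t_j)^-d).
    d is ACT-R's decay parameter (conventional default 0.5)."""
    return math.log(sum((now - t) ** -d for t in presentation_times))

def spreading_activation(fan_counts, total_source_activation=1.0, s_max=2.0):
    """Associative interference via the fan effect:
    S_ji = s_max - ln(fan_j), with source weights W = total / n."""
    n = len(fan_counts)
    w = total_source_activation / n if n else 0.0
    return sum(w * (s_max - math.log(fan)) for fan in fan_counts)

def spatial_interference_penalty(distances, scale=0.5):
    """Hypothetical spatial-interference term (assumption, not from the
    paper): nearby path segments (small distances) interfere more, so
    the penalty grows as distance shrinks."""
    return -scale * sum(1.0 / (1.0 + dist) for dist in distances)

def retrieval_probability(activation, threshold=0.0, noise_s=0.25):
    """ACT-R-style retrieval probability: logistic in (A - tau) / s."""
    return 1.0 / (1.0 + math.exp(-(activation - threshold) / noise_s))

# Example: a path-segment chunk rehearsed at t = 0 s and 5 s, retrieved
# at t = 10 s, with two associated landmarks (fans 2 and 3) and two
# neighboring segments at distances 1.0 and 2.0 (arbitrary units).
A = (base_level_activation([0.0, 5.0], now=10.0)
     + spreading_activation([2, 3])
     + spatial_interference_penalty([1.0, 2.0]))
print(round(retrieval_probability(A), 3))
```

In a sketch like this, the logistic retrieval probability is what would be compared against observed visualization accuracy; the paper's actual model presumably derives its accuracy predictions from ACT-R's own retrieval machinery rather than from these simplified standalone functions.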
Indexed in ScienceDirect, PubMed, and other databases.