Which Memory Buffer Does the Implicit Learning Mechanism of Nonlocal Dependencies Use: Evidence from Neural Network Simulations
Cite this article: Li Feifei, Liu Baogen. Which Memory Buffer Does the Implicit Learning Mechanism of Nonlocal Dependencies Use: Evidence from Neural Network Simulations[J]. Psychological Science, 2018, 0(4): 796-802
Authors: Li Feifei, Liu Baogen
Affiliation: 1. Zhejiang Normal University; 2. Hangzhou College of Early Childhood Teacher Education, Zhejiang Normal University
Abstract: How knowledge of nonlocal dependencies is learned implicitly remains an open question. Using the same materials and procedures as experiments with human participants, this study examined implicit learning by the Simple Recurrent Network (SRN) of two nonlocal rules over Chinese lexical tones: inversion and retrograde. The results showed that (1) across a wide range of parameter settings, the SRN learned both the inversion and the retrograde rule, indicating that the model's memory buffer can simulate human implicit learning of nonlocal dependencies; and (2) the SRN learned inversions better than retrogrades, suggesting that, functionally, implicit learning of nonlocal dependencies may preferentially use a first-in-first-out memory buffer and its associated mode of information processing. The study provides new evidence and a new perspective for exploring the mechanism of implicit learning of nonlocal dependencies.

Keywords: nonlocal dependencies; implicit learning; memory buffer; neural network simulations
Received: 2017-09-18
Revised: 2018-04-08

Which Memory Buffer Does the Implicit Learning Mechanism of Nonlocal Dependencies Use: Evidence from Neural Network Simulations
Li Feifei, Liu Baogen. Which Memory Buffer Does the Implicit Learning Mechanism of Nonlocal Dependencies Use: Evidence from Neural Network Simulations[J]. Psychological Science, 2018, 0(4): 796-802
Authors: Li Feifei, Liu Baogen
Affiliation: 1. Zhejiang Normal University; 2. Hangzhou College of Early Childhood Teacher Education, Zhejiang Normal University
Abstract: In the implicit learning literature, a basic question about how knowledge of structures and regularities is acquired is whether the learning mechanism uses a temporary storage buffer and, if so, what the nature of that buffer is. Li et al. (2013) found that people acquired unconscious structural knowledge of both Chinese tonal retrogrades and inversions. Moreover, inversions were implicitly learnt more easily than retrogrades, a pattern predicted by implicit learning that uses a first-in-first-out (FIFO) buffer rather than a last-in-first-out (LIFO) buffer. However, because Chinese Tang poetry uses an inversion structure, to which participants were likely exposed as children, it is unclear whether prior expectations of structures instantiating inversions could override the effect of the buffer type the system uses. A neural network has no such prior knowledge. Accordingly, the present study investigated whether the Simple Recurrent Network (SRN), which uses a buffer that allows the learning of nonlocal dependencies, could learn tonal inversions and retrogrades and replicate the advantage of inversions over retrogrades.

The SRN was tested on the same materials and procedures as Li et al. (2013). The networks were assigned to the four cells of a 2 (training: trained vs. untrained) × 2 (rule: inversion vs. retrograde) design. The simulations were run over all possible permutations of the parameter values, yielding 150 different models per group. The materials were strings of tonal syllables. Each string consisted of 10 different tonal syllables, in which the tone types (pings and zes) of the first five syllables predicted the tone types of the following five by forming an inversion or a retrograde. In the training phase, 144 grammatical strings were presented to the two trained groups. In the test phase, all four groups of networks were presented with 48 test sequences (half grammatical, half ungrammatical), and their ability to predict the next tone in the predictable second five elements served as the index of performance.

T-tests (with Bonferroni correction) showed that trained networks performed significantly better than untrained networks for both the inversion and the retrograde groups, suggesting that the networks learned the two rules. Moreover, for both the trained and the untrained groups, the inversion group performed significantly better than the retrograde group, and the performance difference between inversion and retrograde was greater for trained than for untrained networks, indicating that inversions were implicitly learnt more easily than retrogrades. Further, learning effects were computed by subtracting the z-scores of the untrained networks/participants from those of the trained networks/participants. A substantial number of the SRNs fell within the area covered by the human data (m ± 1 SE) (15/150 for inversion, 38/150 for retrograde), suggesting that the SRN can match the characteristic performance of human participants.

To conclude, consistent with the results of the human experiments, the present simulations showed that the SRN could learn the two nonlocal dependencies and that tonal inversions were implicitly learnt more easily than retrogrades, tentatively suggesting that, functionally, a first-in-first-out memory buffer is more likely to be involved in implicit learning of nonlocal dependencies. The present study thus provides new evidence and a new perspective for exploring the implicit learning mechanism of nonlocal dependencies.
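To make the simulation logic concrete, the following is a minimal sketch (not the authors' code) of the design described above, written in Python with NumPy. It assumes the standard definitions of the two rules: an inversion flips each tone type (ping <-> ze) while preserving serial order, whereas a retrograde reverses the order while preserving tone type. For brevity it models only the two tone types rather than the 10 distinct tonal syllables of the actual materials, uses a single illustrative hyperparameter setting rather than the study's 150 parameter permutations, and trains an Elman-style SRN with the usual one-step ("copy-back") gradient approximation; all names and settings here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
FLIP = {0: 1, 1: 0}  # tone types: 0 = ping (level), 1 = ze (oblique)

def make_string(rule):
    """Return a 10-element tone-type sequence: 5 random + 5 rule-determined."""
    first = rng.integers(0, 2, size=5)
    if rule == "inversion":                       # same order, flipped type
        second = np.array([FLIP[int(t)] for t in first])
    else:                                         # retrograde: reversed order
        second = first[::-1].copy()
    return np.concatenate([first, second])

def one_hot(t):
    v = np.zeros(2)
    v[t] = 1.0
    return v

class SRN:
    """Elman network: 2 input units -> recurrent hidden layer -> 2 outputs."""
    def __init__(self, n_hidden=20, lr=0.1):
        self.W_xh = rng.normal(0, 0.5, (2, n_hidden))
        self.W_hh = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.W_hy = rng.normal(0, 0.5, (n_hidden, 2))
        self.n_hidden, self.lr = n_hidden, lr

    def forward(self, x, h_prev):
        h = np.tanh(x @ self.W_xh + h_prev @ self.W_hh)
        y = h @ self.W_hy
        e = np.exp(y - y.max())                   # softmax over the two tones
        return h, e / e.sum()

    def train_string(self, s):
        h = np.zeros(self.n_hidden)
        for t in range(9):                        # predict element t+1 from t
            h_prev = h
            x, target = one_hot(s[t]), one_hot(s[t + 1])
            h, p = self.forward(x, h_prev)
            dy = p - target                       # cross-entropy gradient
            dh = (self.W_hy @ dy) * (1 - h ** 2)  # one backprop step only
            self.W_hy -= self.lr * np.outer(h, dy)
            self.W_xh -= self.lr * np.outer(x, dh)
            self.W_hh -= self.lr * np.outer(h_prev, dh)

    def second_half_accuracy(self, s):
        """Prediction accuracy over the 5 rule-determined elements."""
        h, correct = np.zeros(self.n_hidden), 0
        for t in range(9):
            h, p = self.forward(one_hot(s[t]), h)
            if t + 1 >= 5:                        # only the predictable half
                correct += int(np.argmax(p) == s[t + 1])
        return correct / 5

def run(rule, trained, n_train=144, n_test=48):
    net = SRN()
    if trained:
        for _ in range(n_train):
            net.train_string(make_string(rule))
    return np.mean([net.second_half_accuracy(make_string(rule))
                    for _ in range(n_test)])

for rule in ("inversion", "retrograde"):
    print(rule, "trained:", run(rule, True),
          "untrained:", run(rule, False))

Under this toy setup, the inversion rule can be carried forward position by position (each tone of the second half depends on the tone five steps earlier, in the same serial order), whereas the retrograde requires holding the entire first half and reading it out in reverse; this asymmetry is the intuition behind the FIFO-buffer prediction tested in the paper.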
Keywords: nonlocal dependencies; implicit learning; memory buffer; neural network simulations