1.
Boundary extension (BE) is a memory error for close-up views of scenes: participants tend to remember a picture as depicting a more wide-angle view than was actually displayed. However, some experiments have yielded data indicating a normalized memory of the views depicted in a set of scenes, suggesting that memory for the studied scenes is drawn toward the average view in the image set. In previous studies, normalization has been found only when the retention interval is very long or when the stimuli no longer appear to represent a spatial expanse. In Experiment 1, we examine whether normalization can influence results for scenes depicting a partial view of space even when the memory test immediately follows the study block, by manipulating the degree of difference between studied close-up and wide-angle scenes. In Experiment 2, we induce normalization in a set of scenes by creating conditions expected to lead to memory interference, suggesting that interference may be the cause of view normalization. According to the multi-source model of BE, these scenes should be extended during perception (Intraub, H. (2010). Rethinking scene perception: A multisource model. Psychology of Learning and Motivation, 52, 231–265). In Experiment 3, we show that BE is indeed observable when the same scenes are tested differently, supporting the notion that BE is primarily a perceptual phenomenon while normalization is a memory effect.
2.
Professional career paths are nowadays marked by multiple transitions, and job loss is one of the most frequent causes of such transitions. Against this background, recent studies have asked whether the "work" norm is evolving, whether an "unemployment norm" is emerging, and what effects this has on job seekers. This paper contributes to the discussion of the current evolution of the relation to work and unemployment. First, we present the results of a study in which 500 unemployed people completed a questionnaire on work centrality and the perception of work. We then discuss a discourse analysis of the meaning and meaningfulness of work for 15 working people. The results show the central function of work and a relative normalization of unemployment.
3.
Participants in two human goal-tracking experiments were simultaneously trained with negative patterning (NP) and positive patterning (PP) discriminations (A+, B+, AB–, C–, D–, CD+). Both elemental and configural models of associative learning predict a PP advantage, such that NP is solved less readily than PP. However, elemental models like the unique cue approach additionally predict that responding on AB– trials will initially be stronger than that on A+ and B+ trials, owing to summation of associative strength. Both experiments revealed a PP advantage and a strong summation effect on AB– trials in the first half of the experiments, irrespective of whether the same US was used for both discriminations (Experiment 1) or two different USs were used (Experiment 2). We argue that the correct predictions of the unique cue approach rest on its assumptions of non-normalized, context-independent stimulus processing rather than on elemental processing per se.
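The summation prediction of the unique cue approach can be illustrated with a small simulation. The sketch below is not the authors' model: it is a minimal Rescorla–Wagner learner in which each compound carries an added unique cue (`ab`, `cd`), and the learning rate, asymptote, and fixed trial ordering are illustrative assumptions.

```python
# Design from the experiments: NP (A+, B+, AB-) and PP (C-, D-, CD+),
# with a "unique cue" added to each compound stimulus.
def rw_train(trials, epochs, alpha=0.2, lam=1.0):
    """Rescorla-Wagner updates over interleaved trials; returns cue weights."""
    V = {}
    for _ in range(epochs):
        for cues, reinforced in trials:
            pred = sum(V.get(c, 0.0) for c in cues)       # summed strength
            err = (lam if reinforced else 0.0) - pred     # prediction error
            for c in cues:                                # shared update
                V[c] = V.get(c, 0.0) + alpha * err
    return V

def respond(V, cues):
    """Predicted responding = summed associative strength of the cues."""
    return sum(V.get(c, 0.0) for c in cues)

TRIALS = [
    (("A",), True), (("B",), True), (("A", "B", "ab"), False),   # NP
    (("C",), False), (("D",), False), (("C", "D", "cd"), True),  # PP
]

# Early in training, summation makes responding on AB- exceed A+ ...
V_early = rw_train(TRIALS, epochs=1)
# ... while with extended training the unique cue acquires enough negative
# strength to solve NP, so the AB- response falls below A+.
V_late = rw_train(TRIALS, epochs=1000)
```

The early-training summation effect (AB– above A+) and its later reversal match the pattern the unique cue approach predicts for the first versus second half of the experiments.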
4.
5.
This paper describes formalizations of Tait's normalization proof for the simply typed λ-calculus in the proof assistants Minlog, Coq and Isabelle/HOL. From the formal proofs, programs are machine-extracted that implement variants of the well-known normalization-by-evaluation algorithm. The case study is used to test and compare the program-extraction machineries of the three proof assistants in a non-trivial setting.
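The core of the normalization-by-evaluation algorithm can be sketched in a few lines. The following is a hypothetical Python rendering on an untyped term representation — not the code extracted by any of the three assistants: terms are evaluated into semantic values (Python closures for functions, plus first-order "neutral" terms for stuck applications), and values are then reified back into syntax, yielding β-normal forms.

```python
# Terms:   ('var', name) | ('lam', name, body) | ('app', fun, arg)
# Values:  Python closures for functions; neutrals ('nvar', x) / ('napp', f, a)

def evaluate(term, env):
    """Map a term to a semantic value under an environment."""
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    f = evaluate(term[1], env)
    a = evaluate(term[2], env)
    return f(a) if callable(f) else ('napp', f, a)  # stuck: build a neutral

def reify(val, n=0):
    """Read a semantic value back into a term, inventing fresh variables."""
    if callable(val):
        x = f"x{n}"                     # fresh variable at binding depth n
        return ('lam', x, reify(val(('nvar', x)), n + 1))
    if val[0] == 'nvar':
        return ('var', val[1])
    return ('app', reify(val[1], n), reify(val[2], n))

def normalize(term):
    return reify(evaluate(term, {}))
```

For example, `normalize` reduces `λx. (λy. y) x` to the identity `λx0. x0`, and applying the Church numeral 2 to itself normalizes to the Church numeral 4.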
6.
Liver cancer is a quite common type of cancer among individuals worldwide, and hepatocellular carcinoma (HCC) is its main malignant form. It has a high impact on individuals' lives, and detecting it early can reduce the number of annual deaths. This study proposes a new machine learning approach to detect HCC using data from 165 patients. Ten well-known machine learning algorithms are employed. In the preprocessing step, a normalization approach is used. A genetic algorithm coupled with stratified 5-fold cross-validation is applied twice: first for parameter optimization and then for feature selection. In this work, a support vector machine (SVM, type C-SVC) with the new two-level genetic optimizer (genetic training) and feature selection yielded the highest accuracy and F1-score, 0.8849 and 0.8762 respectively. Our proposed model can be evaluated on larger databases and may aid clinicians.
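The shape of this pipeline — a genetic search over candidate feature subsets, each scored by stratified 5-fold cross-validation — can be sketched as follows. This is not the paper's implementation: the dataset is synthetic, a nearest-centroid classifier stands in for the C-SVC, and the population size, mutation rate, and generation count are illustrative assumptions.

```python
import random

rng = random.Random(42)

def make_data(n=100):
    """Toy two-class data: features 0-1 informative, features 2-5 noise."""
    X, y = [], []
    for i in range(n):
        c = i % 2
        informative = [rng.gauss(2.0 * c, 0.5), rng.gauss(-2.0 * c, 0.5)]
        noise = [rng.gauss(0.0, 1.0) for _ in range(4)]
        X.append(informative + noise)
        y.append(c)
    return X, y

def stratified_kfold(y, k=5):
    """Deal each class's (shuffled) indices round-robin into k folds."""
    folds = [[] for _ in range(k)]
    by_class = {}
    for i, label in enumerate(y):
        by_class.setdefault(label, []).append(i)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

def cv_accuracy(X, y, mask, folds):
    """Fitness: cross-validated accuracy of a nearest-centroid classifier
    restricted to the features selected by the boolean mask."""
    feats = [j for j, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    correct = 0
    for fold in folds:
        test = set(fold)
        train = [i for i in range(len(y)) if i not in test]
        cents = {c: [sum(X[i][j] for i in train if y[i] == c) /
                     sum(1 for i in train if y[i] == c) for j in feats]
                 for c in set(y)}
        for i in test:
            pred = min(cents, key=lambda c: sum(
                (X[i][j] - m) ** 2 for j, m in zip(feats, cents[c])))
            correct += pred == y[i]
    return correct / len(y)

def ga_select(X, y, folds, pop_size=12, gens=10, p_mut=0.15):
    """Elitist genetic algorithm over feature masks."""
    n_feat = len(X[0])
    pop = [[rng.random() < 0.5 for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: cv_accuracy(X, y, m, folds),
                        reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_feat)          # one-point crossover
            child = [g ^ (rng.random() < p_mut)     # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: cv_accuracy(X, y, m, folds))

X, y = make_data()
folds = stratified_kfold(y)
best_mask = ga_select(X, y, folds)
```

In the paper's two-level scheme the same genetic machinery would be run a second time over the classifier's hyperparameters (e.g. the C-SVC cost term), with the cross-validated score again serving as fitness.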