Similar articles
20 similar articles found (search time: 15 ms)
2.
Frequency theories of concept learning assume that people count how often features occur among instances of a concept, but different versions make various assumptions about what features they count. According to the basic feature model, only basic features are counted, whereas according to the configural model, basic features and configural features (all combinations of basic features) are counted. Two experiments assessed the predictions of both versions of frequency theory. Subjects viewed schematic human faces, which included both positive and negative instances of the concept to be learned, and then provided typicality ratings, classification responses, and frequency estimates of configural features, basic features, and whole exemplars. Because both models assume that basic features are counted, they make the same predictions in many situations. Here, the basic feature estimation and whole exemplar tests were designed such that both models make the same predictions, whereas the typicality rating, classification, and configural feature estimation tests were designed to distinguish between the models. The pattern of results clearly supported the basic feature version of frequency theory.
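The difference between the two counting schemes can be sketched in a few lines of Python (a toy illustration, not the authors' procedure; the feature encoding and function name are assumptions):

```python
from itertools import combinations

def count_features(instances, configural=False):
    """Tally how often each feature occurs across concept instances.

    instances: list of sets of basic features, e.g. {"eyes:wide", "nose:long"}.
    With configural=False, only basic features are tallied (basic feature
    model); with configural=True, every combination of two or more basic
    features is tallied as well (configural model).
    """
    counts = {}
    for inst in instances:
        # Each basic feature is tracked as a one-element frozenset.
        features = [frozenset([f]) for f in sorted(inst)]
        if configural:
            # Configural features: all combinations of 2+ basic features.
            for r in range(2, len(inst) + 1):
                features += [frozenset(c) for c in combinations(sorted(inst), r)]
        for f in features:
            counts[f] = counts.get(f, 0) + 1
    return counts
```

Both models agree on the basic-feature tallies; they diverge only on whether combinations are tracked, which is why tests probing configural frequency estimates can discriminate between them.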

4.
The exact role of the cerebellum in motor learning and cognition is controversial. Nonetheless, recent ideas and facts have prompted an attempt at building and testing a more unified and coherent conceptualization. This article will suggest that the cerebellum might indeed participate in both motor control and cognition, and in motor adaptation, motor learning, and procedural learning. The proposed process would entail stimulus-response linkage through trial and error learning, and would consist of groupings of single-response elements (motor and cognitive) into large combinations. After practice, the occurrence of a sensory or experiential 'context' would automatically trigger the combined response. The parallel fiber is the proposed agent of stimulus-response linkage and of combining the response elements. The attempt here is to focus on the role of the parallel fiber as a possible combiner of downstream motor and cognitive elements.

5.
Learning has been defined functionally as changes in behavior that result from experience or mechanistically as changes in the organism that result from experience. Both types of definitions are problematic. We define learning as ontogenetic adaptation, that is, as changes in the behavior of an organism that result from regularities in the environment of the organism. This functional definition not only solves the problems of other definitions, but also has important advantages for cognitive learning research.

6.
The focus of the present article was to analyze processes that determine the enactment and age effect in a multi-trial free recall paradigm by looking at the serial position effects. In an experimental study (see Schatz et al., 2010), the performance-enhancing effect of enactive encoding and repeated learning was tested with older and younger participants. As expected, there was a steady improvement of memory performance as a function of repeated learning regardless of age. In addition, enactive encoding led to better memory performance than verbal encoding in both age groups. Furthermore, younger adults outperformed the elderly regardless of type of encoding. Analyses in the present article show that encoding by enacting seems to profit especially from remembering the last items of a presented list. Regarding age differences, younger outperformed older participants in nearly all item positions. The performance enhancement after task repetition is due to a higher number of recalled items in the middle positions in a subject performed task (SPT) and a verbal task (VT), as well as the last positions of a learned list in VT.

7.
How serial is serial processing in vision?
E. Zohary, S. Hochstein, Perception, 1989, 18(2), 191-200
Visual search for an element defined by the conjunction of its colour and orientation has previously been shown to be a serial processing task since reaction times increase linearly with the number of distractor elements used in the display. Evidence is presented that there are parallel processing constituents to this serial search. Processing time depended on the ratio of the number of the two distractor types used, suggesting that only one type was scanned. Which type was scanned also depended on the distractor ratio, indicating that this decision was made after stimulus presentation and was based on a parallel figure-ground separation of the stimulus elements. Furthermore, in accordance with this serial scanning model, there was an increase in processing speed (elements scanned per second) with increase in number of elements to be scanned. This increased efficiency suggests that clumps of elements were processed synchronously. Under the stimulation conditions used, clumps contained six to sixteen elements and each clump was processed in 50-150 ms.
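The scanning account above implies that reaction time grows with the number of clumps rather than the number of individual items. A minimal sketch of that prediction (the clump-size and per-clump-time ranges come from the abstract; the base time, the scan-the-smaller-type rule, and the function name are assumptions made for illustration):

```python
import math

def predicted_rt_ms(n_type1, n_type2, clump_size=8, ms_per_clump=100, base_ms=400):
    """Toy serial-clump-scan model of conjunction search.

    Only the less numerous distractor type is scanned (as the
    distractor-ratio result suggests), and items are processed in
    clumps handled synchronously, so RT grows with the number of
    clumps rather than the raw item count.
    """
    n_scanned = min(n_type1, n_type2)
    n_clumps = math.ceil(n_scanned / clump_size)
    return base_ms + n_clumps * ms_per_clump
```

Under this sketch, doubling the scanned set from 8 to 16 items adds only one clump's worth of time, which is consistent with the reported rise in elements scanned per second as display size grows.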

9.
The customary assumption in the study of human learning using alternating study and test trials is that learning occurs during study trials and that test trials are useful only to measure learning. In fact, tests seem to play little role in the development of learning, because the learning curve is similar even when the number of test trials varies widely (Tulving, Journal of Verbal Learning and Verbal Behavior, 6:175-184, 1967). However, this outcome seems odd, because other research has shown that testing fosters greater long-term learning than does studying. We report three experiments addressing whether tests affect the shape of the learning curve. In two of the experiments, we examined this issue by varying the number of spaced study trials in a sequence and examining performance on only a single test trial at the end of the series (a "pure-study" learning curve). We compared these pure-study learning curves to standard learning curves and found that the standard curves increase more rapidly and reach a higher level in both free recall (Exp. 1) and paired-associate learning (Exp. 2). In Experiment 3, we provided additional study trials in the "pure-study" condition to determine whether the standard (study-test) condition would prove superior to a study-study condition. The standard condition still produced better retention on both immediate and delayed tests. Our experiments show that test trials play an important role in the development of learning using both free-recall (Exps. 1 and 3) and paired-associate (Exp. 2) procedures. Theories of learning have emphasized processes that occur during study, but our results show that processes engaged during tests are also critical.

10.
Current practices in the undergraduate Psychology of Learning course were assessed through a survey in which a questionnaire probing the teaching of the course was sent to 238 4-year colleges and universities in the United States. Fifty-four percent of the questionnaires were returned. Learning courses were taught at all but 10 of the schools that responded. The course typically is one of several that can be selected to fulfill requirements for the major in psychology. The course orientation and content varied widely from cognitive to eclectic to behavioral, and laboratory requirements existed in less than half of the courses. The effects of these practices on behavior analysis are considered and several suggestions are made for teaching behavior analysis in the Learning course and elsewhere to undergraduates.

11.
What are the minimal conditions for the formation of chunks by a pigeon learning an arbitrary list? Experiment 1 compared the acquisition of two types of chunkable list (each composed of colors and achromatic geometric forms): A→B→C→D'→E' (or A'→B'→C'→D→E) and A→B→C'→D'→E' (or A'→B'→C→D→E). The first type of list was acquired more rapidly than the second. On both lists, however, evidence of chunking did not emerge until the four-item phase of training (e.g., pauses at the end of one category of list item). In Experiment 2, chunking was shown to occur on four-item lists in which colors and forms were segregated (A→B→C'→D' and A'→B'→C→D), but not on lists in which the two types of items were interspersed (A→B'→C'→D and A'→B→C→D'). As in Experiment 1, evidence of chunking (pauses at chunk boundaries) did not appear until the fourth item was added.

12.
While there is a substantial conceptual literature on equality in education, there has been little clarificatory discussion on the term equity, despite its frequent use in policy and planning documents. The article draws out some different ways in which equity can be understood in education. It distinguishes three forms of equity, looking at the social context when major shifts in the meaning of the term took place in English: the fourteenth century, the sixteenth century, and the eighteenth century. Terming these equity from below, equity from above, and equity from the middle, the analysis highlights how each helps clarify aspects of the concern with diversity within the capability approach. The conclusion drawn is that all three forms of equity need to be placed in articulation to expand capabilities in education.

14.
A philosophical standard in the debates concerning material constitution is the case of a statue and a lump of clay, Lumpl and Goliath respectively. According to the story, Lumpl and Goliath are coincident throughout their respective careers. Monists hold that they are identical; pluralists that they are distinct. This paper is concerned with a particular objection to pluralism, the Grounding Problem. The objection is roughly that the pluralist faces a legitimate explanatory demand to explain various differences she alleges between Lumpl and Goliath, but that the pluralist's theory lacks the resources to give any such explanation. In this paper, I explore the question of whether there really is any problem of this sort. I argue (i) that explanatory demands that are clearly legitimate are easy for the pluralist to meet; (ii) that even in cases of explanatory demands whose legitimacy is questionable the pluralist has some overlooked resources; and (iii) there is some reason for optimism about the pluralist's prospects for meeting every legitimate explanatory demand. In short, no clearly adequate statement of a Grounding Problem is extant, and there is some reason to believe that the pluralist can overcome any Grounding Problem that we haven't thought of yet.

15.
The magnitude and nature of the diplopia threshold, that is, the value of the retinal disparity at which binocular single vision ends, were studied in four experiments. The results show that the magnitude of the diplopia threshold is highly dependent on the subject tested (differences up to a factor of 6), the amount of training the subject has received (differences up to a factor of 2.5), the criterion used for diplopia (limits for unequivocal singleness of vision were up to a factor of 3 lower than those for unequivocal doubleness of vision), and the conspicuousness of disparity, which can be influenced both by the surrounding stimuli (differences up to a factor of 3.5) and stereoscopic depth (differences up to a factor of 4.5). Our data do not confirm previous findings of interference effects associated with the initial appearance of binocular disparity when test stimuli are presented tachistoscopically. A remarkable finding was that the magnitude of the diplopia threshold seems to be determined by the amount of intrinsic noise in the disparity domain, as revealed by the standard deviations of the thresholds for tachistoscopically presented test stimuli. The overall results suggest that the diplopia threshold is, in essence, not the rigid boundary of a dead zone, but, rather, a disparity level corresponding to a lenient criterion for singleness of vision which leads to a useful interpretation of the percept of the stimulus without disparity, given the variability of this percept due to intrinsic noise in the disparity domain.

16.
Jaakko Hintikka, Synthese, 2011, 183(1), 69-85
The modern notion of the axiomatic method developed as a part of the conceptualization of mathematics starting in the nineteenth century. The basic idea of the method is the capture of a class of structures as the models of an axiomatic system. The mathematical study of such classes of structures is not exhausted by the derivation of theorems from the axioms but includes normally the metatheory of the axiom system. This conception of axiomatization satisfies the crucial requirement that the derivation of theorems from axioms does not produce new information in the usual sense of the term called depth information. It can produce new information in a different sense of information called surface information. It is argued in this paper that the derivation should be based on a model-theoretical relation of logical consequence rather than derivability by means of mechanical (recursive) rules. Likewise completeness must be understood by reference to a model-theoretical consequence relation. A correctly understood notion of axiomatization does not apply to purely logical theories. In the latter the only relevant kind of axiomatization amounts to recursive enumeration of logical truths. First-order "axiomatic" set theories are not genuine axiomatizations. The main reason is that their models are structures of particulars, not of sets. Axiomatization cannot usually be motivated epistemologically, but it is related to the idea of explanation.

17.
In "Action and Responsibility," Joel Feinberg pointed to an important idea to which he gave the label "the accordion effect." Feinberg's discussion of this idea is of interest on its own, but it is also of interest because of its interaction with his critique, in his "Causing Voluntary Actions," of a much discussed view of H. L. A. Hart and A. M. Honoré that Feinberg labels the "voluntary intervention principle." In this essay I reflect on what the accordion effect is supposed by Feinberg to be, on differences between Feinberg's understanding of this idea and that of Donald Davidson, and on the interaction between Feinberg's discussion of the accordion effect and his critique of the voluntary intervention principle.

18.
Dichotic listening originally was a means of studying attention. Half a century ago Doreen Kimura parlayed the dichotic method into a noninvasive indicator of lateralized cerebral language representation. The ubiquitous right-ear advantage (REA) for verbal material was accepted as a concomitant of left-sided language lateralization and preferential conduction of right-ear messages to the left hemisphere. As evidence has accumulated over the past 50 years showing the REA to be dynamic and modifiable, the concept of attention has become essential for interpreting the findings. Progress in understanding the role of attention has been manifested as a transition from efforts to document attention effects to efforts to characterize their mechanisms. We summarize the relevant evidence, trace the evolution of explanatory models, and outline contemporary accounts of the role of attention in dichotic listening.

19.
Priming from imagery is typically weaker than that from perception. This has been interpreted as resulting from weaker activation of perceptual processes. However, for imagery and perception, commonality is only half the story: Each is also characterized by specific processes. If priming can be due to both unshared and shared components of imagery and perception, then it should be possible to observe greater priming from imagery than from perception. Two new priming experiments were designed to test this hypothesis, while controlling incidental task differences. In both experiments, participants studied objects by counting their parts (from a mental image or a picture). Experiment 1 used a word-picture matching test task, which was hypothesized to depend on stimulus processing specific to perception, and Experiment 2 a size judgment test task, which was hypothesized to depend on retrieval and generation processes specific to imagery. As predicted, priming for perceived objects was greater than priming for imagined objects in the word-picture matching task. Conversely, in the size judgment task, more priming from imagery than from perception was observed. These results support the conclusions that (a) imagery and perception have substantial unshared processes, and (b) these processes contribute to priming.

20.
Ian McDiarmid, Erkenntnis, 2008, 69(3), 279-293
The first part of this paper discusses Quine's views on underdetermination of theory by evidence, and the indeterminacy of translation, or meaning, in relation to certain physical theories. The underdetermination thesis says different theories can be supported by the same evidence, and the indeterminacy thesis says the same component of a theory that is underdetermined by evidence is also meaning indeterminate. A few examples of underdetermination and meaning indeterminacy are given in the text. In the second part of the paper, Quine's scientific realism is discussed briefly, along with some of the difficulties encountered when considering the 'truth' of different empirically equivalent theories. It is concluded that the difference between underdetermination and indeterminacy, while significant, is not as great as Quine claims. It just means that after we have chosen a framework theory, from a number of empirically equivalent ones, we still have further choices along two different dimensions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号