Similar Articles
 20 similar articles retrieved.
1.
An animal that is rewarded for a response in one situation (the S+) is likely to respond to similar but recognizably different stimuli, the ubiquitous phenomenon of stimulus generalization. On the basis of functional analyses of the probabilistic structure of the world, Shepard formulated a universal law of generalization, claiming that generalization gradients, as a function of the appropriately scaled distance of a stimulus from S+, should be exponential in shape. This law was tested in spatial generalization in honeybees. Based on theoretically derived scales, generalization along both dimensions (the distance from a landmark and the direction to a landmark) followed Shepard's law. Support in an invertebrate animal increases the scope of the law, and suggests that the ecological structure of the world may have driven the evolution of cognitive structures in diverse animals.
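Shepard's prediction is that response strength falls off exponentially with the scaled psychological distance from S+. A minimal sketch of that gradient is shown below; the decay constant and the probe distances are illustrative assumptions, not values from the honeybee experiments.

```python
import numpy as np

def generalization(distance, k=1.0):
    """Shepard-style gradient: response strength decays exponentially
    with the appropriately scaled distance of a probe stimulus from S+."""
    return np.exp(-k * np.asarray(distance, dtype=float))

# Probe stimuli at increasing (scaled) distances from the rewarded stimulus
distances = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
print(np.round(generalization(distances), 2))  # [1.   0.61 0.37 0.14 0.02]
```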

2.
Generalization, similarity, and Bayesian inference
Tenenbaum JB, Griffiths TL. The Behavioral and Brain Sciences, 2001, 24(4): 629-640; discussion 652-791.
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
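A hedged sketch of the kind of Bayesian recasting described here, restricted to one-dimensional interval hypotheses with a uniform prior and a size-principle likelihood; the hypothesis space and numerical ranges are illustrative assumptions, not the paper's own examples.

```python
import itertools

def p_generalize(y, examples, lo=0, hi=100):
    """Probability that a novel stimulus y shares the consequence of the
    observed examples, integrating over interval hypotheses [a, b].
    The size principle sets P(examples | h) = 1 / |h|^n for hypotheses
    that contain every example, and 0 otherwise."""
    n = len(examples)
    numerator = denominator = 0.0
    for a, b in itertools.combinations(range(lo, hi + 1), 2):
        if a <= min(examples) and max(examples) <= b:    # h is consistent with the data
            lik = (b - a) ** -n                          # size principle
            denominator += lik
            if a <= y <= b:                              # h also contains the novel stimulus
                numerator += lik
    return numerator / denominator

print(p_generalize(60, [50]))          # broad gradient from a single example
print(p_generalize(60, [48, 50, 52]))  # sharper gradient from several tight examples
```

The second call illustrates the qualitative behaviour the framework is designed to capture: several tightly clustered examples yield narrower generalization than a single example.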

3.
Tamminen J, Davis MH, Merkx M, Rastle K. Cognition, 2012, 125(1): 107-112.
Accounts of memory that postulate complementary learning systems (CLS) have become increasingly influential in the field of language learning. These accounts predict that generalisation of newly learnt linguistic information to untrained contexts requires offline memory consolidation. Such generalisation should not be observed immediately after training, as these accounts claim unconsolidated representations are context- and hippocampus-dependent and gain contextual and hippocampal independence only after consolidation. We trained participants on new affixes (e.g., -nule) attached to familiar word stems (e.g., buildnule), testing them immediately or 2 days later. Participants showed an immediate advantage for trained affixes in a speeded shadowing task as long as these affixes occurred in the stem contexts in which they were learnt (e.g., buildnule). This learning effect generalised to words with untrained stems (e.g., sailnule) only in the delayed test condition. By contrast, a non-speeded definition selection task showed immediate generalisation. We propose that generalisation can be supported by initial context-dependent memories given sufficient processing time, but that context-independent lexical representations emerge only following consolidation, as predicted by CLS accounts.

4.
From the principle that subjective dissimilarity between 2 stimuli is determined by their ratio, Fechner derives his logarithmic law in 2 ways. In one derivation, ignored and forgotten in modern accounts of Fechner's theory, he formulates the principle in question as a functional equation and reduces it to one with a known solution. In the other derivation, well known and often criticized, he solves the same functional equation by differentiation. Both derivations are mathematically valid (the much-derided "expedient principle" mentioned by Fechner can be viewed as merely an inept way of pointing at a certain property of the differentiation he uses). Neither derivation uses the notion of just-noticeable differences. But if Weber's law is accepted in addition to the principle in question, then the dissimilarity between 2 stimuli is approximately proportional to the number of just-noticeable differences that fit between these stimuli: the smaller Weber's fraction, the better the approximation, and Weber's fraction can always be made arbitrarily small by an appropriate convention. We argue, however, that neither the 2 derivations of Fechner's law nor the relation of this law to thresholds constitutes the essence of Fechner's approach. We see this essence in the idea of additive cumulation of sensitivity values. Fechner's work contains a surprisingly modern definition of sensitivity at a given stimulus: the rate of growth of the probability-of-greater function, with this stimulus serving as a standard. The idea of additive cumulation of sensitivity values lends itself to sweeping generalizations of Fechnerian scaling.
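One standard reconstruction of the functional-equation route mentioned here (our notation, not necessarily Fechner's own) runs as follows:

```latex
% If dissimilarity depends only on the stimulus ratio,
%     D(x, y) = F(x / y),
% and dissimilarities cumulate additively along the continuum,
%     D(x, z) = D(x, y) + D(y, z),
% then F satisfies Cauchy's logarithmic functional equation
%     F(uv) = F(u) + F(v),
% whose well-behaved solutions are F(u) = k log u.  This gives the
% logarithmic law for the dissimilarity of a stimulus from a referent:
\[
  D(\varphi, \varphi_0) = k \log \frac{\varphi}{\varphi_0},
\]
% and, once Weber's law \Delta\varphi / \varphi = c is added, this
% dissimilarity is approximately proportional to the number of
% just-noticeable differences between \varphi_0 and \varphi.
```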

5.
The present study compared the impact of symbolic equivalence and opposition relations on fear generalisation. In a procedure using nonsense words, some stimuli became symbolically equivalent to an aversively conditioned stimulus while others were symbolically opposite. The generalisation of fear to symbolically related stimuli was then measured using behavioural avoidance, retrospective unconditioned stimulus expectancy and stimulus valence ratings. Equivalence relations facilitated fear generalisation while opposition relations constrained generalisation. The potential clinical implications of symbolic generalisation are discussed.

6.
Excessive fear generalisation is a feature characteristic of clinical anxiety and has been linked to its aetiology. Previous animal studies have shown that the mere passage of time increases fear generalisation and that brief exposure to training cues prior to long-term testing reverses this effect. The current study examined these phenomena in humans. Healthy participants learned the relationship between the presentation of a picture of a neutral male face and the delivery of a mild shock. One group was immediately tested with a novel picture of a somewhat different male face (generalisation test). Another group was tested one week later. A third group was also tested one week later and was additionally exposed to the training picture prior to testing. During picture presentations, shock-expectancy ratings were obtained as a measure of fear. Fear generalisation increased from the immediate test to the 1-week follow-up test. This result could not be attributed to level of neuroticism or a general increase in fear (incubation). Furthermore, the time-dependent increase in fear generalisation vanished following brief exposure to the training picture. Results indicate that human fear generalisation is a temporally dynamic process and that memory for stimulus details can be re-established following a reminder treatment.

7.
Previous research (Smeets, 1991) suggested that when given a new discrimination, children respond on the basis of physical similarity with previously discriminated stimuli. They respond to a stimulus similar to another preferred stimulus (S+ transfer) and respond away from a stimulus similar to another nonpreferred stimulus (S- transfer). When both types of transfer apply to the same stimulus, S+ transfer prevails: S+ priority transfer (S+PT). The present study demonstrated that S+PT also occurs when the criterion task consists of two nonpreferred stimuli. When given a choice between two previously nonpreferred stimuli, one similar and one dissimilar to other preferred stimuli, children select the former. They do not do so, however, when a nonpreferred stimulus resembling another preferred stimulus is presented with a new nonpreferred stimulus. These findings suggest that the children's preferences were not based on the physical resemblance with other (non)preferred stimuli but on the functions (S+, S-, S0) of individual stimulus components. A theoretical model is presented that accounts for all experimental data reported in the previous and present studies. The model implies that discriminative responding not only results from but also determines the functional properties of individual stimulus elements.

8.
An otherwise lawlike generalisation hedged by a ceteris paribus (CP) clause qualifies as a law of nature, if the CP clause can be substituted with a set of conditions derived from the multivariate regression model used to interpret the empirical data in support of the generalisation. Three studies in human biology that use regression analysis are surveyed, showing that standard objections to cashing out CP clauses in this way—based on alleged vagueness, vacuity, or lack of testability—do not apply. CP laws also cannot be said to be simply false due to the indefinitely many conditions not explicitly stated in their associated model: scientific CP clauses imply that these are, given the evidence, not nomically relevant.

9.
Individual differences in fear generalisation have been proposed to play a role in the aetiology and/or maintenance of anxiety disorders, but few data are available to directly support that claim. The research that is available has focused mostly on generalisation of peripheral and central physiological fear responses. Far less is known about the generalisation of avoidance, the behavioural component of fear. In two experiments, we evaluated how neuroticism, a known vulnerability factor for anxiety, modulates an array of fear responses, including avoidance tendencies, towards generalisation stimuli (GS). Participants underwent differential fear conditioning, in which one conditioned stimulus (CS+) was repeatedly paired with an aversive outcome (shock; unconditioned stimulus, US), whereas another was not (CS−). Fear generalisation was observed across measures in Experiment 1 (US expectancy and evaluative ratings) and Experiment 2 (US expectancy, evaluative ratings, skin conductance, startle responses, safety behaviours), with overall highest responding to the CS+, lowest to the CS− and intermediate responding to the GSs. Neuroticism had very little impact on fear generalisation (but did affect GS recognition rates in Experiment 1), in line with the idea that fear generalisation is largely an adaptive process.

10.
Staddon discusses a vast array of topics in comparative psychology in this book. His view is that adaptive behavior in most cases is the result of optimal choice acting on an animal's knowledge about the world. Staddon refers to this as a functional teleonomic approach inasmuch as it attempts to understand an animal's behavior in terms of goals. He builds mathematical models based on this idea that are designed to reproduce specific sets of empirical observations, usually qualitatively. A natural consequence of Staddon's approach is that many models are developed, each of which applies to a specific set of observations. An alternative to functional teleonomy is a functional approach that builds on prior principles. In most cases, this approach favors a single-theory account of behavior. Prior principles can be understood as functional stand-ins for antecedent material causes, which means that these accounts are closer to mechanistic theories than are goal-based teleonomic accounts. An ontological perspective, referred to as supervenient realism, is a means of understanding the relationship between functional theories and the material world. According to this perspective, the algorithmic operation of a successful functional theory may be understood to supervene on the material operation of the nervous system.

11.
It is clear that children generalise their knowledge of events from one instantiation to another. The means by which generalisation is accomplished are unclear. In three experiments, we used elicited imitation of multi-step sequences to test whether 25-month-olds' generalisation occurs as a function of forgetting of the features of the original event. Experiment 1 was an initial test of generalisation from one version of an event to another version involving perceptually different yet functionally analogous props. After a 1-week delay, children showed evidence of generalisation by enacting the event using the analogous props. Experiment 2 was a within-subject test of generalisation and memory for the original version of an event. After a 1-week delay, when paired with unrelated distractor props, analogous props served as effective retrieval cues; when paired with the original props, analogous props were treated as functionally equivalent to unrelated distractors. This within-subject reversal in the functional role of analogous props is compelling evidence that children's event representations include specific features and, at the same time, are generalisable. In Experiment 3, children showed evidence of generalisation immediately after exposure to an event, thereby making clear that generalisation occurs even in the face of robust memory for the specific features of the original event.

12.
In failing to define the units in which the stimulus is to be measured, the Weber law might seem to make no definite assertion, and indeed, it is shown that any single empirical function, supposed to relate a given stimulus intensity with that intensity which is just noticeably greater, can be put into the Weber form by a suitable change of scale in which the stimulus intensity is to be measured. Nevertheless, it turns out that if different individuals have different Weber functions, when the intensities are measured on a given scale, then it is by no means always possible to transform the scale so that all of the functions can take on the Weber form. Some necessary conditions are given for the possibility of such a transformation when there is at hand a finite number of functions, and when the functions depend upon a single parameter the necessary and sufficient condition is easily derived. The same discussion leads to a generalization of Thurstone's psychophysical scale and shows that such a scale is always possible.
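The scale-transformation question can be stated compactly; the notation below is ours, not the paper's:

```latex
% Let g(x) be the empirical intensity that is just noticeably greater
% than x.  On a rescaled intensity u, the Weber form means a constant
% relative increment:
%     u(g(x)) = (1 + c) u(x)   for all x.
% For one empirical function g this is a Schroeder-type functional
% equation, which a suitable strictly increasing rescaling u can always
% satisfy (the abstract's first claim).  For several individuals with
% jnd functions g_1, ..., g_m, however, a single common rescaling u with
\[
  u\bigl(g_i(x)\bigr) = (1 + c_i)\,u(x), \qquad i = 1, \dots, m,
\]
% need not exist, which is why the paper derives necessary (and, for a
% one-parameter family, necessary and sufficient) conditions.
```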

13.
Generalising what is learned about one stimulus to other, perceptually related stimuli is a basic behavioural phenomenon. We evaluated whether a rule learning mechanism may serve to explain such generalisation. To this end, we assessed whether inference rules communicated through verbal instructions affect generalisation. Expectancy ratings, but not valence ratings, proved sensitive to this manipulation. In addition to revealing a role for inference rules in generalisation, our study has clinical implications as well. More specifically, we argue that targeting inference rules might prove to be an effective strategy to affect the excessive generalisation that is often observed in psychopathology.

14.
Constraints on computational models of basic processes in reading
There are numerous reports in the visual word recognition literature that the joint effects of various factors are additive on reaction time. A central claim by D. C. Plaut and J. R. Booth (2000, 2006) is that their parallel distributed processing model simulates additive effects of stimulus quality and word frequency in the context of lexical decision. If correct, this success would have important implications for computational accounts of reading processes. However, the results of further simulations with this model undermine this claim given that the joint effects of stimulus quality and word frequency yield a nonmonotonic function (underadditivity, additivity, and overadditivity) depending on the size of the stimulus quality effect, whereas skilled readers yield additivity more broadly. The implications of these results both locally and more globally are discussed, and a number of other issues are noted. Additivity of factor effects constitutes a benchmark that computational accounts should strive to meet.

15.
Schwartz R. The Behavioral and Brain Sciences, 2001, 24(4): 626-628; discussion 652-791.
Roger Shepard's proposals and supporting experiments concerning evolutionary internalized regularities have been very influential in the study of vision and in other areas of psychology and cognitive science. This paper examines issues concerning the need, nature, explanatory role, and justification for postulating such internalized constraints. In particular, I seek further clarification from Shepard on how best to understand his claim that principles of kinematic geometry underlie phenomena of motion perception. My primary focus is on the ecological validity of Shepard's kinematic constraint in the context of ordinary motion perception. First, I explore the analogy Shepard draws between internalized circadian rhythms and the supposed internalization of kinematic geometry. Next, questions are raised about how to interpret and justify applying results from his own and others' experimental studies of apparent motion to more everyday cases of motion perception in richer environments. Finally, some difficulties with Shepard's account of the evolutionary development of his kinematic constraint are considered.

16.
Edelman S. The Behavioral and Brain Sciences, 1998, 21(4): 449-467; discussion 467-498.
Advanced perceptual systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation. I propose a unified approach to visual representation, addressing the need for superordinate and basic-level categorization and for the identification of specific instances of familiar categories. According to the proposed theory, a shape is represented internally by the responses of a small number of tuned modules, each broadly selective for some reference shape, whose similarity to the stimulus it measures. This amounts to embedding the stimulus in a low-dimensional proximal shape space spanned by the outputs of the active modules. This shape space supports representations of distal shape similarities that are veridical as Shepard's (1968) second-order isomorphisms (i.e., correspondence between distal and proximal similarities among shapes, rather than between distal shapes and their proximal representations). Representation in terms of similarities to reference shapes supports processing (e.g., discrimination) of shapes that are radically different from the reference ones, without the need for the computationally problematic decomposition into parts required by other theories. Furthermore, a general expression for similarity between two stimuli, based on comparisons to reference shapes, can be used to derive models of perceived similarity ranging from continuous, symmetric, and hierarchical ones, as in multidimensional scaling (Shepard 1980), to discrete and nonhierarchical ones, as in the general contrast models (Shepard & Arabie 1979; Tversky 1977).
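A minimal sketch of the representational scheme described here: a stimulus is coded by the graded responses of a handful of modules tuned to reference shapes, and similarity between stimuli is then computed in that low-dimensional response space rather than between the distal shapes themselves. The Gaussian tuning profile, the tuning width, and the random shape descriptors are illustrative assumptions.

```python
import numpy as np

def module_responses(stimulus, reference_shapes, width=4.0):
    """Each broadly tuned module measures the similarity of the stimulus
    to its own reference shape; together the responses embed the stimulus
    in a low-dimensional proximal shape space."""
    dists = np.linalg.norm(reference_shapes - stimulus, axis=1)
    return np.exp(-(dists ** 2) / (2 * width ** 2))

def proximal_similarity(a, b, reference_shapes):
    """Similarity of two stimuli computed between their module-response
    vectors (a second-order comparison of similarities)."""
    ra = module_responses(a, reference_shapes)
    rb = module_responses(b, reference_shapes)
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

# Illustrative high-dimensional distal shape descriptors
rng = np.random.default_rng(0)
refs = rng.normal(size=(5, 20))                   # five reference shapes
x, y = rng.normal(size=20), rng.normal(size=20)   # two novel stimuli
print(proximal_similarity(x, y, refs))
```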

17.
Wilburn C, Feeney A. Cognition, 2008, 106(3): 1451-1464.
In a recently published study, Sloutsky and Fisher [Sloutsky, V. M., & Fisher, A. V. (2004a). When development and learning decrease memory: Evidence against category-based induction in children. Psychological Science, 15, 553-558; Sloutsky, V. M., & Fisher, A. V. (2004b). Induction and categorization in young children: A similarity-based model. Journal of Experimental Psychology: General, 133, 166-188.] demonstrated that children have better memory for the items that they generalise to than do adults. On the basis of this finding, they claim that children and adults use different mechanisms for inductive generalisations; whereas adults focus on shared category membership, children project properties on the basis of perceptual similarity. Sloutsky & Fisher attribute children's enhanced recognition memory to the more detailed processing required by this similarity-based mechanism. In Experiment 1 we show that children look at the stimulus items for longer than adults. In Experiment 2 we demonstrate that although when given just 250 ms to inspect the items children remain capable of making accurate inferences, their subsequent memory for those items decreases significantly. These findings suggest that there are no necessary conclusions to be drawn from Sloutsky & Fisher's results about developmental differences in generalisation strategy.

18.
Various statistical procedures are developed for determining the psychophysical law within the context of a functional measurement approach to studying stimulus integration in perception. The specific results are limited to additive or multiplicative psychological laws, but the generalization to alternative cognitive algebras is evident. Estimation of the parameters of the hypothesized psychophysical law, and a test of the hypothesis that the row psychophysical law is the same as the column psychophysical law in a two-factor stimulus design, are considered for various possible psychophysical laws, including linear, polynomial, and power laws.
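A hedged illustration of the functional-measurement logic for the additive case: in a row x column stimulus design, additivity implies parallelism (a negligible interaction residual), and the marginal means then estimate the subjective values of the row and column stimuli up to a linear transformation. The power-law values, the noise level, and the design size below are illustrative assumptions, not the paper's procedures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical subjective values generated by power-law psychophysical functions
s_row = np.array([1.0, 2.0, 4.0, 8.0]) ** 0.5   # row stimulus values
s_col = np.array([1.5, 3.0, 6.0]) ** 0.4        # column stimulus values

# Additive integration plus judgment noise: R_ij = s_row[i] + s_col[j] + e_ij
ratings = s_row[:, None] + s_col[None, :] + rng.normal(scale=0.05, size=(4, 3))

# Marginal means recover the subjective values (up to a linear transformation)
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

# Parallelism check: the interaction residual should be near zero under additivity
interaction = ratings - row_means[:, None] - col_means[None, :] + ratings.mean()
print(np.round(row_means, 2), np.round(col_means, 2))
print("max |interaction residual|:", np.round(np.abs(interaction).max(), 3))
```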

19.
Augustin T. Acta Psychologica, 2008, 128(1): 176-185.
Frequently, it is postulated that the results of a ratio production (resp., ratio estimation) experiment can be summarized by Stevens' power law ψ = αφ^β. In the present article, it is argued that the power law parameters depend, among other things, on the standard stimulus presented as a reference point, and the physical stimulus scale by which the physical intensities are measured. To formalize this idea, a new formulation of Stevens' power law is presented. We show that the exponent in Stevens' power law can only be interpreted in a meaningful way if the stimulus scale is a ratio scale. Furthermore, we present empirically testable axioms (termed invertibility and weak multiplicativity) which are both necessary and sufficient for the power law exponent to be invariant under changes of the standard stimulus. Finally, invertibility and weak multiplicativity are evaluated in a ratio production experiment. Ten participants were required to adjust the area of variable circles to prescribed ratio production factors. Both axioms are violated for all participants. The results cast doubts on the well-established practice of comparing power law exponents across different modalities.
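A short note on why the exponent is tied to the scale type of the physical measure (a standard argument restated in our notation, not the paper's axiomatization):

```latex
% Stevens' power law:
\[
  \psi = \alpha\,\varphi^{\beta}.
\]
% Under an admissible ratio-scale transformation \varphi' = k\varphi (k > 0),
%   \psi = \alpha (\varphi'/k)^{\beta} = (\alpha k^{-\beta})\,\varphi'^{\,\beta},
% so only the multiplier changes and the exponent \beta is invariant.
% Under a merely interval-scale transformation \varphi' = k\varphi + m,
% the relation is no longer of power-law form in \varphi', so \beta has
% no scale-independent meaning.  This is the sense in which the exponent
% is interpretable only when the stimulus scale is a ratio scale.
```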

20.
The effect of stimulus luminance on visual information acquisition depends on the duration for which the stimulus is displayed. In this reply, three duration regions are described, and the accounts of luminance effects provided by Loftus (1985d) and Sperling (1986) are compared for each region. Of particular interest is why different combinations of duration and luminance produce equal performance levels. Two possibilities are considered: first, equal performance may be a logical consequence of equivalent cognitive states; and second, equal performance may be a coincidental consequence of different cognitive states. I suggest that equal performance following relatively short durations results from equivalent cognitive states, whereas equal performance following longer durations results from different cognitive states.
