181.
Voluntary behaviors (operants) can come in two varieties: Goal-directed actions, which are emitted based on the remembered value of the reinforcer, and habits, which are evoked by antecedent cues and performed without the reinforcer's value in active memory. The two are perhaps most clearly distinguished with the reinforcer-devaluation test: Goal-directed actions are suppressed when the reinforcer is separately devalued and responding is tested in extinction, and habitual behaviors are not. But what is the function of habit learning? Habits are often thought to be strong and unusually persistent. The present selective review examines this idea by asking whether habits identified by the reinforcer-devaluation test are more resistant to extinction, resistant to the effects of other contingency change, vulnerable to relapse, resistant to the weakening effects of context change, or permanently in place once they are learned. Surprisingly little evidence supports the idea that habits are permanent or more persistent. Habits are more context-specific than goal-directed actions are. Methods that make behavior persistent do not necessarily work by encouraging habit. The function of habit learning may not be to make a behavior strong or more persistent but to make it automatic and efficient in a particular context.
182.
Six pigeons were trained in a delayed matching-to-sample task involving bright- and dim-yellow samples on a central key, a five-peck response requirement to either sample, a constant 1.5-s delay, and the presentation of comparison stimuli composed of red on the left key and green on the right key or vice versa. Green-key responses were occasionally reinforced following the dimmer-yellow sample, and red-key responses were occasionally reinforced following the brighter-yellow sample. Reinforcer delivery was controlled such that the distribution of reinforcers across both comparison-stimulus color and comparison-stimulus location could be varied systematically and independently across conditions. Matching accuracy was high throughout. The ratio of left to right side-key responses increased as the ratio of left to right reinforcers increased, the ratio of red to green responses increased as the ratio of red to green reinforcers increased, and there was no interaction between these variables. However, side-key biases were more sensitive to the distribution of reinforcers across key location than were comparison-color biases to the distribution of reinforcers across key color. An extension of Davison and Tustin's (1978) model of DMTS performance fit the data well, but the results were also consistent with an alternative theory of conditional discrimination performance (Jones, 2003) that calls for a conceptually distinct quantitative model.
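For context (not part of the abstract above), the basic Davison and Tustin (1978) account treats choice between the comparison stimuli after each sample as generalized matching plus a discriminability term; a minimal sketch of that form, using conventional symbols rather than the specific extension fitted in the study, is

\[ \log\!\left(\frac{B_{11}}{B_{12}}\right) = a\,\log\!\left(\frac{R_1}{R_2}\right) + \log d + \log c, \qquad \log\!\left(\frac{B_{21}}{B_{22}}\right) = a\,\log\!\left(\frac{R_1}{R_2}\right) - \log d + \log c, \]

where B_{ij} denotes responses to comparison j after sample i, R_1/R_2 is the obtained reinforcer ratio, a is sensitivity to reinforcement, log d indexes sample discriminability, and log c is inherent bias.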
183.
Pigeons' keypecking was maintained under two- and three-component chained schedules of food presentation. The component schedules were all fixed-interval schedules of either 1- or 2-min duration. Across conditions the presence of houselight illumination within each component schedule was manipulated. For each pigeon, first-component response rates increased significantly when the houselight was extinguished in the first component and illuminated in the second. The results suggest that the increase was not the result of disinhibition or modification of stimulus control by component stimuli, but appears to result from the reinforcement of responding by the onset of illumination in the second component. Additionally, the apparent reinforcing properties of houselight illumination resulted neither from association of the houselight with the terminal component of the chained schedule nor through generalization of the hopper illumination present during food presentation. The results of the present series of experiments are related to previous demonstrations of illumination-reinforced responding and to the interpretation of data from experiments employing houselight illumination as stimuli associated with timeout or brief stimuli in second-order schedules.
184.
Four pigeons were exposed to a token-reinforcement procedure with stimulus lights serving as tokens. Responses on one key (the token-production key) produced tokens that could be exchanged for food during an exchange period. Exchange periods could be produced by satisfying a ratio requirement on a second key (the exchange-production key). The exchange-production key was available any time after one token had been produced, permitting up to 12 tokens to accumulate prior to exchange. Token accumulation, measured in terms of both frequency (percent cycles with accumulation) and magnitude (mean number of tokens accumulated), decreased as the token-production ratio increased from 1 to 10 across conditions (with exchange-production ratio held constant), and increased as the exchange-production ratio increased from 1 to 250 across conditions (with token-production ratio held constant). When tokens were removed, accumulation decreased markedly compared to conditions with tokens and the same schedules. These data show that token accumulation is an orderly function of token-production and exchange-production schedules, and they are broadly consistent with a unit-price model based on local and global responses per reinforcer.
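As an illustrative reading (not the quantitative model actually fitted in the study), a unit-price analysis counts responses per food delivery: if each token costs FR_t responses on the token-production key and each exchange costs FR_e responses on the exchange-production key, then accumulating N tokens before exchanging gives an approximate overall cost per pellet of

\[ \text{unit price} \;\approx\; \mathrm{FR}_t + \frac{\mathrm{FR}_e}{N}, \]

so accumulation lowers unit price mainly when the exchange-production requirement is large relative to the token-production requirement, which is consistent with the pattern of results reported above.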
185.
Animals accumulate reinforcers when they forgo the opportunity to consume available food in favor of acquiring additional food for later consumption. Laboratory research has shown that reinforcer accumulation is facilitated when an interval (either spatial or temporal) separates earning from consuming reinforcers. However, there has been no systematic investigation of the interval separating consuming reinforcers from earning additional reinforcers. This oversight is problematic because this second interval is an integral part of much of the previous research on reinforcer accumulation. The purpose of the current study was to determine the independent contributions of these two temporal intervals to reinforcer accumulation in rats. Each left lever press earned a single food pellet; delivery of the accumulated pellet(s) occurred upon a right lever press. Conditions varied based on the presence of either an intertrial interval (ITI) that separated pellet delivery from the further opportunity to accumulate more pellets, or a delay-to-reinforcement that separated the right lever press from the delivery of the accumulated pellet(s). Delay and ITI values of 0, 5, 10 and 20 s were investigated. The delay-to-reinforcement conditions produced greater accumulation relative to the ITI conditions, despite accumulation increasing the density of reinforcement more substantially in the ITI conditions. This finding suggests that the temporal separation between reinforcer accumulation and subsequent delivery and consumption was the more critical variable in controlling reinforcer accumulation.
186.
Resurgence is defined as an increase in the frequency of a previously reinforced target response when an alternative source of reinforcement is suspended. Despite an extensive body of research examining factors that affect resurgence, the effects of alternative-reinforcer magnitude have not been examined. Thus, the present experiments aimed to fill this gap in the literature. In Experiment 1, rats pressed levers for single-pellet reinforcers during Phase 1. In Phase 2, target-lever pressing was extinguished, and alternative-lever pressing produced either five-pellet, one-pellet, or no alternative reinforcement. In Phase 3, alternative reinforcement was suspended to test for resurgence. Five-pellet alternative reinforcement produced faster elimination and greater resurgence of target-lever pressing than one-pellet alternative reinforcement. In Experiment 2, effects of decreasing alternative-reinforcer magnitude on resurgence were examined. Rats pressed levers and pulled chains for six-pellet reinforcers during Phases 1 and 2, respectively. In Phase 3, alternative reinforcement was decreased to three pellets for one group, one pellet for a second group, and suspended altogether for a third group. Shifting from six-pellet to one-pellet alternative reinforcement produced as much resurgence as suspending alternative reinforcement altogether, while shifting from six pellets to three pellets did not produce resurgence. These results suggest that alternative-reinforcer magnitude has effects on elimination and resurgence of target behavior that are similar to those of alternative-reinforcer rate. Thus, both suppression of target behavior during alternative reinforcement and resurgence when conditions of alternative reinforcement are altered may be related to variables that affect the value of the alternative-reinforcement source.
187.
A new schematic diagram is presented which shows the displacement paths of surface particles within the contact circle when a rigid cone is loaded normally on an elastic half-space. With the help of this diagram and the information available in the literature, it is argued here that a modification made to the Love equation in 1999 is unnecessary and incorrect. It is also shown that the modification results in an incorrect expression for the surface radial displacement outside the contact circle. Therefore, it is suggested that the modification be dropped forthwith.
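For reference (classical results given here only for context; the expression actually in dispute, Love's formula for the radial surface displacement, is not reproduced), the standard frictionless solution for a rigid cone of semi-apical angle θ indenting an elastic half-space with reduced modulus E* = E/(1 − ν²) relates penetration depth h, contact radius a, and load P by

\[ h = \frac{\pi a}{2\tan\theta}, \qquad P = \frac{2}{\pi}\,E^{*} h^{2} \tan\theta = \frac{\pi}{2}\,E^{*} a^{2} \cot\theta. \]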
188.
Children of both typical and atypical cognitive development tend to prefer contexts in which their behavior results in a choice of reinforcers rather than a single reinforcer, even when the reinforcer accessed is identical across conditions. The origin of this preference has been attributed speculatively to behavioral histories in which choice making tends to be associated with differentially beneficial outcomes. Few studies have evaluated this claim, and those that have, have yielded mixed results. We provided five preschool-aged children with experiences in which choice-making and no-choice contexts were differentially associated with higher preference and larger magnitude reinforcers, and we assessed changes in their preference for choice and no-choice contexts in which outcomes were equated. These conditioning experiences resulted in consistent and replicable shifts in child preference, indicating that preference for choice is malleable through experience.
189.
Experiment 1 investigated the effects of reinforcer magnitude on differential-reinforcement-of-low-rate (DRL) schedule performance in three phases. In Phase 1, two groups of rats (n = 6 and 5) responded under a DRL 72-s schedule with reinforcer magnitudes of either 30 or 300 μl of water. After acquisition, the water amounts were reversed for each rat. In Phase 2, the effects of the same reinforcer magnitudes on DRL 18-s schedule performance were examined across conditions. In Phase 3, each rat responded under a DRL 18-s schedule in which the water amounts alternated between 30 and 300 μl daily. Throughout each phase of Experiment 1, the larger reinforcer magnitude resulted in higher response rates and lower reinforcement rates. The peak of the interresponse-time distributions was at a lower value under the larger reinforcer magnitude. In Experiment 2, 3 pigeons responded under a DRL 20-s schedule in which reinforcer magnitude (1-s or 6-s access to grain) varied from session to session. Higher response rates and lower reinforcement rates occurred under the longer hopper duration. These results demonstrate that larger reinforcer magnitudes engender less efficient DRL schedule performance in both rats and pigeons, whether reinforcer magnitude was held constant between sessions or varied daily. The present results are consistent with previous research demonstrating a decrease in efficiency as a function of increased reinforcer magnitude under procedures that require a period of time without a specified response. These findings also support the claim that DRL schedule performance is not governed solely by a timing process.
190.
Eight rats pressed levers for varying concentrations of sucrose in water under eight variable-interval schedules that specified a wide range of reinforcement rate. Herrnstein's (1970) hyperbolic equation described the relation between reinforcement and responding well. Although the y asymptote, k, of the hyperbola appeared roughly constant over conditions that approximated conditions used by Heyman and Monaghan (1994), k varied when lower concentration solutions were included. Advances in matching theory that reflect asymmetries between response alternatives and insensitive responding were incorporated into Herrnstein's equation. After fitting the modified equation to the data, Herrnstein's k also increased. The results suggest that variation in k can be detected under a sufficiently wide range of reinforcer magnitudes, and they also suggest that matching theory's account of response strength is false. The results support qualitative predictions made by linear system theory.
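For reference, Herrnstein's (1970) hyperbola referred to above is conventionally written

\[ B = \frac{kR}{R + R_e}, \]

where B is response rate, R is the obtained reinforcement rate, k is the asymptotic response rate, and R_e is the rate of background (extraneous) reinforcement. The modified equation mentioned in the abstract is not reproduced here; one familiar generalization adds a sensitivity exponent, B = kR^a/(R^a + R_e^a), but the exact form fitted in the study may differ.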