271.
The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice arrangement, participants selected three numbers, in any order, from an array of eight numbers presented on the computer screen. In the restricted-choice arrangement, participants selected the order of three numbers preselected from the array of eight by a computer program. In initial sessions, all participants demonstrated either no consistent preference or a preference for restricted choice. Differential reinforcement of free-choice selections resulted in increased preference for free choice immediately and in subsequent sessions in the absence of programmed differential outcomes. For 5 participants, changes in preference for choice were both robust and lasting, suggesting that a history of differential reinforcement for choice may affect preference for choice.
272.
Identifying effective reinforcers with which to increase desired behaviors is essential to the success of an intervention. Conducting preference assessments is a proven method for identifying effective reinforcers. In the current study, reinforcers were identified to decrease the latency to initiate daily living skills such as laundry, showering, and chores in five individuals with dual diagnoses. A concurrent-operant preference assessment measuring response allocation to social stimuli was completed with each individual to determine the preferred consequence for increasing task compliance. All five participants decreased their latency to initiate daily tasks once treatment was implemented, relative to the baseline phase. These results were rated as socially acceptable by staff, and the improvement was maintained 2 weeks beyond the completion of treatment.
273.
Basic research shows that token‐production and exchange‐production schedules in token economies affect each other as second‐order schedules (i.e., the exchange‐production schedule's requirements affect responding toward the token‐production schedule). This relationship has not been investigated with children in academic settings despite the widespread use of token economies in this context. This study compared the effects of fixed‐ratio (FR) and variable‐ratio (VR) exchange‐production schedules of equal ratios (2, 5, and 10) on responding toward an FR 1 token‐production schedule with a child diagnosed with autism. A concurrent chains assessment was also conducted to assess the participant's relative preference for FR and VR exchange‐production schedule arrangements within her typical discrete trial training. Results showed no difference in response rate between the two schedule types. However, the concurrent chains assessment revealed an exclusive preference for the VR arrangement.
274.
Children of both typical and atypical cognitive development tend to prefer contexts in which their behavior results in a choice of reinforcers rather than a single reinforcer, even when the reinforcer accessed is identical across conditions. The origin of this preference has been attributed speculatively to behavioral histories in which choice making tends to be associated with differentially beneficial outcomes. Few studies have evaluated this claim, and those that have, have yielded mixed results. We provided five preschool‐aged children experiences in which choice‐making and no‐choice contexts were differentially associated with higher preference and larger magnitude reinforcers, and we assessed changes in their preference for choice and no‐choice contexts in which outcomes were equated. These conditioning experiences resulted in consistent and replicable shifts in child preference, indicating that preference for choice is malleable through experience.
275.
276.
Eight rats were trained to discriminate pentobarbital from saline under a concurrent variable-interval (VI) VI schedule, in which responses on the pentobarbital-biased lever after pentobarbital were reinforced under VI 20 s and responses on the saline-biased lever were reinforced under VI 80 s. After saline, the reinforcement contingencies programmed on the two levers were reversed. The rats made 62.3% of their responses on the pentobarbital-biased lever after pentobarbital and 72.2% on the saline-biased lever after saline, both of which are lower than predicted by the matching law. When the schedule was changed to concurrent VI 50-s VI 50-s for test sessions with saline and the training dose of pentobarbital, responding on the pentobarbital-biased lever after the training dose of pentobarbital and on the saline-biased lever after saline became nearly equal, even during the first 2 min of the session, suggesting that the presence or absence of the training drug was exerting minimal control over responding, which makes dose-effect relations determined for other drugs difficult to interpret. When the pentobarbital dose-response curve was determined under the concurrent VI 50-s VI 50-s schedule, responding was fairly evenly distributed across both levers for most rats. Therefore, 6 additional rats were trained to respond under a concurrent VI 60-s VI 240-s schedule. Under this schedule, the rats made 62.6% of their responses on the pentobarbital-biased lever after pentobarbital and 73.5% of their responses on the saline-biased lever after saline, percentages that also are lower than predicted by perfect matching. When the schedule was changed to a concurrent VI 150-s VI 150-s schedule for 5-min test sessions with additional drugs, the presence or absence of pentobarbital continued to control responding in most rats, and it was possible to generate graded dose-response curves for pentobarbital and other drugs using the data from these 5-min sessions. The dose-response curves generated under these conditions were similar to those generated using other reinforcement schedules and other species.
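The undermatching reported in the abstract above can be checked against the strict matching law, under which the proportion of responses on one alternative equals the proportion of reinforcement obtained there. A minimal sketch (the function name is illustrative, not from the study):

```python
def matching_prediction(rate_a: float, rate_b: float) -> float:
    """Proportion of responses on alternative A predicted by strict matching:
    B_a / (B_a + B_b) = r_a / (r_a + r_b), where r is reinforcement rate."""
    return rate_a / (rate_a + rate_b)

# Concurrent VI 20-s VI 80-s: reinforcement rates of 1/20 and 1/80 per second.
# Strict matching predicts 80% of responses on the richer (VI 20-s) lever;
# the rats emitted only 62.3%, i.e., they undermatched.
print(round(matching_prediction(1 / 20, 1 / 80), 3))   # 0.8

# Concurrent VI 60-s VI 240-s: the same 4:1 ratio, so again 80% is predicted,
# versus the observed 62.6% and 73.5%.
print(round(matching_prediction(1 / 60, 1 / 240), 3))  # 0.8
```

Because both training schedules used a 4:1 rate ratio, the matching prediction is identical (80%) in the two conditions, which is why both observed percentages count as undermatching.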
277.
In three experiments, pigeons were used to examine the independent effects of two normally confounded delays to reinforcement associated with changing between concurrently available variable-interval schedules of reinforcement. In Experiments 1 and 2, combinations of changeover-delay durations and fixed-interval travel requirements were arranged in a changeover-key procedure. The delay from a changeover-produced stimulus change to a reinforcer was varied while the delay between the last response on one alternative and a reinforcer on the other (the total obtained delay) was held constant. Changeover rates decreased as a negative power function of the total obtained delay. The delay from a changeover-produced stimulus change to a reinforcer had a small and inconsistent effect on changeover rates. In Experiment 3, changeover delays and fixed-interval travel requirements were arranged independently. Changeover rates again decreased as a negative power function of the total obtained delay despite variations in the delay from a change in stimulus conditions to a reinforcer. Rates of responding shortly after a changeover, however, were higher near the end of the delay from a change in stimulus conditions to a reinforcer. The results of these experiments suggest that the effects of changeover delays and travel requirements primarily result from changes in the delay between a response at one alternative and a reinforcer at the other, but that the pattern of responding immediately after a changeover depends on the delay from a changeover-produced change in stimulus conditions to a reinforcer.
278.
Pigeons were trained to discriminate 5.0 mg/kg pentobarbital from saline under a two-key concurrent fixed-interval (FI) 100-s FI 200-s schedule of food presentation, and later under a concurrent FI 40-s FI 80-s schedule, in which the FI component with the shorter time requirement reinforced responding on one key after drug administration (the pentobarbital-biased key) and on the other key after saline administration (the saline-biased key). After responding stabilized under the concurrent FI 100-s FI 200-s schedule, pigeons earned an average of 66% (after pentobarbital) to 68% (after saline) of their reinforcers for responding under the FI 100-s component of the concurrent schedule. These birds made an average of 70% of their responses on the pentobarbital-biased key after the training dose of pentobarbital and on the saline-biased key after saline. After responding stabilized under the concurrent FI 40-s FI 80-s schedule, pigeons earned an average of 67% of their reinforcers for responding under the FI 40-s component after both saline and the training dose of pentobarbital. These birds made an average of 75% of their responses on the pentobarbital-biased key after the training dose of pentobarbital, but only 55% of their responses on the saline-biased key after saline. In test sessions preceded by doses of pentobarbital, chlordiazepoxide, ethanol, phencyclidine, or methamphetamine, the dose-response curves were similar under the two concurrent schedules. Pentobarbital, chlordiazepoxide, and ethanol produced dose-dependent increases in responding on the pentobarbital-biased key. For some birds, the dose-response curve turned over at the highest doses of these drugs. Increasing doses of phencyclidine produced increased responding on the pentobarbital-biased key in some, but not all, birds. After methamphetamine, responding was largely confined to the saline-biased key. These data show that pigeons can perform drug discriminations under concurrent schedules in which the reinforcement frequency under the schedule components differs only by a factor of two, and that when other drugs are substituted for the training drug they produce dose-response curves similar to those produced under other concurrent interval schedules.
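As a rough check on the 2:1 arrangements described above: if the birds had matched response proportions to obtained reinforcement proportions, a 2:1 reinforcement-frequency ratio would put about two thirds of responses on the shorter-FI key, close to the 66–75% figures reported. A small illustrative sketch (function name is ours, not the study's):

```python
def predicted_share(freq_short: float, freq_long: float) -> float:
    """Share of responses allocated to the shorter FI component if response
    proportions matched obtained reinforcement proportions."""
    return freq_short / (freq_short + freq_long)

# FI 100-s vs FI 200-s (likewise FI 40-s vs FI 80-s): a 2:1 reinforcement
# ratio, so matching would place ~67% of responses on the shorter-FI key.
share = predicted_share(1 / 100, 1 / 200)
print(f"{share:.1%}")  # 66.7%
```

The 55% saline-key figure under the FI 40-s FI 80-s schedule falls well below this benchmark, which is consistent with drug state, not just relative reinforcement rate, controlling key choice.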
279.
Pigeons were trained on multiple schedules that provided concurrent reinforcement in each of two components. In Experiment 1, one component consisted of a variable-interval (VI) 40-s schedule presented with a VI 20-s schedule, and the other a VI 40-s schedule presented with a VI 80-s schedule. After extended training, probe tests measured preference between the stimuli associated with the two 40-s schedules. Probe tests replicated the results of Belke (1992) that showed preference for the 40-s schedule that had been paired with the 80-s schedule. In a second condition, the overall reinforcer rate provided by the two components was equated by adding a signaled VI schedule to the component with the lower reinforcer rate. Probe results were unchanged. In Experiment 2, pigeons were trained on alternating concurrent VI 30-s VI 60-s schedules. One schedule provided 2-s access to food and the other provided 6-s access. The larger reinforcer magnitude produced higher response rates and was preferred on probe trials. Rate of changeover responding, however, did not differ as a function of reinforcer magnitude. The present results demonstrate that preference on probe trials is not a simple reflection of the pattern of changeover behavior established during training.
280.
This study examined the effects of modeling versus instructions on the choices of 3 typically developing children and 3 children with attention deficit hyperactivity disorder (ADHD) whose academic responding showed insensitivity to reinforcement schedules. During baseline, students chose between successively presented pairs of mathematics problems associated with different variable-interval schedules of reinforcement. After responding proved insensitive to the schedules, sessions were preceded by either instructions or modeling, counterbalanced across students in a multiple baseline design across subjects. During the instruction condition, students were told how to distribute responding to earn the most reinforcers. During the modeling condition, students observed the experimenter performing the task while describing her distribution of responding to obtain the most reinforcers. Once responding approximated obtained reinforcement under either condition, the schedules of reinforcement were changed, and neither instruction nor modeling was provided. Both instruction and modeling interventions quickly produced patterns of response allocation that approximated obtained rates of reinforcement, but responding established with modeling was more sensitive to subsequent changes in the reinforcement schedules than responding established with instructions. Results were similar for students with and without ADHD.