Similar Articles
 20 similar articles found
1.
In concurrent schedules, reinforcers are often followed by a brief period of heightened preference for the just‐productive alternative. Such ‘preference pulses’ may reflect local effects of reinforcers on choice. However, similar pulses may occur after nonreinforced responses, suggesting that pulses after reinforcers are partly unrelated to reinforcer effects. McLean, Grace, Pitts, and Hughes (2014) recommended subtracting preference pulses after responses from preference pulses after reinforcers, to construct residual pulses that represent only reinforcer effects. Thus, a reanalysis of existing choice data is necessary to determine whether changes in choice after reinforcers in previous experiments were actually related to reinforcers. In the present paper, we reanalyzed data from choice experiments in which reinforcers served different functions. We compared local choice, mean visit length, and visit‐length distributions after reinforcers and after nonreinforced responses. Our reanalysis demonstrated the utility of McLean et al.'s preference‐pulse correction for determining the effects of reinforcers on choice. However, visit analyses revealed that residual pulses may not accurately represent reinforcer effects, and reinforcer effects were clearer in visit analyses than in local‐choice analyses. The best way to determine the effects of reinforcers on choice may be to conduct visit analyses in addition to local‐choice analyses.

2.
We compared free‐operant and restricted‐operant multiple‐stimulus preference assessments with three children diagnosed with mental retardation. The methods produced comparable results, although the free‐operant assessment identified fewer potential reinforcers than the restricted‐operant assessment. The highest‐ and lowest‐ranked stimuli from both methods were subsequently evaluated in a concurrent‐operants reinforcer assessment. All participants engaged in behavior that resulted in access to the highest‐ranked stimuli the majority of the time, thus validating both preference assessment methods as effective in identifying reinforcers. Copyright © 2000 John Wiley & Sons, Ltd.

3.
Research on the reinforcing effects of providing choice-making opportunities to individuals with developmental disabilities (i.e., allowing them to choose reinforcers or tasks) has produced inconsistent results, perhaps because the mechanisms underlying such effects remain unclear. Choice may produce a reinforcement effect because it is correlated with differential consequences (i.e., choice may increase one's access to higher preference stimuli), or it may have reinforcement value independent of (or in addition to) the chosen stimulus. In Experiment 1, we used a concurrent-operants arrangement to assess preference for a choice condition (in which participants selected one of two available reinforcers) relative to a no-choice condition (in which the therapist selected the same reinforcers on a yoked schedule). All 3 participants preferred the choice option. In Experiment 2, we altered the schedules so that the participant selected one of two lower preference reinforcers in the choice condition, whereas the therapist selected a higher preference stimulus for the participant either half or all of the time in the no-choice condition. Participants typically allowed the therapist to select reinforcers for them (i.e., they allocated responding to the no-choice condition) when it resulted in greater access to higher preference stimuli.

4.
Factors that influence choice between qualitatively different reinforcers (e.g., a food item or a break from work) are important to consider when arranging treatments for problem behavior. Previous findings indicate that children who engage in problem behavior maintained by escape from demands may choose a food item over the functional reinforcer during treatment (DeLeon, Neidert, Anders, & Rodriguez-Catter, 2001; Lalli et al., 1999). However, a number of variables may influence choice between concurrently available forms of reinforcement. An analogue for treatment situations in which positive reinforcement for compliance is in direct competition with negative reinforcement for problem behavior was used in the current study to evaluate several variables that may influence choice. Participants were 5 children who had been diagnosed with developmental disabilities and who engaged in problem behavior maintained by escape from demands. In the first phase, the effects of task preference and schedule of reinforcement on choice between a 30-s break and a high-preference food item were evaluated. The food item was preferred over the break, regardless of the preference level of the task or the reinforcement schedule, for all but 1 participant. In the second phase, the quality of the break was manipulated by combining escape with toys, attention, or both. Only 1 participant showed preference for the enriched break. In the third phase, choice of a medium- or low-preference food item versus the enriched break was evaluated. Three of 4 participants showed preference for the break over the less preferred food item. Results extend previous research by identifying some of the conditions under which individuals who engage in escape-maintained behavior will prefer a food reinforcer over the functional one.

5.
The generality of the molar view of behavior was extended to the study of choice with rats, showing the usefulness of studying order at various levels of extendedness. Rats' presses on two levers produced food according to concurrent variable-interval variable-interval schedules. Seven different reinforcer ratios were arranged within each session, without cues identifying them, and separated by blackouts. To alternate between levers, rats pressed on a third changeover lever. Choice changed rapidly with changes in component reinforcer ratio, and more presses occurred on the lever with the higher reinforcer rate. With continuing reinforcers, choice shifted progressively in the direction of the reinforced lever, but shifted more slowly with each new reinforcer. Sensitivity to reinforcer ratio, as estimated by the generalized matching law, reached an average of 0.9 and exceeded that documented in previous studies with pigeons. Visits to the more-reinforced lever preceded by a reinforcer from that lever increased in duration, while all visits to the less-reinforced lever decreased in duration. Thus, the rats' performances moved faster toward fix and sample than did pigeons' performances in previous studies. Analysis of the effects of sequences of reinforcer sources indicated that sequences of five to seven reinforcers might have sufficed for studying local effects of reinforcers with rats. This study supports the idea that reinforcer sequences control choice between reinforcers, pulses in preference, and visits following reinforcers.
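The sensitivity figure cited in this abstract (and in several others below) comes from the generalized matching law, log(B1/B2) = a·log(R1/R2) + log b, where B1/B2 is the response ratio, R1/R2 the obtained reinforcer ratio, a the sensitivity, and b the bias. A minimal illustrative sketch of the fit, not the authors' analysis code, with fabricated data chosen to yield a ≈ 0.9:

```python
import math

def matching_sensitivity(behavior_ratios, reinforcer_ratios):
    """Least-squares fit of log(B1/B2) = a*log(R1/R2) + log(b).

    Returns (a, log_b): sensitivity and log bias.
    """
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    log_b = my - a * mx
    return a, log_b

# Hypothetical data: seven reinforcer ratios and the response
# ratios they produced (fabricated to show undermatching, a < 1).
reinf = [1/27, 1/9, 1/3, 1, 3, 9, 27]
behav = [r ** 0.9 for r in reinf]
a, log_b = matching_sensitivity(behav, reinf)
print(round(a, 2))  # ≈ 0.9
```

A sensitivity below 1 (undermatching) means response ratios change less steeply than reinforcer ratios; the 0.9 reported here is unusually close to strict matching.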

6.
Five pigeons were trained on pairs of concurrent variable-interval schedules in a switching-key procedure. The arranged overall rate of reinforcement was constant in all conditions, and the reinforcer-magnitude ratios obtained from the two alternatives were varied over five levels. Each condition remained in effect for 65 sessions and the last 50 sessions of data from each condition were analyzed. At a molar level of analysis, preference was described well by a version of the generalized matching law, consistent with previous reports. More local analyses showed that recently obtained reinforcers had small measurable effects on current preference, with the most recently obtained reinforcer having a substantially larger effect. Larger reinforcers resulted in larger and longer preference pulses, and a small preference was maintained for the larger-magnitude alternative even after long inter-reinforcer intervals. These results are consistent with the notion that the variables controlling choice have both short- and long-term effects. Moreover, they suggest that control by reinforcer magnitude is exerted in a manner similar to control by reinforcer frequency. Lower sensitivities when reinforcer magnitude is varied are likely to be due to equal frequencies of different sized preference pulses, whereas higher sensitivities when reinforcer rates are varied might result from changes in the frequencies of different sized preference pulses.

7.
A choice assessment has been found to be a more accurate method of identifying preferences than is single-item presentation. However, it is not clear whether the effectiveness of reinforcement varies positively with the degree of preference (i.e., whether the relative preference based on the results of a choice assessment predicts relative reinforcer effectiveness). In the current study, we attempted to address this question by categorizing stimuli as high, middle, and low preference based on the results of a choice assessment, and then comparing the reinforcing effectiveness of these stimuli using a concurrent-operants paradigm. High-preference stimuli consistently functioned as reinforcers for all 4 clients. Middle-preference stimuli functioned as reinforcers for 2 clients, but only when compared with low-preference stimuli. Low-preference stimuli did not function as reinforcers when compared to high- and middle-preference stimuli. These results suggest that a choice assessment can be used to predict the relative reinforcing value of various stimuli, which, in turn, may help to improve programs for clients with severe to profound disabilities.

8.
Three methods of assessing preference for stimuli were compared in four adults with a diagnosis of schizophrenia. During phase 1, a survey method, a verbal stimulus choice method, and a pictorial stimulus choice method of assessing preference for four categories of stimuli were administered. During phase 2, a coupon system was used to determine which categories of stimuli actually functioned as reinforcers for each participant. Comparisons between the three preference assessment methods were then conducted based on the results of the reinforcer assessment. Results showed that, overall, there were few differences in total accuracy among the preference assessment procedures. Copyright © 2003 John Wiley & Sons, Ltd.

9.
Four pigeons were exposed to a token-based self-control procedure with stimulus lights serving as token reinforcers. Smaller-reinforcer choices produced one token immediately; larger-reinforcer choices produced three tokens following a delay. Each token could be exchanged for 2-s access to food during a signaled exchange period each trial. The main variables of interest were the exchange delays (delays from the choice to the exchange stimulus) and the food delays (also timed from the choice), which were varied separately and together across blocks of sessions. When exchange delays and food delays were shorter following smaller-reinforcer choices, strong preference for the smaller reinforcer was observed. When exchange delays and food delays were equal for both options, strong preference for the larger reinforcer was observed. When food delays were equal for both options but exchange delays were shorter for smaller-reinforcer choices, preference for the larger reinforcer generally was less extreme than under conditions in which both exchange and food delays were equal. When exchange delays were equal for both options but food delays were shorter for smaller-reinforcer choices, preference for the smaller reinforcer generally was less extreme than under conditions in which both exchange and food delays favored smaller-reinforcer choices. On the whole, the results were consistent with prior research on token-based self-control procedures in showing that choices are governed by reinforcer immediacy when exchange and food delays are unequal and by reinforcer amount when exchange and food delays are equal. Further, by decoupling the exchange delays from food delays, the results tentatively support a role for the exchange stimulus as a conditioned reinforcer.

10.
Teachers were asked to identify and rank 10 preferred stimuli for 9 toddlers, and a hierarchy of preference for these items was determined via a direct preference assessment. The reinforcing efficacy of the most highly preferred items identified by each method was evaluated concurrently in a reinforcer assessment. The reinforcer assessment showed that all stimuli identified as highly preferred via the direct preference assessment and teacher rankings functioned as reinforcers. The highest ranked stimuli in the direct assessment were more reinforcing than the teachers' top-ranked stimuli for 5 of 9 participants, whereas the teachers' top-ranked stimulus was more reinforcing than the highest ranked stimulus of the direct assessment for only 1 child. A strong positive correlation between rankings generated through the two assessments was identified for only 1 of the 9 participants. Despite poor correspondence between rankings generated through the teacher interview and direct preference assessment, results of the reinforcer assessment suggest that both methods are effective in identifying reinforcers for toddlers.

11.
Choice between two reinforcers differing in magnitude and delay was investigated in rats using a discrete-trials schedule in which the two reinforcers were associated with two levers (A and B); in each session 5 free-choice trials (A and B both available) were interspersed among 44 forced-choice trials (A alone, 22 trials; B alone, 22 trials). In Experiment 1, preference for the more concentrated of two sucrose solutions declined as the delay to that reinforcer was progressively increased. In Experiment 2, progressively increasing the delay to both reinforcers by the same amount resulted in a shift in preference away from the less concentrated solution. In Experiment 3, it was found that the decline in preference for the more concentrated solution as a function of the delay to that reinforcer was steeper when the rats were maintained at 90% than when they were maintained at 80% of their free-feeding body weights. This effect of deprivation level on choice is inconsistent with some current models of “self-control”.

12.
The search for different options before making a consequential choice is a central aspect of many important decisions, such as mate selection or purchasing a house. Despite its importance, surprisingly little is known about how search and choice are affected by the observed and objective properties of the decision problem. Here, we analyze the effects of two key properties in a binary choice task: the options' observed and objective values, and the variability of payoffs. First, in a large public data set of a binary choice task, we investigate how the observed value and variability relate to decision‐makers' efforts and preferences during search. Furthermore, we test how these properties influence the chance of correctly identifying the objectively maximizing option, and how they affect choice. Second, we designed a novel experiment to systematically analyze the role of the objective difference between the options. We find that a larger objective difference between options increases the chance for correctly identifying the maximizing option, but it does not affect behavior during search and choice. Copyright © 2013 John Wiley & Sons, Ltd.

13.
Sensitivity to reinforcer duration in a self-control procedure
In a concurrent-chains procedure, pigeons' responses on left and right keys were followed by reinforcers of different durations at different delays following the choice responses. Three pairs of reinforcer delays were arranged in each session, and reinforcer durations were varied over conditions. In Experiment 1 reinforcer delays were unequal, and in Experiment 2 reinforcer delays were equal. In Experiment 1 preference reversal was demonstrated in that an immediate short reinforcer was chosen more frequently than a longer reinforcer delayed 6 s from the choice, whereas the longer reinforcer was chosen more frequently when delays to both reinforcers were lengthened. In both experiments, choice responding was more sensitive to variations in reinforcer duration at overall longer reinforcer delays than at overall shorter reinforcer delays, independently of whether fixed-interval or variable-interval schedules were arranged in the choice phase. We concluded that preference reversal results from a change in sensitivity of choice responding to ratios of reinforcer duration as the delays to both reinforcers are lengthened.
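The preference reversal this abstract describes is the signature prediction of hyperbolic discounting, V = A/(1 + kD): because value falls steeply near zero delay, adding a constant to both delays can flip which option has the higher discounted value. A minimal worked example; the amounts, delays, and k are hypothetical illustration values, not parameters from the experiment:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

# Near the choice point, the small immediate reinforcer wins:
# 2-s reinforcer now (V = 2.0) vs. 6-s reinforcer in 6 s (V ≈ 0.86).
assert hyperbolic_value(2, 0) > hyperbolic_value(6, 6)

# Adding 10 s to both delays reverses preference:
# 2/(1 + 10) ≈ 0.18 vs. 6/(1 + 16) ≈ 0.35.
assert hyperbolic_value(2, 10) < hyperbolic_value(6, 16)
```

Exponential discounting (V = A·e^(−kD)) cannot produce this reversal when both delays shift by the same amount, which is why reversals are taken as evidence for hyperbola-like discounting.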

14.
Impulsive choice describes preference for smaller, sooner rewards over larger, later rewards. Excessive delay discounting (i.e., rapid devaluation of delayed rewards) underlies some impulsive choices, and is observed in many maladaptive behaviors (e.g., substance abuse, gambling). Interventions designed to reduce delay discounting may provide therapeutic gains. One such intervention provides rats with extended training with delayed reinforcers. When compared to a group given extended training with immediate reinforcers, delay‐exposed rats make significantly fewer impulsive choices. To what extent is this difference due to delay‐exposure training shifting preference toward self‐control or immediacy‐exposure training (the putative control group) shifting preference toward impulsivity? The current study compared the effects of delay‐ and immediacy‐exposure training to a no‐training control group and evaluated within‐subject changes in impulsive choice across 51 male Wistar rats. Delay‐exposed rats made significantly fewer impulsive choices than immediacy‐exposed and control rats. Between‐group differences in impulsive choice were not observed in the latter two groups. While delay‐exposed rats showed large, significant pre‐ to posttraining reductions in impulsive choice, immediacy‐exposed and control rats showed small reductions in impulsive choice. These results suggest that extended training with delayed reinforcers reduces impulsive choice, and that extended training with immediate reinforcers does not increase impulsive choice.
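Under the hyperbolic model V = A/(1 + kD), an intervention that lowers the discounting rate k shrinks the range of delays at which a smaller-sooner reward outvalues a larger-later one. A hedged sketch of that arithmetic; the reward amounts, delay range, and k values are hypothetical, not taken from the study:

```python
def discounted(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

def impulsive_choices(k, delays=range(31)):
    """Count the delays (0-30 s) at which a 1-unit immediate reward
    is valued above a 3-unit delayed reward, i.e., impulsive choices."""
    return sum(
        1 for d in delays
        if discounted(1, 0, k) > discounted(3, d, k)
    )

# A steep discounter (high k) makes more impulsive choices than a
# shallow discounter (low k); training that lowers k reduces them.
print(impulsive_choices(1.0))  # 28
print(impulsive_choices(0.2))  # 20
```

Solving 1 > 3/(1 + kd) gives d > 2/k, so the "impulsive region" of delays is inversely proportional to k, which is why even modest reductions in discounting rate can produce sizable drops in impulsive choice.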

15.
Contingencies of reinforcement specify how reinforcers are earned and how they are obtained. Ratio contingencies specify the number of responses that earn a reinforcer, and the response satisfying the ratio requirement obtains the earned reinforcer. Simple interval schedules specify that a certain time earns a reinforcer, which is obtained by the first response after the interval. The earning of reinforcers has been overlooked, perhaps because simple schedules confound the rates of earning reinforcers with the rates of obtaining reinforcers. In concurrent variable-interval schedules, however, spending time at one alternative earns reinforcers not only at that alternative, but at the other alternative as well. Reinforcers earned for delivery at the other alternative are obtained after changing over. Thus the rates of earning reinforcers are not confounded with the rate of obtaining reinforcers, but the rates of earning reinforcers are the same at both alternatives, which masks their possibly differing effects on preference. Two experiments examined the separate effects of earning reinforcers and of obtaining reinforcers on preference by using concurrent interval schedules composed of two pairs of stay and switch schedules (MacDonall, 2000). In both experiments, the generalized matching law, which is based on rates of obtaining reinforcers, described responding only when rates of earning reinforcers were the same at each alternative. An equation that included both the ratio of the rates of obtaining reinforcers and the ratio of the rates of earning reinforcers described the results from all conditions from each experiment.

16.
We have developed a method for studying list learning in animals and humans, and we use variants of the task to examine list learning in rats, mice, and humans. This method holds several advantages over other methods. It has been found to be easily learned without lengthy pretraining. The data gathered with this procedure provide a measure of correct response rates, of incorrect responses and the locations of these responses, and of response latency on a trial-by-trial basis. We have examined mouse, rat, and human list acquisition of patterns ranging from 12 to 48 items in length. This procedure has also been used to examine many aspects of list learning, such as the effects of the placement of phrasing cues that are either consistent or inconsistent with the structure of the list in rats and mice, the effects of phrasing cues of differing modalities in mice, the sensitivity of subjects to violations of list structure in rats, subjects’ abilities to “chunk” from nonadjacent serial positions in structured lists in rats, and subjects’ sensitivity to serial patterns with multiple levels of hierarchical organization. The procedure has also been used to examine the effects of drugs on sequential learning.

17.
Children of both typical and atypical cognitive development tend to prefer contexts in which their behavior results in a choice of reinforcers rather than a single reinforcer, even when the reinforcer accessed is identical across conditions. The origin of this preference has been attributed speculatively to behavioral histories in which choice making tends to be associated with differentially beneficial outcomes. Few studies have evaluated this claim, and those that have, have yielded mixed results. We provided five preschool‐aged children experiences in which choice‐making and no‐choice contexts were differentially associated with higher preference and larger magnitude reinforcers, and we assessed changes in their preference for choice and no‐choice contexts in which outcomes were equated. These conditioning experiences resulted in consistent and replicable shifts in child preference, indicating that preference for choice is malleable through experience.

18.
Historically, reinforcer assessment procedures focus primarily on identifying nonsocial reinforcers (e.g., tangibles and edibles). Far less empirical attention has been allocated to the systematic identification of social consequences that function as reinforcers. This discrepancy is problematic given that social consequences are commonly incorporated into behavioral treatment programs without systematic evaluation of their efficacy. In this study, two methodologies (a single operant and a concurrent choice) were used to assess social reinforcers for children with autism. Results highlighted differences in response allocation to the control condition between procedures. Specifically, responding occurred in the control condition of the single‐operant procedure but not in the concurrent‐operant procedure. These differences highlight the need for further evaluation of procedures to assess social reinforcers. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Four pigeons were exposed to a concurrent procedure similar to that used by Davison, Baum, and colleagues (e.g., Davison & Baum, 2000, 2006) in which seven components were arranged in a mixed schedule, and each programmed a different left∶right reinforcer ratio (1∶27, 1∶9, 1∶3, 1∶1, 3∶1, 9∶1, 27∶1). Components within each session were presented randomly, lasted for 10 reinforcers each, and were separated by 10-s blackouts. These conditions were in effect for 100 sessions. When data were aggregated over Sessions 16-50, the present results were similar to those reported by Davison, Baum, and colleagues: (a) preference adjusted rapidly (i.e., sensitivity to reinforcement increased) within components; (b) preference for a given alternative increased with successive reinforcers delivered via that alternative (continuations), but was substantially attenuated following a reinforcer on the other alternative (a discontinuation); and (c) food deliveries produced preference pulses (immediate, local, increases in preference for the just-reinforced alternative). The same analyses were conducted across 10-session blocks for Sessions 1-100. In general, the basic structure of choice revealed by analyses of data from Sessions 16-50 was preserved at a smaller level of aggregation (10 sessions), and it developed rapidly (within the first 10 sessions). Some characteristics of choice, however, changed systematically across sessions. For example, effects of successive reinforcers within a component tended to increase across sessions, as did the magnitude and length of the preference pulses. Thus, models of choice under these conditions may need to take into account variations in behavior allocation that are not captured completely when data are aggregated over large numbers of sessions.

20.
Token reinforcement, choice, and self-control in pigeons.
Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment 1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号