Similar Articles
A total of 20 similar articles were retrieved.
1.
2.
Seventeen pigeons were exposed to a three-key discrete-trial procedure in which a peck on the lit center key produced food if, and only if, the left keylight was lit. The center key was illuminated by a peck on the lit right key. Of interest was whether subjects pecked the right key before or after the response-independent onset of the left keylight. Pecks on the right key after left-keylight onset suggest control of behavior by the left keylight, an establishing stimulus. In three experiments, the strength of center-keylight onset as a conditioned reinforcer for a response on the right key was manipulated by altering the size of the reduction in time to food delivery correlated with its onset. Control of pigeons' key pecks by onset of the left keylight occurred on more trials per session when the center keylight was a relatively weak conditioned reinforcer and on fewer trials per session when the center keylight was a relatively strong conditioned reinforcer. Differences across conditions in the degree of control by onset of the establishing stimulus were greatest when changes in conditioned reinforcer strength occurred relatively frequently and were signaled. The results provide evidence of the function of an establishing stimulus.

3.
Three experiments explored the impact of different reinforcer rates for alternative behavior (DRA) on the suppression and post-DRA relapse of target behavior, and the persistence of alternative behavior. All experiments arranged baseline, intervention with extinction of target behavior concurrently with DRA, and post-treatment tests of resurgence or reinstatement, in two- or three-component multiple schedules. Experiment 1, with pigeons, arranged high or low baseline reinforcer rates; both rich and lean DRA schedules reduced target behavior to low levels. When DRA was discontinued, the magnitude of relapse depended on both baseline reinforcer rate and the rate of DRA. Experiment 2, with children exhibiting problem behaviors, arranged an intermediate baseline reinforcer rate and rich or lean signaled DRA. During treatment, both rich and lean DRA rapidly reduced problem behavior to low levels, but post-treatment relapse was generally greater in the DRA-rich than the DRA-lean component. Experiment 3, with pigeons, repeated the low-baseline condition of Experiment 1 with signaled DRA as in Experiment 2. Target behavior decreased to intermediate levels in both DRA-rich and DRA-lean components. Relapse, when it occurred, was directly related to DRA reinforcer rate as in Experiment 2. The post-treatment persistence of alternative behavior was greater in the DRA-rich component in Experiment 1, whereas it was the same or greater in the signaled-DRA-lean component in Experiments 2 and 3. Thus, infrequent signaled DRA may be optimal for effective clinical treatment.

4.
Two experiments examined whether postsample signals of reinforcer probability or magnitude affected the accuracy of delayed matching to sample in pigeons. On each trial, red or green choice responses that matched red or green stimuli seen shortly before a variable retention interval were reinforced with wheat access. In Experiment 1, the reinforcer probability was either 0.2 or 1.0 for both red and green responses. Reinforcer probability was signaled by line or cross symbols that appeared after the sample had been presented. In Experiment 2, all correct responses were reinforced, and the signaled reinforcer durations were 1.0 s and 4.5 s. Matching was more accurate when larger or more probable reinforcers were signaled, independently of retention interval duration. Because signals were presented postsample, the effects were not the result of differential attention to the sample.

5.
Two experiments allowed albino rats to choose between signaled and unsignaled reward conditions. These experiments examined the effects on preference of (1) response-dependent versus response-independent reward and (2) food pellets versus chocolate milk as the reward. All subjects preferred the signaled condition over the unsignaled condition, whether exposed to response-dependent or to response-independent delivery of rewards. Preference was controlled most effectively by presenting both the signal itself and the correlated stimulus identifying the signaled condition. The signal presented alone (Extinction 3) controlled preference more effectively than did the stimulus correlated with the signaled condition (Extinction 2). The second experiment showed that the quality of the reinforcer (pellets versus chocolate milk) did not affect preference for signaled reward, since all subjects preferred the signaled condition at levels comparable to those observed with food pellets in Experiment 1. These results, along with others, argue against species differences, response dependency, and reinforcer quality as variables affecting the direction of preference.

6.
According to theoretical accounts of behavioral momentum, the Pavlovian stimulus-reinforcer contingency determines resistance to change. To assess this prediction, 8 pigeons were exposed to an unsignaled delay-of-reinforcement schedule (a tandem variable-interval fixed-time schedule), a signaled delay-of-reinforcement schedule (a chain variable-interval fixed-time schedule), and an immediate, zero-delay schedule of reinforcement in a three-component multiple schedule. The unsignaled delay and signaled delay schedules employed equal fixed-time delays, with the only difference being a stimulus change in the signaled delay schedule. Overall rates of reinforcement were equated for the three schedules. The Pavlovian contingency was identical for the unsignaled and immediate schedules, and response-reinforcer contiguity was degraded for the unsignaled schedule. Results from two disruption procedures (prefeeding subjects prior to experimental sessions and adding a variable-time schedule to timeout periods separating baseline components) demonstrated that response-reinforcer contiguity does play a role in determining resistance to change. The results from the extinction manipulation were not as clear. Responding in the unsignaled delay component was consistently less resistant to change than was responding in both the immediate and presignaled segments of the signaled delay components, contrary to the view that Pavlovian contingencies determine resistance to change. Probe tests further supported the resistance-to-change results, indicating consistency between resistance to change and preference, both of which are putative measures of response strength.
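For reference, the basic behavioral momentum model commonly used to summarize resistance-to-change data (stated here in a generic form; the notation is not taken from this article) expresses the proportion of baseline responding that survives disruption as

    log(Bx/Bo) = -x / r^a

where Bx is the response rate under disruption, Bo is the baseline response rate, x is the magnitude of the disruptor, r is the baseline reinforcer rate in a component, and a is a sensitivity parameter. The prediction at issue above is that r, the Pavlovian stimulus-reinforcer term, should be the principal determinant of resistance to change.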

7.
In a discrete-trial conditional discrimination procedure, 4 pigeons obtained food reinforcers by pecking a key with a short latency on trials signaled by one stimulus and by pecking the same key with a long latency on trials signaled by a second stimulus. The physical difference between the two stimuli and the temporal separation between the latency values required for reinforcement were varied factorially over four sets of conditions, and the ratio of reinforcer rates for short and long latencies was varied within each set of conditions. Stimulus discrimination varied directly with both stimulus and response differences and was unaffected by the reinforcer ratio. Sensitivity to reinforcement, estimated by generalized-matching-law fits to the data within each set of conditions, varied directly with the response difference but inversely with the stimulus difference arranged between sets of conditions. Because variations in stimulus differences, response differences, and reinforcer differences did not have equivalent effects, these findings question the functional equivalence of the three terms of the discriminated operant: antecedent stimuli, behavior, and consequences.
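The generalized-matching-law fits referred to above presumably take the standard form (shown here only for reference, with generic symbols rather than the article's own notation)

    log(B1/B2) = a * log(R1/R2) + log(b)

where B1 and B2 are the frequencies of the two response classes (here, short- and long-latency pecks), R1 and R2 are the corresponding reinforcer rates, a is sensitivity to reinforcement, and log(b) is bias; the sensitivity values discussed in the abstract are estimates of a.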

8.
There is evidence suggesting aggression may be a positive reinforcer in many species. However, only a few studies have examined the characteristics of aggression as a positive reinforcer in mice. Four types of reinforcement schedules were examined in the current experiment using male Swiss CFW albino mice in a resident-intruder model of aggression as a positive reinforcer. A nose-poke response on an operant conditioning panel was reinforced under fixed-ratio (FR 8), fixed-interval (FI 5-min), progressive-ratio (PR 2), or differential-reinforcement-of-low-rate (DRL 40-s and DRL 80-s) schedules. In the FR conditions, nose pokes were maintained by aggression and extinguished when the aggression contingency was removed. There were long postreinforcement pauses followed by bursts of responses with short interresponse times (IRTs). In the FI conditions, nose pokes were maintained by aggression, occurred more frequently as the interval elapsed, and extinguished when the contingency was removed. In the PR conditions, nose pokes were maintained by aggression, postreinforcement pauses increased as the ratio requirement increased, and responding was extinguished when the aggression contingency was removed. In the DRL conditions, the nose-poke rate decreased, while the proportional distributions of IRTs and postreinforcement pauses shifted toward longer durations as the DRL interval increased. However, most responses occurred before the minimum IRT interval elapsed, suggesting weak temporal control of behavior. Overall, the findings suggest aggression can be a positive reinforcer for nose-poke responses in mice on ratio- and time-based reinforcement schedules.

9.
Self-control in male and female rats
Eight male and 8 female Wistar rats were exposed to a discrete-trial procedure in which they chose between the presentation of a small (one pellet) or a large (three pellets) reinforcer. The delay to the small and large reinforcer was 6.0 s in the first condition of Experiment 1. Subjects consistently chose the large reinforcer. When the delay to the small reinforcer was decreased to 0.1 s in the next experimental condition, all subjects continued to choose the large 6.0-s delayed reinforcer. When the contingencies correlated with the two levers were reversed in the next experimental condition, the majority of subjects (5 males and 6 females) still chose the large delayed reinforcer over the small immediately presented reinforcer. The delay to the small reinforcer was maintained at 6.0 s, but the delay to the large reinforcer was varied among 9.0, 15.0, 24.0, and 36.0 s in Experiment 2, in which 4 males and 4 females participated. Most subjects consistently chose the large increasingly delayed reinforcer, although choice for the small 6.0-s delayed reinforcer developed in some females when the large reinforcer was delayed for 24.0 or 36.0 s. These choice patterns were not predicted from a literal application of a model that says choice should favor the alternative correlated with the higher (amount/delay) ratio.
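To illustrate the (amount/delay) prediction with the values used here: at the 24.0-s delay, the large reinforcer yields 3 pellets / 24.0 s = 0.125 pellets per second, whereas the small reinforcer yields 1 pellet / 6.0 s ≈ 0.167 pellets per second, so a literal application of the model predicts preference for the small reinforcer at the 24.0-s and 36.0-s delays; most subjects nonetheless continued to choose the large delayed reinforcer.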

10.
Four pigeons were exposed to a token-based self-control procedure with stimulus lights serving as token reinforcers. Smaller-reinforcer choices produced one token immediately; larger-reinforcer choices produced three tokens following a delay. Each token could be exchanged for 2-s access to food during a signaled exchange period each trial. The main variables of interest were the exchange delays (delays from the choice to the exchange stimulus) and the food delays (also timed from the choice), which were varied separately and together across blocks of sessions. When exchange delays and food delays were shorter following smaller-reinforcer choices, strong preference for the smaller reinforcer was observed. When exchange delays and food delays were equal for both options, strong preference for the larger reinforcer was observed. When food delays were equal for both options but exchange delays were shorter for smaller-reinforcer choices, preference for the larger reinforcer generally was less extreme than under conditions in which both exchange and food delays were equal. When exchange delays were equal for both options but food delays were shorter for smaller-reinforcer choices, preference for the smaller reinforcer generally was less extreme than under conditions in which both exchange and food delays favored smaller-reinforcer choices. On the whole, the results were consistent with prior research on token-based self-control procedures in showing that choices are governed by reinforcer immediacy when exchange and food delays are unequal and by reinforcer amount when exchange and food delays are equal. Further, by decoupling the exchange delays from food delays, the results tentatively support a role for the exchange stimulus as a conditioned reinforcer.

11.
Some individuals with intellectual disabilities do not respond to praise as a reinforcer, which may limit their ability to learn. We evaluated 2 procedures (stimulus pairing and response-stimulus pairing), both of which involved pairing previously neutral praise statements with preferred edible items, to determine their usefulness in establishing praise as a reinforcer. Results of Study 1 indicated that stimulus pairing was not effective in conditioning praise as a reinforcer for 3 of 4 subjects; results were inconclusive for the 4th subject. Results of Study 2 indicated that response-stimulus pairing was effective in conditioning praise as a reinforcer for 4 of 8 subjects. After conditioning, praise also increased the occurrence of additional target responses for these 4 subjects.

12.
Reinforcement magnitude and pausing on progressive-ratio schedules
Rats responded under progressive-ratio schedules for sweetened milk reinforcers; each session ended when responding ceased for 10 min. Experiment 1 varied the concentration of milk and the duration of postreinforcement timeouts. Postreinforcement pausing increased as a positively accelerated function of the size of the ratio, and the rate of increase was reduced as a function of concentration and by timeouts of 10 s or longer. Experiment 2 varied reinforcement magnitude within sessions (number of dipper operations per reinforcer) in conjunction with stimuli correlated with the upcoming magnitude. In the absence of discriminative stimuli, pausing was longer following a large reinforcer than following a small one. Pauses were reduced by a stimulus signaling a large upcoming reinforcer, particularly at the highest ratios, and the animals tended to quit responding when the past reinforcer was large and the stimulus signaled that the next one would be small. Results of both experiments revealed parallels between responding under progressive-ratio schedules and other schedules containing ratio contingencies. Relationships between pausing and magnitude suggest that ratio pausing is under the joint control of inhibitory properties of the past reinforcer and excitatory properties of stimuli correlated with the upcoming reinforcer, rather than under the exclusive control of either factor alone.

13.
Six pigeons were trained in experimental sessions that arranged six or seven components with various concurrent-schedule reinforcer ratios associated with each. The order of the components was determined randomly without replacement. Components lasted until the pigeons had received 10 reinforcers, and were separated by 10-s blackout periods. The component reinforcer ratios arranged in most conditions were 27:1, 9:1, 3:1, 1:1, 1:3, 1:9, and 1:27; in others, there were only six components, three of 27:1 and three of 1:27. In some conditions, each reinforcement ratio was signaled by a different red-yellow flash frequency, with the frequency perfectly correlated with the reinforcer ratio. Additionally, a changeover delay was arranged in some conditions, and no changeover delay in others. When component reinforcer ratios were signaled, sensitivity-to-reinforcement values increased from around 0.40 before the first reinforcer in a component to around 0.80 before the 10th reinforcer. When reinforcer ratios were not signaled, sensitivities typically increased from zero to around 0.40. Sensitivity to reinforcement was around 0.20 lower in no-changeover-delay conditions than in changeover-delay conditions, but increased in the former after exposure to changeover delays. Local analyses showed that preference was extreme towards the reinforced alternative for the first 25 s after reinforcement in changeover-delay conditions regardless of whether components were signaled or not. In no-changeover-delay conditions, preference following reinforcers was either absent, or, following exposure to changeover delays, small. Reinforcers have both local and long-term effects on preference. The former, but not the latter, is strongly affected by the presence of a changeover delay. Stimulus control may be more closely associated with longer-term, more molar, reinforcer effects.
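Sensitivity values like those reported above are conventionally estimated as the slope of the generalized matching law fitted to log response ratios. A minimal sketch of such a fit in Python (the response-ratio data below are made up for illustration and are not from the study):

    import numpy as np

    # Log reinforcer ratios for the arranged components (27:1 ... 1:27).
    log_reinf_ratio = np.log10([27, 9, 3, 1, 1/3, 1/9, 1/27])
    # Hypothetical log response ratios measured just before the 10th reinforcer.
    log_resp_ratio = np.array([0.95, 0.70, 0.35, 0.0, -0.30, -0.75, -0.90])

    # Generalized matching law: log(B1/B2) = a*log(R1/R2) + log(b).
    # The slope a is sensitivity to reinforcement; the intercept log(b) is bias.
    a, log_b = np.polyfit(log_reinf_ratio, log_resp_ratio, deg=1)
    print(f"sensitivity a = {a:.2f}, bias log b = {log_b:.2f}")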

14.
Pigeons chose between an immediate 2-second reinforcer (access to grain) and a 6-second reinforcer delayed 6 seconds. The four pigeons in the control group were exposed to this condition initially. The four experimental subjects first received a condition where both reinforcers were delayed 6 seconds. The small reinforcer delay was then gradually reduced to zero over more than 11,000 trials. Control subjects almost never chose the large delayed reinforcer. Experimental subjects chose the large delayed reinforcer significantly more often. Two experimental subjects showed preference for the large reinforcer even when the consequences for pecking the two keys were switched. The results indicate that fading procedures can lead to increased “self-control” in pigeons in a choice between a large delayed reinforcer and a small immediate reinforcer.

15.
Pigeons were presented with a concurrent-chains schedule in which the total time to primary reinforcement was equated for the two alternatives (VI 30 s VI 60 s vs. VI 60 s VI 30 s). In one set of conditions, the terminal links were signaled by the same stimulus, and in another set of conditions they were signaled by different stimuli. Choice was in favor of the shorter terminal link when the terminal links were differentially signaled but in favor of the shorter initial link (and longer terminal link) when the terminal links shared the same stimulus. Preference reversed regularly with reversals of the stimulus condition and was unrelated to the discrimination between the two terminal links during the nondifferential stimulus condition. The present results suggest that the relative value of the terminal-link stimuli and the relative rate of conditioned reinforcer presentation are important influences on choice behavior, and that models of conditioned reinforcement need to include both factors.

16.
Four pigeons responded on a concurrent-chains schedule in four experiments that examined whether the effectiveness of a stimulus as a conditioned reinforcer is best described by a global approach, as measured by the average interreinforcement interval, or by a local contextual approach, as measured by the onset of the stimulus preceding the conditioned reinforcer. The interreinforcement interval was manipulated by the inclusion of an intertrial interval, which increased the overall time to reinforcement but did not change the local contingencies on a given trial. A global analysis predicted choice for the richer alternative to decrease with the inclusion of an intertrial interval, whereas a local analysis predicted no change in preference. Experiment 1 examined sensitivity to intertrial intervals when each was signaled by the same houselight that operated throughout the session. In Experiment 2, the intertrial interval always was signaled by the stimulus correlated with the richer terminal link. In Experiment 3, the intertrial interval was signaled by the keylights correlated with the initial links and two novel houselights. Experiment 4 provided free food pseudorandomly during the intertrial interval. In all experiments, subjects' preferences were consistent with a local analysis of choice in concurrent chains. These results are discussed in terms of delay-reduction theory, which traditionally has failed to distinguish global and local contexts.
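For context, delay-reduction theory in its commonly cited form (stated here with generic notation, not the article's own) predicts initial-link choice from the reduction in expected time to primary reinforcement signaled by each terminal-link stimulus:

    B_L / (B_L + B_R) = (T - t_L) / [(T - t_L) + (T - t_R)]

where T is the average time to primary reinforcement from the start of a trial (or, on a global reading, from the end of the previous reinforcer, including any intertrial interval), and t_L and t_R are the times to reinforcement from the onsets of the left and right terminal-link stimuli. Including the intertrial interval in T is what makes the global analysis predict weaker preference for the richer alternative, whereas excluding it (the local analysis) predicts no change.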

17.
The literature offers few recommendations for sequencing exposure to treatment conditions with individuals with multiply maintained destructive behavior. Identifying relative preference for the functional reinforcers maintaining destructive behavior may be one means of guiding that decision. The present study describes a preliminary attempt at developing a robust relative preference and reinforcer assessment for individuals with multiply maintained destructive behavior. Guided and free-choice trials were implemented in which participants chose between two multiple-schedule arrangements, each of which programmed signaled periods of isolated reinforcer availability and unavailability. Consistent participant choice and responding during free-choice trials were then used to thin the corresponding schedule of reinforcement. The results demonstrated a strong preference for one of the two functional reinforcers for all four participants, yet preferences differed across participants and were not well predicted by responding in prior analyses.

18.
Human subjects were exposed to a concurrent-chains schedule in which reinforcer amounts, delays, or both were varied in the terminal links, and consummatory responses were required to receive points that were later exchangeable for money. Two independent variable-interval 30-s schedules were in effect during the initial links, and delay periods were defined by fixed-time schedules. In Experiment 1, subjects were exposed to three different pairs of reinforcer amounts and delays, and sensitivity to reinforcer amount and delay was determined based on the generalized matching law. The relative responding (choice) of most subjects was more sensitive to reinforcer amount than to reinforcer delay. In Experiment 2, subjects chose between immediate smaller reinforcers and delayed larger reinforcers in five conditions with and without timeout periods that followed a shorter delay, in which reinforcer amounts and delays were combined to make different predictions based on local reinforcement density (i.e., points per delay) or overall reinforcement density (i.e., points per total time). In most conditions, subjects' choices were qualitatively in accord with the predictions from the overall reinforcement density calculated by the ratio of reinforcer amount and total time. Therefore, the overall reinforcement density appears to influence the preference of humans in the present self-control choice situation.
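As a hypothetical numerical illustration of the two density calculations (these particular amounts and delays are not from the study): suppose the smaller reinforcer is 5 points after a 5-s delay and the larger is 10 points after a 20-s delay, with a 15-s timeout appended after the shorter delay so that both trials last 20 s in total. Local reinforcement density favors the smaller option (5/5 = 1.0 versus 10/20 = 0.5 points per second of delay), whereas overall reinforcement density favors the larger option (5/20 = 0.25 versus 10/20 = 0.5 points per second of total time); subjects' choices generally tracked the latter.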

19.
Differentially higher rates of aggression in treatment sessions occurred in the presence of two staff members who had previously worked with the participant at another facility. Adding an edible reinforcer for compliance and for the absence of aggression in sessions conducted by these two staff members decreased aggression to rates similar to those obtained with less familiar therapists. Results suggest that embedding positive reinforcement within a demand context may reduce the aversiveness of therapists correlated with a history of demand situations.

20.
After an initial functional analysis of a participant's aggression showed unclear outcomes, we conducted preference and reinforcer assessments to identify preferred forms of attention that may maintain problem behavior. Next, we conducted an extended functional analysis that included a modified attention condition. Results showed that the participant's aggression was maintained by access to preferred conversational topics. A function-based intervention decreased aggression and increased an appropriate communicative response.
