Similar Articles
20 similar articles found (search time: 15 ms)
1.
Variability has been shown to be a reinforceable dimension of behavior. One procedure that has been demonstrated to increase variability in basic research is the lag reinforcement schedule. On this type of schedule, a response is reinforced if it differs from a specified number of previous responses. Lag schedules are rarely used, however, for increasing response variability in applied settings. The purpose of the present study was to investigate the effects of a lag schedule of differential reinforcement on varied and appropriate verbal responding to social questions by 3 males with autism. A reversal design with a multiple baseline across subjects was used to evaluate the effects of the lag schedule. During baseline, differential reinforcement of appropriate responding (DRA) resulted in little or no varied responding. During the intervention, a Lag 1 requirement was added to the DRA (Lag 1/DRA), resulting in an increase in the percentage of trials with varied and appropriate verbal responding for 2 of the 3 participants. An increase in the cumulative number of novel verbal responses was also observed for the same 2 participants. These results are discussed in terms of reinforcement schedules that support variability, generalization, and potential stimulus control over varied responding.
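The lag criterion described above amounts to a simple rule: reinforce a response only if it differs from each of the previous n responses. A minimal sketch of that rule, with illustrative names not taken from the study:

```python
# Hypothetical sketch of a Lag-n reinforcement criterion: a response
# earns reinforcement only if it differs from the previous n responses.
from collections import deque

def make_lag_schedule(n):
    """Return a function applying a Lag-n criterion to successive responses."""
    recent = deque(maxlen=n)  # sliding window of the last n responses

    def reinforce(response):
        # Reinforce only when the response is absent from the recent window.
        earned = response not in recent
        recent.append(response)
        return earned

    return reinforce

lag1 = make_lag_schedule(1)
print(lag1("hi"))     # True  (nothing to repeat yet)
print(lag1("hi"))     # False (repeats the previous response)
print(lag1("hello"))  # True  (differs from the previous response)
```

Raising n tightens the variability requirement: under Lag 3, a response must differ from all of the last three responses to be reinforced.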

2.
Although individuals with autism spectrum disorder (ASD) tend to behave repetitively, certain reinforcement contingencies (e.g., lag schedules) can be used to increase behavioral variability. In a lag schedule, reinforcers only follow responses that differ from recent responses. The present study was designed to promote variable play behavior in preschoolers with ASD interacting with playsets and figurines and to assess preference for variability and repetition contingencies. Data have shown a preference for variability in pigeons and college students, but this effect has not been explored in clinical populations. In this experiment, preschoolers with ASD were taught to discriminate between variability and repetition contingencies. Only play behaviors that met a lag schedule were reinforced in the presence of one color, and only repetitive behaviors were reinforced in the presence of another. After differential performance was established, participants experienced a concurrent chains schedule. Participants chose between the colors taught in training and then completed a play session with the selected contingency. One participant selected variability and repetition equally. The other participants showed a slight preference for variability. These results indicate that some individuals with ASD may play repetitively, not because they prefer repetition, but because they require additional teaching to play variably.

3.
Repetitive behavior refers to a highly heterogeneous set of responses associated with a wide range of conditions, including normative development. Treatment studies for aberrant repetitive behavior are limited, although one promising approach involves conceptualizing such behavior as a generalized inflexibility or lack of variability in responding. Relatively little is known about the neurobiological mechanisms that mediate the development and expression of repetitive behavior, information critical to the design of effective pharmacotherapies, early interventions, and prevention strategies. We will review clinical findings in repetitive behavior as well as findings from animal models highlighting environmental factors and the role of cortical-basal ganglia circuitry in mediating the development and expression of these behaviors. Findings from animal models have included identification of a specific neural pathway important in mediating repetitive behavior. Moreover, pharmacological studies that support the importance of this pathway have led to the identification of novel potential therapeutic targets. Expanding the evidence base for environmental enrichment-derived interventions and focusing on generalized variability in responding will aid in addressing the broader problem of rigidity or inflexibility.

4.
An unsalient stimulus, or one imperfectly correlated with reinforcement, may acquire significant control over responding, provided that it is the only available signal for reinforcement, but may fail to acquire control if it is reinforced only in conjunction with a second, more salient or more valid stimulus. A stimulus imperfectly correlated with reinforcement may also lose control over responding if, having initially been reinforced in isolation, it is subsequently reinforced only in conjunction with another, more valid stimulus. If the effects of relative salience are to be explained in exactly the same way as those of relative validity, we should expect a similar loss of control by an unsalient stimulus, A, if, after initial consistently reinforced trials to A alone, subjects subsequently receive reinforcement only in the presence of a compound stimulus, A + B. Two experiments on discrete-trial discrimination learning in pigeons and one on conditioned suppression in rats confirm this expectation. The results have implications for theories of selective association in conditioning and discrimination learning.

5.
Three experiments examined the effect of signaling reinforcement on rats' lever pressing under contingencies that reinforced variable responding, extending the study of signaled reinforcement to a schedule not previously examined in this respect. In Experiment 1, rats responding on a lag-8 variability schedule with signaled reinforcement displayed greater levels of variability (U values) than rats on the same schedule lacking a reinforcement signal. In Experiment 2, rats responding on a differential-reinforcement-of-least-frequent-responses schedule also displayed greater operant variability with a signal for reinforcement compared with rats without a reinforcement signal. In Experiment 3, a reinforcement signal decreased the variability of a response sequence when there was no variability requirement. These results offer empirical corroboration that operant variability responds to manipulations in the same manner as do other forms of operant response and that a reinforcement signal facilitates the emission of the required operant.
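The U values mentioned above are a standard normalized-entropy measure of operant variability: 0 when the subject emits a single sequence exclusively, 1 when all possible sequences are emitted equally often. A sketch under that standard definition (the abstract does not state the exact computation used in these experiments):

```python
# Hedged sketch of the U statistic for operant variability:
# normalized Shannon entropy over the distribution of emitted sequences.
import math
from collections import Counter

def u_value(sequences, n_possible):
    """Return normalized entropy: 0 = perfectly repetitive, 1 = maximally varied."""
    counts = Counter(sequences)
    total = len(sequences)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(n_possible)  # normalize by max possible entropy

# Eight trials drawn from four possible two-lever sequences:
repetitive = ["LL"] * 8                  # same sequence every trial
varied = ["LL", "LR", "RL", "RR"] * 2    # all four sequences equally often
print(u_value(repetitive, 4))  # 0.0
print(u_value(varied, 4))      # 1.0
```

Intermediate distributions fall between the two extremes, which is what makes U useful for comparing signaled and unsignaled groups.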

6.
7.
Reinforcement of variability may help to explain operant learning. Three groups of rats were reinforced, in different phases, whenever the following target sequences of left (L) and right (R) lever presses occurred: LR, RLL, LLR, RRLR, RLLRL, and in Experiment 2, LLRRL. One group (variability [VAR]) was concurrently reinforced once per minute for sequence variations; a second group was reinforced once per minute independently of variations, that is, for any sequence (ANY); and a control group (CON) received no additional reinforcers. The 3 groups learned the easiest targets equally. For the most difficult targets, CON animals' responding extinguished, whereas both VAR and ANY responded at high rates. Only the VAR animals learned, however. Thus, concurrent reinforcers, whether contingent on variability or not, helped to maintain responding when difficult sequences were reinforced, but learning those sequences depended on reinforcement of variations.

8.
We compared two sources of behavior variability: decreased levels of reinforcement and reinforcement contingent on variability itself. In Experiment 1, four groups of rats were reinforced for different levels of response-sequence variability: one group was reinforced for low variability, two groups were reinforced for intermediate levels, and one group was reinforced for very high variability. All of the groups experienced three different reinforcement frequencies for meeting their respective variability contingencies. Results showed that reinforcement contingencies controlled response variability more than did reinforcement frequencies. Experiment 2 showed that only those animals concurrently reinforced for high variability acquired a difficult-to-learn sequence; animals reinforced for low variability learned little or not at all. Variability was therefore controlled mainly by reinforcement contingencies, and learning increased as a function of levels of baseline variability. Knowledge of these relationships may be helpful to those who attempt to condition operant responses.

9.
The present experiment investigated whether pigeons can show associative symmetry on a two-alternative matching-to-sample procedure. The procedure consisted of a within-subject sequence of training and testing with reinforcement, and it provided (a) exemplars of symmetrical responding, and (b) all prerequisite discriminations among test samples and comparisons. After pigeons had learned two arbitrary-matching tasks (A-B and C-D), they were given a reinforced symmetry test for half of the baseline relations (B1-A1 and D1-C1). To control for the effects of reinforcement during testing, two novel, nonsymmetrical responses were concurrently reinforced using the other baseline stimuli (D2-A2 and B2-C2). Pigeons matched at chance on both types of relations, thus indicating no evidence for symmetry. These symmetrical and nonsymmetrical relations were then directly trained in order to provide exemplars of symmetry and all prerequisite discriminations for a second test. The symmetrical test relations were now B2-A2 and D2-C2 and the nonsymmetrical relations were D1-A1 and B1-C1. On this test, 1 pigeon showed clear evidence of symmetry, 2 pigeons showed weak evidence, and 1 pigeon showed no evidence. The previous training of all prerequisite discriminations among stimuli, and the within-subject control for testing with reinforcement, seem to have set favorable conditions for the emergence of symmetry in nonhumans. However, the variability across subjects shows that methodological variables still remain to be controlled.

10.
Although responses are sometimes easy to predict, at other times responding seems highly variable, unpredictable, or even random. The inability to predict is generally attributed to ignorance of controlling variables, but this article is a review of research showing that the highest levels of behavioral variability may result from identifiable reinforcers contingent on such variability. That is, variability is an operant. Discriminative stimuli and reinforcers control it, resulting in low or high variability, depending on the contingencies. Schedule-of-reinforcement effects are orderly, and choosing to vary or repeat is lawfully governed by relative reinforcement frequencies. The operant nature of variability has important implications. For example, learning, exploring, creating, and problem solving may partly depend on it. Abnormal levels of variability, including those found in psychopathologies such as autism, depression, and attention deficit hyperactivity disorder, may be modified through reinforcement. Operant variability may also help to explain some of the unique attributes of voluntary action.

11.
In the first of two studies, the responding of four albino rats was differentially reinforced in the presence of noise and light together and then tested in the presence of the noise and the light separately during extinction. The light exercised substantially more control of responding than did the noise. In the second study the responding of a similar group of four rats was differentially reinforced in the presence of the noise and the light separately. Control of responding by the light developed more rapidly than control by the noise. Results suggest that levels of control by stimuli after differential reinforcement with respect to the stimuli together can be predicted by the rates of development of control during differential reinforcement with respect to the stimuli separately.

12.
Jessel et al. (2015) provided some evidence to suggest that “other” behavior is strengthened in the differential reinforcement of other behavior (DRO). The present study is a systematic replication of the Jessel et al. procedures. The effects of DRO and extinction on target responding, target-other responding (a response with an established history of reinforcement), and nontarget-other responding emitted by children with intellectual and developmental disabilities and children with no known diagnoses were compared. Other behavior increased in at least one DRO condition for each participant, suggesting that other behavior increases when using DRO, at least initially. Under extinction, target responding and target-other responding decreased to low rates for three of the five participants; however, rates of nontarget-other responding were elevated compared to the DRO condition. These results suggest that increased rates of target-other responding and nontarget-other responding during the DRO condition may be a result of extinction-induced variability.

13.
Noncontingent reinforcement (NCR) is typically implemented with extinction (EXT) for destructive behavior reinforced by social consequences and without EXT for destructive behavior reinforced by sensory consequences. Behavioral momentum theory (BMT) predicts that responding will be more persistent, and treatment relapse in the form of response resurgence more likely, when NCR is implemented without EXT due to the greater overall rate of reinforcement associated with this intervention. We used an analogue arrangement to test these predictions of BMT by comparing NCR implemented with and without EXT. For two of three participants, we observed more immediate reductions in responding during NCR without EXT. However, for all participants, NCR without EXT produced greater resurgence than NCR with EXT when we discontinued all reinforcers during an EXT Only phase, although there was variability in response patterns across and within participants. Implications for treatment of destructive behavior using NCR are discussed.

14.
This study aimed to investigate whether variable patterns of responses can be acquired and maintained by negative reinforcement under an avoidance contingency. Six male Wistar rats were exposed to sessions in which behavioral variability was reinforced according to a Lag contingency: Sequences of three responses on two levers had to differ from one, two, or three previous sequences for shocks to be avoided (Lag 1, Lag 2, and Lag 3, respectively). Performance under the Lag conditions was compared with performance in a Yoke condition in which the animals received the same reinforcement frequency and distribution as in the Lag condition but behavioral variability was not required. The results showed that most of the subjects varied their sequences under the Lag contingencies, avoiding shocks with relatively high probability (≥ 0.7). Under the Yoke procedure, responding continued to occur with high probability, but the behavioral variability decreased. These results suggest that behavioral variability can be negatively reinforced.

15.
Ducklings (5 to 28 days old) were trained to peck a pole on fixed-ratio, fixed-interval, and multiple schedules using brief presentation of an imprinting stimulus as the response-contingent event. Other ducklings of the same age were trained similarly except that reinforcement consisted of access to water. With water reinforcement the typical fixed-ratio (“break-run”), fixed-interval (“scallop”), and multiple schedule response patterns were readily established and consistently maintained. With the imprinting stimulus these schedule effects were inconsistent in some subjects and virtually nonexistent in others, despite extended training. Schedule control with the imprinting stimulus was not improved by the use of a reinforcement signaling procedure which enhances responding reinforced by electrical brain stimulation on intermittent schedules. However, the overall rates of responding and the extinction functions generated after reinforcement with water versus the imprinting stimulus were comparable. These findings imply that control by temporal and discriminative stimuli may be relatively weak when a young organism's behavior is reinforced by presentation of an imprinting stimulus.

16.
Nine pigeons were used in two experiments in which a response was reinforced if a variable-interval schedule had assigned a reinforcement and if the response terminated an interresponse time within a certain interval, or class, of interresponse times. One such class was scheduled on one key, and a second class was scheduled on a second key. The procedure was, therefore, a two-key concurrent paced variable-interval paced variable-interval schedule. In Exp. I, the lengths of the two reinforced interresponse times were varied. The relative frequency of responding on a key approximately equalled the relative reciprocal of the length of the interresponse time reinforced on that key. In Exp. II, the relative frequency and relative magnitude of reinforcement were varied. The relative frequency of responding on the key for which the shorter interresponse time was reinforced was a monotonically increasing, negatively accelerated function of the relative frequency of reinforcement on that key. The relative frequency of responding depended on the relative magnitude of reinforcement in approximately the same way as it depended on the relative frequency of reinforcement. The relative frequency of responding on the key for which the shorter interresponse time was reinforced depended on the lengths of the two reinforced interresponse times and on the relative frequency and relative magnitude of reinforcement in the same way as the relative frequency of the shorter interresponse time depended on these variables in previous one-key concurrent schedules of reinforcement for two interresponse times.
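The relation reported for Exp. I can be stated as a one-line calculation: relative responding on a key approximately equals the relative reciprocal of the interresponse time (IRT) reinforced on that key. A hypothetical illustration (the IRT values below are invented, not data from the experiment):

```python
# Illustrative calculation of the reciprocal-IRT matching relation:
# predicted relative response frequency on key 1 equals the relative
# reciprocal of the reinforced IRT on that key.
def predicted_relative_rate(irt1, irt2):
    """Relative responding on key 1 = (1/irt1) / (1/irt1 + 1/irt2)."""
    return (1 / irt1) / (1 / irt1 + 1 / irt2)

# If key 1 reinforces 2-s IRTs and key 2 reinforces 6-s IRTs, the
# relation predicts 75% of responses on key 1:
print(predicted_relative_rate(2.0, 6.0))  # 0.75
```

Equal reinforced IRTs predict indifference (0.5 on each key), and shortening one key's reinforced IRT shifts responding toward that key, consistent with the direction of the results described above.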

17.
Fixed-ratio reinforcement of spaced responding
Responses by rats were reinforced with food under a second-order schedule involving fixed-ratio reinforcement of temporally spaced responses. Requirements of 20, 8, and 3 responses were examined. The typical characteristic of spaced responding was maintained under the ratio schedules: interresponse time distributions were similar to those typically seen, and were not noticeably affected by the ratio value. Comparison of total response rate, correct response rate, and accuracy showed correct response rate to be the most consistently affected by changes in the ratio value. Substantial evidence of schedule control was seen only for correct responses. Incorrect response records were erratic, but rates generally declined as reinforcement was approached. Correct response records were characterized by increasing rate as reinforcement was approached. It was suggested that the pattern of fixed-ratio performance revealed may be affected by the behavioral unit examined.

18.
A two-choice discrete operant procedure was devised for the study of shock-correlated reinforcement effects in rats. In the presence of one auditory stimulus, responding on one response lever was reinforced with food; with another auditory stimulus, responding on a second lever was reinforced. It was found that discrimination performance of one group, relative to appropriate control groups, was facilitated when electric shock was correlated with reinforcement on one lever and not on the other. Further, relative discrimination levels were found to be higher on the lever correlated with the shock than on the alternate lever. The significance of the results for operant within-S studies and for a mediational theory of shock-correlated reinforcement was discussed.

19.
The stimuli that control responding in the peak procedure were investigated by training rats, in separate sessions, to make two different responses for food reinforcement. During one type of session, lever pressing was normally reinforced 32 s after the onset of a light. During the other type of session, chain pulling was normally reinforced either 8 s after the onset of one auditory cue or 128 s after the onset of a different auditory cue. For both types of sessions, only the appropriate manipulandum was available, and 20% of the trials lasted 240 s and involved no response-contingent consequences. Rats were then tested with the auditory cues in the presence of the lever and the light in the presence of the chain. If the time of reinforcement associated with each stimulus was learned, response rates should peak at these times during transfer testing. However, if a specific response pattern was learned for each stimulus, little transfer should occur. The results did not clearly support either prediction, leading to the conclusion that both a representation of the time of reinforcement and the rat's own behavior may control responding in this situation.

20.
Two experiments studied the effects of reinforcement schedules on generalization gradients. In Exp. 1, after pigeons' responding to a vertical line was reinforced, the pigeons were tested with 10 lines differing in orientation. Reconditioning and the redetermination of generalization gradients were repeated from 8 to 11 times with the schedule of reinforcement varied in the reconditioning phase. Stable gradients could not be observed because the successive reconditionings and tests steepened the gradients and reduced responding. Experiment 2 overcame these effects by first training the birds to respond to all of the stimuli. Then, brief periods of reinforced responding to the stimulus correlated with reinforcement alternated with the presentation of the 10 lines in extinction. The development of stimulus control was studied eight times with each bird, twice with each of four schedules of reinforcement. Gradients were similar each time a schedule was imposed; the degree of control by the stimulus correlated with reinforcement varied with particular schedules. Behavioral contrast occurred when periods of reinforcement and extinction alternated and was more durable with fixed-interval, variable-interval, and variable-ratio schedules than with fixed-ratio or differential-reinforcement-of-low-rate schedules.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号