Similar Articles
20 similar articles found (search time: 15 ms)
1.
Overall reinforcer rate appears to affect choice. The mechanism for such an effect is uncertain, but may relate to reinforcer rate changing the discrimination of the relation between stimuli and reinforcers. We assessed whether a quantitative model based on a stimulus‐control approach could be used to account for the effects of overall reinforcer rate on choice under changing time‐based contingencies. On a two‐key concurrent schedule, the likely availability of a reinforcer reversed when a fixed time had elapsed since the last reinforcer, and the overall reinforcer rate was varied across conditions. Changes in the overall reinforcer rate produced a change in response bias, and some indication of a change in discrimination. These changes in bias and discrimination always occurred quickly, usually within the first session of a condition. The stimulus‐control approach provided an excellent account of the data, suggesting that changes in overall reinforcer rate affect choice because they alter the frequency of reinforcers obtained at different times, or in different stimulus contexts, and thus change the discriminated relation between stimuli and reinforcers. These findings support the notion that temporal and spatial discriminations can be understood in terms of discrimination of reinforcers across time and space.

2.
The extent to which a stimulus exerts control over behavior depends largely on its informativeness. However, when reinforcers have discriminative properties, they often exert less control over behavior than do other less reliable stimuli such as elapsed time. We investigated why less reliable cues in the present often overshadow stimulus control by more reliable cues presented in the recent past, by manipulating the reliability and duration of stimulus presentations. Five pigeons worked on a modified concurrent schedule in which the location of the response that produced the last reinforcer was a discriminative stimulus for the likely time and location of the next reinforcer. In some conditions, either the location of the previous reinforcer, or the location of the next reinforcer, was signaled by a red key light. This stimulus was either Brief, occurring for 10 s starting a fixed time after the most recent reinforcer, or Extended, being present at all times between food deliveries. Brief and Extended stimuli that signaled the same information had a similar effect on choice when they were present, but control by Brief stimuli weakened as time since stimulus offset elapsed. Control was divided among stimuli in the present and recent past according to the apparent reliability of the information signaled about the next reinforcer. More reliable stimuli in the present degraded, but did not erase, control by less reliable stimuli presented in the recent past. Thus, we conclude that less reliable stimuli in the present control behavior to a greater degree than do more reliable stimuli in the recent past because these more reliable stimuli are forgotten, and hence their relation to the likely availability of food cannot be discriminated.

3.
The study presented here investigated the effect of common and uncommon elements on class merger, as predicted by Sidman's reconceptualization of stimulus equivalence (1994, 1997, 2000), which suggests that common elements among contingencies can facilitate emergent performances. Eight adult participants were exposed to a procedure that arranged for stimulus–reinforcer correlations in Phase 1 and response–reinforcer correlations in Phase 2 of a 3-phase study. In the common-elements group, the visual images serving as reinforcers were the same in Phase 1 and Phase 2. In the uncommon-elements group, the images serving as reinforcers were different in Phases 1 and 2. In Phase 3, participants were given an opportunity to respond but no feedback was programmed. The results showed that participants' responding was well differentiated in the common-elements group and undifferentiated in the uncommon-elements group. These results are predicted by Sidman's revised formulation of the provenance and scope of equivalence relations. Specifically, these data support Sidman's (1994, 1997, 2000) suggestion that elements of a contingency enter into an equivalence class and common elements among contingencies are sufficient to produce class mergers. The findings highlight an emergent simple discrimination and raise some interesting considerations about the definition of equivalence under the new formulation.

4.
Five pigeons were trained in a concurrent foraging procedure in which reinforcers were occasionally available after fixed times in two discriminated patches. In Part 1 of the experiment, the fixed times summed to 10 s, and were individually varied between 1 and 9 s over five conditions, with the probability of a reinforcer being delivered at the fixed times always .5. In Part 2, both fixed times were 5 s, and the probabilities of food delivery were varied over conditions, always summing to 1.0. In Parts 3 and 4, one fixed time was kept constant (Part 3, 3 s; Part 4, 7 s) while the other fixed time was varied from 1 s to 15 s. Median residence times in both patches increased with increases in the food-arrival times in either patch, but increased considerably more strongly in the patch in which the arrival time was increased. However, when arrival times were very different in the two patches, residence time in the longer arrival-time patch often decreased. Patch residence also increased with increasing probability of reinforcement, but again tended to fall when one probability was much larger than the other. A detailed analysis of residence times showed that these comprised two distributions, one around a shorter mode that remained constant with changes in arrival times, and one around a longer mode that monotonically increased with increasing arrival time. The frequency of shorter residence times appeared to be controlled by the probability of, and arrival time of, reinforcers in the alternative patch. The frequency of longer residence times was controlled directly by the arrival time of reinforcers in a patch, but not by the probability of reinforcers in a patch. The environmental variables that control both staying in a patch and exiting from a patch need to be understood in the study both of timing processes and of foraging.

5.
There is evidence suggesting that aggression may be a positive reinforcer in many species. However, only a few studies have examined the characteristics of aggression as a positive reinforcer in mice. Four types of reinforcement schedules were examined in the current experiment using male Swiss CFW albino mice in a resident–intruder model of aggression as a positive reinforcer. A nose poke response on an operant conditioning panel was reinforced under fixed‐ratio (FR 8), fixed‐interval (FI 5‐min), progressive‐ratio (PR 2), or differential‐reinforcement‐of‐low‐rate (DRL 40‐s and DRL 80‐s) schedules. In the FR conditions, nose pokes were maintained by aggression and extinguished when the aggression contingency was removed. There were long postreinforcement pauses followed by bursts of responses with short interresponse times (IRTs). In the FI conditions, nose pokes were maintained by aggression, occurred more frequently as the interval elapsed, and extinguished when the contingency was removed. In the PR conditions, nose pokes were maintained by aggression, postreinforcement pauses increased as the ratio requirement increased, and responding was extinguished when the aggression contingency was removed. In the DRL conditions, the nose poke rate decreased, while the proportional distributions of IRTs and postreinforcement pauses shifted toward longer durations as the DRL interval increased. However, most responses occurred before the minimum IRT interval elapsed, suggesting weak temporal control of behavior. Overall, the findings suggest aggression can be a positive reinforcer for nose poke responses in mice on ratio‐ and time‐based reinforcement schedules.

6.
We present a study that links optimal foraging theory (OFT) to behavioral timing. OFT's distinguishing feature is the use of models that compute the most advantageous behavior for a particular foraging problem and compare the optimal solution to empirical data with little reference to psychological processes. The study of behavioral timing, in contrast, emphasizes performance in relation to time, most often without strategic or functional considerations. In three experiments, reinforcer-maximizing behavior and timing performance are identified and related to each other. In all three experiments starlings work in a setting that simulates two food patches (perches) separated by a flying distance. The patches contain a variable and unpredictable number of reinforcers and deplete suddenly without signal. Before depletion, patches deliver food at fixed intervals (FI). Our main dependent variables are the times of occurrence of three behaviors: the “peak” in pecking rate (Peak), the time of the last peck before “giving in” (GIT), and the time for “moving on” to a new patch (MOT). We manipulate travel requirement (Experiment 1), level of deprivation and FI (Experiment 2), and size of reinforcers (Experiment 3). For OFT, Peak should equal the FI in all conditions while GIT and MOT should just exceed it. Behavioral timing and Scalar Expectancy Theory (SET) in particular predict a Peak at around the FI and a longer (unspecified) GIT, and make no prediction for MOT. We found that Peak was close to the FI and GIT was approximately 1.5 times longer, neither being affected by travel, hunger, or reinforcer size manipulations. MOT varied between 1.5 and just over 3 times the FI, was responsive to both travel time and the FI, and did not change when the reinforcer rate was manipulated. These results support the practice of producing models that explicitly separate information available to the subject from strategic use of this information.

7.
Reinforcers affect behavior. A fundamental assumption has been that reinforcers strengthen the behavior they follow, and that this strengthening may be context‐specific (stimulus control). Less frequently discussed, but just as evident, is the observation that reinforcers have discriminative properties that also guide behavior. We review findings from recent research that approaches choice using nontraditional procedures, with a particular focus on how choice is affected by reinforcers, by time since reinforcers, and by recent sequences of reinforcers. We also discuss how conclusions about these results are impacted by the choice of measurement level and display. Clearly, reinforcers as traditionally considered are conditionally phylogenetically important to animals. However, their effects on behavior may be solely discriminative, and contingent reinforcers may not strengthen behavior. Rather, phylogenetically important stimuli constitute a part of a correlated compound stimulus context consisting of stimuli arising from the organism, from behavior, and from physiologically detected environmental stimuli. Thus, the three‐term contingency may be seen, along with organismic state, as a correlation of stimuli. We suggest that organisms may be seen as natural stimulus‐correlation detectors so that behavioral change affects the overall correlation and directs the organism toward currently appetitive goals and away from potential aversive goals. As a general conclusion, both historical and recent choice research supports the idea that stimulus control, not reinforcer control, may be fundamental.

8.
Six pigeons were trained to respond on two keys, each of which provided reinforcers on an arithmetic variable-interval schedule. These concurrent schedules ran nonindependently with a 2-s changeover delay. Six sets of conditions were conducted. Within each set of conditions the ratio of reinforcers available on the two alternatives was varied, but the arranged overall reinforcer rate remained constant. Each set of conditions used a different overall reinforcer rate, ranging from 0.22 reinforcers per minute to 10 reinforcers per minute. The generalized matching law fit the data from each set of conditions, but sensitivity to reinforcer frequency (a) decreased as the overall reinforcer rate decreased for both time allocation and response allocation based analyses of the data. Overall response rates did not vary with changes in relative reinforcer rate, but decreased with decreases in overall reinforcer rate. Changeover rates varied as a function of both relative and overall reinforcer rates. However, as explanations based on changeover rate seem unable to deal with the changes in generalized matching sensitivity, discrimination accounts of choice may offer a more promising interpretation.
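For reference, the generalized matching law used in the analysis above is log(B1/B2) = a·log(R1/R2) + log c, where B and R are response and reinforcer totals on the two keys, a is sensitivity to reinforcer frequency, and c is bias. The Python lines below are a minimal sketch of how a and c are typically estimated by linear regression on the logarithms; the numbers are hypothetical and the code is an illustration of the standard fit, not the authors' analysis.

import numpy as np

# Hypothetical session totals for the two keys across five conditions:
# B = responses emitted, R = reinforcers obtained (illustrative values only).
B1 = np.array([1210.0, 960.0, 615.0, 410.0, 290.0])
B2 = np.array([305.0, 455.0, 640.0, 905.0, 1180.0])
R1 = np.array([54.0, 36.0, 27.0, 18.0, 9.0])
R2 = np.array([9.0, 18.0, 27.0, 36.0, 54.0])

# Generalized matching law: log(B1/B2) = a * log(R1/R2) + log(c)
x = np.log10(R1 / R2)            # log reinforcer ratio
y = np.log10(B1 / B2)            # log response ratio
a, log_c = np.polyfit(x, y, 1)   # slope = sensitivity a, intercept = log bias
print(f"sensitivity a = {a:.2f}, bias c = {10 ** log_c:.2f}")

Repeating this fit separately for each set of conditions and comparing the fitted slopes is the sense in which sensitivity a is said to decrease as the overall reinforcer rate decreases.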

9.
In three experiments we investigated the effect on the performance of thirsty rats of varying the instrumental contingency between lever pressing and the delivery of a saccharin reinforcer. In Experiment 1, the subjects performed more slowly in a non-contingent condition, in which the momentary probability of reinforcement was unaffected by whether or not the animals pressed, than in a contingent condition in which the reinforcer was never presented except following a lever press. This was true of performance under both random ratio and interval schedules in which the function determining the probability of reinforcement following a lever press remained the same across the contingent and non-contingent conditions. Experiment 2 demonstrated that instrumental performance was less affected when the contingency was degraded by the introduction of free reinforcers if these reinforcers were signalled. In Experiment 3, lever pressing was reinstated to some degree after non-contingent training by giving non-reinforced exposure to the operant chamber in the absence of the lever. These results suggest that free reinforcers depress instrumental behaviour through a performance mechanism engaged by their ability to support conditioning of the contextual cues.

10.
We compared the effects of direct and indirect reinforcement contingencies on the performance of 6 individuals with profound developmental disabilities. Under both contingencies, completion of identical tasks (opening one of several types of containers) produced access to identical reinforcers. Under the direct contingency, the reinforcer was placed inside the container to be opened; under the indirect contingency, the therapist held the reinforcer and delivered it to the participant upon task completion. One participant immediately performed the task at 100% accuracy under both contingencies. Three participants showed either more immediate or larger improvements in performance under the direct contingency. The remaining 2 participants showed improved performance only under the direct reinforcement contingency. Data taken on the occurrence of "irrelevant" behaviors under the indirect contingency (e.g., reaching for the reinforcer instead of performing the task) provided some evidence that these behaviors may have interfered with task performance and that their occurrence was a function of differential stimulus control.

11.
Five pigeons were trained on pairs of concurrent variable-interval schedules in a switching-key procedure. The arranged overall rate of reinforcement was constant in all conditions, and the reinforcer-magnitude ratios obtained from the two alternatives were varied over five levels. Each condition remained in effect for 65 sessions and the last 50 sessions of data from each condition were analyzed. At a molar level of analysis, preference was described well by a version of the generalized matching law, consistent with previous reports. More local analyses showed that recently obtained reinforcers had small measurable effects on current preference, with the most recently obtained reinforcer having a substantially larger effect. Larger reinforcers resulted in larger and longer preference pulses, and a small preference was maintained for the larger-magnitude alternative even after long inter-reinforcer intervals. These results are consistent with the notion that the variables controlling choice have both short- and long-term effects. Moreover, they suggest that control by reinforcer magnitude is exerted in a manner similar to control by reinforcer frequency. Lower sensitivities when reinforcer magnitude is varied are likely to be due to equal frequencies of different sized preference pulses, whereas higher sensitivities when reinforcer rates are varied might result from changes in the frequencies of different sized preference pulses.
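The local analysis referred to above is usually summarized as a "preference pulse": choice in the responses immediately following each reinforcer, computed separately for reinforcers obtained from each alternative (and, in this study, for each magnitude). The Python sketch below shows the basic computation under an assumed event-record format; the data, format, and function name are illustrative, not the authors' code.

import numpy as np

# Hypothetical event record: (key_pecked, reinforced) in session order.
events = [("L", False), ("L", True), ("L", False), ("L", False),
          ("R", False), ("R", True), ("R", False), ("L", False)]

def preference_pulse(events, source_key, pulse_len=5):
    """Proportion of responses on source_key at each of the pulse_len
    response positions following a reinforcer obtained on source_key."""
    totals = np.zeros(pulse_len)
    on_source = np.zeros(pulse_len)
    for i, (key, reinforced) in enumerate(events):
        if not (reinforced and key == source_key):
            continue
        for pos, (later_key, _) in enumerate(events[i + 1:i + 1 + pulse_len]):
            totals[pos] += 1
            on_source[pos] += (later_key == source_key)
    return on_source / np.maximum(totals, 1)   # guard against empty positions

print(preference_pulse(events, "L"))   # pulse following left-key reinforcers

Pulses that are larger and last for more responses after larger-magnitude reinforcers are the effect the abstract describes.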

12.
An adjusting‐delay procedure was used to study the choices of pigeons and rats when both delay and amount of reinforcement were varied. In different conditions, the choice alternatives included one versus two reinforcers, one versus three reinforcers, and three versus two reinforcers. The delay to one alternative (the standard alternative) was kept constant in a condition, and the delay to the other (the adjusting alternative) was increased or decreased many times a session so as to estimate an indifference point—a delay at which the two alternatives were chosen about equally often. Indifference functions were constructed by plotting the adjusting delay as a function of the standard delay for each pair of reinforcer amounts. The experiments were designed to test the prediction of a hyperbolic decay equation that the slopes of the indifference functions should increase as the ratio of the two reinforcer amounts increased. Consistent with the hyperbolic equation, the slopes of the indifference functions depended on the ratios of the two reinforcer amounts for both pigeons and rats. These results were not compatible with an exponential decay equation, which predicts slopes of 1 regardless of the reinforcer amounts. Combined with other data, these findings provide further evidence that delay discounting is well described by a hyperbolic equation for both species, but not by an exponential equation. Quantitative differences in the y‐intercepts of the indifference functions from the two species suggested that the rate at which reinforcer strength decreases with increasing delay may be four or five times slower for rats than for pigeons.
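The slope prediction tested above follows directly from the two decay equations. Taking the commonly used hyperbolic form V = A/(1 + kD) and exponential form V = A·e^(-kD), where V is reinforcer value, A amount, D delay, and k a discounting parameter, and equating the values of the adjusting and standard alternatives at indifference gives, as a worked illustration in LaTeX notation:

\frac{A_{adj}}{1 + k D_{adj}} = \frac{A_{std}}{1 + k D_{std}} \;\Rightarrow\; D_{adj} = \frac{A_{adj}}{A_{std}} D_{std} + \frac{1}{k}\left(\frac{A_{adj}}{A_{std}} - 1\right)

A_{adj}\, e^{-k D_{adj}} = A_{std}\, e^{-k D_{std}} \;\Rightarrow\; D_{adj} = D_{std} + \frac{1}{k}\ln\frac{A_{adj}}{A_{std}}

So the hyperbolic equation predicts indifference-function slopes equal to the amount ratio A_adj/A_std, whereas the exponential equation predicts a slope of 1 regardless of the amounts, which is the contrast the reported data address.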

13.
Reporting contingencies of reinforcement in concurrent schedules
Five pigeons were trained on concurrent variable-interval schedules in which two intensities of yellow light served as discriminative stimuli in a switching-key procedure. A conditional discrimination involving a simultaneous choice between red and green keys followed every reinforcer obtained from both alternatives. A response to the red side key was occasionally reinforced if the prior reinforcer had been obtained from the bright alternative, and a response to the green side key was occasionally reinforced if the prior reinforcer had been obtained from the dim alternative. Measures of the discriminability between the concurrent-schedule alternatives were obtained by varying the reinforcer ratio for correct red and correct green responses across conditions in two parts. Part 1 arranged equal rates of reinforcement in the concurrent schedule, and Part 2 provided a 9:1 concurrent-schedule reinforcer ratio. Part 3 arranged a 1:9 reinforcer ratio in the conditional discrimination, and the concurrent-schedule reinforcer ratio was varied across conditions. Varying the conditional discrimination reinforcer ratio did not affect response allocation in the concurrent schedule, but varying the concurrent-schedule reinforcer ratio did affect conditional discrimination performance. These effects were incompatible with a contingency-discriminability model of concurrent-schedule performance (Davison & Jenkins, 1985), which implies a constant discriminability parameter that is independent of the obtained reinforcer ratio. However, a more detailed analysis of conditional discrimination performance showed that the discriminability between the concurrent-schedule alternatives decreased with time since changing over to an alternative. This effect, combined with aspects of the temporal distribution of reinforcers obtained in the concurrent schedules, qualitatively predicted the molar results and identified the conditions that operate whenever contingency discriminability remains constant.
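For context, the contingency-discriminability model of Davison and Jenkins (1985) mentioned above is commonly written with a single parameter that mixes reinforcers obtained on the two alternatives; the form below is one standard statement of the model, given here as a reminder of its structure rather than as a quotation of the original paper:

\frac{B_1}{B_2} = c\,\frac{d_r R_1 + R_2}{R_1 + d_r R_2}

Here B and R are responses and obtained reinforcers on the two alternatives, c is bias, and d_r (at least 1) is contingency discriminability. Because d_r enters as a constant that does not depend on the obtained reinforcer ratio or on time since a changeover, data in which estimated discriminability varies with those factors fall outside the model as stated, which is the incompatibility the abstract reports.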

14.
Sensitization and habituation regulate reinforcer effectiveness
We argue that sensitization and habituation occur to the sensory properties of reinforcers when those reinforcers are presented repeatedly or for a prolonged time. Sensitization increases, and habituation decreases, the ability of a reinforcer to control behavior. Supporting this argument, the rate of operant responding changes systematically within experimental sessions even when the programmed rate of reinforcement is held constant across the session. These within-session changes in operant responding are produced by repeated delivery of the reinforcer, and their empirical characteristics correspond to the characteristics of behavior undergoing sensitization and habituation. Two characteristics of habituation (dishabituation, stimulus specificity) are particularly useful in separating habituation from alternative explanations. Arguing that habituation occurs to reinforcers expands the domain of habituation. The argument implies that habituation occurs to biologically important, not just to neutral, stimuli. The argument also implies that habituation may be observed in “voluntary” (operant), not just in reflexive, behavior. Expanding the domain of habituation has important implications for understanding operant and classical conditioning. Habituation may also contribute to the regulation of motivated behaviors. Habituation provides a more accurate and a less cumbersome explanation for motivated behaviors than homeostasis. Habituation also has some surprising, and easily testable, implications for the control of motivated behaviors.

15.
16.
The relation between reinforcer magnitude and timing behavior was studied using a peak procedure. Four rats received multiple consecutive sessions with both low and high levels of brain stimulation reward (BSR). Rats paused longer and had later start times during sessions when their responses were reinforced with low-magnitude BSR. When estimated by a symmetric Gaussian function, peak times also were earlier; when estimated by a better-fitting asymmetric Gaussian function or by analyzing individual trials, however, these peak-time changes were determined to reflect a mixture of large effects of BSR on start times and no effect on stop times. These results pose a significant dilemma for three major theories of timing (SET, MTS, and BeT), which all predict no effects for chronic manipulations of reinforcer magnitude. We conclude that increased reinforcer magnitude influences timing in two ways: through larger immediate after-effects that delay responding and through anticipatory effects that elicit earlier responding.
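The contrast drawn above between symmetric and asymmetric Gaussian estimates of peak time can be made concrete with a curve fit to a binned response-rate gradient. The Python sketch below uses hypothetical right-skewed data and scipy to fit both forms; the function names, parameter values, and data are illustrative assumptions, not the authors' analysis.

import numpy as np
from scipy.optimize import curve_fit

def sym_gauss(t, a, peak, w):
    """Symmetric Gaussian response-rate gradient."""
    return a * np.exp(-0.5 * ((t - peak) / w) ** 2)

def asym_gauss(t, a, peak, w_left, w_right):
    """Gaussian with different spread before and after the peak."""
    w = np.where(t < peak, w_left, w_right)
    return a * np.exp(-0.5 * ((t - peak) / w) ** 2)

# Hypothetical mean response rates in 1-s bins over a 60-s peak trial,
# skewed to the right (responding tails off slowly after the peak).
t = np.arange(1.0, 61.0)
rate = asym_gauss(t, 40.0, 20.0, 5.0, 12.0)
rate += np.random.default_rng(0).normal(0.0, 1.0, t.size)

p_sym, _ = curve_fit(sym_gauss, t, rate, p0=[40.0, 20.0, 8.0])
p_asym, _ = curve_fit(asym_gauss, t, rate, p0=[40.0, 20.0, 8.0, 8.0])
print("peak time, symmetric fit: ", round(p_sym[1], 1))
print("peak time, asymmetric fit:", round(p_asym[1], 1))

With a skewed gradient the two fits give different peak estimates, the symmetric fit pulling the apparent peak toward the longer tail; this is the sense in which the symmetric-Gaussian peak-time shifts reported above can reflect start-time effects rather than a change in the timed peak.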

17.
Animals make surprising anticipatory and perseverative errors when faced with a midsession reversal of reinforcer contingencies on a choice task with highly predictable stimulus–time relationships. In the current study, we asked whether pigeons would anticipate changes in reinforcement when the reinforcer contingencies for each stimulus were not fixed in time. We compared the responses of pigeons on a simultaneous choice task when the initially correct stimulus was randomized or alternated across sessions. Pigeons showed more errors overall compared with the typical results of a standard midsession reversal procedure, and they did not show the typical anticipatory errors prior to the contingency reversal. Probe tests that manipulated the spacing between trials also suggested that timing of the session exerted little control of pigeons’ behavior. The temporal structure of the experimental session thus appears to be an important determinant for animals’ use of time in midsession reversal procedures.

18.
Pigeons responded on two keys in each component of a multiple concurrent schedule. In one series of conditions the distribution of reinforcers between keys within one component was varied so as to produce changes in ratios of reinforcer totals for key locations when summed across components. In a second series, reinforcer allocation between components was varied so as to produce changes in ratios of reinforcer totals for components, summed across key locations. In each condition, resistance to change was assessed by presenting response-independent reinforcers during intercomponent blackouts and (for the first series) by extinction of responding on both keys in both components. Resistance to change for response totals within a component was always greater for the component with the larger total reinforcer rate. However, resistance to change for response totals at a key location was not a positive function of total reinforcement for pecking that key; indeed, relative resistance to extinction for the two locations showed a weak negative relation to ratios of reinforcer totals for key location. These results confirm the determination of resistance to change by stimulus–reinforcer contingencies.

19.
Results of previous research on the effects of noncontingent reinforcement (NCR) have been inconsistent when magnitude of reinforcement was manipulated. We attempted to clarify the influence of NCR magnitude by including additional controls. In Study 1, we examined the effects of reinforcer consumption time by comparing the same magnitude of NCR when session time was and was not corrected to account for reinforcer consumption. Lower response rates were observed when session time was not corrected, indicating that reinforcer consumption can suppress response rates. In Study 2, we first selected varying reinforcer magnitudes (small, medium, and large) on the basis of corrected response rates observed during a contingent reinforcement condition and then compared the effects of these magnitudes during NCR. One participant exhibited lower response rates when large-magnitude reinforcers were delivered; the other ceased responding altogether even when small-magnitude reinforcers were delivered. We also compared the effects of the same NCR magnitude (medium) during 10-min and 30-min sessions. Lower response rates were observed during 30-min sessions, indicating that the number of reinforcers consumed across a session can have the same effect as the number consumed per reinforcer delivery. These findings indicate that, even when response rate is corrected to account for reinforcer consumption, larger magnitudes of NCR (defined on either a per-delivery or per-session basis) result in lower response rates than do smaller magnitudes.

20.
Reinforcers may increase operant responding via a response-strengthening mechanism whereby the probability of the preceding response increases, or via some discriminative process whereby the response more likely to provide subsequent reinforcement becomes, itself, more likely. We tested these two accounts. Six pigeons responded for food reinforcers in a two-alternative switching-key concurrent schedule. Within a session, equal numbers of reinforcers were arranged for responses to each alternative. Those reinforcers strictly alternated between the two alternatives in half the conditions, and were randomly allocated to the alternatives in half the conditions. We also varied, across conditions, the alternative that became available immediately after a reinforcer. Preference after a single reinforcer always favored the immediately available alternative, regardless of the local probability of a reinforcer on that alternative (0 or 1 in the strictly alternating conditions, .5 in the random conditions). Choice then reflected the local reinforcer probabilities, suggesting some discriminative properties of reinforcement. At a more extended level, successive same-alternative reinforcers from an alternative systematically shifted preference towards that alternative, regardless of which alternative was available immediately after a reinforcer. There was no similar shift when successive reinforcers came from alternating sources. These more temporally extended results may suggest a strengthening function of reinforcement, or an enhanced ability to respond appropriately to "win-stay" contingencies over "win-shift" contingencies.

