Similar Articles
20 similar articles found (search time: 0 ms)
1.
Pigeons were exposed to multiple second-order schedules in which responding on the “main key” was reinforced according to either a variable-interval or fixed-interval schedule by production of a brief stimulus on the “brief-stimulus key”. A response was required to the brief stimulus during its fourth (final) presentation to produce food; responses to the earlier brief stimuli indicated the extent to which the final brief stimulus was discriminated from preceding ones. Main-key response rates were higher in early components of paired brief-stimulus schedules, in which each brief stimulus was the same as that paired with reinforcement, than in comparable unpaired brief-stimulus or tandem schedules. Poor discrimination occurred between paired brief stimuli (Experiment I). When chain stimuli on the main key induced a discrimination between the first two and second two brief stimuli, the response-rate enhancement in the paired brief-stimulus schedule persisted (Experiment II). Rate enhancement diminished when the initial link of the chain included the first three components (Experiment IV). Eliminating the contingency between responding and brief-stimulus production also diminished rate enhancement (Experiment III). The results show that the discriminative and conditioned reinforcing effects of food-paired brief stimuli may be selectively manipulated and suggest that the reinforcing effects are modulated by other reinforcers in the situation.

2.
Conditioned reinforcement and choice (total citations: 1; self-citations: 1; citations by others: 0)
In a series of three experiments, rats were exposed to successive schedule components arranged on two levers, in which lever pressing produced a light, and nose-key pressing produced water in 50% of the light periods. When one auditory signal was presented only during those light periods correlated with water on one lever, and a different signal was presented only during those light periods correlated with nonreinforcement on the other lever, the former lever was preferred in choice trials, and higher rates of responding were maintained on the former lever in nonchoice (forced) trials. Thus, the rats preferred a schedule component that included a conditioned reinforcer over one that did not, with the schedules of primary reinforcement and the information value of the signals equated. Preferences were maintained when one or the other of the auditory signals was deleted, but were not established in naive subjects when training began with either the positive or negative signal only. Discriminative control of nose-key pressing by the auditory signals was highly variable across subjects and was not correlated with choice.

3.
Stimuli associated with primary reinforcers appear themselves to acquire the capacity to strengthen behavior. This paper reviews research on the strengthening effects of conditioned reinforcers within the context of contemporary quantitative choice theories and behavioral momentum theory. Based partially on the finding that variations in parameters of conditioned reinforcement appear not to affect response strength as measured by resistance to change, long-standing assertions that conditioned reinforcers do not strengthen behavior in a reinforcement-like fashion are considered. A signposts or means-to-an-end account is explored and appears to provide a plausible alternative interpretation of the effects of stimuli associated with primary reinforcers. Related suggestions that primary reinforcers also might not have their effects via a strengthening process are explored and found to be worthy of serious consideration.
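Behavioral momentum theory, referenced in this abstract, gives resistance to change a standard quantitative form. The equation below is the conventional statement of the model (e.g., as presented by Nevin and colleagues); the symbols are the customary ones and are not drawn from this abstract:

```latex
\log\frac{B_x}{B_0} = \frac{-x}{r^{a}}
```

Here $B_0$ is the baseline response rate, $B_x$ the response rate under a disruptor of magnitude $x$, $r$ the baseline rate of reinforcement obtained in the presence of the component stimulus, and $a$ a sensitivity parameter. On this view the stimulus–reinforcer relation, rather than the response–reinforcer contingency, determines resistance to change, which is why the review treats resistance to change as the relevant measure of response strength.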

4.
5.
Pigeons were trained on a discrete-trials, simultaneous discrimination procedure, with confusable stimuli such that asymptotic performance was about 85% correct. Trials were terminated if no response occurred within 2 sec of stimulus onset, so that probability of responding was free to vary. The schedule of reinforcement for correct responses was varied, with the following results: (1) There was no relation between frequency of reinforcement and accuracy of responding. (2) In extinction, the probability of responding fell to low levels, but accuracy remained roughly constant. (3) When reinforcement was available after a fixed number of trials or after a fixed number of correct responses, the probability of responding increased with successive trials after reinforcement, but accuracy was generally constant. (4) When every fifth correct response was reinforced, accuracy decreased immediately after reinforcement if the birds were required to respond on every trial.

6.
Conditioned suppression is a decrease in response rate during a relatively short-duration stimulus that terminates independently of the animal's behavior and coincidentally with a brief unavoidable shock. The degree of conditioned suppression was measured for each of three birds on three variable-ratio schedules; that is, the number of responses required for food reinforcement was varied around a mean of 50, 100, or 200. The results indicated a slight and possibly negligible decrease in the degree of suppression as the mean number of responses required on the schedule was increased from 50 to 100 to 200. In general, it was found that all of the variable-ratio schedules tested were quite insensitive to the conditioned suppression procedure, although almost complete suppression was obtained on a few occasions. Since the reinforcement was contingent upon the emission of responses, the birds typically displayed a high rate of response during the pre-shock stimulus on all schedules. In addition, the rate during the pre-shock stimulus often changed abruptly, independently of the presentation of a reinforcement. As a result of the high rate of response and the abrupt changes in rate, the degree of suppression from trial to trial was quite variable. A clear analysis of an experimental variable on this baseline is thus difficult.
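The abstract does not state how the degree of suppression was quantified; a common index in the conditioned-suppression literature is the Annau–Kamin suppression ratio, sketched below. The function name and the convention for handling zero responding are illustrative assumptions, not taken from the article:

```python
def suppression_ratio(pre_cs_responses, cs_responses):
    """Annau-Kamin suppression ratio: B / (A + B).

    A = responses in the period just before the pre-shock stimulus,
    B = responses during the stimulus (equal-duration periods).
    0.5 indicates no suppression; 0.0 indicates complete suppression.
    """
    a, b = pre_cs_responses, cs_responses
    if a + b == 0:
        return 0.5  # no responding in either period: treat as no change
    return b / (a + b)

print(suppression_ratio(40, 40))  # no suppression -> 0.5
print(suppression_ratio(40, 0))   # complete suppression -> 0.0
```

With the high and abruptly changing pre-shock rates described above, both A and B are noisy, which is one way to see why trial-to-trial suppression values on these ratio schedules were so variable.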

7.
8.
Conditioned reinforcement value and choice (total citations: 4; self-citations: 4; citations by others: 0)
The delay-reduction hypothesis of conditioned reinforcement states that the reinforcing value of a food-associated stimulus is determined by the delay to primary reinforcement signaled by the onset of the stimulus relative to the average delay to primary reinforcement in the conditioning situation. In contrast, most contemporary models of conditioned reinforcement strength posit that the reinforcing strength of a stimulus is some simple function only of the delay to primary reinforcement in the presence of the stimulus. The delay-reduction hypothesis diverges from other conditioned reinforcement models in that it predicts that a fixed-duration food-paired stimulus will have different reinforcing values depending on the frequency of its presentation. In Experiment 1, pigeons' key pecks were reinforced according to concurrent-chains schedules with variable-interval 10-second and variable-interval 20-second terminal-link schedules. The initial-link schedule preceding the shorter terminal link was always variable-interval 60 seconds, and the initial-link schedule requirement preceding the longer terminal link was varied between 1 second and 60 seconds across conditions. In Experiment 2, the initial-link schedule preceding the longer of two terminal links was varied for each of three groups of pigeons. The terminal links of the concurrent chains for the three groups were variable-interval 10 seconds and 20 seconds, variable-interval 10 seconds and 30 seconds, and variable-interval 30 seconds and 50 seconds. In both experiments, preference for the shorter terminal link was either a bitonic function or an inverse function of the initial-link schedule preceding the longer terminal-link schedule. Consistent with the predictions of the delay-reduction hypothesis, the relative values of the terminal-link stimuli changed as a function of the overall frequency of primary reinforcement. Vaughan's (1985) melioration model, which was shown to be formally similar to Squires and Fantino's (1971) delay-reduction model, can be modified so as to predict these results without changing its underlying assumptions.
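The delay-reduction prediction can be sketched numerically. The toy function below uses the simpler Fantino (1969) form of the hypothesis rather than the full Squires and Fantino (1971) model, and it assumes equal, concurrently timed initial links with roughly equal terminal-link entry rates; the function name and these simplifications are illustrative and are not taken from the article:

```python
def delay_reduction_preference(t_left, t_right, initial_link):
    """Toy delay-reduction calculation (after Fantino, 1969).

    t_left, t_right: mean terminal-link delays to food (s).
    initial_link: mean duration of the (equal) initial-link VI schedules (s).
    Returns the predicted proportion of choices for the left alternative.
    """
    # Overall mean time to food from initial-link onset: with two equal
    # concurrent VI initial links, first terminal-link entry takes roughly
    # half the scheduled interval, and each side is entered about equally
    # often (simplifying assumptions).
    T = initial_link / 2 + (t_left + t_right) / 2
    dr_left = T - t_left    # delay reduction signaled by the left stimulus
    dr_right = T - t_right
    if dr_left <= 0:        # a stimulus signaling no delay reduction
        return 0.0          # yields exclusive preference for the other side
    if dr_right <= 0:
        return 1.0
    return dr_left / (dr_left + dr_right)

# Shorter terminal links are preferred, and preference moderates as the
# initial links lengthen (the context time T grows relative to t):
print(delay_reduction_preference(10, 20, 60))
print(delay_reduction_preference(10, 20, 240))
```

Lengthening the initial links raises the common context time T, shrinking the relative difference between the two delay reductions, which is one way to see why preference for the shorter terminal link varied with the initial-link schedule in these experiments.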

9.
Conditioned reinforcement in second-order schedules (total citations: 9; self-citations: 6; citations by others: 3)
Pigeons responded under a schedule in which food was presented only after a fixed number of fixed-interval components were completed. Two such second-order schedules were studied: under one, 30 consecutive 2-min fixed-interval components were required; under the other, 15 consecutive 4-min fixed-interval components were required. Under both schedules, when a 0.7-sec stimulus light was presented at completion of each fixed interval, positively accelerated responding developed in each component. When no stimulus change occurred at completion of each fixed interval, relatively low and constant rates of responding prevailed in each component; a similar result was obtained when a 0.7-sec stimulus change occurred at completion of each fixed interval except the one which terminated with primary reinforcement. The 0.7-sec stimulus correlated with food delivery was an effective conditioned reinforcer in maintaining patterns of responding in fixed-interval components despite low average frequencies of food reinforcement.

10.
Conditioned reinforcement value and resistance to change (total citations: 1; self-citations: 0; citations by others: 1)
Three experiments examined the effects of conditioned reinforcement value and primary reinforcement rate on resistance to change using a multiple schedule of observing-response procedures with pigeons. In the absence of observing responses in both components, unsignaled periods of variable-interval (VI) schedule food reinforcement alternated with extinction. Observing responses in both components intermittently produced 15 s of a stimulus associated with the VI schedule (i.e., S+). In the first experiment, a lower-valued conditioned reinforcer and a higher rate of primary reinforcement were arranged in one component by adding response-independent food deliveries uncorrelated with S+. In the second experiment, one component arranged a lower-valued conditioned reinforcer but a higher rate of primary reinforcement by increasing the probability of VI schedule periods relative to extinction periods. In the third experiment, the two observing-response components provided similar rates of primary reinforcement but arranged differently valued conditioned reinforcers. Across the three experiments, observing-response rates were typically higher in the component associated with the higher-valued conditioned reinforcer. Resistance to change was not affected by conditioned reinforcement value, but was an orderly function of the rate of primary reinforcement obtained in the two components. One interpretation of these results is that S+ value does not affect response strength and that S+ deliveries increase response rates through a mechanism other than reinforcement. Alternatively, because resistance to change depends on the discriminative stimulus-reinforcer relation, the failure of S+ value to impact resistance to change could have resulted from a lack of transfer of S+ value to the broader discriminative context.

11.
The concept of conditioned reinforcement has received decreased attention in learning textbooks over the past decade, in part because of criticisms of its validity by major behavior theorists and in part because its explanatory function in a variety of different conditioning procedures has become uncertain. Critical data from the major procedures that have been used to investigate the concept (second-order schedules, chain schedules, concurrent chains, observing responses, delay-of-reinforcement procedures) are reviewed, along with the major issues of interpretation. Although the role played by conditioned reinforcement in some procedures remains unresolved, the results taken together leave little doubt that the underlying idea of conditioned value is a critical component of behavior theory that is necessary to explain many different types of data. Other processes (marking, bridging) may also operate to produce effects similar to those of conditioned reinforcement, but these clearly cannot explain the full domain of experimental effects ascribed to conditioned reinforcement and should be regarded as complements to the concept rather than theoretical competitors. Examples of practical and theoretical applications of the concept of conditioned reinforcement are also considered.

12.
Pigeons were trained on three-component chain schedules in which the initial component was either a fixed-interval or variable-interval schedule. The middle and terminal components were varied among fixed-interval fixed-interval, variable-interval variable-interval, and an interdependent variable-interval variable-interval schedule in which the sum of the durations of the two variable-interval components was always equal to the sum of the fixed-interval fixed-interval components. At issue was whether the response rate in the initial component was controlled by its time to primary reinforcement or by the temporal parameters of the stimuli correlated with the middle and terminal links. The fixed-interval initial-link schedule maintained much lower response rates than the variable-interval initial-link schedule regardless of the schedules in the middle and terminal links. Nevertheless, the intervening schedules played some role: With fixed-interval schedules in the initial links, response rates were consistently highest with independent variable-interval schedules in the middle and terminal links and intermediate with the interdependent variable-interval schedules; these initial-link differences were predicted by the response rates in the middle link of the chain. With variable-interval schedules in the initial links, response rates were lowest with the fixed-interval fixed-interval schedules following the initial link and were not systematically different for the two types of variable-interval variable-interval schedules. The results suggest that time to reinforcement itself accounts for little if any variance in initial-link responding.

13.
14.
In Experiment 1, Japanese monkeys were trained on three conditional position-discrimination problems with colors as the conditional cues. Within each session, each problem was presented for two blocks of ten reinforcements; correct responses were reinforced under continuous-reinforcement (CRF), fixed-ratio (FR) 5, and variable-ratio 5 schedules, each assigned to one of the three problems. The assignment of schedules to problems was rotated a total of three times (15 sessions per assignment) after 30 sessions of acquisition training. Accuracy of discrimination increased to a moderate level with fewer trials under CRF than under ratio schedules. In contrast, the two ratio schedules, fixed and variable, were more effective in maintaining accurate discrimination than was CRF. With further training, as asymptotes were reached, accuracy was less affected by the schedule differences. These results demonstrated an interaction between the effects of reinforcement schedules and the level of acquisition. In Experiment 2, ratio sizes were gradually increased to 30. Discrimination accuracy was maintained until the ratio reached 20; ratio 30 strained the performance. Under FR conditions, accuracy increased as correct choice responses accumulated after reinforcement.

15.
16.
A concurrent-chains schedule was used to examine how a delay to conditional discriminative stimuli affects conditioned reinforcement strength. Pigeons' key-peck responses in the initial link produced either of two terminal links according to independent variable-interval 30-s schedules. Each terminal link involved an identical successive conditional discrimination and was segmented into three links: a delay interval (green), a color conditional discriminative stimulus (blue or red), and a line conditional discriminative stimulus (vertical or horizontal lines). Food delivery occurred 45 s after entering the terminal link with a probability of .5, but its conditional probability (1.0 or 0) depended on the combination of the color and the line stimuli. One of the color stimuli occurred independently of further responding, 5 s after entry into the right terminal link, but it occurred 35 s after entry into the left terminal link. One of the line stimuli occurred independently of responding 40 s after entry into either terminal link, synchronized with the offset of the color stimulus. The initial-link relative response rate for the right key was consistently higher than in a control condition in which the color stimuli occurred 20 s after entry into either terminal link. The preference for the short delay to the color conditional discriminative stimuli suggests the possibility of conditioned reinforcement by information about the relation between the line conditional discriminative stimuli and the outcomes.

17.
This experiment compared two modes of practice at a difficult frequency discrimination, i.e., one in which the frequency difference was initially correctly discriminated on only 65% of the trials in a two-alternative forced-choice task. One group of Ss (N = 13) was assigned to a progressive-practice group, in which the frequency difference to be discriminated was progressively changed from a large, easy difference to the small, difficult difference. The other group of Ss (N = 13) received the same amount of practice as the first, but all at the difficult discrimination. Only the progressive-practice group improved their discrimination performance. Since no feedback was given, the effect of progressive practice is interpreted as "shaping" Ss' attentional response by virtue of the information provided by the successively more difficult discriminations. This "shaping" process is potentially available as a learning mechanism for other fine discriminations.

18.
19.
20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)