Similar documents (20 results)
1.
In this article, we describe a test of the active time model for concurrent variable-interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained on a multiple concurrent-schedule procedure similar to that employed by Belke (1992), with VI 20-s and VI 40-s schedules in one component, and VI 40-s and VI 80-s schedules in the other component. However, rather than using a free-operant design, we used a discrete-trial procedure that restricted interresponse times to a range of 0.5-9.0 s. After 45 sessions of training, unreinforced probe periods were mixed with reinforced training periods. These probes paired the two stimuli associated with the VI 40-s schedules. Further, the probes were defined such that during their occurrence, interresponse times were either "short" (0.5-3.0 s) or "long" (7.5-9.0 s). All pigeons showed a preference for the stimulus associated with the relatively rich VI 40-s schedule, a result mirroring that of Belke. We also observed, though, that this preference was more extreme during long probes than during short probes, a result predicted by ATM.
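For readers unfamiliar with the baseline relation against which such probe results are evaluated, the sketch below shows the strict-matching prediction that relative response rates track relative programmed reinforcement rates on concurrent VI VI schedules. It is not the active time model itself, and the assumption of perfect matching is illustrative only; the schedule values are those named in the abstract.

```python
# Illustrative only: strict matching on concurrent VI VI schedules.
# Predicted relative response rate = relative programmed reinforcement rate.

def matching_prediction(vi_a_s, vi_b_s):
    """Return the predicted proportion of responses to alternative A,
    assuming strict matching to programmed reinforcement rates."""
    rate_a = 1.0 / vi_a_s   # reinforcers per second on A
    rate_b = 1.0 / vi_b_s   # reinforcers per second on B
    return rate_a / (rate_a + rate_b)

# Component 1: VI 20-s vs. VI 40-s -> about 0.67 of responses to the VI 20-s key
print(matching_prediction(20, 40))
# Component 2: VI 40-s vs. VI 80-s -> about 0.67 of responses to the VI 40-s key
print(matching_prediction(40, 80))
```

Note that matching alone predicts the same 2:1 preference in both components, which is why probe pairings of the two VI 40-s stimuli are informative about variables beyond programmed reinforcement rate.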

2.
Pigeons were trained on multiple schedules that provided concurrent reinforcement in each of two components. In Experiment 1, one component consisted of a variable-interval (VI) 40-s schedule presented with a VI 20-s schedule, and the other a VI 40-s schedule presented with a VI 80-s schedule. After extended training, probe tests measured preference between the stimuli associated with the two 40-s schedules. Probe tests replicated the results of Belke (1992) that showed preference for the 40-s schedule that had been paired with the 80-s schedule. In a second condition, the overall reinforcer rate provided by the two components was equated by adding a signaled VI schedule to the component with the lower reinforcer rate. Probe results were unchanged. In Experiment 2, pigeons were trained on alternating concurrent VI 30-s VI 60-s schedules. One schedule provided 2-s access to food and the other provided 6-s access. The larger reinforcer magnitude produced higher response rates and was preferred on probe trials. Rate of changeover responding, however, did not differ as a function of reinforcer magnitude. The present results demonstrate that preference on probe trials is not a simple reflection of the pattern of changeover behavior established during training.

3.
Morphological dictates of English usage call for the unvoiced allomorph /-s/ to form the plural of singular nouns with unvoiced endings (e.g., cups). Conversely, the voiced allomorph /-z/ is required to form the plural of nouns with voiced endings (e.g., trees). The study sought to determine the extent to which differential reinforcement could control the acquisition of plural allomorphs in two retarded subjects. In Condition 1, one subject was trained with reinforcement procedures on a list of words calling for the /-s/ allomorph. She was then given unreinforced probe items to determine the extent of generalization to words calling for the /-z/ allomorph. In Condition 2, the procedures were reversed and this subject was trained on a /-z/ list and probed for generalization of /-z/ to words calling for /-s/. A second subject was exposed to the same conditions in the opposite order. The results for the two subjects lent unequivocal support for the hypothesis of generalized training effects. It was concluded that appropriate usage of the linguistic response class "plurals" is susceptible to generalized training effects of differential reinforcement.

4.
Preference after training with differential changeover delays
Pigeons were trained on a multiple schedule in which each component consisted of concurrent variable-interval (VI) 30-s VI 60-s schedules. The two components of the multiple schedule differed only in terms of the changeover delays (COD): For one component short CODs were employed, and in the second component long CODs were used. After approximate matching was obtained in each component, probe tests involving new combinations of stimuli were presented (e.g., the VI 30-s schedule from each component) to determine how the different CODs affected preference. Despite shorter CODs producing higher changeover rates, the COD value had no systematic effect on preference on the probe trials. However, differences in reinforcement rate always produced preference for the schedule with the higher reinforcement rate. The results thus show that the pattern of changeover behavior per se is not a critical determinant of choice in the probe-trial procedure.

5.
Pigeons were trained on two temporal bisection tasks, which alternated every two sessions. In the first task, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; in the second task, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. Then the pigeons were exposed to a series of test trials in order to contrast two timing models, Learning-to-Time (LeT) and Scalar Expectancy Theory (SET). The models made substantially different predictions particularly for the test trials in which the sample duration ranged from 1 s to 16 s and the choice keys were Green and Blue, the keys associated with the same 4-s samples: LeT predicted that preference for Green should increase with sample duration, a context effect, but SET predicted that preference for Green should not vary with sample duration. The results were consistent with LeT. The present study adds to the literature the finding that the context effect occurs even when the two basic discriminations are never combined in the same session.

6.
Fat-tailed dunnarts (Sminthopsis crassicaudata) were trained on visual discrimination learning-set, reversal-set, and spatial delayed-alternation tasks. The learning set involved 36 2-way black-and-white pattern discriminations and 5 probe reversals. Ten reversals of a black-and-white pattern discrimination were followed by 5 novel tasks. Spatial alternation was tested at delays up to 20 s. Learning-set and reversal-set formation, including 1-trial learning and spontaneous transfer from learning set to reversals and vice versa, was found. Learning-set-experienced dunnarts showed no retention of previously learned tasks 1 week after testing but demonstrated consistently high Trial 2 performance, indicating the retention of a response strategy. Delayed-alternation tasks were learned up to 10-s delays. These results provide the first evidence of a visually guided "win-stay, lose-shift" strategy in a marsupial.

7.
Changeover behavior and preference in concurrent schedules
Pigeons were trained on a multiple schedule of reinforcement in which separate concurrent schedules occurred in each of two components. Key pecking was reinforced with milo. During one component, a variable-interval 40-s schedule was concurrent with a variable-interval 20-s schedule; during the other component, a variable-interval 40-s schedule was concurrent with a variable-interval 80-s schedule. During probe tests, the stimuli correlated with the two variable-interval 40-s schedules were presented simultaneously to assess preference, measured by the relative response rates to the two stimuli. In Experiment 1, the concurrently available variable-interval 20-s schedule operated normally; that is, reinforcer availability was not signaled. Following this baseline training, relative response rate during the probes favored the variable-interval 40-s alternative that had been paired with the lower valued schedule (i.e., with the variable-interval 80-s schedule). In Experiment 2, a signal for reinforcer availability was added to the high-value alternative (i.e., to the variable-interval 20-s schedule), thus reducing the rate of key pecking maintained by that schedule but leaving the reinforcement rate unchanged. Following that baseline training, relative response rates during probes favored the variable-interval 40-s alternative that had been paired with the higher valued schedule. The reversal in the pattern of preference implies that the pattern of changeover behavior established during training, and not reinforcement rate, determined the preference patterns obtained on the probe tests.

8.
Killeen and Fetterman's (1988) behavioral theory of animal timing predicts that decreases in the rate of reinforcement should produce decreases in the sensitivity (A') of temporal discriminations and a decrease in miss and correct rejection rates (decrease in bias toward "long" responses). Eight rats were trained on a 10- versus 0.1-s temporal discrimination with an intertrial interval of 5 s and were subsequently tested on probe days on the same discrimination with intertrial intervals of 1, 2.5, 5, 10, or 20 s. The rate of reinforcement declined for all animals as intertrial interval increased. Although sensitivity (A') decreased with increasing intertrial interval, all rats showed an increase in bias to make long responses.
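For readers who do not work with the nonparametric sensitivity index A', the sketch below computes it with the standard Grier (1971) formula from a hit rate and a false-alarm rate. The numeric values are hypothetical and are not data from this study; they only illustrate how lower sensitivity shows up in the index.

```python
# Nonparametric sensitivity A' (Grier, 1971) from hit rate H and false-alarm rate F.
# The rates below are hypothetical and only illustrate the computation.

def a_prime(hit_rate, fa_rate):
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / (
            4 * hit_rate * (1 - fa_rate))
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / (
        4 * fa_rate * (1 - hit_rate))

# Treating "long" as the target response: hits = "long" responses after 10-s signals,
# false alarms = "long" responses after 0.1-s signals.
print(a_prime(0.90, 0.20))  # higher sensitivity
print(a_prime(0.75, 0.35))  # lower sensitivity, as reported with longer intertrial intervals
```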

9.
Five experiments examined recognition memory for sequentially presented odors. Participants were presented with a sequence of odors and then had to identify an odor from the list in a test probe containing 2 odors. All experiments demonstrated enhanced recognition of odors presented at the start and end of a series, compared with those presented in the middle of the series when a 3-s retention interval between list termination and test was used. In Experiments 2 and 3, when a 30-s or 60-s retention interval was used, participants performed at slightly lower levels, although the serial position function was similar to that obtained with the 3-s retention interval. These results were noted with a 5-item (Experiments 1 and 4), 7-item (Experiment 2), 6-item (Experiment 3), and 4-item (Experiment 5) list of odors. As the number of test trials increased, recognition performance decreased, indicating a strong role for olfactory fatigue or interference in these procedures. A verbal suppression task, used in Experiments 4 and 5, had little influence on serial-position-based performance.

10.
In the present study we extended errorless learning to a conditional temporal discrimination. Pigeons' responses to a left-red key after a 2-s sample and to a right-green key after a 10-s sample were reinforced. There were two groups: One learned the discrimination through trial and error and the other through an errorless learning procedure. Then, both groups were presented with three types of tests. First, they were exposed to intermediate durations between 2 s and 10 s, and given a choice between both keys (stimulus generalization test). Second, a delay from 1 s to 16 s was included between the offset of the sample and the onset of the choice keys (delay test). Finally, pigeons learned a new discrimination in which the stimuli were switched (reversal test). Results showed that pigeons from the Errorless group made significantly fewer errors than those in the Trial-and-Error group. Both groups performed similarly during the stimulus generalization test and the reversal test, but results of the delay test suggested that, on long stimulus trials, responding in the errorless training group was less disrupted by delays.

11.
In a discrete-trial procedure, pigeons could choose between 2-s and 6-s access to grain by making a single key peck. In Phase 1, the pigeons obtained both reinforcers by responding on fixed-ratio schedules. In Phase 2, they received both reinforcers after simple delays, arranged by fixed-time schedules, during which no responses were required. In Phase 3, the 2-s reinforcer was available through a fixed-time schedule and the 6-s reinforcer was available through a fixed-ratio schedule. In all conditions, the size of the delay or ratio leading to the 6-s reinforcer was systematically increased or decreased several times each session, permitting estimation of an "indifference point," the schedule size at which a subject chose each alternative equally often. By varying the size of the schedule for the 2-s reinforcer across conditions, several such indifference points were obtained from both fixed-time conditions and fixed-ratio conditions. The resulting "indifference curves" from fixed-time conditions and from fixed-ratio conditions were similar in shape, and they suggested that a hyperbolic equation describes the relation between ratio size and reinforcement value as well as the relation between reinforcer delay and its reinforcement value. The results from Phase 3 showed that subjects chose fixed-time schedules over fixed-ratio schedules that generated the same average times between a choice response and reinforcement.
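The hyperbolic equation referred to here is commonly written V = A / (1 + kD), with A the reinforcer amount, D the delay, and k a discounting parameter. The sketch below shows how an indifference point can be located by equating the values of the 2-s and 6-s reinforcers; the k value and the example delay are arbitrary choices for illustration, not parameters reported in the article.

```python
# Illustrative hyperbolic-value computation, V = A / (1 + k*D).
# A = reinforcer amount (s of food access), D = delay (s), k = discounting parameter.
# The k value and the example delay are arbitrary and chosen only for illustration.

def value(amount, delay, k=0.2):
    return amount / (1 + k * delay)

def indifference_delay(small_amount, small_delay, large_amount, k=0.2):
    """Delay to the large reinforcer at which its value equals that of the
    small reinforcer (the indifference point)."""
    target = value(small_amount, small_delay, k)
    # Solve large_amount / (1 + k*D) = target for D.
    return (large_amount / target - 1) / k

# If the 2-s reinforcer comes after a 5-s delay, the 6-s reinforcer is
# equally valued at roughly this delay (with k = 0.2):
print(indifference_delay(2, 5, 6))   # -> 25.0 s
```

The finding that fixed-time and fixed-ratio indifference curves were similar in shape suggests the same hyperbolic form applies when D is replaced by a quantity proportional to ratio size.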

12.
Behavioral data suggest that distinguishable orientations may be necessary for place learning even when distal cues define different start points in the room and a unique goal location. We examined whether changes in orientation are also important in place learning and navigation in a water T-maze. In Experiment 1, rats were trained to locate a hidden platform and given a no-platform probe trial after 16 and 64 trials with the maze moved to a new position. Direction and response strategies were more prevalent than a place strategy. In Experiment 2, acquisition of place, response, and direction strategies was assessed in a water T-maze that was moved between two locations during training. Rats were impaired on the place task when the maze was translated (moved to the left or right) but were successful when the maze was rotated across trials. These data are consistent with findings from appetitive tasks.

13.
Two theories of timing, scalar expectancy theory (SET) and learning to time (LeT), make substantially different assumptions about what animals learn in temporal tasks. In a test of these assumptions, pigeons learned two discriminations: On Type 1 trials, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; on Type 2 trials, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. Then, two psychometric functions were obtained by presenting them with intermediate durations (1 to 4 s and 4 to 16 s). The two functions did not superpose, and most bisection points were not at the geometric mean of the training stimuli (contra SET); for most birds, the function for Type 2 trials was to the left of the function for Type 1 trials (contra LeT). Finally, the birds were exposed to signals ranging from 1 to 16 s and given a choice between novel key combinations (e.g., red vs. blue). The results with the novel key combinations were always closer to LeT's than to SET's predictions. Observations of the birds' behavior also suggest that, more than being a mere expression of an internal clock, behavior constitutes the clock.
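The SET prediction tested here, that bisection points fall at the geometric mean of the training durations, is easy to make concrete; the short check below uses the anchor durations named in the abstract and is standard arithmetic, not a result from the article.

```python
# SET predicts bisection at the geometric mean of the anchor durations.
from math import sqrt

print(sqrt(1 * 4))    # Type 1 trials (1 s vs. 4 s)  -> 2.0 s
print(sqrt(4 * 16))   # Type 2 trials (4 s vs. 16 s) -> 8.0 s
```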

14.
Three rhesus monkeys were trained and tested in a same/different task with six successive sets of 70 item pairs to 88% accuracy on each set. Their poor initial transfer performance (55% correct) with novel stimuli improved dramatically to 85% correct following daily item changes in the training stimuli. They acquired a serial-probe-recognition (SPR) task with variable (1-6) item list lengths. This SPR acquisition, although gradual, was more rapid for the monkeys than for pigeons similarly trained. Testing with a fixed list length of four items at different delays between the last list item and the probe test item revealed changes in the serial-position function: a recency effect (last items remembered well) for 0-s delay, recency and primacy effects (first and last list items remembered well) for 1-, 2-, and 10-s delays, and only a primacy effect for the longest 30-s delay. These results are compared with similar ones from pigeons and are discussed in relation to theories of memory processing.

15.
Experiment 1 investigated the effects of reinforcer magnitude on differential-reinforcement-of-low-rate (DRL) schedule performance in three phases. In Phase 1, two groups of rats (n = 6 and 5) responded under a DRL 72-s schedule with reinforcer magnitudes of either 30 or 300 microliters of water. After acquisition, the water amounts were reversed for each rat. In Phase 2, the effects of the same reinforcer magnitudes on DRL 18-s schedule performance were examined across conditions. In Phase 3, each rat responded under a DRL 18-s schedule in which the water amounts alternated between 30 and 300 microliters daily. Throughout each phase of Experiment 1, the larger reinforcer magnitude resulted in higher response rates and lower reinforcement rates. The peak of the interresponse-time distributions was at a lower value under the larger reinforcer magnitude. In Experiment 2, 3 pigeons responded under a DRL 20-s schedule in which reinforcer magnitude (1-s or 6-s access to grain) varied from session to session. Higher response rates and lower reinforcement rates occurred under the longer hopper duration. These results demonstrate that larger reinforcer magnitudes engender less efficient DRL schedule performance in both rats and pigeons, whether reinforcer magnitude was held constant between sessions or varied daily. The present results are consistent with previous research demonstrating a decrease in efficiency as a function of increased reinforcer magnitude under procedures that require a period of time without a specified response. These findings also support the claim that DRL schedule performance is not governed solely by a timing process.
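A rough sketch of why a leftward shift in the interresponse-time (IRT) peak lowers DRL efficiency: on a DRL schedule only IRTs at or above the criterion are reinforced, so moving the bulk of the IRT distribution toward shorter values reduces reinforcers earned per response. The IRT distribution below is an assumed Gaussian with hypothetical parameters, not a fit to the data reported in the article.

```python
# Hypothetical illustration of DRL efficiency: only IRTs >= the DRL criterion
# are reinforced, so shifting the IRT distribution toward shorter values
# (as under the larger reinforcer magnitude) lowers reinforcers per response.
import random

def drl_efficiency(mean_irt, criterion=18.0, sd=4.0, n=10_000, seed=0):
    random.seed(seed)
    irts = [max(0.1, random.gauss(mean_irt, sd)) for _ in range(n)]
    reinforced = sum(irt >= criterion for irt in irts)
    return reinforced / n   # proportion of responses that are reinforced

print(drl_efficiency(mean_irt=20.0))  # IRT peak above the criterion -> higher efficiency
print(drl_efficiency(mean_irt=16.0))  # IRT peak shifted left        -> lower efficiency
```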

16.
The present research examines the semantic priming effects of a centrally presented single prime word to which participants were instructed to either "attend and remember" or "ignore". The prime word was followed by a central probe target on which participants performed a lexical decision task. The main variables manipulated across experiments were prime duration (50 or 100 ms), the presence or absence of a mask following the prime, and the presence (or absence) and type of distractor stimulus (random set of consonants or pseudowords) on the probe display. There was a consistent interaction between the instructions and the semantic priming effects. Relative to the "attend and remember" instruction, an "ignore" instruction produced reduced positive priming from single primes presented for 100 ms, irrespective of the presence or absence of a prime mask, and regardless of whether the probe target was presented with or without distractors. Additionally, reliable negative priming was found from ignored primes presented for briefer durations (50 ms) and immediately followed by a mask. Methodological and theoretical implications of the present findings for the extant negative priming literature are discussed.

17.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point--a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.
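The preference for a variable number of links over a fixed number with the same mean follows naturally from hyperbolic discounting: averaging V = A / (1 + kD) over a mix of delays yields more value than a single fixed delay equal to their mean, because occasional short delays are weighted heavily. The numeric illustration below uses arbitrary link counts averaging four and treats each link as roughly one 4-s delay unit; these values and the k parameter are assumptions, not quantities from the article.

```python
# Why variable delays (or numbers of links) are preferred over a fixed delay
# with the same mean, under hyperbolic discounting V = A / (1 + k*D).
# Link counts, link duration, and k are illustrative assumptions only.

def value(delay, amount=1.0, k=0.2):
    return amount / (1 + k * delay)

def mean_value(link_counts, link_duration=4.0):
    """Average hyperbolic value over equally likely numbers of links."""
    return sum(value(n * link_duration) for n in link_counts) / len(link_counts)

fixed_links = [4, 4, 4, 4]       # always exactly four links
variable_links = [1, 2, 4, 9]    # variable, also averaging four links

print(mean_value(fixed_links))      # value of the fixed alternative
print(mean_value(variable_links))   # higher: the occasional short delays dominate
```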

18.
Agents that alter adrenergic receptors, such as "beta-blockers," also alter memory storage. However, reports suggest that beta-adrenergic receptor antagonists, such as propranolol, have conflicting behavioral effects with acute vs. chronic dosing. This study was designed to evaluate the effects of chronic propranolol on retention for a spatial learning task. Adult male ICR mice were given daily injections of propranolol (2, 4, 8, or 12 mg/kg ip) or 0.9% NaCl for 15 days prior to, and during, trials in a Morris water maze. Mice received five massed acquisition (escape) trials in each of three daily sessions, followed by a single 60-s probe trial on the fourth day. The location of the submerged platform was constant for each animal over acquisition trials, but varied across animals; starting position varied across trials. A 5 (dose) x 3 (trial blocks) mixed factorial ANOVA for escape time yielded a significant trial blocks effect only (p < .001), showing performance improving over sessions. Time spent in the target quadrant on the probe trial was shorter under all doses of propranolol when compared to the vehicle group (all p < .001), indicating poorer retention of prior platform location. This effect, however, was not dose-related. Swim speed was not significantly affected by propranolol. These data demonstrate that chronic dosing with propranolol can impair retention of spatial learning, which cannot be attributed to reduced arousal or motor function.

19.
Panel pressing was generated and maintained in 5 adult humans by schedules of points exchangeable for money. Following exposure to a variable-interval 30-s schedule and to a linear variable-interval 30-s schedule (which permitted points to accumulate in an unseen "store" in the absence of responding), subjects were exposed to a series of conditions with a point-subtraction contingency arranged conjointly with the linear variable-interval schedule. Specifically, points were added to the store according to the linear variable-interval 30-s schedule and were subtracted from the store according to a ratio schedule. Ratio value varied across conditions and was determined individually for each subject such that the subtraction contingency would result in an approximately 50% reduction in the rate of point delivery. Conditions that included the subtraction contingency were termed negative slope schedules because the feedback functions were negatively sloped across all response rates greater than the inverse of the variable-interval schedule, in this case, two per minute. Overall response rates varied inversely with the subtraction ratio, indicating sensitivity to the negative slope conditions, but were in excess of that required by accounts based on strict maximization of overall reinforcement rate. Performance was also not well described by a matching-based account. Detailed analyses of response patterning revealed a consistent two-state pattern in which bursts of high-rate responding alternated with periods of prolonged pausing, perhaps reflecting the joint influence of local and overall reinforcement rates.
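A sketch of the kind of feedback function the abstract describes, assuming a simple form in which the linear VI 30-s schedule makes points available at 2 per minute and the conjoint ratio subtracts one point delivery per `ratio` responses; this functional form and the response rates tried below are assumptions for illustration, not the study's exact equation.

```python
# Approximate feedback function for a negative-slope schedule: points become
# available at 2 per minute (linear VI 30-s), and one point delivery is
# subtracted per `ratio` responses. Simplified form, assumed for illustration.

def obtained_point_rate(responses_per_min, ratio, vi_rate_per_min=2.0):
    collected = min(vi_rate_per_min, responses_per_min)  # cannot collect faster than you respond
    subtracted = responses_per_min / ratio
    return collected - subtracted

for b in (1, 2, 10, 30, 60):
    print(b, obtained_point_rate(b, ratio=30))  # negatively sloped above 2 responses/min
```

Under this illustrative form, strict maximization of overall point rate would hold responding near 2 per minute, where the function peaks; the observed rates above that value are the excess responding the authors describe.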

20.
In a temporal double bisection task, animals learn two discriminations. In the presence of Red and Green keys, responses to Red are reinforced after 1-s samples and responses to Green are reinforced after 4-s samples; in the presence of Blue and Yellow keys, responses to Blue are reinforced after 4-s samples and responses to Yellow are reinforced after 16-s samples. Subsequently, given a choice between Green and Blue, the probability of choosing Green increases with the sample duration (the context effect). In the present study we asked whether this effect could be predicted from the stimulus generalization gradients induced by the two basic discriminations. Six pigeons learned to peck Green following 4-s samples (S+) but not following 1-s samples (S-) and to peck Red following 4-s samples (S+) but not following 16-s samples (S-). Temporal generalization gradients for Green and Red were then obtained. Finally, the pigeons were given a choice between Green and Red following sample durations ranging from 1 to 16 s. Results showed that (a) the two generalization gradients had the minimum at the S- duration, an intermediate value between the S- and the S+ durations, and the maximum at the S+ as well as more extreme durations; (b) on choice trials, preference for Green over Red increased with sample duration, the context effect; and (c) the two generalization gradients predicted the average context effect well. The Learning-to-Time model accounts for the major trends in the data.
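A hedged sketch of the kind of prediction scheme described above: choice proportions on Green-versus-Red probes can be derived from the two temporal generalization gradients by taking their relative heights at each sample duration. The gradient shape used here (a Gaussian bump on log time around S+ suppressed near S-), its width, and the resulting numbers are assumptions for illustration only, not the obtained gradients.

```python
# Illustrative prediction of the context effect from two generalization gradients.
# Each gradient is modeled as excitation around S+ multiplied by (1 - a bump
# around S-), with Gaussians on log time; shape and width are assumptions only.
from math import exp, log

def bump(t, center, width=0.6):
    return exp(-((log(t) - log(center)) ** 2) / (2 * width ** 2))

def gradient(t, s_plus, s_minus):
    return bump(t, s_plus) * (1 - bump(t, s_minus))

def p_choose_green(t):
    g_green = gradient(t, s_plus=4.0, s_minus=1.0)   # Green: S+ = 4 s, S- = 1 s
    g_red = gradient(t, s_plus=4.0, s_minus=16.0)    # Red:   S+ = 4 s, S- = 16 s
    return g_green / (g_green + g_red)

for t in (1, 2, 4, 8, 16):
    print(t, round(p_choose_green(t), 2))   # increases with t: the context effect
```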
