Similar Literature
20 similar documents found (search time: 46 ms)
1.
Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt-response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only in the word reading task, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.

2.
Discrete trial training was delivered in English and in Spanish to a student with autism from a Spanish-speaking family. An alternating treatments design was used to examine the effects of language of instruction on the child’s response accuracy and challenging behavior. More correct responses and fewer challenging behaviors occurred when instruction was delivered in Spanish compared to English. Results suggest that the language of instruction may be an important variable even when a student initially presents with very little spoken language and comparable scores on English and Spanish standardized language assessments.

3.
The name–picture verification task is widely used in spoken production studies to control for nonlexical differences between picture sets. In this task a word is presented first and followed, after a pause, by a picture. Participants must then make a speeded decision on whether both word and picture refer to the same object. Using regression analyses, we systematically explored the characteristics of this task by assessing the independent contribution of a series of factors that have been found relevant for picture naming in previous studies. We found that, for “match” responses, both visual and conceptual factors played a role, but lexical variables were not significant contributors. No clear pattern emerged from the analysis of “no-match” responses. We interpret these results as validating the use of “match” latencies as control variables in studies of spoken production using picture naming. Norms for match and no-match responses for 396 line drawings taken from Cycowicz, Friedman, Rothstein, and Snodgrass (1997) can be downloaded at: http://language.psy.bris.ac.uk/name-picture_verification.html
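To make the kind of item-level regression described above concrete, here is a minimal sketch of how such an analysis might be set up. The predictor names, the simulated data, and the coefficient values are assumptions for illustration only; they are not the authors' variables or results.

```python
# Illustrative sketch only: regressing per-item "match" verification latencies
# on item-level predictors. All data below are simulated, not the published norms.
import numpy as np

rng = np.random.default_rng(0)
n_items = 396  # number of line drawings in the norms

# Hypothetical item-level predictors (z-scored for comparability).
visual_complexity = rng.standard_normal(n_items)
image_agreement = rng.standard_normal(n_items)
concept_familiarity = rng.standard_normal(n_items)
log_word_frequency = rng.standard_normal(n_items)

# Hypothetical mean verification latency per item (ms).
latency = (650 + 20 * visual_complexity - 15 * concept_familiarity
           + rng.normal(0, 40, n_items))

# Ordinary least squares: intercept plus the four predictors.
X = np.column_stack([np.ones(n_items), visual_complexity, image_agreement,
                     concept_familiarity, log_word_frequency])
coefs, *_ = np.linalg.lstsq(X, latency, rcond=None)
for name, b in zip(["intercept", "visual_complexity", "image_agreement",
                    "concept_familiarity", "log_word_frequency"], coefs):
    print(f"{name:>20s}: {b:8.2f}")
```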

4.
When up and down stimuli are mapped to left and right keypresses or "left" and "right" vocalizations in a 2-choice reaction task, performance is often better with the up-right/down-left mapping than with the opposite mapping. This study investigated whether performance is influenced by the type of initiating action. In all, 4 experiments showed the up-right/down-left advantage to be reduced when the participant's initiating action was a left response compared with when it was a right response. This reduction occurred when the initiating action and response were both keypresses, both were spoken location names, and one was a spoken location name and the other a keypress. The results are consistent with the view that the up-right/down-left advantage is due to asymmetry in coding the alternatives on each dimension, and a distinction between categorical and coordinate spatial codes seems to provide the best explanation of the advantage.

5.
In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two of these (one using real words, the other nonwords), detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results of the first three experiments, except that miss rate was unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting a realistic appreciation of vowel variability in natural context.

6.
In two sets of experiments, we examined dimensional stimulus control of pigeons' responses to a visual flicker-rate continuum. In the first experiment, responses to a single key were reinforced periodically during stimuli from one half of the stimulus continuum, and responses during other stimuli were extinguished. In the second experiment, two response keys were simultaneously available, with reinforcement for each response alternative associated with different halves of the stimulus continuum. Conditions of the second experiment involved either free-operant or discrete-trial stimulus presentations. Results from these experiments show that positive dimensional contrast appeared in discrimination tasks with one or two response alternatives, but only with free-operant procedures. In addition, discrimination between stimulus classes established by differential reinforcement was assessed as accurately by continuous rate measures as by discrete response choice in the two-alternative situation. The general implication of these experiments is that response rate measures, when properly applied, may reveal sources of variation within stimulus classes, such as dimensional contrast, that are not evident with discrete measures.

7.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When a dual picture detection/identification task was used, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that sufficient processing time is needed for the auditory stimulus to access its associated meaning and modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
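As a rough illustration of the signal detection analysis mentioned above, the sketch below computes sensitivity (d′) and response criterion (c) from hit and false-alarm counts under a standard equal-variance Gaussian model. The counts and the log-linear correction are assumptions for illustration; the paper's actual estimation procedure may differ.

```python
# Minimal sketch (not from the paper): d' and criterion c from a yes/no
# picture detection task under an equal-variance signal detection model.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c); a log-linear correction keeps
    rates of 0 or 1 from producing infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Example: hypothetical counts for one participant in one SOA condition.
print(sdt_measures(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```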

8.
From a meta-analysis of recognition experiments using the remember-know-guess paradigm, Gardiner, Ramponi, and Richardson-Klavehn (2002) reported two findings that they viewed as evidence against the one-dimensional model for that paradigm: (1) Memory strength increased when know responses were added to remember responses, decreasing when guess responses were also included. (2) The accuracy of guess responses was correlated with the location of the old-new criterion in the one-dimensional model for the paradigm, implying that guesses were influenced by decision processes. We question both findings. The first result is contradicted by a signal-detection (SDT) analysis, which shows that both know and guess responses reduced estimated memory strength. The discrepancy results from the properties of A′, the measure of accuracy used by Gardiner et al., which we argue is flawed. The second result follows directly from the one-dimensional model, in which accuracy and response criteria are fixed. The authors' reasons for rejecting the one-dimensional model are thus not persuasive, but it can nonetheless be rejected because ROC curves implied by the data are inconsistent with ROCs derived from ratings experiments. A two-dimensional SDT model (Rotello, Macmillan, & Reeder, 2004) accounts for both sets of data. The analysis illustrates the importance of models in interpreting remember-know data.
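One way to see why a measure such as A′ can be problematic as a pure index of accuracy is that, under an equal-variance Gaussian model, it changes when only the response criterion changes. The sketch below illustrates that general point; it is not a reproduction of the analyses in the papers cited, and the d′ value and criterion settings are arbitrary.

```python
# Illustrative sketch (not the authors' analysis): the nonparametric measure A'
# is not invariant over criterion shifts under an equal-variance Gaussian model,
# whereas d' is held constant by construction.
from statistics import NormalDist

phi = NormalDist().cdf

def a_prime(hit_rate, fa_rate):
    """Standard A' formula for hit_rate >= fa_rate."""
    diff = hit_rate - fa_rate
    return 0.5 + (diff * (1 + diff)) / (4 * hit_rate * (1 - fa_rate))

d_prime = 1.0
for criterion in (-0.5, 0.0, 0.5, 1.0):
    hit_rate = phi(d_prime / 2 - criterion)
    fa_rate = phi(-d_prime / 2 - criterion)
    print(f"c = {criterion:+.1f}  H = {hit_rate:.3f}  F = {fa_rate:.3f}  "
          f"A' = {a_prime(hit_rate, fa_rate):.3f}")
# A' varies across criteria even though d' is fixed at 1.0.
```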

9.
Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

10.
Four forced-choice letter-detection experiments examined the effect on detection latency of noise letters that were visually similar to target letters. A single target letter was present in each display. Noise letters were similar to the target letter present in the display (the signal), to a different target letter assigned to the same response as the signal, or to a target letter assigned to the opposite response from the signal. Noise letters were present in either relevant or irrelevant display positions, and two quite different stimulus sets were used. The experiments were designed to test a prediction of models in which information about noise letters is transmitted continuously from the recognition to the decision process. These models predict that responses should be faster when noise letters are visually similar to a target assigned to the same response as the signal than when noise letters are similar to a target assigned to the opposite response. Statistically reliable effects of the type predicted by continuous models were obtained when noise letters appeared in relevant display positions, but not when they appeared in irrelevant positions.

11.
From a meta-analysis of recognition experiments using the remember-know-guess paradigm, Gardiner, Ramponi, and Richardson-Klavehn (2002) reported two findings that they viewed as evidence against the one-dimensional model for that paradigm: (1) Memory strength increased when know responses were added to remember responses, decreasing when guess responses were also included. (2) The accuracy of guess responses was correlated with the location of the old-new criterion in the one-dimensional model for the paradigm, implying that guesses were influenced by decision processes. We question both findings. The first result is contradicted by a signal-detection (SDT) analysis, which shows that both know and guess responses reduced estimated memory strength. The discrepancy results from the properties of A′, the measure of accuracy used by Gardiner et al., which we argue is flawed. The second result follows directly from the one-dimensional model, in which accuracy and response criteria are fixed. The authors' reasons for rejecting the one-dimensional model are thus not persuasive, but it can nonetheless be rejected because ROC curves implied by the data are inconsistent with ROCs derived from ratings experiments. A two-dimensional SDT model (Rotello, Macmillan, & Reeder, 2004) accounts for both sets of data. The analysis illustrates the importance of models in interpreting remember-know data.

12.
Tact training is a common element of many habilitative programs for individuals with developmental disabilities. A commonly recommended practice is to include a supplemental question (e.g., “What is this?”) during training trials for tacts of objects. However, the supplemental question is not a defining feature of the tact relation, and prior research suggests that its inclusion might sometimes impede tact acquisition. The present study compared tact training with and without the supplemental question in terms of acquisition and maintenance. Two of 4 children with autism acquired tacts more efficiently in the object-only condition; the remaining 2 children acquired tacts more efficiently in the object + question condition. During maintenance tests in the absence of the supplemental question, all participants emitted tacts at end-of-training levels across conditions with no differential effect observed between training conditions.

Key words: autism, language training, stimulus control, tacts, verbal behavior

Skinner (1957) defined the tact as a response “evoked by a particular object or event or property of an object or event” (p. 82) and considered it to be one of the most important verbal operants. Tacts are maintained by generalized social reinforcement and, thus, they are central to many social interactions. For example, the tact “That cloud looks like a horse” (under the control of a visual stimulus) could evoke a short verbal interaction about the sky or horses. The tact “My tummy hurts” (under the control of an interoceptive stimulus) could evoke soothing statements from a parent. A child who tacts “doggie” in the presence of a cat likely would evoke a correction statement from an adult, further refining two stimulus classes (i.e., dog and cat). These examples illustrate that, despite their topographical differences, the tact relations share antecedent control by a nonverbal discriminative stimulus (SD) and are maintained by generalized social reinforcement.

In habilitative programs for individuals with language impairments, autism, and intellectual disabilities, tacts often are taught for objects (e.g., ball), object features (e.g., color, size, shape), activities (e.g., jumping), prepositions (e.g., between), and emotions (e.g., sad), among others. Although conceptualized differently among therapeutic approaches, the tact relation occupies a central position in many early-intervention curricula. For example, Lovaas (2003) and Leaf and McEachin (1999) describe these relations as expressive labels and recommend that they be taught early in language training using three-dimensional objects accompanied by the supplemental questions “What is it?” or “What's this?” Alternatively, Sundberg and Partington (1998) explicitly refer to the relation as a tact and recommend beginning instruction by including the question “What is it?” before eventually fading the question. In addition to these clinical manuals, the use of supplemental questions during tact training has appeared in some empirical studies on tact or expressive-label training (e.g., Braam & Sundberg, 1991; Coleman & Stedman, 1974), but not others (e.g., Williams & Greer, 1993). Regardless of whether tact training initially includes supplemental questions prior to response opportunities, tacts ultimately should be emitted readily under the sole control of the nonverbal SD as well as when it happens to be accompanied by a question.

Conceptually, at least four potential problems could arise from introducing supplemental questions early and consistently in tact training. First, the acquired responses might not be emitted unless the question is posed (i.e., prompt dependence). This problem would lead to few spontaneous tacts occurring outside the explicit stimulus control of the training environment. Williams and Greer (1993) compared comprehensive language training conducted under the stimulus control specified in Skinner's (1957) taxonomy of verbal behavior to a more traditional psycholinguistic perspective with supplemental questions and instructions embedded within trials. For all three adolescents with developmental disabilities, the targets taught from the verbal behavior perspective were maintained better in natural contexts than those taught from the psycholinguistic perspective. However, because data were not reported for each individual verbal operant, it is unclear what specific impact their tact-training procedures had on the outcomes.

The second potential problem is that the supplemental question might acquire intraverbal control over early responses and interfere with the acquisition of subsequent tact targets. For example, Partington, Sundberg, Newhouse, and Spengler (1994) showed that the tact repertoire of a child with autism had been hindered by prior instruction during which she was asked “What is this?” while being shown an object. The supplemental question subsequently evoked previously acquired responses and blocked the ability of new nonverbal SDs (i.e., objects) to evoke new responses. Partington et al. then showed that new tacts were acquired by eliminating the supplemental question from instructional trials.

The third potential problem is that learners might imitate part of or the entire supplemental question prior to emitting the target response (e.g., “What is it” → “What is it … ball.”). For example, Coleman and Stedman (1974) demonstrated that a 10-year-old girl with autism imitated the question “What is this?” while being taught to label stimuli depicted in color photographs. Such an outcome results in a socially awkward tact repertoire and requires additional intervention to remedy the problem.

Finally, including supplemental questions during tact training might impede skill acquisition, perhaps via a combination of the problems described earlier. Sundberg, Endicott, and Eigenheer (2000) taught sign tacts to two young children with autism who had prior difficulty acquiring tacts. In one condition, the experimenter held up an object and asked, “What is that?” In the comparison condition, the experimenter intraverbally prompted the participant to “sign [object name]” in the presence of the object. Sundberg et al. demonstrated substantially more efficient tact acquisition under the sign-prompt condition than when the question “What is that?” was included in trials; the latter condition sometimes failed to produce mastery-level responding.

Teaching an entire tact repertoire while including supplemental questions (e.g., “What is it?”) during training trials could produce a learner who is able to talk about his or her environment only when asked to do so with similar questions. To the extent that this is not a therapist's clinical goal, teaching the tact under its proper controlling variables may eliminate such problems. Of course, inclusion of supplemental questions during the early phases of language training could be faded over time such that the target tact relation is left intact prior to the end of training (Sundberg & Partington, 1998). However, the aforementioned studies have documented problems with using supplemental questions during tact training. Given the ubiquity of tact training in habilitation programs, the numerous problems that may arise when supplemental questions are included in training trials, and the limited research on the topic, further investigation is warranted. Thus, the purpose of the present study was to compare directly the rate of acquisition and subsequent maintenance of tacts taught using only a nonverbal SD (i.e., object only) with tacts taught using a question (“What is this?”) in conjunction with the nonverbal SD (i.e., object + question). The present study extends earlier research by examining both acquisition and maintenance and by including individuals with no prior history of formal tact training.

13.
In two-choice tasks, the compatible mapping of left stimulus to left response and right stimulus to right response typically yields better performance than does the incompatible mapping. Nonetheless, when compatible and incompatible mappings are mixed within a block of trials, the spatial compatibility effect is eliminated. Two experiments evaluated whether the elimination of compatibility effects by mixing compatible and incompatible mappings is a general or specific phenomenon. Left-right physical locations, arrow directions, and location words were mapped to keypress responses in Experiment 1 and vocal responses in Experiment 2. With keypresses, mixing compatible and incompatible mappings eliminated the compatibility effect for physical locations and arrow directions, but enhanced it for words. With vocal responses, mixing significantly reduced the compatibility effect only for words. Overall, the mixing effects suggest that elimination or reduction of compatibility effects occurs primarily when the stimulus-response sets have both conceptual and perceptual similarity. This elimination may be due to suppression of a direct response-selection route, but to account for the full pattern of mixing effects it is also necessary to consider changes in an indirect response-selection route and the temporal activation properties of different stimulus-response sets.

14.
The Autobiographical Memory Test (AMT) is widely used in research contexts to measure the extent to which participants (children or adults) report specific or general memories in response to cue words. Recalling fewer specific and more general memories (overgeneral memory) has been shown to be linked to depression in adults, but findings for youth, in particular, are mixed. Different versions of the AMT may be one contributing factor, yet this issue has received little research attention. The current study investigated the influence of reporting mode (written vs. spoken) on the specificity, length, and content of memories provided by 8- to 10-year-old children (N = 48). No significant differences were found in the number of specific responses given in the written and spoken modes. On the other hand, the spoken mode elicited longer and more detailed memories, although most content differences were eliminated when memory length was controlled. These findings suggest that different reporting modes can influence the nature of the memories reported, but the absolute differences are relatively small.

15.
Salience of stimulus and response features in choice-reaction tasks.
A pattern of differential reaction time (RT) benefits obtained in spatial-precuing tasks has been attributed to translation processes that operate on mental codes formed to represent the stimulus and response sets. According to the salient-features coding principle, the codes are based on the salient stimulus and response features, with RTs being fastest when the two sets of features correspond. Three experiments are reported in which the stimulus and response sets were manipulated using Gestalt grouping principles. In the first two experiments, stimuli and responses were grouped according to spatial proximity, whereas in the last experiment, they were grouped according to similarity. With both types of manipulations, the grouping of the stimulus set systematically affected the pattern of precuing benefits. Thus, in these experiments, the organization of the stimulus set was the primary determinant of the features selected for coding the stimulus and response sets in the translation process.

16.
Subjects were presented with a sequence of two letters, each letter spoken in either a male or female voice. On each trial, the subject was required to indicate, as quickly as possible, whether the two letters had the same name. Reaction times (RTs) were faster for letters spoken in the same voice for both “same” and “different” responses, even when letters were separated by 8 s. These results are incompatible with the notion of physical and name codes in auditory memory since a “different” response should always be based on a comparison of letter names and should not be influenced by voice quality. It was also found that RTs were not influenced by the phonemic distinctive feature similarity of the letters.

17.
A pattern of differential reaction time (RT) benefits obtained in spatial-precuing tasks has been attributed to translation processes that operate on mental codes formed to represent the stimulus and response sets. According to the salient-features coding principle, the codes are based on the salient stimulus and response features, with RTs being fastest when the two sets of features correspond. Three experiments are reported in which the stimulus and response sets were manipulated using Gestalt grouping principles. In the first two experiments, stimuli and responses were grouped according to spatial proximity, whereas in the last experiment, they were grouped according to similarity. With both types of manipulations, the grouping of the stimulus set systematically affected the pattern of precuing benefits. Thus, in these experiments, the organization of the stimulus set was the primary determinant of the features selected for coding the stimulus and response sets in the translation process.

18.
According to a recent hypothesis, executive functions should be particularly vulnerable to the effects of total sleep deprivation. Random generation is a task that taps executive functions. In three experiments we examined the effects of total sleep deprivation on random generation of keypresses, numbers, and nouns, in particular on the suppression of prepotent responses and the selection of the next response by way of applying a local-representativeness heuristic. With random keypresses, suppression of prepotent responses did not suffer from lack of sleep, but it became poorer at a sufficiently high pacing rate. In contrast, suppression of prepotent responses suffered when numbers and nouns were generated. According to these findings, different types of random generation task involve different types of inhibitory process. With only four response alternatives, but not with larger response sets, application of the local-representativeness heuristic was impaired after a night without sleep. In terms of a simple formal model, serial-order representations of the preceding responses are used in selecting the next response only for the small response set, and not for larger response sets. Thus, serial-order representations are likely to suffer from loss of sleep. These findings strongly suggest that random generation involves multiple processes and that total sleep deprivation does not impair all sorts of executive functions, but only some.

19.
An open question is whether learning differs when students speak versus type their responses while interacting with intelligent tutoring systems that use natural language dialogue. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts that typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, in which a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

20.
The present research addresses whether music training acts as a mediator of the recall of spoken and sung lyrics and whether presentation rate is the essential variable, rather than the inclusion of melody. In Experiment 1, 78 undergraduates, half with music training and half without, heard spoken or sung lyrics. Recall for sung lyrics was superior to that for spoken lyrics for both groups. In Experiments 2 and 3, presentation rate was manipulated so that the durations of the spoken and the sung materials were equal. With presentation rate equated, there was no advantage for sung over spoken lyrics. In all the experiments, those participants with music training outperformed those without training in all the conditions. The results suggest that music training leads to enhanced memory for verbal material. Previous findings of melody's aiding text recall may be attributed to presentation rate.
