Similar Documents
20 similar documents found (search time: 312 ms)
1.
We describe a set of pictorial and auditory stimuli that we have developed for use in word learning tasks in which the participant learns pairings of novel auditory sound patterns (names) with pictorial depictions of novel objects (referents). The pictorial referents are drawings of “space aliens,” consisting of images that are variants of 144 different aliens. The auditory names are possible nonwords of English; the stimulus set consists of over 2,500 nonword stimuli recorded in a single voice, with controlled onsets, varying from one to seven syllables in length. The pictorial and nonword stimuli can also serve as independent stimulus sets for purposes other than word learning. The full set of these stimuli may be downloaded from www.psychonomic.org/archive/.

2.
An experiment investigated whether exposure to orthography facilitates oral vocabulary learning. A total of 58 typically developing children aged 8–9 years were taught 12 nonwords. Children were trained to associate novel phonological forms with pictures of novel objects. Pictures were used as referents to represent novel word meanings. For half of the nonwords children were additionally exposed to orthography, although they were not alerted to its presence, nor were they instructed to use it. After this training phase a nonword–picture matching posttest was used to assess learning of nonword meaning, and a spelling posttest was used to assess learning of nonword orthography. Children showed robust learning for novel spelling patterns after incidental exposure to orthography. Further, we observed stronger learning for nonword–referent pairings trained with orthography. The degree of orthographic facilitation observed in posttests was related to children's reading levels, with more advanced readers showing more benefit from the presence of orthography.

3.
The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word–picture and nonword–picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

4.
Subjects were asked to indicate which item of a word/nonword pair was a word. On critical trials the nonword was a pseudohomophone of the word. RTs of dyslexics were shorter in blocks of trials in which a congruent auditory prime was simultaneously presented with the visual stimuli. RTs of normal readers were longer for high frequency words when there was auditory priming. This provides evidence that phonology can activate orthographic representations; the size and direction of the effect of auditory priming on visual lexical decision appear to be a function of the relative speeds with which sight and hearing activate orthography.

5.
Two experiments are reported in which a word superiority effect is obtained under conditions where a fixed set of alternatives is employed with positional certainty as to the critical letter, trial type (word or nonword) is mixed, and the subject is told to fixate the position of the critical letter. A third experiment employed the same methodology except that the stimuli subtended a larger retinal angle. No word superiority effect was observed in the third experiment. It is suggested that the visual angle of the stimulus display is a crucial factor in experiments on the word superiority effect.

6.
Four of five patients with marked global amnesia, and others with new learning impairments, showed normal processing facilitation for novel stimuli (nonwords) and/or for familiar stimuli (words) on a word/nonword (lexical) decision task. The data are interpreted as a reflection of the learning capabilities of in-line neural processing stages with multiple, distinct, informational codes. These in-line learning processes are separate from the recognition/recall memory impaired by amygdalohippocampal/dorsomedial thalamic damage, but probably supplement such memory in some tasks in normal individuals. Preserved learning of novel information seems incompatible with explanations of spared learning in amnesia that are based on the episodic/semantic or memory/habit distinctions, but is consistent with the procedural/declarative hypothesis.

7.
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10–12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

8.
The investigation of unconscious cognition raises particular methodological problems in measuring the implicit and explicit components of task performance. In this study, the process dissociation procedure of Jacoby and its modification within the multinomial modelling framework were applied to an indirect word/nonword discrimination task in a sample of 45 healthy students. The paradigm used acoustically presented stimuli. During a learning phase, subjects listened to a series of neutral and threatening words. Performance was tested by having subjects decide whether a presented stimulus (masked with white noise at a signal-to-noise ratio of -17 dB, or unmasked) had been a word or a nonword. Within this paradigm, implicit cognition occurs when (a) a word is more likely to be correctly recognized as a "word" after presentation during the learning phase (the typical priming effect) or (b) a nonword derived from a word is more likely to be falsely recognized as a "word" after its corresponding word had been presented during the learning phase (an effect of implicit cognition attributed to perceptual fluency). Frequencies of hits and false alarms were analyzed within the multinomial model, which allows estimating parameters for the correct discrimination of words (c), the response bias (b), the classical priming effect (u1), and the priming effect for "old" nonwords (u2). Under masked stimuli the multinomial model showed implicit cognition, an effect not found equally for neutral and threatening words: threatening words exhibited a significantly higher proportion of implicit cognition than neutral ones. Given the statistical complexity of multinomial models, the application of the method is explained in detail.
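The abstract names the model's parameters (c, b, u1, u2) but not its tree structure. As a minimal sketch, assuming one plausible processing tree (discriminate correctly, else respond from priming, else guess with bias; this tree is an illustration, not necessarily the authors' exact model), the predicted response probabilities can be written as:

```python
def p_word_response_old_word(c, u1, b):
    """P("word" | previously studied word): discriminate correctly (c);
    otherwise the priming effect drives a "word" response (u1);
    otherwise guess "word" with response bias b."""
    return c + (1 - c) * (u1 + (1 - u1) * b)

def p_word_response_new_word(c, b):
    """P("word" | unstudied word): correct discrimination or bias alone."""
    return c + (1 - c) * b

# Illustrative parameter values (hypothetical, not the study's estimates)
p_old = p_word_response_old_word(c=0.5, u1=0.2, b=0.5)  # -> 0.80
p_new = p_word_response_new_word(c=0.5, b=0.5)          # -> 0.75
```

Fitting the model means choosing c, b, u1, and u2 so that these predicted probabilities match the observed hit and false-alarm frequencies.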

9.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

10.
Although infants have the ability to discriminate a variety of speech contrasts, young children cannot always use this ability in the service of spoken-word recognition. The research reported here asked whether the reason young children sometimes fail to discriminate minimal word pairs is that they are less efficient at word recognition than adults, or whether it is that they employ different lexical representations. In particular, the research evaluated the proposal that young children’s lexical representations are more “holistic” than those of adults, and are based on overall acoustic-phonetic properties, as opposed to phonetic segments. Three- and four-year-olds were exposed initially to an invariant target word and were subsequently asked to determine whether a series of auditory stimuli matched or did not match the target. The critical test stimuli were nonwords that varied in their degree of phonetic featural overlap with the target, as well as in terms of the position(s) within the stimuli at which they differed from the target, and whether they differed from the target on one or two segments. Data from four experiments demonstrated that the frequency with which children mistook a nonword stimulus for the target was influenced by extent of featural overlap, but not by word position. The data also showed that, contrary to the predictions of the holistic hypothesis, stimuli differing from the target by two features on a single segment were confused with the target more often than were stimuli differing by a single feature on each of two segments. This finding suggests that children use both phonetic features and segments in accessing their mental lexicons, and that they are therefore much more similar to adults than is suggested by the holistic hypothesis.

11.
The present research attempted to manipulate the encoding modality, pictorial or verbal, of schematic faces with well-learned names by manipulating S’s expectations of the way the material was to be used. On every trial, a single name or face was presented, followed by another one; the S was asked to respond “same” if the stimuli had the same name, and “different” otherwise. The majority of second stimuli of any session was either names or faces. It was hypothesized that if S had encoded the first stimulus in the modality of the second, his judgment would be faster than if he had not appropriately encoded the first stimulus. Significantly slower reaction times were obtained to stimulus pairs where the second stimulus modality was infrequent. Further evidence that encoding of the first stimulus was in the frequent second stimulus modality comes from the finding that “different” responses were shorter when the stimuli differed on more than one attribute in the encoding (second stimulus) modality, regardless of the modality of the stimuli. Thus, evidence is presented that not only can verbal material be pictorially encoded (and vice versa), but that whether either verbal or pictorial material is verbally or pictorially encoded depends on S’s anticipation of what he is to do with the material.

12.
This article reviews the research literature on the differences between word reading and picture naming. A theory for the visual and cognitive processing of pictures and words is then introduced. The theory accounts for slower naming of pictures than reading of words. Reading aloud involves a fast, grapheme-to-phoneme transformation process, whereas picture naming involves two additional processes: (a) determining the meaning of the pictorial stimulus and (b) finding a name for the pictorial stimulus. We conducted a reading-naming experiment, and the time to achieve (a) and (b) was determined to be approximately 160 ms. On the basis of data from a second experiment, we demonstrated that there is no significant difference in time to visually compare two pictures or two words when size of the stimuli is equated. There is no difference in time to make the two types of cross-modality conceptual comparisons (picture first, then word, or word first, then picture). The symmetry of the visual and conceptual comparison results supports the hypothesis that the coding of the mind is neither intrinsically linguistic nor imagistic, but rather it is abstract. There is a potent stimulus size effect, equal for both pictorial and lexical stimuli. Small stimuli take longer to be visually processed than do larger stimuli. For optimal processing, stimuli should not only be equated for size, but should subtend a visual angle of at least 3 degrees. The article ends with the presentation of a mathematical theory that jointly accounts for the data from word-reading, picture-naming visual comparison, and conceptual-comparison experiments.

13.
We present a new model for lexical decision, REM-LD, that is based on REM theory. REM-LD uses a principled (i.e., Bayes' rule) decision process that simultaneously considers the diagnosticity of the evidence for the 'WORD' response and the 'NONWORD' response. The model calculates the odds ratio that the presented stimulus is a word or a nonword by averaging likelihood ratios for lexical entries from a small neighborhood of similar words. We report two experiments that used a signal-to-respond paradigm to obtain information about the time course of lexical processing. Experiment 1 verified the prediction of the model that the frequency of the word stimuli affects performance for nonword stimuli. Experiment 2 was done to study the effects of nonword lexicality, word frequency, and repetition priming and to demonstrate how REM-LD can account for the observed results. We discuss how REM-LD could be extended to account for effects of phonology such as the pseudohomophone effect, and how REM-LD can predict response times in the traditional 'respond-when-ready' paradigm.
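The decision rule the abstract describes, averaging likelihood ratios over a neighborhood of similar lexical entries to form the word/nonword odds, can be sketched minimally as follows. The likelihood values and function name are hypothetical illustrations; REM-LD's actual likelihood computation over lexical feature vectors is not specified in this abstract:

```python
def lexical_decision_odds(word_likelihoods, nonword_likelihood):
    """Odds that the stimulus is a word: average the likelihood ratios
    contributed by lexical entries in a small neighborhood of similar
    words (Bayes' rule with equal priors)."""
    ratios = [lw / nonword_likelihood for lw in word_likelihoods]
    return sum(ratios) / len(ratios)

# Hypothetical likelihoods for a three-entry neighborhood of the stimulus
odds = lexical_decision_odds([0.30, 0.10, 0.05], nonword_likelihood=0.05)
response = "WORD" if odds > 1.0 else "NONWORD"
```

Because the odds are an average over neighbors, high-frequency word entries in the neighborhood raise the odds even for nonword stimuli, which is the interaction Experiment 1 tested.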

14.
Following pretraining with everyday objects, 14 children aged from 1 to 4 years were trained, for each of three pairs of different arbitrary wooden shapes (Set 1), to select one stimulus in response to the spoken word /zog/, and the other to /vek/. When given a test for the corresponding tacts ("zog" and "vek"), 10 children passed, showing that they had learned common names for the stimuli, and 4 failed. All children were trained to clap to one stimulus of Pair 1 and wave to the other. All those who named showed either transfer of the novel functions to the remaining two pairs of stimuli in Test 1, or novel function comprehension for all three pairs in Test 2, or both. Three of these children next participated in, and passed, category match-to-sample tests. In contrast, all 4 children who had learned only listener behavior failed both the category transfer and category match-to-sample tests. When 3 of them were next trained to name the stimuli, they passed the category transfer and (for the 2 subjects tested) category match-to-sample tests. Three children were next trained on the common listener relations with another set of arbitrary stimuli (Set 2); all succeeded on the tact and category tests with the Set 2 stimuli. Taken together with the findings from the other studies in the series, the present experiment shows that (a) common listener training also establishes the corresponding names in some but not all children, and (b) only children who learn common names categorize; all those who learn only listener behavior fail. This is good evidence in support of the naming account of categorization.

15.
Previous research indicates that word learning from auditory contexts may be more effective than written context at least through fourth grade. However, no study has examined contextual differences in word learning in older school-aged children when reading abilities are more developed. Here we examined developmental differences in children’s ability to deduce the meanings of unknown words from the surrounding linguistic context in the auditory and written modalities and sought to identify the most important predictors of success in each modality. A total of 89 children aged 8–15 years were randomly assigned to either read or listen to a narrative that included eight novel words, with five exposures to each novel word. They then completed three posttests to assess word meaning inferencing. Children across all ages performed better in the written modality. Vocabulary was the only significant predictor of success on the word inferencing task. Results indicate support for written stimuli as the most effective modality for novel word meaning deduction. Our findings suggest that the presence of orthographic information facilitates novel word learning even for early, less proficient readers.

16.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.

17.
When describing scenes, speakers gaze at objects while preparing their names (Z. M. Griffin & K. Bock, 2000). In this study, the authors investigated whether gazes to referents occurred in the absence of a correspondence between visual features and word meaning. Speakers gazed significantly longer at objects before intentionally labeling them inaccurately with the names of similar things (e.g., calling a horse a dog) than when labeling them accurately. This held for grammatical subjects and objects as well as agents and patients. Moreover, the time spent gazing at a referent before labeling it with a novel word or accurate name was similar and decreased as speakers gained experience using the novel word. These results suggest that visual attention in speaking may be directed toward referents in the absence of any association between their visual forms and the words used to talk about them.

18.
In four experiments, we varied the time between the onset of distracting nonwords and target colour words in a word–word version of the colour–word contingency learning paradigm. Contingencies were created by pairing a distractor nonword more often with one target colour word than with other colour words. A contingency effect corresponds to faster responses to the target colour word on high-contingency trials (i.e., distractor nonword followed by the target colour word with which it appears most often) than on low-contingency trials (i.e., distractor nonword followed by a target colour word with which it appears only occasionally). Roughly equivalent-sized contingency effects were found at stimulus-onset asynchronies (SOAs) of 50, 250, and 450 ms in Experiment 1, and 50, 500, and 1,000 ms in Experiment 2. In Experiment 3, a contingency effect was observed at SOAs of –50, –200, and –350 ms. In Experiment 4, interstimulus interval (ISI) was varied along with SOA, and learning was equivalent for 200-, 700-, and 1,200-ms SOAs. Together, these experiments suggest that the distracting stimulus does not need to be presented in close temporal contiguity with the response to induce learning. Relations to past research on causal judgement and implications for further contingency learning research are discussed.
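The contingency manipulation described above, pairing a distractor nonword with one colour word far more often than with the others, can be sketched as a trial-list generator. The nonword "brilt", the colour words, and the 80/20 pairing proportion are hypothetical values chosen for illustration; the abstract does not report the actual stimuli or proportions:

```python
import random

def make_trials(distractor, high_target, other_targets, n_trials, p_high=0.8):
    """Generate (distractor, target) pairings with a contingency
    manipulation: the nonword distractor precedes one colour word on a
    high proportion of trials (high-contingency) and the remaining
    colour words only occasionally (low-contingency)."""
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_high:
            target = high_target                  # high-contingency trial
        else:
            target = rng.choice(other_targets)    # low-contingency trial
        trials.append((distractor, target))
    return trials

trials = make_trials("brilt", "red", ["green", "blue"], n_trials=1000)
n_high = sum(1 for _, target in trials if target == "red")  # roughly 800
```

The SOA manipulation in the four experiments would then shift only the onset delay between the two members of each pair, leaving this contingency structure unchanged.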

19.
Relational processing involves learning about the relationship between or among stimuli, transcending the individual stimuli, so that abstract knowledge generalizable to novel situations is acquired. Relational processing has been studied in animals as well as in humans, but little attention has been paid to the contribution of specific items to relational thinking or to the factors that may affect that contribution. This study assessed the intertwined effects of item and relational processing in nonhuman primates. Using a procedure that entailed both expanding and contracting sets of pictorial items, we trained 13 baboons on a two-alternative forced-choice task, in which they had to distinguish horizontal from vertical relational patterns. In Experiment 1, monkeys engaged in item-based processing with a small training set size, and they progressively engaged in relation-based processing as training set size was increased. However, in Experiment 2, overtraining with a small stimulus set promoted the processing of item-based information. These findings underscore similarities in how humans and nonhuman primates process higher-order stimulus relations.



Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号