Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
We provide new behavioural norms for semantic classification of pictures and words. The picture stimuli are 288 black and white line drawings from the International Picture Naming Project (Székely, A., Jacobsen, T., D'Amico, S., Devescovi, A., Andonova, E., Herron, D., et al. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory & Language, 51, 247–250). We presented these pictures for classification in a living/nonliving decision, and in a separate version of the task presented the corresponding word labels for classification. We analyzed behavioural responses to a subset of the stimuli in order to explore questions about semantic processing. We found multiple semantic richness effects for both picture and word classification. Further, while lexical-level factors were related to semantic classification of words, they were not related to semantic classification of pictures. We argue that these results are consistent with privileged semantic access for pictures, and point to ways in which these data could be used to address other questions about picture processing and semantic memory.

2.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

3.
The effect of semantic priming upon lexical decisions made for words in isolation (Experiment 1) and during sentence comprehension (Experiment 2) was investigated using a cross-modal lexical decision task. In Experiment 1, subjects made lexical decisions to both auditory and visual stimuli. Processing auditorily presented words facilitated subsequent lexical decisions on semantically related visual words. In Experiment 2, subjects comprehended auditorily presented sentences while simultaneously making lexical decisions for visually presented stimuli. Lexical decisions were facilitated when a visual word appeared immediately following a related word in the sentential material. Lexical decisions were also facilitated when the visual word appeared three syllables following closure of the clause containing the related material. Arguments are made for autonomy of semantic priming during sentence comprehension.

4.
Semantic interference in picture naming is readily obtained when categorically related distractor words are displayed with picture targets; however, this is not typically the case when both primes and targets are words. Researchers have argued that to obtain semantic interference for word primes and targets, the prime must be shown for a sufficient duration, prime processing must be made difficult, and participants must attend to the primes. In this article, we used a novel procedure for prime presentation to investigate semantic interference in word naming. Primes were presented as the last word of a rapid serial visual presentation stream, with the target following 600–1,200 msec later. Semantic interference was observed for categorically related targets, whereas facilitation was found for associatively related targets.

5.
Earlier studies suggest that interhemispheric processing increases the processing power of the brain in cognitively complex tasks, as it allows the brain to divide the processing load between the hemispheres. We report two experiments suggesting that this finding does not generalize to word-picture pairs: they are processed at least as efficiently by a single hemisphere as when processing is shared between the two hemispheres. We examined whether dividing the stimuli between the visual fields/hemispheres would be more advantageous than unilateral stimulus displays in the semantic categorization of simultaneously presented pictures, words, and word-picture pairs. The results revealed that within-domain stimuli (semantically related picture pairs or word pairs) were categorized faster in bilateral than in unilateral displays, whereas cross-domain stimuli (word-picture pairs) were not. It is suggested that interhemispheric sharing of word-picture stimuli confers no advantage over unilateral processing conditions because words and pictures use different access routes, and therefore it may be possible to process simultaneously displayed word-picture stimuli in parallel within a single hemisphere.

7.
In a delayed matching task, the influence of spatial congruence between study and test on visual short-term memory for geometric figures and words was investigated. Subjects processed series of pictures which showed three words or three geometric figures arranged as rows or as triangular configurations. At test, the elements were presented in the identical or in the alternative configuration as at study. In the non-matching case, one of the studied elements was exchanged. The delay was 5 s. Subjects judged whether the elements were the same as during study, independent of their configuration. In Exp. 1, pictures of figures and words were mixed within one list. For both modalities, the response times were longer if the configuration at test was incongruent to the one at study. This contradicts the results of Santa, who observed effects of spatial congruency for figures, but not for words. In Exp. 2 we therefore presented the same material as in Exp. 1, but now the lists were modality-pure, as in the experiment of Santa – i.e., words and figures were shown in different lists. This time, spatial incongruency impaired recognition of the figures, but not recognition of the words. These results show that in a non-verbal context, isolated visually presented words are spatially encoded as non-verbal stimuli (figures) are. However, the word stimuli are encoded differently if the task is a purely verbal one. In the latter case, spatial information is discarded. Received: 9 September 1997 / Accepted: 30 March 1998

8.
Two picture-word experiments are reported in which a delay of 7 to 10 was introduced between distractor and picture. Distractor words were either derived words (Experiment 1) or compounds (Experiment 2), morphologically related to the picture name. In both experiments, the position of morphological overlap between distractor (e.g., rosebud vs tea-rose) and picture name (rose) was manipulated. Clear facilitation of picture naming latencies was obtained when pictures were paired with morphological distractors, and effects were independent of distractor type and position of overlap. The results are evaluated against "full listing" and "decomposition" approaches to morphological representation.

10.
Undergraduates were shown pictures or corresponding labels and then were tested for recognition either in the same mode or in a cross-over mode. Significantly more items were recognized in the picture-picture condition than in the picture-word and word-picture conditions. Informing subjects in advance of the change in modality significantly improved picture-word performance.

12.
In this study, we investigated the effects of various interpolated tasks on hypermnesia (improved recall across repeated tests) for pictures and words. In five experiments, subjects studied either pictures or words and then completed two free-recall tests, with varying activities interpolated between the tests. The tasks performed between tests were varied to test several hypotheses concerning the possible factor(s) responsible for disruption of the hypermnesic effect. In each experiment, hypermnesia was obtained in a control condition in which there was no interpolated task between tests. The remaining conditions showed that the effect of the interpolated tasks was related to the overlap of the cognitive processes involved in encoding the target items and performing the interpolated tasks. When pictures were used as the target items, no hypermnesia was obtained when subjects engaged in imaginal-processing interpolated tasks, even when these tasks involved materials that were very distinct from the target items. When words were used as the target items, no hypermnesia was obtained when the interpolated tasks required verbal/linguistic processing, even when the items used in these tasks were auditorily presented. The results are discussed in terms of a strength-based model of associative memory.

13.
Using a probability-learning technique with a single word as the cue and with the probability of a given event following this word fixed at .80, it was found that (1) neither high- nor low-strength associates of the original word and (2) neither synonyms nor antonyms showed differential learning curves subsequent to original learning when the probability of the following event was shifted to .20. In a second study, when feedback in the form of knowledge of results was withheld, there was a clear-cut similarity of predictions between the originally trained word and synonyms of both high and low association value, and a dissimilarity of these words from a set of antonyms of both high and low association value. Two additional studies confirmed the importance of the semantic dimension as compared with association value as traditionally measured.

14.
Previous studies have found that interference in long-term memory retrieval occurs when information cannot be integrated into a single situation model, but this interference is greatly reduced or absent when the information can be so integrated. The current study looked at the influence of presentation format (sentences or pictures) on this observed pattern. When sentences were used at memorisation and recognition, a spatial organisation was observed. In contrast, when pictures were used, a different pattern of results was observed. Specifically, there was an overall speed-up in response times, and consistent evidence of interference. Possible explanations for this difference were examined in a third experiment using pictures during learning, but sentences during recognition. The results from Experiment 3 were consistent with the organisation of information into situation models in long-term memory, even from pictures. This suggests that people do create situation models when learning pictures, but their recognition memory may be oriented around more "verbatim", surface-form memories of the pictures.

16.
Children in kindergarten, third, and fifth grades were presented a list of either pictures or words (with items presented varying numbers of times on the study trial). In both picture and word conditions, half of the Ss estimated how many times each item had been presented (absolute judgments) and the other half judged which of two items had occurred more often on the study trial (relative judgments). The primary finding was that while frequency judgment performance improved with age for both pictures and words, there was relatively greater improvement for pictures (i.e., the picture-word difference increased with age). These results lend strong support to the frequency theory of discrimination learning and, in particular, may be useful in accounting for effects associated with age and with age by stimulus mode interactions.

17.
There is evidence of maladaptive attentional biases for lexical information (e.g., Atchley, Ilardi, & Enloe, 2003; Atchley, Stringer, Mathias, Ilardi, & Minatrea, 2007) and for pictographic stimuli (e.g., Gotlib, Krasnoperova, Yue, & Joormann, 2004) among patients with depression. The current research looks for depressotypic processing biases among depressed out-patients and non-clinical controls, using both verbal and pictorial stimuli. A d′ measure (sensitivity index) was used to examine each participant's perceptual sensitivity threshold. Never-depressed controls evidenced a detection bias for positive picture stimuli, while depressed participants had no such bias. With verbal stimuli, depressed individuals showed specific decrements in the detection of positive person-referent words (WINNER), but not with positive non-person-referent words (SUNSHINE) or with negative words. Never-depressed participants showed no such differences across word types. In the current study, depression is characterised both by an absence of the normal positivistic biases seen in individuals without mood disorders (consistent with McCabe & Gotlib, 1995), and by a specific reduction in sensitivity for person-referent positive information that might be consistent with depressotypic self-schemas.
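The d′ (sensitivity index) used in the study above is the standard signal-detection statistic: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of the usual computation, with illustrative rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g., 85% hits with 20% false alarms
print(round(d_prime(0.85, 0.20), 2))  # → 1.88
```

A d′ of 0 means the observer cannot distinguish targets from non-targets; higher values mean greater perceptual sensitivity, independent of response bias. (In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted, e.g. by a log-linear correction, since z is undefined at those extremes.)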

18.
Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

19.
Memory performance estimates of men and women before and after a recall test were investigated. College students (17 men and 20 women), all juniors, participated in a memory task involving the recall of 80 stimuli (40 pictures and 40 words). Before and after the task they were asked to provide estimates of their pre- and postrecall performance. Although no sex differences were found for total correct recall, recall for pictures, or recall for words, nor in the estimates of memory performance before the recall task, there were significant differences after the test: women underestimated their performance on the word items and men underestimated their performance on the picture items.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | ICP licence: 京ICP备09084417号