Similar Articles
Found 20 similar documents (search time: 0 ms)
1.
Earlier studies suggest that interhemispheric processing increases the processing power of the brain in cognitively complex tasks, as it allows the brain to divide the processing load between the hemispheres. We report two experiments suggesting that this finding does not generalize to word-picture pairs: they are processed at least as efficiently by a single hemisphere as when processing is divided between the two hemispheres. We examined whether dividing the stimuli between the visual fields/hemispheres would be more advantageous than unilateral stimulus displays in the semantic categorization of simultaneously presented pictures, words, and word-picture pairs. The results revealed that within-domain stimuli (semantically related picture pairs or word pairs) were categorized faster in bilateral than in unilateral displays, whereas cross-domain stimuli (word-picture pairs) were not. We suggest that interhemispheric sharing of word-picture stimuli offers no advantage over unilateral processing because words and pictures use different access routes, and it may therefore be possible to process simultaneously displayed word-picture stimuli in parallel within a single hemisphere.

2.
We provide new behavioural norms for semantic classification of pictures and words. The picture stimuli are 288 black and white line drawings from the International Picture Naming Project (Székely, A., Jacobsen, T., D'Amico, S., Devescovi, A., Andonova, E., Herron, D., et al. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory & Language, 51, 247–250). We presented these pictures for classification in a living/nonliving decision, and in a separate version of the task presented the corresponding word labels for classification. We analyzed behavioural responses to a subset of the stimuli in order to explore questions about semantic processing. We found multiple semantic richness effects for both picture and word classification. Further, while lexical-level factors were related to semantic classification of words, they were not related to semantic classification of pictures. We argue that these results are consistent with privileged semantic access for pictures, and point to ways in which these data could be used to address other questions about picture processing and semantic memory.

3.
In the present PET study, we examined brain activity related to processing of pictures and printed words in episodic memory. Our goal was to determine how the perceptual format of objects (verbal versus pictorial) is reflected in the neural organization of episodic memory for common objects. We investigated this issue in relation to encoding and recognition with a particular focus on medial temporal-lobe (MTL) structures. At encoding, participants saw pictures of objects or their written names and were asked to make semantic judgments. At recognition, participants made yes-no recognition judgments in four different conditions. In two conditions, target items were pictures of objects; these objects had originally been encoded either in picture or in word format. In two other conditions, target items were words; they also denoted objects originally encoded either as pictures or as words. Our data show that right MTL structures are differentially involved in picture processing during encoding and recognition. A posterior MTL region showed higher activation in response to the presentation of pictures than of words across all conditions. During encoding, this region may be involved in setting up a representation of the perceptual information that comprises the picture. At recognition, it may play a role in guiding retrieval processes based on the perceptual input, i.e. the retrieval cue. Another more anterior right MTL region was found to be differentially involved in recognition of objects that had been encoded as pictures, irrespective of whether the retrieval cue provided was pictorial or verbal in nature; this region may be involved in accessing stored pictorial representations. Our results suggest that left MTL structures contribute to picture processing only during encoding. Some regions in the left MTL showed an involvement in semantic encoding that was picture specific; others showed a task-specific involvement across pictures and words. 
Together, our results provide evidence that the involvement of some but not all MTL regions in episodic encoding and recognition is format specific.

4.
Semantic facilitation with pictures and words
The present experiments explored the role of processing level and strategic factors in cross-form (word-picture and picture-word) and within-form (picture-picture and word-word) semantic facilitation. Previous studies have produced mixed results. The findings presented in this article indicate that semantic facilitation depends on the task and on the subjects' strategies. When the task required semantic processing of both picture and word targets (e.g., category verification), equivalent facilitation was obtained across all modality combinations. When the task required name processing (e.g., name verification, naming), facilitation was obtained for the picture targets. In contrast, with word targets, facilitation was obtained only when the situation emphasized semantic processing. The results are consistent with models that propose a common semantic representation for both pictures and words but that also include assumptions regarding differential order of access to semantic and phonemic features for these stimulus modalities.

5.
6.
In this study, we investigated the effects of various interpolated tasks on hypermnesia (improved recall across repeated tests) for pictures and words. In five experiments, subjects studied either pictures or words and then completed two free-recall tests, with varying activities interpolated between the tests. The tasks performed between tests were varied to test several hypotheses concerning the possible factor(s) responsible for disruption of the hypermnesic effect. In each experiment, hypermnesia was obtained in a control condition in which there was no interpolated task between tests. The remaining conditions showed that the effect of the interpolated tasks was related to the overlap of the cognitive processes involved in encoding the target items and performing the interpolated tasks. When pictures were the target items, no hypermnesia was obtained when subjects engaged in interpolated tasks requiring imaginal processing, even when these tasks involved materials that were very distinct from the target items. When words were the target items, no hypermnesia was obtained when the interpolated tasks required verbal/linguistic processing, even when the items used in these tasks were presented auditorily. The results are discussed in terms of a strength-based model of associative memory.

7.
8.
9.
Children in kindergarten, third, and fifth grades were presented a list of either pictures or words (with items presented for varying numbers of times on the study trial). In both picture and word conditions, half of the Ss estimated how many times each item had been presented (absolute judgments) and the other half judged which of two items had occurred more often on the study trial (relative judgments). The primary finding was that while frequency judgment performance improved with age for both pictures and words, there was relatively greater improvement for pictures (i.e., the picture-word difference increased with age). These results lend strong support to the frequency theory of discrimination learning and, in particular, may be useful in accounting for effects associated with age and with age by stimulus mode interactions.

10.
Previous studies have found that interference in long-term memory retrieval occurs when information cannot be integrated into a single situation model, but this interference is greatly reduced or absent when the information can be so integrated. The current study looked at the influence of presentation format (sentences or pictures) on this observed pattern. When sentences were used at memorisation and recognition, a spatial organisation was observed. In contrast, when pictures were used, a different pattern of results was observed: there was an overall speed-up in response times, and consistent evidence of interference. Possible explanations for this difference were examined in a third experiment using pictures during learning but sentences during recognition. The results from Experiment 3 were consistent with the organisation of information into situation models in long-term memory, even from pictures. This suggests that people do create situation models when learning pictures, but their recognition memory may be oriented around more "verbatim", surface-form memories of the pictures.

11.
12.
13.
High and low imagery individuals completed two paper-and-pencil identification tasks involving fragmented pictures and fragmented words. Although a trend was present for spatial measures of imagery ability to correlate more highly with picture than with word identification, high imagers nevertheless surpassed low imagers on both tasks. An interpretation emphasizing the spatial-imaginal demands of both tasks is suggested. Significant correlations between word identification and some verbal measures, as well as self-reported imagery, may have resulted from the differential priming of verbal and imaginal strategies following subtle procedural changes in the two experiments described.

14.
Memory performance estimates of men and women before and after a recall test were investigated. College students (17 men and 20 women), all juniors, participated in a memory task involving the recall of 80 stimuli (40 pictures and 40 words). Before and after the task they were asked to provide estimates of their pre- and postrecall performance. Although no sex differences were found in total correct recall, recall for pictures, recall for words, or in the estimates of memory performance made before the recall task, there were significant differences after the test: women underestimated their performance on the word items and men underestimated their performance on the picture items.

15.
There is evidence of maladaptive attentional biases for lexical information (e.g., Atchley, Ilardi, & Enloe, 2003; Atchley, Stringer, Mathias, Ilardi, & Minatrea, 2007) and for pictographic stimuli (e.g., Gotlib, Krasnoperova, Yue, & Joormann, 2004) among patients with depression. The current research looks for depressotypic processing biases among depressed out-patients and non-clinical controls, using both verbal and pictorial stimuli. A d′ measure (sensitivity index) was used to examine each participant's perceptual sensitivity threshold. Never-depressed controls evidenced a detection bias for positive picture stimuli, while depressed participants had no such bias. With verbal stimuli, depressed individuals showed specific decrements in the detection of positive person-referent words (WINNER), but not with positive non-person-referent words (SUNSHINE) or with negative words. Never-depressed participants showed no such differences across word types. In the current study, depression is characterised both by an absence of the normal positivistic biases seen in individuals without mood disorders (consistent with McCabe & Gotlib, 1995), and by a specific reduction in sensitivity for person-referent positive information that might be inconsistent with depressotypic self-schemas.
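The d′ measure used in the study above is standard signal-detection arithmetic: d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch of the computation (the trial counts below are illustrative, not data from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when a raw rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative counts: 40 signal trials and 40 noise trials.
print(round(d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34), 2))
```

Higher d′ means better discrimination of signal from noise; d′ = 0 means the observer cannot distinguish them at all, which is the kind of sensitivity decrement the study reports for positive person-referent words in depressed participants.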

16.
17.
Blind and sighted children's memory for words and raised-shape pictures was tested. The investigation compared performance with items studied under neutral conditions (naming words and pictures) and with items self-generated in response to cues (cue: hot ?; response: cold). The blind children could identify and name the raised-shape pictures with the same apparent ease as blindfolded sighted children (as long as a cue was provided). The sighted children showed the generation effect (Slamecka and Graf, 1978) for both words and pictures, namely that self-generated items were far better recalled than neutral ones. The pattern of results for the blind children was markedly different. Although the level of memory performance overall was the same as that of the sighted controls, the congenitally blind children showed a reverse generation effect. A stem completion study indicated that these results could not be accounted for by a relatively greater reliance on data-driven processing by the blind.

18.
In a short-term recognition memory experiment with words, subjects: (1) subvocally rehearsed the words, (2) generated a separate visual image for each word, (3) generated an interactive scene with such images, or (4) composed a covert sentence using the words in the memory set. Contrary to Seamon's (1972) results in a similar study, a serial memory search was found in all conditions, instead of the simultaneous scan which was expected when items were combined in interactive images. In a second study with pictures as stimuli, subjects who generated imaginal interactions between separate pictures, viewed interacting pictures, or viewed separate pictures also showed a serial search, i.e., longer RTs were obtained when more stimuli were held in memory. Since interactive imagery facilitated performance in an unexpected paired-associate task with memory set stimuli, one can argue that subjects actually processed or generated such interactions. It was suggested that memory search might not be simultaneous in tasks where the test stimulus constitutes only part of a memory image.

19.
Second- and fifth-graders' semantic decision times for pictures and words were analyzed relative to the predictions derived from unitary- and dual-memory models. At both grade levels, word-word response latencies were greater than picture-word latencies which, in turn, were greater than picture-picture latencies. An interaction between Grade and Condition indicated that verbal access times decreased more than pictorial access times. The data fit the predictions of a memory model postulating category storage in a single memory system as opposed to simultaneous representation in verbal and nonverbal memory systems. It was concluded that with increasing experience verbal access to this single semantic system is more rapid.

20.
There is considerable debate regarding the extent to which limbic regions respond differentially to items with different valences (positive or negative) or to different stimulus types (pictures or words). In the present event-related fMRI study, 21 participants viewed words and pictures that were neutral, negative, or positive. Negative and positive items were equated on arousal. The participants rated each item for whether it depicted or described something animate or inanimate or something common or uncommon. For both pictures and words, the amygdala, dorsomedial prefrontal cortex (PFC), and ventromedial PFC responded equally to all high-arousal items, regardless of valence. Laterality effects in the amygdala were based on the stimulus type (word = left, picture = bilateral). Valence effects were most apparent when the individuals processed pictures, and the results revealed a lateral/medial distinction within the PFC: the lateral PFC responded differentially to negative items, whereas the medial PFC was more engaged during the processing of positive pictures.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号