Similar Articles
20 similar articles found.
1.
Semantic facilitation with pictures and words
The present experiments explored the role of processing level and strategic factors in cross-form (word-picture and picture-word) and within-form (picture-picture and word-word) semantic facilitation. Previous studies have produced mixed results. The findings presented in this article indicate that semantic facilitation depends on the task and on the subjects' strategies. When the task required semantic processing of both picture and word targets (e.g., category verification), equivalent facilitation was obtained across all modality combinations. When the task required name processing (e.g., name verification, naming), facilitation was obtained for the picture targets. In contrast, with word targets, facilitation was obtained only when the situation emphasized semantic processing. The results are consistent with models that propose a common semantic representation for both pictures and words but that also include assumptions regarding differential order of access to semantic and phonemic features for these stimulus modalities.

2.
A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine whether strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs. Participants made an on-line semantic relatedness judgment in response to sequentially presented pictures. We found that when pictures were presented to the right hemisphere, semantic relatedness judgments were generally more accurate than when pictures were presented to the left hemisphere. Furthermore, consistent with earlier DVF studies employing words, we conclude that the RH is better at accessing, or maintaining access to, information that has a weak or more remote semantic relationship. We also found evidence of faster access for pictures presented to the LH in the strongly-related condition. Overall, these results are consistent with earlier DVF word studies that argue that the cerebral hemispheres each play an important and separable role during semantic retrieval.

3.
Yap DF  So WC  Yap JM  Tan YQ  Teoh RL 《Cognitive Science》2011,35(1):171-183
Using a cross-modal semantic priming paradigm, both experiments of the present study investigated the link between the mental representations of iconic gestures and words. Two groups of participants performed a primed lexical decision task in which they had to discriminate between visually presented words and nonwords (e.g., flirp). Word targets (e.g., bird) were preceded by video clips depicting either semantically related (e.g., a pair of hands flapping) or semantically unrelated (e.g., drawing a square with both hands) gestures. The duration of the gestures was on average 3,500 ms in Experiment 1 but only 1,000 ms in Experiment 2. Significant priming effects were observed in both experiments, with faster response latencies for related gesture-word pairs than for unrelated pairs. These results are consistent with the idea of interactions between the gestural and lexical representational systems, such that mere exposure to iconic gestures facilitates the recognition of semantically related words.
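The priming effect reported in studies like this one is, at bottom, a difference of mean response latencies between related and unrelated prime-target pairs. A minimal sketch in Python (all RT values below are hypothetical, not data from the study):

```python
# Compute a semantic priming effect from lexical-decision response latencies.
# The RT values are invented for illustration; the study's data are not shown here.
from statistics import mean

related_rts = [540, 555, 530, 560, 545]    # ms, related gesture-word pairs
unrelated_rts = [590, 605, 585, 610, 600]  # ms, unrelated pairs

# Facilitation = how much faster responses are after a related prime.
priming_effect = mean(unrelated_rts) - mean(related_rts)
print(f"priming effect: {priming_effect:.1f} ms")  # prints: priming effect: 52.0 ms
```

A positive difference, as in both experiments described above, indicates facilitation from the related gesture primes.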

4.
The semantic category effect represents a category dissociation between biological and nonbiological objects in picture naming. The aim of this preliminary study was to further examine this phenomenon, and to explore the possible association between the effect and subjective emotional valence for the named objects. Using a speeded picture naming task, vocal reaction times for 45 items were divided into four categories based on emotional valence rating and semantic category, and examined in 36 female university students. Analyses of the data indicated an "animate/inanimate" category dissociation favouring animate objects, in tandem with a potential relationship between subjective emotional valence and semantic processing underlying picture naming.

5.
Two experiments investigated inhibitory processes in visual spatial attention. In particular, factors influencing the suppression of ignored visual stimuli were investigated. Subjects responded to geometric shapes while distractors were presented in the periphery. Distractors consisted of a single word, a pair of unrelated words, or a single word paired with a string of non-linguistic symbols. Semantic processing of the ignored words was measured with a subsequent lexical decision task. Test probes presented after the prime displays revealed suppression effects for words semantically related to a previously ignored word, but only for conditions in which two distractor words were presented. Suppression was not observed when the prime consisted of a single word or a single word paired with non-linguistic symbols. In Experiment 2 two different time delays between the onset of the primes and the onset of the test probes were used. At the shorter interval facilitatory priming was observed, while at longer intervals suppression was observed. The facilitation-suppression pattern suggests that ignored items are recognized before being suppressed. In summary, the results suggest the following: (1) that selectively attending does not restrict ignored items from gaining access to their semantic representations, and (2) that inhibition is an important process in determining the focus of attention. These results are discussed within a selective attention framework in which all items in a display gain access to memory representations, and attention to selected items causes competing items to be inhibited.

6.
Infants younger than 20 months of age interpret both words and symbolic gestures as object names. Later in development words and gestures take on divergent communicative functions. Here, we examined patterns of brain activity to words and gestures in typically developing infants at 18 and 26 months of age. Event-related potentials (ERPs) were recorded during a match/mismatch task. At 18 months, an N400 mismatch effect was observed for pictures preceded by both words and gestures. At 26 months the N400 effect was limited to words. The results provide the first neurobiological evidence showing developmental changes in semantic processing of gestures.

7.
Graded interference effects were tested in a naming task, in parallel for objects and actions. Participants named either object or action pictures presented in the context of other pictures (blocks) that were semantically very similar, somewhat similar, or dissimilar. We found that naming latencies for both object and action words were modulated by the semantic similarity between the exemplars in each block, providing evidence in both domains of graded semantic effects.

8.
Abstract

The ability of Italian subjects to make phonological judgements was investigated in three experiments. The judgements comprised initial sound similarity and stress assignment on pairs of both written words and pictures. Stress assignment on both words and pictures, as well as initial sound similarity on pictures, required the activation of phonological lexical representations, but this was not necessarily the case for initial sound similarity judgements on word pairs. The first study assessed the effects of concurrent articulatory suppression on the judgements. Experiment 2 used a concomitant task (chewing), which shares with suppression the use of articulatory components but does not involve speech programming and production. The third experiment investigated the effects of unattended speech on the phonological judgements. The results of these three experiments showed that articulatory suppression had a significant disrupting effect on accuracy in all four conditions, while neither articulatory non-speech (chewing) nor unattended auditory speech had any effect on the subjects' performance. The results suggest that these phonological judgements involve the operation of an articulatory speech output component, which is not implemented peripherally and does not require the involvement of a non-articulatory input system.

9.
Sturt P 《Cognition》2007,105(2):477-488
Participants' eye movements were recorded while they read locally ambiguous sentences. Evidence for processing difficulty was found when the interpretation of the initially preferred misanalysis clashed with that of the globally correct analysis, demonstrating the persistence of the earlier interpretation. Processing difficulty associated with the syntactic reanalysis was largely localised to the disambiguating region, with difficulty due to semantic persistence occurring later. The results show that semantic persistence is not limited to extreme cases of parse failure, and can occur even when reanalysis is relatively straightforward.

10.
The present study examined the general hypothesis that, as for nouns, stable representations of semantic knowledge relative to situations expressed by verbs are available and accessible in long-term memory in normal people. Regular associations between verbs and past tenses in French adults allowed us to abstract two superordinate semantic features in the representation of verb meaning: durativity and resultativity. A pilot study was designed to select appropriate items according to these features: durative, non-resultative verbs and non-durative, resultative verbs. An experimental study was then conducted to assess semantic priming in French adults with two visual semantic-decision tasks at a 200- and 100-ms SOA. In the durativity decision task, participants had to decide whether the target referred to a durable or non-durable situation. In the resultativity decision task, they had to decide whether it referred to a situation with a directly observable outcome or without any clear external outcome. Targets were preceded by similar, opposite, and neutral primes. Results showed that semantic priming can tap verb meaning at a 200- and 100-ms SOA, with the restriction that only the positive value of each feature benefited from priming, that is, the durative and resultative values. Moreover, processing of durativity and resultativity is far from comparable, since facilitation was shown on the former with similar and opposite priming, whereas it was shown on the latter only with similar priming. Overall, these findings support Le Ny's (in: Saint-Dizier, Viegas (eds) Computational lexical semantics, 1995; Cahier de Recherche Linguistique LanDisCo 12:85–100, 1998; Comment l'esprit produit du sens, 2005) general hypothesis that classificatory properties of verbs could be interpreted as semantic features, and the view that semantic priming can tap verb meaning as it does noun meaning.

11.
Earlier studies suggest that interhemispheric processing increases the processing power of the brain in cognitively complex tasks as it allows the brain to divide the processing load between the hemispheres. We report two experiments suggesting that this finding does not generalize to word-picture pairs: they are processed at least as efficiently when processed by a single hemisphere as compared to processing occurring between the two hemispheres. We examined whether dividing the stimuli between the visual fields/hemispheres would be more advantageous than unilateral stimulus displays in the semantic categorization of simultaneously presented pictures, words, and word-picture pairs. The results revealed that within-domain stimuli (semantically related picture pairs or word pairs) were categorized faster in bilateral than in unilateral displays, whereas cross-domain stimuli (word-picture pairs) were not categorized faster in bilateral than in unilateral displays. It is suggested that interhemispheric sharing of word-picture stimuli is not advantageous as compared to unilateral processing conditions because words and pictures use different access routes, and therefore, it may be possible to process in parallel simultaneously displayed word-picture stimuli within a single hemisphere.

12.
Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture–word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading-ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
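The reading-ability account tested above reduces to a correlation question: does each participant's reading score predict the size of their short-SOA interference effect? A minimal sketch in Python (all numbers are invented, and `pearson_r` is a hand-rolled helper, not anything from the study):

```python
# Sketch: does per-participant reading skill predict the short-SOA
# semantic interference effect? All values below are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

reading_skill = [12, 15, 9, 20, 17, 11]     # hypothetical reading-test scores
interference_ms = [48, 51, 47, 50, 52, 49]  # hypothetical short-SOA effects

r = pearson_r(reading_skill, interference_ms)
print(f"r = {r:.2f}")
```

Under the reading-ability account, r should be reliably negative (better readers, smaller interference); the study instead found reading skill predicted naming speed but not the interference effect.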

13.
Semantic priming was analysed in two groups of French children contrasted on comprehension skills with a visual lexical-decision task using a long SOA (800 ms). Two relation types between related primes and targets were examined: pure semantic relation (categorical vs. functional), and lexical association strength (strong vs. weak). Targets were preceded by related, unrelated, and neutral primes. Skilled comprehenders showed semantic priming only for category-related words, whatever their association strength, and without any evidence of an associative boost. Less-skilled comprehenders also showed semantic priming for category-related words, irrespective of their association strength, but with an indication of an associative boost. They also displayed semantic priming for function-related words that are strongly associated, but not for those that are weakly associated. These results are discussed within the theoretical frame proposed by Plaut and Booth (2000).

14.
In this study we examined conceptual priming using environmental sounds and visually displayed words. Priming for sounds and words was observed in response latency as well as in event-related potentials. Reactions were faster when a related word followed an environmental sound and vice versa. Moreover, both stimulus types produced an N400 effect for unrelated compared to related trials. The N400 effect had an earlier onset for environmental sounds than for words. The results support the theoretical notion that conceptual processing may be similar for verbal and non-verbal stimuli.

15.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

16.
Undergraduates were shown pictures or corresponding labels and then were tested for recognition either in the same mode or in a cross-over mode. Significantly more items were recognized in the picture-picture condition than in the picture-word and word-picture conditions. Informing subjects in advance of the change in modality significantly improved picture-word performance.

17.
This study explores implicit memory within the domain of text processing. Three experiments were designed to study cross-modality priming in a word-stem completion test following presentation of target words in the context of a coherent text. Four main results emerged. First, we found a significant priming effect for words previously studied in a text; this priming was higher with low-frequency words than with high-frequency words. Second, subjects demonstrated more repetition priming when study and test modalities matched than when they were different. Third, the magnitude of the priming effect in the visual condition varied with the perceptual processing of the text read. Fourth, priming effects did not depend on subjects' remembering of the words of the text read, as measured by a yes/no recognition test, since no modality effect was found in this latter memory test. These results challenge Levy's (1993) view and are discussed in the framework of the transfer-appropriate processing view proposed by Roediger, Weldon and Challis (1989).

18.
In this study, we investigated the effects of various interpolated tasks on hypermnesia (improved recall across repeated tests) for pictures and words. In five experiments, subjects studied either pictures or words and then completed two free-recall tests, with varying activities interpolated between the tests. The tasks performed between tests were varied to test several hypotheses concerning the possible factor(s) responsible for disruption of the hypermnesic effect. In each experiment, hypermnesia was obtained in a control condition in which there was no interpolated task between tests. The remaining conditions showed that the effect of the interpolated tasks was related to the overlap of the cognitive processes involved in encoding the target items and performing the interpolated tasks. When pictures were used as the target items, no hypermnesia was obtained when subjects engaged in imaginal processing interpolated tasks, even when these tasks involved materials that were very distinct from the target items. When words were used as the target items, no hypermnesia was obtained when the interpolated tasks required verbal/linguistic processing, even when the items used in these tasks were auditorily presented. The results are discussed in terms of a strength-based model of associative memory.

19.
20.
Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity via different modality inputs. The current study used event-related functional MRI in 17 healthy participants to directly compare activation in response to randomly presented famous and non-famous names and faces (25 stimuli in each of the four categories). Findings indicated distinct areas of activation that differed for faces and names in regions typically associated with pre-semantic perceptual processes. In contrast, overlapping brain regions were activated in areas associated with the retrieval of biographical knowledge and associated social affective features. Specifically, activation for famous faces was primarily right-lateralized and for famous names was left-lateralized. However, for both stimuli, similar areas of bilateral activity were observed in the early phases of perceptual processing. Activation for fame, irrespective of stimulus modality, activated an extensive left hemisphere network, with bilateral activity observed in the hippocampi, posterior cingulate, and middle temporal gyri. Findings are discussed within the framework of recent proposals concerning the neural network of person identification.

