Similar documents
Found 20 similar documents (search time: 0 ms)
1.
Semantic facilitation with pictures and words   (total citations: 1; self-citations: 0; citations by others: 1)
The present experiments explored the role of processing level and strategic factors in cross-form (word-picture and picture-word) and within-form (picture-picture and word-word) semantic facilitation. Previous studies have produced mixed results. The findings presented in this article indicate that semantic facilitation depends on the task and on the subjects' strategies. When the task required semantic processing of both picture and word targets (e.g., category verification), equivalent facilitation was obtained across all modality combinations. When the task required name processing (e.g., name verification, naming), facilitation was obtained for the picture targets. In contrast, with word targets, facilitation was obtained only when the situation emphasized semantic processing. The results are consistent with models that propose a common semantic representation for both pictures and words but that also include assumptions regarding differential order of access to semantic and phonemic features for these stimulus modalities.

2.
A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine whether strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs. Participants made an on-line semantic relatedness judgment in response to sequentially presented pictures. We found that when pictures are presented to the right hemisphere, semantic relatedness judgments for picture pairs are generally more accurate than when pictures are presented to the left hemisphere. Furthermore, consistent with earlier DVF studies employing words, we conclude that the RH is better at accessing, or maintaining access to, information that has a weak or more remote semantic relationship. We also found evidence of faster access for pictures presented to the LH in the strongly related condition. Overall, these results are consistent with earlier DVF word studies that argue that the cerebral hemispheres each play an important and separable role during semantic retrieval.

3.
Yap DF, So WC, Yap JM, Tan YQ, Teoh RL. Cognitive Science, 2011, 35(1): 171-183
Using a cross-modal semantic priming paradigm, both experiments of the present study investigated the link between the mental representations of iconic gestures and words. Two groups of participants performed a primed lexical decision task in which they had to discriminate between visually presented words and nonwords (e.g., flirp). Word targets (e.g., bird) were preceded by video clips depicting either semantically related (e.g., pair of hands flapping) or semantically unrelated (e.g., drawing a square with both hands) gestures. The duration of gestures was on average 3,500 ms in Experiment 1 but only 1,000 ms in Experiment 2. Significant priming effects were observed in both experiments, with faster response latencies for related gesture-word pairs than unrelated pairs. These results are consistent with the idea of interactions between the gestural and lexical representational systems, such that mere exposure to iconic gestures facilitates the recognition of semantically related words.

4.
Words become associated following repeated co-occurrence episodes. This process might be further determined by the semantic characteristics of the words. The present study focused on how semantic and episodic factors interact in the incidental formation of word associations. First, we found that human participants associate semantically related words more easily than unrelated words; this advantage increased linearly with repeated co-occurrence. Second, we developed a computational model, SEMANT, suggesting a possible mechanism for this semantic-episodic interaction. In SEMANT, episodic associations are implemented through lateral connections between nodes in a pre-existing self-organized map of word semantics. These connections are strengthened at each instance of concomitant activation, in proportion to the overlap between the activity waves of the activated nodes. In computer simulations SEMANT replicated the dynamics of associative learning in humans and led to testable predictions concerning normal associative learning as well as impaired learning in a diffuse semantic system like that characteristic of schizophrenia.
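The learning rule described above can be sketched in a few lines. This is a hypothetical illustration of the general mechanism, not the published SEMANT implementation: all names, the 1-D map, the Gaussian bump, and the parameter values are invented for the example.

```python
import math

# Illustrative SEMANT-style mechanism (parameters are assumptions): each word
# activates a Gaussian "bump" of activity on a 1-D semantic map, and the
# lateral (episodic) connection between two words is strengthened on each
# co-occurrence in proportion to the overlap of their activity waves.

MAP_SIZE = 50          # number of map nodes
SIGMA = 3.0            # width of the activation bump
LEARNING_RATE = 0.1    # strengthening per co-occurrence episode

def activation(center, size=MAP_SIZE, sigma=SIGMA):
    """Gaussian activity wave centred on a word's map position."""
    return [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(size)]

def overlap(wave_a, wave_b):
    """Shared activity between two waves (sum of pointwise minima)."""
    return sum(min(a, b) for a, b in zip(wave_a, wave_b))

def co_occur(weight, pos_a, pos_b):
    """One co-occurrence episode: strengthen the lateral connection."""
    return weight + LEARNING_RATE * overlap(activation(pos_a), activation(pos_b))

# Semantically related words sit close together on the map, so their waves
# overlap more and the association grows faster across repeated episodes.
w_related, w_unrelated = 0.0, 0.0
for _ in range(5):  # five co-occurrence episodes
    w_related = co_occur(w_related, 20, 23)     # nearby map positions
    w_unrelated = co_occur(w_unrelated, 5, 40)  # distant map positions

print(w_related > w_unrelated)  # the related pair acquires the stronger link
```

Because the per-episode increment is constant for a fixed word pair, the connection grows linearly with repetitions, mirroring the linear advantage reported for human participants.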

5.
Considerable work during the past two decades has focused on modeling the structure of semantic memory, although the performance of these models in complex and unconstrained semantic tasks remains relatively understudied. We introduce a two-player cooperative word game, Connector (based on the boardgame Codenames), and investigate whether similarity metrics derived from two large databases of human free association norms, the University of South Florida norms and the Small World of Words norms, and two distributional semantic models based on large language corpora (word2vec and GloVe) predict performance in this game. Participant dyads were presented with 20-item word boards with word pairs of varying relatedness. The speaker received a word pair from the board (e.g., exam-algebra) and generated a one-word semantic clue (e.g., math), which was used by the guesser to identify the word pair on the board across three attempts. Response times to generate the clue, as well as accuracy and latencies for the guessed word pair, were strongly predicted by the cosine similarity between word pairs and clues in random walk-based associative models, and to a lesser degree by the distributional models, suggesting that conceptual representations activated during free association were better able to capture search and retrieval processes in the game. Further, the speaker adjusted subsequent clues based on the first attempt by the guesser, who in turn benefited from the adjustment in clues, suggesting a cooperative influence in the game that was effectively captured by both associative and distributional models. These results indicate that both associative and distributional models can capture relatively unconstrained search processes in a cooperative game setting, and Connector is particularly suited to examine communication and semantic search processes.
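The core similarity metric in studies like this one is cosine similarity between a clue's vector and each pair word's vector. A minimal sketch follows; the 4-dimensional "embeddings" are invented for illustration (the study used vectors derived from free-association norms and from word2vec/GloVe trained on large corpora), and `clue_fit` is a hypothetical helper, not a function from the study.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" (hypothetical values for illustration only).
vectors = {
    "exam":    [0.9, 0.1, 0.3, 0.0],
    "algebra": [0.8, 0.2, 0.1, 0.1],
    "math":    [0.85, 0.15, 0.2, 0.05],
    "banana":  [0.0, 0.9, 0.0, 0.8],
}

def clue_fit(clue, pair):
    """Average cosine similarity of a clue to both words of a pair."""
    return sum(cosine(vectors[clue], vectors[w]) for w in pair) / len(pair)

# A good clue ("math") should fit the pair exam-algebra better than an
# unrelated word ("banana") does.
print(clue_fit("math", ["exam", "algebra"]) > clue_fit("banana", ["exam", "algebra"]))
```

In the actual models the vectors come from association norms (e.g., via random walks over the free-association graph) or from corpus-trained embeddings, but the scoring step is the same cosine comparison.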

6.
The semantic category effect represents a category dissociation between biological and nonbiological objects in picture naming. The aim of this preliminary study was to further examine this phenomenon, and to explore the possible association between the effect and subjective emotional valence for the named objects. Using a speeded picture naming task, vocal reaction times for 45 items were divided into four categories based on emotional valence rating and semantic category, and examined in 36 female university students. Analyses of the data indicated an "animate/inanimate" category dissociation favouring animate objects, in tandem with a potential relationship between subjective emotional valence and semantic processing underlying picture naming.

7.
ABSTRACT

The ability of young (aged 18–30) and older (aged 60–80) adults to discriminate pre-experimental (semantic) from experimental (episodic) associations was examined. Participants studied a list containing semantically related and unrelated word pairs and then made either associative recognition (Experiments 1a and b) or semantic relatedness (Experiment 2) judgments at various response deadlines. For associative recognition judgments, both young and older adults benefited from semantic relatedness, leading to more hits for related than unrelated pairs, and at the long response deadline, older adults' performance on those pairs matched that of young participants. Also, both young and older adults demonstrated superior discrimination for unrelated lures whose members had originally been studied in related pairs – evidence for recall-to-reject processing in both age groups. In making semantic relatedness judgments, both young and older adults showed an episodic priming effect. When older adults can rely on long-standing associations, their performance resembles that of young adults – both in associative recognition and in episodic priming.

8.
Two experiments investigated inhibitory processes in visual spatial attention. In particular, factors influencing the suppression of ignored visual stimuli were investigated. Subjects responded to geometric shapes while distractors were presented in the periphery. Distractors consisted of a single word, a pair of unrelated words, or a single word paired with a string of non-linguistic symbols. Semantic processing of the ignored words was measured with a subsequent lexical decision task. Test probes presented after the prime displays revealed suppression effects for words semantically related to a previously ignored word, but only for conditions in which two distractor words were presented. Suppression was not observed when the prime consisted of a single word or a single word paired with non-linguistic symbols. In Experiment 2, two different time delays between the onset of the primes and the onset of the test probes were used. At the shorter interval, facilitatory priming was observed, while at the longer interval, suppression was observed. The facilitation-suppression pattern suggests that ignored items are recognized before being suppressed. In summary, the results suggest the following: (1) that selectively attending does not restrict ignored items from gaining access to their semantic representations, and (2) that inhibition is an important process in determining the focus of attention. These results are discussed within a selective attention framework in which all items in a display gain access to memory representations, and attention to selected items causes competing items to be inhibited.

9.
Infants younger than 20 months of age interpret both words and symbolic gestures as object names. Later in development words and gestures take on divergent communicative functions. Here, we examined patterns of brain activity to words and gestures in typically developing infants at 18 and 26 months of age. Event-related potentials (ERPs) were recorded during a match/mismatch task. At 18 months, an N400 mismatch effect was observed for pictures preceded by both words and gestures. At 26 months the N400 effect was limited to words. The results provide the first neurobiological evidence showing developmental changes in semantic processing of gestures.

10.
Abstract

The ability of Italian subjects to make phonological judgements was investigated in three experiments. The judgements comprised initial sound similarity and stress assignment on pairs of both written words and pictures. Stress assignment on both words and pictures, as well as initial sound similarity on pictures, required the activation of phonological lexical representations, but this was not necessarily the case for initial sound similarity judgements on word pairs. The first study assessed the effects of concurrent articulatory suppression on the judgements. Experiment 2 used a concomitant task (chewing), which shares with suppression the use of articulatory components but does not involve speech programming and production. The third experiment investigated the effects of unattended speech on the phonological judgements. The results of these three experiments showed that articulatory suppression had a significant disrupting effect on accuracy in all four conditions, while neither articulatory non-speech activity (chewing) nor unattended auditory speech had any effect on the subjects' performance. The results suggest that these phonological judgements involve the operation of an articulatory speech output component, which is not implemented peripherally and does not require the involvement of a non-articulatory input system.

11.
Earlier studies suggest that interhemispheric processing increases the processing power of the brain in cognitively complex tasks, as it allows the brain to divide the processing load between the hemispheres. We report two experiments suggesting that this finding does not generalize to word-picture pairs: they are processed at least as efficiently by a single hemisphere as when processing is divided between the two hemispheres. We examined whether dividing the stimuli between the visual fields/hemispheres would be more advantageous than unilateral stimulus displays in the semantic categorization of simultaneously presented pictures, words, and word-picture pairs. The results revealed that within-domain stimuli (semantically related picture pairs or word pairs) were categorized faster in bilateral than in unilateral displays, whereas cross-domain stimuli (word-picture pairs) were not. It is suggested that interhemispheric sharing of word-picture stimuli confers no advantage over unilateral processing conditions because words and pictures use different access routes, and therefore it may be possible to process simultaneously displayed word-picture stimuli in parallel within a single hemisphere.

12.
Graded interference effects were tested in a naming task, in parallel for objects and actions. Participants named either object or action pictures presented in the context of other pictures (blocks) that were semantically very similar, somewhat semantically similar, or semantically dissimilar. We found that naming latencies for both object and action words were modulated by the semantic similarity between the exemplars in each block, providing evidence in both domains of graded semantic effects.

13.
Sturt P. Cognition, 2007, 105(2): 477-488
Participants' eye movements were recorded while they read locally ambiguous sentences. Evidence for processing difficulty was found when the interpretation of the initially preferred misanalysis clashed with that of the globally correct analysis, demonstrating the persistence of the earlier interpretation. Processing difficulty associated with the syntactic reanalysis was largely localised to the disambiguating region, with difficulty due to semantic persistence occurring later. The results show that semantic persistence is not limited to extreme cases of parse failure, and can occur even when reanalysis is relatively straightforward.

14.
The present study examined the general hypothesis that, as for nouns, stable representations of semantic knowledge relative to situations expressed by verbs are available and accessible in long-term memory in normal people. Regular associations between verbs and past tenses in French made it possible to abstract two superordinate semantic features in the representation of verb meaning: durativity and resultativity. A pilot study was designed to select appropriate items according to these features: durative, non-resultative verbs and non-durative, resultative verbs. An experimental study was then conducted to assess semantic priming in French adults with two visual semantic-decision tasks at a 200- and 100-ms SOA. In the durativity decision task, participants had to decide if the target referred to a durable or non-durable situation. In the resultativity decision task, they had to decide if it referred to a situation with a directly observable outcome or without any clear external outcome. Targets were preceded by similar, opposite, and neutral primes. Results showed that semantic priming can tap verb meaning at a 200- and 100-ms SOA, with the restriction that only the positive value of each feature benefited from priming, that is, the durative and resultative values. Moreover, processing of durativity and resultativity is far from comparable, since facilitation was shown on the former with similar and opposite priming, whereas it was shown on the latter only with similar priming. Overall, these findings support Le Ny's (in: Saint-Dizier, Viegas (eds) Computational lexical semantics, 1995; Cahier de Recherche Linguistique LanDisCo 12:85–100, 1998; Comment l'esprit produit du sens, 2005) general hypothesis that classificatory properties of verbs could be interpreted as semantic features, and the view that semantic priming can tap verb meaning, as it does noun meaning.

15.
ABSTRACT

Recent research has demonstrated that patients with Alzheimer's disease (AD) show deficits in semantic processing when compared to cognitively healthy individuals. This difference is thought to be attributed to losses in higher cortical systems that are predominantly associated with executive functioning. The first aim of the study was to determine if differences in semantic clustering can accurately differentiate patients with amnestic mild cognitive impairment (aMCI) from cognitively normal (CN) individuals. The second aim was to determine the extent to which semantic processing might be associated with executive functions. Data from 202 (134 CN, 68 aMCI) participants were analyzed to quantify differences in semantic clustering ratios on the HVLT-R. Study participants' ages ranged from 51 to 87, with education ranging from 6 to 20 years. ANCOVA revealed statistically significant differences on semantic clustering ratios (p < .001). Moderate correlations between semantic clustering and Category Fluency Test performance (r = .45) were also found. Statistically significant group differences were also present on Trails-B and WAIS-R Digit Symbol performance (p < .001). Overall, these data indicate that deficits in semantic clustering are present in aMCI patients.
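A semantic clustering ratio of the kind computed above can be illustrated with a generic definition: same-category adjacencies in the recall sequence divided by the maximum number of adjacencies possible. This is one common formulation, not necessarily the exact HVLT-R scoring used in the study; the word lists and categories below are invented for the example.

```python
# Hypothetical illustration of a semantic clustering ratio for a recalled
# word list: adjacent recall pairs drawn from the same semantic category,
# divided by the total number of adjacent pairs. Higher values indicate
# recall organized by category.

CATEGORIES = {
    "lion": "animal", "tiger": "animal", "horse": "animal",
    "ruby": "gem", "emerald": "gem", "pearl": "gem",
}

def clustering_ratio(recalled):
    """Observed same-category adjacencies / possible adjacencies."""
    pairs = list(zip(recalled, recalled[1:]))
    if not pairs:
        return 0.0
    same = sum(CATEGORIES[a] == CATEGORIES[b] for a, b in pairs)
    return same / len(pairs)

clustered = ["lion", "tiger", "horse", "ruby", "emerald", "pearl"]
scattered = ["lion", "ruby", "tiger", "emerald", "horse", "pearl"]
print(clustering_ratio(clustered))  # 4 of 5 adjacent pairs share a category: 0.8
print(clustering_ratio(scattered))  # no adjacent pair shares a category: 0.0
```

A group-level deficit like the one reported for aMCI patients would show up as systematically lower ratios despite similar numbers of words recalled.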

16.
Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture–word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading-ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.

17.
In this study we examined conceptual priming using environmental sounds and visually displayed words. Priming for sounds and words was observed in response latency as well as in event-related potentials. Reactions were faster when a related word followed an environmental sound and vice versa. Moreover, both stimulus types produced an N400 effect for unrelated compared to related trials. The N400 effect had an earlier onset for environmental sounds than for words. The results support the theoretical notion that conceptual processing may be similar for verbal and non-verbal stimuli.

18.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

19.
Undergraduates were shown pictures or corresponding labels and then were tested for recognition either in the same mode or in a cross-over mode. Significantly more items were recognized in the picture-picture condition than in the picture-word and word-picture conditions. Informing subjects in advance of the change in modality significantly improved picture-word performance.

20.
It has been suggested that unconscious semantic processing is stimulus-dependent, and that pictures might have privileged access to semantic content. Those findings led to the hypothesis that the unconscious semantic priming effect would be stronger for pictorial stimuli than for verbal stimuli. This effect was tested on pictures and words by manipulating the semantic similarity between the prime and target stimuli. Participants performed a masked priming categorization task for either words or pictures with three semantic similarity conditions: strongly similar, weakly similar, and non-similar. Significant differences in reaction times were found only between strongly similar and non-similar and between weakly similar and non-similar conditions, for both pictures and words, with faster overall responses for pictures than for words. Nevertheless, pictures showed no superior priming effect over words. This suggests that even though semantic processing is faster for pictures, it does not imply a stronger unconscious priming effect.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号