Similar documents
20 similar documents found (search time: 15 ms).
1.
The neural correlates of naming concrete entities such as tools (with nouns) and naming actions (with verbs) are partially distinct: the former are linked to the left inferotemporal (IT) region, whereas the latter are linked to the left frontal opercular (FO) and left posterior middle temporal (MT) regions. This raises an intriguing question: How would such neural patterns be influenced by noun-verb homonymy, specifically, naming tasks in which the target words denote objects or actions (e.g., "comb")? To explore this, we conducted a PET study in which 10 normal participants named visually presented tools or actions. The factor of homonymy yielded interesting effects: For tools, non-homonymous nouns (e.g., "camera") activated left IT, whereas homonymous nouns (e.g., "comb") activated both left IT and left FO. For actions, non-homonymous (e.g., "juggle") and homonymous (e.g., "comb") verbs activated left FO, MT, and IT, but there was evidence that the FO and MT activations were less widespread for the homonymous verbs. We also found that retrieval of the very same words (e.g., "comb" and "comb") produced differential activation in left MT: there was greater MT activation when the words were being used to name actions than when they were being used to name tools. Our results suggest that noun-verb homonymy has an important influence on the patterns of neural activation associated with words denoting objects and actions, and that even when the phonological forms are identical, the patterns of neural activation differ according to the demands of the task.

2.
Stroop-like stimuli were presented to either the left or the right visual half-field. Subjects responded to the identity of the words "above" and "below" (the target dimension), which appeared above or below a reference point (the cuing dimension). Automatic Stroop-like effects were assessed as the difference in reaction times between congruent trials (e.g., the word "above" presented above the reference point) and incongruent trials (e.g., the word "above" presented below the reference point) when both trial types were equally frequent. In blocks in which most trials were of one type (e.g., 80% congruent trials), controlled Stroop-like effects could be assessed. Automatic Stroop-like effects remained unchanged under different task manipulations. In contrast, controlled Stroop-like effects were reduced by lowering cue-response compatibility and by increasing the number of response alternatives from two to four. Thus, similar to other cuing effects, controlled Stroop-like effects are susceptible to manipulations that affect the response-decision stage and appear to involve response-selection processes. The resources supporting these response-selection decisions were not hemisphere-specific, and were sufficiently nonspecific that interference from a memory-load task was found. When resources were scarce, a consistent bias to attend to stimuli presented or responded to on the right was evident.
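Both the automatic and the controlled Stroop-like effects described above reduce to the same arithmetic: mean reaction time on incongruent trials minus mean reaction time on congruent trials, computed separately for 50/50 blocks and for blocks biased toward one trial type. A minimal sketch of that computation follows; the trial tuples and block labels are invented for illustration, not data or code from the study.

```python
# Hypothetical sketch: congruency (Stroop-like) effects as RT differences.
# "equal" marks 50/50 blocks (automatic effect); "biased" marks 80%-congruent
# blocks (controlled effect). All values below are invented.
from statistics import mean

trials = [
    # (block, congruency, rt_ms)
    ("equal",  "congruent",   520), ("equal",  "incongruent", 560),
    ("equal",  "congruent",   515), ("equal",  "incongruent", 575),
    ("biased", "congruent",   500), ("biased", "incongruent", 610),
    ("biased", "congruent",   505), ("biased", "incongruent", 620),
]

def congruency_effect(block):
    """Mean incongruent RT minus mean congruent RT within one block type."""
    rts = lambda cong: [rt for b, c, rt in trials if b == block and c == cong]
    return mean(rts("incongruent")) - mean(rts("congruent"))

print("automatic (50/50 blocks):  ", congruency_effect("equal"), "ms")
print("controlled (80% congruent):", congruency_effect("biased"), "ms")
```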

3.
We examined the conditions under which short-term associations between stimuli and responses can produce spatial Simon effects. On location-relevant trials, participants gave neutral responses (i.e., they uttered the nonsense syllable "bee" or "boo") on the basis of whether the presented word had the meaning of "left" or "right." On location-irrelevant trials, they gave the same responses on the basis of the color of left and right squares. Performance on the location-irrelevant trials was affected by the match between the irrelevant location information and the location to which the correct response was assigned on the location-relevant trials. Experiment 1 showed that this extrinsic Simon effect was found only when the words on the location-relevant trials came from two different languages. In Experiment 2, we found an extrinsic Simon effect even when participants only received instructions about how to respond on location-relevant trials but no such trials were actually presented. Our findings suggest that task demands determine whether short-term associations are mode specific or mode independent and confirm that such associations can be set up as the result of instructions only.

4.
We investigated the effects of language on vision by focusing on a well-known problem: the binding and maintenance of color-location conjunctions. Four-year-olds performed a task in which they saw a target (e.g., a split square, red on the left and green on the right) followed by a brief delay and then were asked to find the target in an array including the target, its reflection (e.g., red on the right and green on the left), and a square with a different geometric split. Errors were overwhelmingly reflections. This finding shows that the children failed to maintain color-location conjunctions. Performance improved when targets were accompanied by sentences specifying color and direction (e.g., "the red is on the left"), but not when the conjunction was highlighted using a nonlinguistic cue (e.g., flashing, pointing, changes in size), nor when sentences specified a nondirectional relationship (e.g., "the red is touching the green"). The relation between children's matching performance and their long-term knowledge of directional terms suggests two distinct mechanisms by which language can temporarily bridge delays, providing more stable representations.

5.
Recent studies have shown that spatial Simon effects can be modulated by short-term associations that are set up as a result of task instructions. I examined whether spatial Simon effects can also be produced by short-term associations even when the responses are unrelated to spatial position. Participants were to say “cale” or “cole” on the basis of the direction of arrows (i.e., left or right), the meaning of words (i.e., left or right), and the color of squares presented left or right of the screen center. Responses to squares were faster when the correct response was associated with the same position as the irrelevant position of the square (e.g., say “cale” to a square on the left when “cale” was assigned to the word left and the left arrow). This new type of stimulus-response compatibility effect provides the first evidence for short-term associations that involve mode-independent representations.
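The extrinsic Simon effect reported here is again a reaction-time difference: each square trial is scored as corresponding or noncorresponding according to whether the square's irrelevant position matches the position tied to the correct vocal response by the instructed word/arrow mappings. A minimal sketch under those assumptions follows; the response-to-position mapping and all trial values are invented for illustration.

```python
# Hypothetical sketch: scoring an extrinsic Simon effect from vocal responses.
# The assumed instructed associations map each response to a position
# ("cale" -> left, "cole" -> right); all RTs below are invented.
from statistics import mean

response_position = {"cale": "left", "cole": "right"}

square_trials = [
    # (correct_response, square_position, rt_ms)
    ("cale", "left",  480), ("cale", "right", 515),
    ("cole", "right", 470), ("cole", "left",  520),
]

def mean_rt(corresponding):
    """Mean RT over square trials whose correspondence status matches the flag."""
    return mean(rt for resp, pos, rt in square_trials
                if (response_position[resp] == pos) == corresponding)

simon_effect = mean_rt(False) - mean_rt(True)
print(f"extrinsic Simon effect: {simon_effect:.0f} ms")
```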

6.
According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control.

7.
This study tested the hypotheses that people had a bias for drawing agents on the left of a picture when given a verb stimulus targeting an active or passive event (e.g., "kicked" or "is kicked") and that orthographic directionality would influence the way events were illustrated. Monolingual English speakers, who read and write left-to-right, and Arabic speakers, who read and write right-to-left, drew agents and patients in response to verb stimuli. We found no significant orthographic directionality effects and no preference for positioning agents on the left of pictures in either group or sentence type. Instead, participants drew agents on the right regardless of language or sentence type, and this was exaggerated in English speakers illustrating passive verbs. These findings support the existence of a preference for placing agents in the right hemispace that may result from asymmetrical hemispheric (i.e., left > right) activation induced by language processing. Our results are consistent with findings that people prefer pictures in which focus is on the right, a preference strongest in pictures with no implicit directionality of movement. This suggests that the methodology of the current study encouraged a static rather than dynamic interpretation of the verb in most participants.

8.
Davis, C., Kim, J., & Forster, K. I. (2008). Cognition, 107(2), 673-684.
This study investigated whether masked priming is mediated by existing memory representations by determining whether nonword targets would show repetition priming. To avoid the potential confound that nonword repetition priming would be obscured by a familiarity response bias, the standard lexical decision and naming tasks were modified to make targets unfamiliar. Participants were required to read a target string from right to left (i.e., "ECAF" should be read as "FACE") and then make a response. To examine whether priming was based on lexical representations, repetition primes consisted of words when read forwards or backwards (e.g., "face", "ecaf") and nonwords (e.g., "pame", "emap"). Forward and backward primes were used to test whether the task instruction affected prime encoding. The lexical decision and naming tasks showed the same pattern of results: priming only occurred for forward primes with word targets (e.g., "face-ECAF"). Additional experiments testing whether response priming affected the lexical decision task indicated that the lexical status of the prime per se did not affect target responses. These results showed that the encoding of masked primes was unaffected by the novel task instruction and support the view that masked priming is due to the automatic triggering of pre-established computational processes based on stored information.

9.
Generic noun phrases ("Birds lay eggs") are important for expressing knowledge about abstract kinds. The authors hypothesized that genericity would be part of gist memory, such that young children would appropriately recall whether sentences were presented as generic or specific. In 4 experiments, preschoolers and college students (N = 280) heard a series of sentences in either generic form (e.g., "Bears climb trees") or specific form (e.g., "This bear climbs trees") and were asked to recall the sentences following a 4-min distractor task. Participants in all age groups correctly distinguished between generic and specific noun phrases (NPs) in their recall, even when forgetting the details of the NP form. Memory for predicate content (e.g., "climb trees") was largely unaffected by genericity, although memory for category labels (e.g., "bear") was at times better for those who heard sentences with generic wording. Overall, these results suggest that generic form is maintained in long-term memory even for young children and thus may serve as the foundation for constructing knowledge about kinds.

10.
In 3 experiments, the authors manipulated response instructions for 2 concurrently performed tasks. Specifically, the authors' instructions described left and right keypresses on a manual task either as left versus right or as blue versus green keypresses and required either "left" versus "right" or "blue" versus "green" concurrent verbalizations. When instructions for responses on the 2 tasks were in terms of location (Experiment 1) or color (Experiments 2a and 2b), then compatible responses on the tasks were faster than incompatible responses. However, when the verbal task required "left" versus "right" responses but instructions for manual keypresses referred to blue versus green (Experiments 3a and 3b), then no response compatibility effects were observed. These results suggest that response labels used in the instruction directly determine the codes that are used to control responding.

11.
In choice reaction time tasks, responses are faster for some stimulus-response combinations (e.g., those based on spatial arrangement) than for others. To test whether motion toward a position yields faster responses at that position, a computer-generated square in front of one hand appeared to move either toward that hand or toward the other hand. Compatible responses (e.g., motion toward the left hand paired with a left response) were faster than incompatible responses, even when this opposed traditional positional compatibility. In Experiment 2, subjects responded to the same stimuli but with both hands on the left, on the right, or on the body midline. Medial responses were the fastest, showing that destination, rather than mere relative position, was a critical variable. It was suggested that spatial compatibility effects are not unique to position but apply to a variety of task situations describable by J. J. Gibson's theory of affordances, in which he claims that one perceives the actions (e.g., catching) permitted in a situation.

12.
Control by action representation and input selection (CARIS) is a modeling framework for task-switching experiments that treats action-related effects as critical constraints. It assumes that control operates by choosing values of control parameters representing input selection and action representation. Competing CARIS models differ in (a) whether the control parameters are determined by the current instructions or represent a perseveration, and (b) whether the current instructions apply to input selection and/or to action representation. According to the chosen model, (a) task execution results in a default bias in favor of the executed task, thus creating perseverative tendencies; (b) control counteracts these tendencies by applying a transient, momentary bias whose locus (input selection or action representation) changes as a function of task preparation time; and (c) this happens because the task cue (e.g., SHAPE) initially attracts attention to the immediately available cue information (e.g., the target's shape) and then attracts it to inferred or retrieved information (e.g., "circle" is related to the right key press).

13.
Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., "to throw," "to write") and verbs describing nonmanual actions (e.g., "to earn," "to wander"). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

14.
Three experiments measured serial position functions for character-in-string identification in peripheral vision. In Experiment 1, random strings of five letters (e.g., P F H T M) or five symbols (e.g., λ Б Þ Ψ ¥) were briefly presented to the left or to the right of fixation, and identification accuracy was measured at each position in the string using a post-cued two-alternative forced-choice task (e.g., was there a T or a B at the 4th position?). In Experiment 2, performance with letter stimuli was compared with performance with familiar two-dimensional shapes (e.g., square, triangle, circle), and in Experiment 3 we compared digit strings (e.g., 6 3 7 9 2) with a set of keyboard symbols (e.g., % 4 @ < ?). Eye movements were monitored to ensure central fixation. The results revealed a triple interaction between the nature of the stimulus (letters/digits vs. symbols/shapes), eccentricity, and visual field. In all experiments this interaction reflected a selective left visual field advantage for letter and digit stimuli compared with symbol and shape stimuli for targets presented at the greatest eccentricity. The results are in line with the predictions of the modified receptive field hypothesis proposed by Tydgat and Grainger (2009) and with the predictions of the SERIOL2 model of letter string encoding.
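A serial position function of the kind measured here is simply the proportion of correct post-cued forced choices at each of the five string positions, tabulated separately for each stimulus type and visual field. A minimal sketch of that tabulation follows, assuming a flat trial list; the data and condition labels are invented for illustration.

```python
# Hypothetical sketch: proportion correct per string position for one condition,
# i.e., one point of a serial position function. All trials below are invented.
from collections import defaultdict

trials = [
    # (stimulus_type, visual_field, cued_position, correct)
    ("letters", "left", 1, True), ("letters", "left", 5, True),
    ("symbols", "left", 5, False), ("letters", "right", 5, False),
    ("symbols", "right", 3, True), ("letters", "left", 3, True),
]

def serial_position_function(stimulus_type, visual_field):
    """Proportion correct at each cued position within one condition."""
    hits, counts = defaultdict(int), defaultdict(int)
    for stim, field, pos, correct in trials:
        if stim == stimulus_type and field == visual_field:
            counts[pos] += 1
            hits[pos] += int(correct)
    return {pos: hits[pos] / counts[pos] for pos in sorted(counts)}

print(serial_position_function("letters", "left"))
```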

15.
Does the naming of clocks always require conceptual preparation? To examine this question, speakers were presented with analog and digital clocks that had to be named in Dutch using either a relative (e.g., "quarter to four") or an absolute (e.g., "three forty-five") clock time expression format. Naming latencies showed evidence of conceptual preparation when speakers produced relative time expressions to analog and digital clocks, but not when they used absolute time expressions. These findings indicate that conceptual mediation is not always mandatory for telling time, but instead depends on clock time expression format, supporting a multiple-route account of Dutch clock time naming.

16.
The authors conducted 4 experiments to test whether hemispheric lateralization occurs for the processing of geometric word–shape combinations. In 3 experiments, participants responded to geometric shapes combined with geometric words (square, circle, triangle). In the 4th experiment, stimuli were combinations of geometric shapes and nongeometric words. The authors predicted that it would take longer to respond in incongruent conditions (e.g., the word “square” combined with the shape of a circle) than in congruent conditions. The authors found the strongest incongruency effects for the dominant hemisphere, that is, the left hemisphere for responding to words and the right hemisphere for responding to shapes. A Shape Interfering Properties (SIP) hypothesis is offered as a possible explanation for these results.

17.
We investigated the possibility that implicit memory, like explicit memory, can be disrupted by proactive interference. Participants first viewed a list of words, with nontargets in the first half of the list and targets in the second. Nontargets were either similar in structure (e.g., "ANALOGY") or unrelated (e.g., "URGENCY") to the targets (e.g., "ALLERGY"). After several filler tasks, participants completed an implicit fragment-completion test (e.g., "A_L__GY") for the target items. Participants who viewed similar nontargets completed fewer fragments with target items and made more intrusions than did participants who viewed unrelated nontargets. Together with previous findings, these results suggest that similar nontargets can compete with target items to produce interference in implicit memory.

18.
If a subject who is sufficiently farsighted removes his corrective, positive lenses and looks with one eye, from a distance of one or a few meters, at a small lighted area such as the (continuously "on") indicator light of an electric toothbrush, razor, or smoke detector, and if a small object such as a pin is then moved slowly from above to below the subject's eyes (in a plane close to the eye), the subject will perceive the object moving normally from above to below until it encroaches on his view of the lighted area. The object will then be seen to encroach first on the bottom of the lighted area, and as the object continues to move down it will be seen to be moving up across the lighted area, exiting the lighted area at the top. Similarly, an object moved in front of the eye from the subject's left to his right will be seen by the subject to traverse the lighted area in the reverse direction, right to left, even though the subject moves the object himself. Also, while the object is in front of the lighted area, it is perceived as an upside-down silhouette having surprisingly clear and sharp edges, and it appears to be located on the lighted area rather than close to the eye, where it really is.

19.
The present paper reviews data from two previous studies in our laboratory, as well as some additional new data, on the neuronal representation of movement and pain imagery in a subject with an amputated right arm. The subject imagined painful and non-painful finger movements in the amputated stump while lying in an MRI scanner, during acquisition of EPI images for fMRI analysis. In Study I (Ersland et al., 1996) the subject alternated between tapping with his intact left-hand fingers and imagining "tapping" with the fingers of his amputated right arm. The results showed increased neuronal activation in the right motor cortex (precentral gyrus) when tapping with the fingers of the left hand, and a corresponding activation in the left motor cortex when imagining tapping with the fingers of the amputated right arm. Tapping with the intact left-hand fingers also resulted in a larger activated precentral area than imagined "finger tapping" with the amputated right-arm fingers. In Study II (Rosen et al., 2001, in press) the same subject imagined painful and pleasurable finger movements, as well as still positions of the fingers of the amputated arm. The results showed larger activations over the motor cortex for imagining movement versus imagining the hand in a still position, and larger activations over the sensory cortex when imagining painful experiences. It can therefore be concluded not only that imagery activates the same motor areas as real finger movements, but also that adding instructions about pain while imagining moving the fingers intensified the activation compared with adding instructions about non-painful experiences. From these studies it is clear that areas activated during actual motor execution are, to a large extent, also activated during mental imagery of the same motor commands. In this respect the present studies add to studies of visual imagery that have shown a similar correspondence in activation between actual object perception and imagery of the same object.

20.
This study examines the effects of semantic satiation on lexical ambiguity resolution. On a given trial, participants were presented with a word triad. The first word (e.g., HEART) was presented on average 2.5, 12.5, or 22.5 times, and then participants received 2 new words for relatedness judgments. The first of the two new words was always a homograph (e.g., "ORGAN") and the other word was a related or unrelated pairmate (e.g., "KIDNEY"). In Experiment 1, when blocks of trials were intermixed with concordant (e.g., "HEART-ORGAN-KIDNEY"), discordant (e.g., "PIANO-ORGAN-KIDNEY"), and neutral (e.g., "CEILING-ORGAN-KIDNEY") trials, participants did not produce evidence of semantic satiation. In a second experiment in which only concordant and neutral trials were presented, however, participants did produce evidence of semantic satiation in the concordant condition. Taken together, Experiments 1 and 2 indicate that semantic satiation of the context-appropriate meaning of a homograph may impede ambiguity resolution.
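Satiation in this design would show up as a cost on the relatedness judgment that grows with the number of repetitions of the first word in the concordant condition but not in the neutral baseline. A minimal sketch of that comparison follows; the reaction times are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: change in relatedness-judgment RT from few to many
# repetitions of the first word, by condition. All values below are invented.
from statistics import mean

trials = [
    # (condition, mean_repetitions, rt_ms)
    ("concordant", 2.5, 640), ("concordant", 12.5, 675), ("concordant", 22.5, 710),
    ("neutral",    2.5, 700), ("neutral",    12.5, 705), ("neutral",    22.5, 698),
]

def mean_rt(condition, repetitions):
    return mean(rt for c, r, rt in trials if c == condition and r == repetitions)

for cond in ("concordant", "neutral"):
    # A satiation cost appears as slower judgments at higher repetition counts
    # in the concordant condition relative to the neutral baseline.
    effect = mean_rt(cond, 22.5) - mean_rt(cond, 2.5)
    print(f"{cond}: RT change from 2.5 to 22.5 repetitions = {effect:.0f} ms")
```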
