Similar Articles
1.
We examined the influence of semantic transparency on morphological facilitation in English in three lexical decision experiments. Decision latencies to visual targets (e.g., CASUALNESS) were faster after semantically transparent (e.g., CASUALLY) than semantically opaque (e.g., CASUALTY) primes, whether primes were auditory and presented immediately before onset of the target (Experiment 1a) or visual with a stimulus onset asynchrony (SOA) of 250 ms (Experiment 1b). Latencies did not differ at an SOA of 48 ms (Experiment 2) or with a forward mask at an SOA of 83 ms (Experiment 3). Generally, effects of semantic transparency among morphological relatives were evident at long but not at short SOAs with visual targets, regardless of prime modality. Moreover, the difference in facilitation after opaque and transparent primes was graded and increased with family size of the base morpheme.
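
To make the timing manipulation concrete, the sketch below shows one way a prime-target trial timeline with a given SOA (and an optional forward mask, as in Experiment 3) could be scheduled. It is a minimal plain-Python illustration; the function and event names are assumptions for exposition, not the authors' presentation software.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrialEvent:
    label: str        # e.g., "forward_mask", "prime:CASUALLY", "target:CASUALNESS"
    onset_ms: float   # onset relative to trial start
    duration_ms: float

def build_priming_trial(prime: str, target: str, soa_ms: float,
                        mask: Optional[str] = None,
                        mask_duration_ms: float = 35.0) -> List[TrialEvent]:
    """Schedule a visual priming trial: optional forward mask, then the prime,
    then the target whose onset follows prime onset by soa_ms (the SOA)."""
    events: List[TrialEvent] = []
    t = 0.0
    if mask is not None:
        events.append(TrialEvent("forward_mask", t, mask_duration_ms))
        t += mask_duration_ms
    # In this sketch the prime stays on screen until the target replaces it,
    # so its duration equals the SOA.
    events.append(TrialEvent(f"prime:{prime}", t, soa_ms))
    events.append(TrialEvent(f"target:{target}", t + soa_ms, 500.0))
    return events

# Example: a masked 83-ms SOA condition, analogous to Experiment 3.
for ev in build_priming_trial("CASUALLY", "CASUALNESS", soa_ms=83, mask="#######"):
    print(f"{ev.onset_ms:6.1f} ms  {ev.label}  ({ev.duration_ms:.0f} ms)")
```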

2.
Pseudohomophones were used in a primed naming task. In Experiments 1 and 2, target pseudowords that sounded like real words (e.g., CHARE) were preceded either by context words that related associatively to the word with which the target was homophonic (TABLE-CHARE) or by context words that were not associatively related (NOVEL-CHARE). Control pairs were TABLE-THARE and NOVEL-THARE (Experiment 1) and TABLE-CHARK and NOVEL-CHARK (Experiment 2). In relation to NOVEL, TABLE benefited the naming of CHARE but not the naming of THARE or CHARK. TAYBLE-CHAIR pairs were used in Experiment 3. If the prime TAYBLE activated /table/, then /chair/ would be activated associatively and the target CHAIR would be named faster than if TARBLE was the prime. Experiment 4 extended the design of Experiment 3 to include TABLE-CHAIR pairs and a comparison of a short (280 ms) and a long (500 ms) delay between context and target onsets. The priming due to associated pseudohomophones was unaffected by onset asynchrony and equal in magnitude to that due to associated words. Results suggest that lexical representations are coded and accessed phonologically.

3.
Previous studies have shown that attention is drawn to the location of manipulable objects and is distributed across pairs of objects that are positioned for action. Here, we investigate whether central, action-related objects can cue attention to peripheral targets. Experiment 1 compared the effect of uninformative arrow and object cues on a letter discrimination task. Arrow cues led to spatial-cueing benefits across a range of stimulus onset asynchronies (SOAs: 0 ms, 120 ms, 400 ms), but object-cueing benefits were slow to build and were only significant at the 400-ms SOA. Similar results were found in Experiment 2, in which the targets were objects that could be either congruent or incongruent with the cue (e.g., screwdriver and screw versus screwdriver and glass). Cueing benefits were not influenced by the congruence between the cue and target, suggesting that the cueing effects reflected the action implied by the central object, not the interaction between the objects. For Experiment 3, participants decided whether the cue and target objects were related. Here, the interaction between congruent (but not incongruent) targets led to significant cueing/positioning benefits at all three SOAs. Reduced cueing benefits were obtained in all three experiments when the object cue did not portray a legitimate action (e.g., a bottle pointing towards an upper location, since a bottle cannot pour upwards), suggesting that it is the perceived action that is critical, rather than the structural properties of individual objects. The data suggest that affordance for action modulates the allocation of visual attention.

4.
We report four picture-naming experiments in which the pictures were preceded by visually presented word primes. The primes could either be semantically related to the picture (e.g., "boat" - TRAIN: co-ordinate pairs) or associatively related (e.g., "nest" - BIRD: associated pairs). Performance under these conditions was always compared to performance under unrelated conditions (e.g., "flower" - CAT). In order to distinguish clearly the first two kinds of prime, we chose our materials so that (a) the words in the co-ordinate pairs were not verbally associated, and (b) the associate pairs were not co-ordinates. Results show that the two related conditions behaved in different ways depending on the stimulus-onset asynchrony (SOA) separating word and picture appearance, but not on how long the primes were presented. When presented with a brief SOA (114 ms, Experiment 1), the co-ordinate primes produced an interference effect, but the associated primes did not differ significantly from the unrelated primes. Conversely, with a longer SOA (234 ms, Experiment 2) the co-ordinate primes produced no effect, whereas a significant facilitation effect was observed for associated primes, independent of the duration of presentation of the primes. This difference is interpreted in the context of current models of speech production as an argument for the existence, at an automatic processing level, of two distinguishable kinds of meaning relatedness.

5.
Undergraduate students were presented with word pairs (e.g., egg-yolk) and were timed as they decided whether one word named part of the thing named by the other word. In Experiment 1, "no" responses to nonpart pairs (e.g., fish-flaps) were slowed by the similarity of the stimulus part (flaps) to a part that the stimulus object did possess (fins). This suggested that decisions were made by retrieving parts of the stimulus object from memory and comparing them to the stimulus part. Whereas the parts used as stimuli in Experiment 1 were nonspecific, belonging to several different types of object (e.g., wheel), those selected for Experiment 2 were specific to a single type of object (e.g., thumb). In Experiment 2, "no" responses to nonpart pairs (e.g., foot-thumb) were slowed by similarity of the stimulus object (foot) to an object that the stimulus part (thumb) belonged to (hand). This suggested that decisions were made by retrieving the object to which the stimulus part belonged and comparing it to the stimulus object. The results support a hybrid model of part-whole decisions that includes directed retrieval of relational knowledge from memory and a comparison process.
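
As a rough rendering of the hybrid model described above, the toy Python sketch below runs both retrieval routes (object-to-parts, then part-to-owning-object) and flags the "near-miss" cases that would slow a "no" response. The miniature knowledge base and the similarity check are invented for illustration; they are not the authors' materials or model code.

```python
# Toy knowledge base: object -> set of its parts (illustrative entries only).
PARTS = {
    "egg": {"yolk", "shell", "white"},
    "fish": {"fins", "gills", "scales", "tail"},
    "foot": {"toes", "heel", "sole"},
    "hand": {"thumb", "fingers", "palm"},
}

# Crude stand-in for graded semantic similarity between two concepts.
SIMILAR_PAIRS = {("flaps", "fins"), ("foot", "hand")}

def similar(a: str, b: str) -> bool:
    return (a, b) in SIMILAR_PAIRS or (b, a) in SIMILAR_PAIRS

def part_whole_decision(obj: str, part: str) -> str:
    """Route 1: retrieve the object's parts and compare them to the stimulus
    part. Route 2: retrieve the objects that own the part and compare them to
    the stimulus object. Near-miss matches correspond to the slowed 'no'
    responses reported in the experiments."""
    parts = PARTS.get(obj, set())
    if part in parts:
        return "yes"
    if any(similar(part, p) for p in parts):
        return "no (slowed: the object has a similar part)"
    owners = {o for o, ps in PARTS.items() if part in ps}
    if any(similar(obj, o) for o in owners):
        return "no (slowed: the part belongs to a similar object)"
    return "no (fast rejection)"

print(part_whole_decision("egg", "yolk"))    # "yes"
print(part_whole_decision("fish", "flaps"))  # slowed via "fins" (cf. Experiment 1)
print(part_whole_decision("foot", "thumb"))  # slowed via "hand" (cf. Experiment 2)
```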

6.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

7.
Several studies of metacontrast masking in the 1960s apparently showed that the latency of simple detection responses was uninfluenced by the phenomenal dimming of the target induced by the mask. More recent studies using more suitable methodologies have clearly shown that such is not the case for situations in which the masking is a monotonically decreasing function of stimulus onset asynchrony. Experiment 1 investigated this issue for the situation in which masking is a U-shaped function of stimulus onset asynchrony. Contrary to the results obtained in monotonic masking situations, simple detection responses were not slowed by the masking. Experiment 2 demonstrated that although detection responses are not slowed in the U-shaped masking situation, spatial-choice judgments are. Experiments 3 and 4 indicated that this masking effect on spatial-choice reaction time is lost relatively rapidly with practice. However, changing the stimulus-response assignments reinstates the effect. The experiments suggest that for the situation in which U-shaped masking functions are obtained, responses that require attention (spatial-choice judgments early in practice or after stimulus-response relationships have been switched) are influenced by the metacontrast-induced phenomenal dimming, whereas responses that are automatic (i.e., detection responses; practiced spatial-choice judgments with consistent stimulus-response mappings) are not.

8.
Research reported here concerns neural processes relating to stimulus equivalence class formation. In Experiment 1, two types of word pairs were presented successively to normally capable adults. In one type, the words had related usage in English (e.g., uncle, aunt). In the other, the two words were not typically related in their usage (e.g., wrist, corn). For pairs of both types, event-related cortical potentials were recorded during and immediately after the presentation of the second word. The obtained waveforms differentiated these two types of pairs. For the unrelated pairs, the waveforms were significantly more negative about 400 ms after the second word was presented, thus replicating the "N400" phenomenon of the cognitive neuroscience literature. In addition, there was a strong positive-tending waveform difference post-stimulus presentation (peaking at about 500 ms) that also differentiated the unrelated from related stimulus pairs. In Experiment 2, the procedures were extended to study arbitrary stimulus-stimulus relations established via matching-to-sample training. Participants were experimentally naïve adults. Sample stimuli (Set A) were trigrams, and comparison stimuli (Sets B, C, D, E, and F) were nonrepresentative forms. Behavioral tests evaluated potentially emergent equivalence relations (i.e., BD, DF, CE, etc.). All participants exhibited classes consistent with the arbitrary matching training. They were also exposed to an event-related potential procedure like that used in Experiment 1. Some received the ERP procedure before equivalence tests and some after. Only those participants who received ERP procedures after equivalence tests exhibited robust N400 differentiation initially. The positivity observed in Experiment 1 was absent for all participants. These results support speculations that equivalence tests may provide contextual support for the formation of equivalence classes, including those that emerge gradually during testing.

9.
In two experiments, subjects made pairs of lexical decisions verbally. In Experiment 1, masked stimuli appeared concurrently to the left and right of fixation; in Experiment 2, nonmasked stimuli appeared sequentially at fixation. The left-hand letter strings were judged more accurately in Experiment 1, and the second letter strings were judged more accurately in Experiment 2. Each string in the pair could be either a word (e.g., fork) or a nonword anagram (e.g., frok). Consequently, the two strings in the pair could be related (e.g., fork-spoon, frok-spoon, etc.) or unrelated (e.g., fork-door, frok-door, etc.), independently of whether neither, either, or both strings were words. Semantically related stimuli induced consistent biases to respond "word," as noted in other studies. These biases were typically stronger for the event reported second. Minimal evidence was found for perceptual priming effects. The asymmetrical effects were consistent with spreading-activation-type mechanisms, but other considerations support a multiple-process view.
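
The spreading-activation account invoked in the last sentence can be illustrated with a small simulation in which activation flows from a prime (or its anagram, if it contacts the same lexical entry) to associated nodes, pre-activating related targets. The toy network, weights, and decay parameter below are assumptions for illustration, not values from the study.

```python
from collections import defaultdict

# Toy associative network: symmetric links with association strengths.
LINKS = {
    ("fork", "spoon"): 0.8,
    ("fork", "knife"): 0.7,
    ("spoon", "soup"): 0.5,
}

def neighbours(node):
    for (a, b), weight in LINKS.items():
        if a == node:
            yield b, weight
        elif b == node:
            yield a, weight

def spread_activation(source, steps=2, decay=0.5):
    """Spread activation outward from a source node; at each step a node
    passes a decayed, weighted fraction of its activation to its neighbours."""
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = {source}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for other, weight in neighbours(node):
                delta = activation[node] * weight * decay
                if delta > activation[other]:
                    activation[other] = delta
                    next_frontier.add(other)
        frontier = next_frontier
    return dict(activation)

# Presenting "fork" (or "frok", if it activates the same entry) pre-activates
# "spoon", which biases a "word" response to the related second string.
print(spread_activation("fork"))
```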

10.
The relative time course of semantic and phonological activation was investigated in the context of whether phonology mediates access to lexical representations in reading Chinese. Compound words (Experiment 1) and single-character words (Experiments 2 and 3) were preceded by semantic and phonological primes. Strong semantic priming effects were found at both short (57 ms) and long (200 ms) stimulus onset asynchronies (SOAs), but phonological effects were either absent in lexical decision (Experiment 1), present only at the longer SOA in character decision (Experiment 2), or as strong as the semantic effects in naming (Experiment 3). Experiment 4 revealed facilitatory or inhibitory effects, depending on SOA, in phonological judgments to character pairs that were semantically but not phonologically related. It was concluded that, in reading Chinese, semantic information in the lexicon is activated at least as early and just as strongly as phonological information.

11.
Previous studies of the auditory analogue of repetition blindness have led to different conclusions regarding the nature of the effect (e.g., N. Kanwisher & M. C. Potter, 1989; M. Miller & D. MacKay, 1994). In the present study, recall accuracy for repeated elements was examined with lists of 2 or 3 items presented dichotically under high temporal pressure. When this procedure was used, a repetition deficit in recall was obtained for both vowels (Experiment 1) and consonant-vowel syllables (Experiment 2). Further experiments demonstrated that this deficit decreases as the stimulus onset asynchrony between the 2 critical elements increases (Experiment 3) and showed that the effect also occurs for words and not just nonsense syllables (Experiment 4). In all 4 experiments, estimations of guessing biases showed that responses to unrepeated lists were not artificially favored over responses to repeated lists.

12.
Happy, surprised, disgusted, angry, sad, fearful, and neutral facial expressions were presented extrafoveally (2.5° away from fixation) for 150 ms, followed by a probe word for recognition (Experiment 1) or a probe scene for affective valence evaluation (Experiment 2). Eye movements were recorded, and gaze-contingent masking prevented foveal viewing of the faces. Results showed that (a) happy expressions were recognized faster than others in the absence of fixations on the faces, (b) the same pattern emerged when the faces were presented upright or upside-down, (c) happy prime faces facilitated the affective evaluation of emotionally congruent probe scenes, and (d) such priming effects occurred at a 750-ms but not at a 250-ms prime-probe stimulus onset asynchrony. This reveals an advantage in the recognition of happy faces outside of overt visual attention, and suggests that this recognition advantage relies initially on featural processing and involves processing of positive affect at a later stage.

13.
Does sunshine prime loyal? Affective priming in the naming task
Recent work has found an affective priming effect using the naming task: In pronouncing target words, pronunciation latencies were consistently shorter when the target (e.g., loyal) was preceded by an evaluatively congruent (e.g., sunshine) rather than incongruent prime word (e.g., rain). Using the naming task, no affective priming was found in the present studies, irrespective of prime-set size and target-set size (Experiment 1), irrespective of stimulus-onset asynchrony (Experiment 2), and even when a nearly exact replication of previous work that demonstrated the effect was conducted (Experiment 3). Finally, bilingual German/English speakers exhibited strong associative priming, but no affective priming, in both the English and the German language (Experiment 4). The results show that priming for evaluatively related words is not a general finding.

14.
Though much is known about the N400 component, an event-related EEG potential that is sensitive to semantic manipulations, it is unclear whether modulations of N400 amplitude reflect automatic processing, controlled processing, or both. We examined this issue using a semantic judgment task that manipulated local and global contextual cues. Word triplets (prime-noun-target, e.g., finance-bank-money) were sequentially presented on a computer screen (500 ms duration, 1000 ms stimulus onset asynchrony), in which the second word was a homograph. The first word (prime) created a neutral-, dominant-meaning- or subordinate-meaning-biased "global context," and the third word (target) created a dominant- or subordinate-biased "local context" that was either congruent or incongruent with the "global context" established by the first prime word. Participants were instructed to read all three words but to decide only whether the second and third words were semantically related. Event-related potentials (ERPs), specifically the N400, were recorded to the third terminal word. N400 amplitudes evoked by dominant meaning-related third words incongruent with the globally biased subordinate context (e.g., river-bridge-money) were significantly more negative than dominant endings in neutral contexts (e.g., taxi-bank-money), but not different from unrelated filler triplets. In addition, there was some indication that left hemisphere, temporal-parietal electrode sites were associated with greater N400 negativity for dominant targets in conflicting subordinate global contexts than homologous right hemisphere electrode sites, the latter of which showed significant activation to subordinate meanings in cooperating contexts. Thus, N400 amplitude was more affected by global than local context suggesting that controlled processes may take priority over automatic processes in modulating N400 amplitude, especially for left hemisphere electrode sites.
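
For readers unfamiliar with how an N400 effect of this kind is typically quantified, the short NumPy sketch below averages single-trial epochs and takes the mean voltage in a 300-500 ms post-stimulus window. The sampling rate, epoch length, and simulated data are illustrative assumptions, not the recording parameters of this study.

```python
import numpy as np

def mean_amplitude(epochs: np.ndarray, srate: float, tmin: float,
                   window=(0.3, 0.5)) -> float:
    """Average over trials, then take the mean voltage in a time window.

    epochs: (n_trials, n_samples) array time-locked to word onset.
    srate:  sampling rate in Hz; tmin: time of the first sample in seconds.
    window: measurement window in seconds (0.3-0.5 s covers the usual N400 range).
    """
    erp = epochs.mean(axis=0)                         # average ERP across trials
    times = tmin + np.arange(epochs.shape[1]) / srate
    in_window = (times >= window[0]) & (times <= window[1])
    return float(erp[in_window].mean())

# Simulated data: 40 trials of 0.9-s epochs at 250 Hz, starting 0.1 s pre-stimulus.
rng = np.random.default_rng(0)
srate, tmin = 250.0, -0.1
congruent = rng.normal(0.0, 2.0, size=(40, 225))
incongruent = rng.normal(0.0, 2.0, size=(40, 225)) - 1.0   # shifted more negative
effect = mean_amplitude(incongruent, srate, tmin) - mean_amplitude(congruent, srate, tmin)
print(f"N400 effect (incongruent minus congruent): {effect:.2f} microvolts")
```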

15.
The effects of line length and of spatial or temporal distance on illusory line motion (i.e., on the perception that a stationary line unfolds or expands away from a previously presented stationary cue) were examined in five experiments. Ratings of relative velocity decreased with increases in stimulus onset asynchrony between appearance of the cue and appearance of the line (from 50 to 450 ms), whereas the extremity of ratings of direction (i.e., strength of the ratings of illusory line motion) increased with increases in stimulus onset asynchrony (from 50 to either 250 or 450 ms). Ratings of relative velocity increased with increases in line length, whereas ratings of direction were not influenced by increases in line length. Ratings of relative velocity and direction were not influenced by increases in the distance of the near or the far end of the line from the cue. Implications of these data for attentional theories and apparent-motion theories of illusory line motion are discussed.

16.
According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control.

17.
AUTOMATIC STEREOTYPING
Abstract: Two experiments tested a form of automatic stereotyping. Subjects saw primes related to gender (e.g., mother, father, nurse, doctor) or neutral with respect to gender (e.g., parent, student, person) followed by target pronouns (stimulus onset asynchrony = 300 ms) that were gender related (e.g., she, he) or neutral (it, me), or followed by nonpronouns (do, all; Experiment 2 only). In Experiment 1, subjects judged whether each pronoun was male or female. Automatic gender beliefs (stereotypes) were observed in faster responses to pronouns consistent than inconsistent with the gender component of the prime, regardless of subjects' awareness of the prime-target relation and independently of subjects' explicit beliefs about gender stereotypes and language reform. In Experiment 2, automatic stereotyping was obtained even though a gender-irrelevant judgment task (pronoun/not pronoun) was used. Together, these experiments demonstrate that gender information imparted by words can automatically influence judgment, although the strength of such effects may be moderated by judgment task and prime type.
