Similar Articles
20 similar articles were retrieved.
1.
This study used an associative recognition paradigm to examine how perceptual level and degree of conceptual processing affect the picture superiority effect in associative memory. Experiment 1 manipulated perceptual level by presenting clear or blurred word pairs and picture pairs; the picture superiority effect emerged only in the clear condition. Experiment 2 manipulated conceptual processing in the blurred condition by asking participants to imagine a relationship between the two items; with conceptual processing, the picture superiority effect emerged. The results indicate that (1) a reduced perceptual level eliminates the picture superiority effect in associative memory, and (2) conceptual processing of blurred item pairs restores the picture superiority effect in associative memory.

2.
In Experiment 1, Ss were exposed to slides of intermediate blur for 5, 10, 30, or 60 sec and were asked to guess the identity of the blurred object and to estimate how confident they were of each guess. The S’s task in Experiment 2 was merely to view the blurred slides while the S’s EEG waves were being recorded. After being exposed to a blurred slide for a certain duration, in Experiment 3, Ss were required, by means of a key press, to choose to view either a clear version of the blurred slide or an unrelated clear picture. Uncertainty/second, EEG desynchronization/second, and related choices were all found to be a negatively sloped function of viewing duration.

3.
4.
Three experiments tested the hypothesis that pictorial memory is much less dependent on rehearsal than is verbal memory. Experiment I examined incidental learning since this is assumed to reflect learning with little or no rehearsal. Following a classification task, intentional and incidental learning for pictures and for words was compared. The superiority of pictorial memory was especially marked in incidental learning. Experiment II showed that this result was not due to differences in the amount of processing required to classify pictures and words. RTs to classify words and pictures did not differ, and incidental learning was again superior for pictures. In Experiment III rehearsal opportunity was restricted by a concurrent task during presentation of word and picture lists, and the decrement was very much greater for word learning than for picture learning. It was concluded that manipulation of rehearsal opportunity has relatively little effect on pictorial memory.

5.
The cumulative semantic cost describes a phenomenon in which picture naming latencies increase monotonically with each additional within-category item that is named in a sequence of pictures. Here we test whether the cumulative semantic cost requires the assumption of lexical selection by competition. In Experiment 1 participants named a sequence of pictures, while in Experiment 2 participants named words instead of pictures, preceded by a gender marked determiner. We replicate the basic cumulative semantic cost with pictures (Exp. 1) and show that there is no cumulative semantic cost for word targets (Exp. 2). This pattern was replicated in Experiment 3 in which pictures and words were named along with their gender marked definite determiner, and were intermingled within the same experimental design. In addition, Experiment 3 showed that while picture naming induces a cumulative semantic cost for subsequently named words, word naming does not induce a cumulative semantic cost for subsequently named pictures. These findings suggest that the cumulative semantic cost arises prior to lexical selection and that the effect arises due to incremental changes to the connection weights between semantic and lexical representations.
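To make the proposed mechanism concrete, the following Python sketch illustrates how incremental changes to semantic-to-lexical connection weights could yield latencies that grow with each additional within-category item named. It is an illustrative toy under assumed parameters (the item names, the 600 ms baseline, the 0.05 weight change per naming event are all arbitrary), not the authors' implementation.

# Hypothetical sketch of the incremental-learning account of the cumulative
# semantic cost: each act of naming strengthens the named item's
# semantic-to-lexical connection and slightly weakens the connections of its
# same-category competitors, so every additional within-category picture is
# named a little more slowly. All names and numeric values are assumptions.

category_items = ["dog", "cat", "horse", "cow", "pig"]    # hypothetical within-category naming sequence
weights = {item: 1.0 for item in category_items}          # semantic-to-lexical connection strengths

BASE_RT = 600.0        # assumed baseline naming latency (ms)
SCALE = 150.0          # assumed mapping from lost connection strength to extra latency (ms)
LEARNING_RATE = 0.05   # assumed size of each incremental weight change

for position, item in enumerate(category_items, start=1):
    # latency reflects how much this item's connection has been weakened
    # by the retrieval of earlier same-category items
    rt = BASE_RT + SCALE * (1.0 - weights[item])
    print(f"ordinal position {position} ({item}): simulated naming RT = {rt:.0f} ms")

    # incremental learning after naming: strengthen the target, weaken competitors
    for other in category_items:
        if other == item:
            weights[other] = min(1.0, weights[other] + LEARNING_RATE)
        else:
            weights[other] = max(0.0, weights[other] - LEARNING_RATE)

Under these assumptions the simulated latencies rise roughly linearly with ordinal position (600, 608, 615, 622, 630 ms), reproducing the qualitative signature of the cumulative semantic cost without invoking lexical selection by competition.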

6.
The current study explored whether new words in a foreign language are learned better from pictures than from native language translations. In both between-subjects and within-subject designs, Swahili words were not learned better from pictures than from English translations (Experiments 1-3). Judgments of learning revealed that participants exhibited greater overconfidence in their ability to recall a Swahili word from a picture than from a translation (Experiments 2-3), and Swahili words were also considered easier to process when paired with pictures rather than translations (Experiment 4). When this overconfidence bias was eliminated through retrieval practice (Experiment 2) and instructions warning participants to not be overconfident (Experiment 3), Swahili words were learned better from pictures than from translations. It appears, therefore, that pictures can facilitate learning of foreign language vocabulary, as long as participants are not too overconfident in the power of a picture to help them learn a new word.

7.
Altering retrieval demands reverses the picture superiority effect
In Experiment 1 subjects studied a mixed list of pictures and words and then received either a free recall test or a word fragment completion test (e.g., _yr_mi_ for pyramid) on which some fragments corresponded to previously studied items. Free recall of pictures was better than that of words. However, words produced greater priming than did pictures on the fragment completion test, although a small amount of picture priming did occur. Experiments 2 and 3 showed that the picture priming was not due to implicit naming of the pictures during study. In Experiment 4 subjects studied words and pictures and received either the word fragment completion test or a picture fragment identification test in which they had to name degraded pictures. Greater priming was obtained with words in word fragment completion, but greater priming was obtained with pictures on the picture identification test. We conclude that (1) the type of retrieval query determines whether pictures or words will exhibit superior retention, and (2) our results conform to the principle of transfer appropriate processing, by which performance on transfer or retention tests benefits to the extent that the tests recapitulate operations used during learning.

8.
Three experiments showed that the pattern of interference of single-modality Stroop tests also exists cross-modally. Distractors and targets were either pictures or auditory words. In a naming task (Experiment 1), word distractors from the same semantic category as picture targets interfered with picture naming more than did semantically unrelated distractors; the semantic category of picture distractors did not differentially affect word naming. In a categorization task (Experiment 2), this Stroop-like effect was reversed: Picture distractors from the same semantic category as word targets interfered less with word categorization than picture distractors that were semantically unrelated; the semantic category of word distractors did not differentially affect picture categorization. Experiment 3 replicated these effects when each subject performed both tasks; the task, naming or categorizing, determined the pattern of interference between pictures and auditory words. The results thus support the existence of a semantic component of a cross-modal Stroop-like effect.

9.
Three experiments studied the effects of voluntary and involuntary focus of attention on recognition memory for pictures. Experiments 1 and 3 tested the conceptual-masking hypothesis, which holds that a visual event will automatically disrupt processing of a previously glimpsed picture if that event is new and meaningful. Memory for 112-ms pictures was tested under conditions where the to-be-ignored 1.5-s interstimulus interval contained a blank field; a repeating picture; a new picture; a new, nonsense picture; or a new, inverted picture each time. The blank field, repeating picture, and new, nonsense picture did not disrupt memory as much as a new, meaningful picture, supporting the conceptual-masking hypothesis. Experiment 2 studied voluntary attentional control of encoding by instructing subjects to focus attention on the brief pictures, all pictures, or the long pictures in a sequence. Recognition memory for pictures of both durations showed a striking ability of observers to process pictures selectively. The possible role of these effects in visual scanning is discussed.

10.
Pigeons' key pecks were reinforced in the presence of pictures from one of two categories, cats or cars. A single picture associated with reinforcement was used in Experiment 1, and 20 pictures from the same category were associated with reinforcement in Experiment 2. Pigeons then were presented with novel test pictures from the training category and from the other, previously unseen, category. During Session 1 of testing, pigeons pecked no more often at pictures from the reinforced category than at pictures from the previously unseen category. When pigeons were trained with pictures associated with reinforcement or its absence from different categories in Experiment 3, differential responding to novel pictures from different categories appeared during Session 1. These findings argue against a process of automatic stimulus generalization within natural categories and in favor of the position that category distinctions are not made until members of at least two categories are compared with one another.

11.
Eye movements were monitored while observers inspected photographs of natural scenes. At the end of each saccade (i.e., at the beginning of each period of steady fixation), the stimulus was replaced for a certain period of time by a uniform field (Experiment 1) or a blurred version of the stimulus scene (Experiment 2). Total fixation duration was measured as a function of the duration of the initial uniform field or the blurred image that followed the saccade. It was found that fixation duration increased proportionally with the duration of the initial replacement field, even for durations as short as 25 msec. These results suggest that the visual system uses information on the retina right after each saccade is completed and that the blurred, low-resolution information used in Experiment 2 (cutoff frequency of 0.8 cpd) is not sufficient for the requirements of picture processing in this task.

12.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

13.
Three experiments investigated the conditions under which pictures facilitate learning. In Experiment 1, confusing verbal relationships were supplemented with pictures that illustrated the key concepts in each verbal relationship (base pictures), illustrated the key concepts in more elaborate arbitrary relationships (pictures of arbitrary elaborations), or illustrated the key concepts in ways that helped clarify the verbal relationships (pictures of explanatory elaborations). All three types of pictures facilitated the retention of the verbal relationships, although pictures of explanatory elaborations were superior to other types of visual illustrations. In Experiment 2, the facilitative effects of base pictures depended on a schematically unique illustration of the key concepts in a single image. In Experiment 3, picture facilitation effects were constrained by the type of verbal elaborations that accompanied the pictures. Specifically, base pictures and pictures of arbitrary elaborations did not facilitate the retention of confusing verbal relationships that were elaborated with explanatory information, and actually interfered with the retention of those explanatory elaborations. The implications of these results are discussed.

14.
Emotional information is often remembered better than neutral information, but the emotional benefit for positive information is less consistently observed than the benefit for negative information. The current study examined whether positive emotional pictures are recognised better than neutral pictures, and further examined whether participants can predict how emotion affects picture recognition. In two experiments, participants studied a mixed list of positive and neutral pictures, and made immediate judgements of learning (JOLs). JOLs for positive pictures were consistently higher than for neutral pictures. However, recognition performance displayed an inconsistent pattern. In Experiment 1, neutral pictures were more discriminable than positive pictures, but Experiment 2 found no difference in recognition based on emotional content. Despite participants’ beliefs, positive emotional content does not appear to consistently benefit picture memory.

15.
In this study, we report results from two experiments in which pictures were shown with superimposed distractors that varied along two dimensions: frequency (high vs. low) and semantic relation with respect to the picture (related vs. unrelated). In one condition of Experiment 1, participants named pictures with a noun utterance; in the other condition of Experiment 1 and in Experiment 2, participants named pictures with a pronominal utterance. Low frequency distractor words produced greater interference than high frequency words in noun production, but not in pronoun production. Critically, a semantic interference effect, that is, greater interference in the semantically related than in the unrelated condition, was observed in both experiments, suggesting that distractor words were equally processed in both the noun and pronoun conditions. These results are discussed in the context of current models of picture–word interference.

16.
Tracing the time course of picture–word processing
A number of independent lines of research have suggested that semantic and articulatory information become available differentially from pictures and words. The first of the experiments reported here sought to clarify the time course by which information about pictures and words becomes available by considering the pattern of interference generated when incongruent pictures and words are presented simultaneously in a Stroop-like situation. Previous investigators report that picture naming is easily disrupted by the presence of a distracting word but that word naming is relatively immune to interference from an incongruent picture. Under the assumption that information available from a completed process may disrupt an ongoing process, these results suggest that words access articulatory information more rapidly than do pictures. Experiment 1 extended this paradigm by requiring subjects to verify the category of the target stimulus. In accordance with the hypothesis that pictures access the semantic code more rapidly than words, there was a reversal in the interference pattern: Word categorization suffered considerable disruption, whereas picture categorization was minimally affected by the presence of an incongruent word. Experiment 2 sought to further test the hypothesis that access to semantic and articulatory codes is different for pictures and words by examining memory for those items following naming or categorization. Categorized words were better recognized than named words, whereas the reverse was true for pictures, a result which suggests that picture naming involves more extensive processing than picture categorization. Experiment 3 replicated this result under conditions in which viewing time was held constant. The last experiment extended the investigation of memory differences to a situation in which subjects were required to generate the superordinate category name. Here, memory for categorized pictures was as good as memory for named pictures. Category generation also influenced memory for words, memory performance being superior to that following a yes-no verification of category membership. These experiments suggest a model of information access whereby pictures access semantic information more readily than name information, with the reverse being true for words. Memory for both pictures and words was a function of the amount of processing required to access a particular type of information as well as the extent of response differentiation necessitated by the task.

17.
In the study phase of these experiments, subjects were asked to think of an item suggested by the omission in an incomplete sentence, and then look at a picture or word describing an item and say whether it was the same as theirs. In the test phase, they were asked to identify studied and nonstudied items presented briefly in either picture or word form. Subjects were then required to recall the words or pictures shown in the study phase. Experiment 1, with a within-subjects design, revealed that the studied pictures were identified more readily than studied words and nonstudied pictures. This indicates a physical priming effect. In word identification, studied words were identified more readily than nonstudied words; however, there was no difference between studied words and studied pictures, and performance for studied pictures and nonstudied items was largely the same. The physical priming effect on picture identification was also shown in Experiment 2, with a between-subjects design. Different processing mechanisms in picture and word identification are discussed.

18.
How nonhuman primates process pictures of natural scenes or objects remains a matter of debate. This issue was addressed in the current research by questioning the processing of the canonical orientation of pictures in baboons. Two adult Guinea baboons were trained to use an interactive key (IK) on a touch-screen to change the orientation of target pictures showing humans or quadruped mammals until upright. In Experiment 1, both baboons successfully learned to use the IK when that key induced a 90 degrees rightward rotation of the picture, but post-training transfer of performance did not occur to novel pictures of natural scenes due to potential motor biases. In Experiment 2, a touch on the IK randomly displayed the pictures in any of the four cardinal orientations. Baboons successfully learned the task, but transfer to novel pictures could only be demonstrated after they had been exposed to 360-480 pictures in that condition. Experiment 3 confirmed positive transfers to novel pictures, and showed that both the figure and background information controlled the behavior. Our research on baboons therefore demonstrates the development and use of an "upright" concept, and indicates that picture processing modes strongly depend on the subject's past experience with naturalistic pictorial stimuli.

19.
Five language production experiments examined which aspects of words are activated in memory by context pictures and words. Context pictures yielded Stroop-like and semantic effects on response times when participants generated gender-marked noun phrases in response to written words (Experiment 1A). However, pictures yielded no such effects when participants simply read aloud the noun phrases (Experiment 2). Moreover, pictures yielded a gender congruency effect in generating gender-marked noun phrases in response to the written words (Experiments 3A and 3B). These findings suggest that context pictures activate lemmas (i.e., representations of syntactic properties), which leads to effects only when lemmas are needed to generate a response (i.e., in Experiments 1A, 3A, and 3B, but not in Experiment 2). Context words yielded Stroop-like and semantic effects in picture naming (Experiment 1B). Moreover, words yielded Stroop-like but no semantic effects in reading nouns (Experiment 4) and in generating noun phrases (Experiment 5). These findings suggest that context words activate the lemmas and forms of their names, which leads to semantic effects when lemmas are required for responding (Experiment 1B) but not when only the forms are required (Experiment 4). WEAVER++ simulations of the results are presented.
