Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime–target pairs were phonologically onset related (e.g., pijl–pijn, arrow–pain), were from the same semantic category (e.g., pijl–zwaard, arrow–sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

2.
Impossible figures are striking examples of inconsistencies between global and local perceptual structures, in which the overall spatial configuration of the depicted image does not yield a coherent three-dimensional object. In order to investigate whether structural “impossibility” is an important perceptual property of depicted objects, we used a category formation task in which subjects were asked to divide pictures of shapes into groups that seemed most natural to them. Category formation is usually unidimensional, such that sorting is dominated by a single perceptual property, so this task can serve as a measure of which dimensions are most salient. In Experiment 1, subjects received sets of 12 line drawings consisting of six possible and six impossible objects. Very few subjects grouped the figures by impossibility on the first try, and only half did so after multiple attempts at sorting. In Experiment 2, we investigated other global properties of figures: symmetry and complexity. Subjects readily sorted objects by complexity, but seldom by symmetry. In Experiment 3, subjects were asked to draw each of the figures before sorting them, which had only a minimal effect on categorization. Finally, in Experiment 4, subjects were explicitly instructed to divide the shapes by symmetry or impossibility. Performance on this task was perfect for symmetry, but not for impossibility. Although global properties of figures seem extremely important to our perception, the results suggest that some of these cues are not immediately obvious or salient for most observers.

3.
Two experiments investigated whether separate sets of objects viewed in the same environment but from different views were encoded as a single integrated representation or maintained as distinct representations. Participants viewed two circular layouts of objects that were placed around them in a round (Experiment 1) or a square (Experiment 2) room and were later tested on perspective-taking trials requiring retrieval of either one layout (within-layout trials) or both layouts (between-layout trials). Results from Experiment 1 indicated that participants did not integrate the two layouts into a single representation. Imagined perspective taking was more efficient on within- than on between-layout trials. Furthermore, performance for within-layout trials was best from the perspective from which each layout had been studied. Results from Experiment 2 indicated that the stable environmental reference frame provided by the square room caused many, but not all, participants to integrate all locations within a common representation. Participants who integrated performed equally well for within-layout and between-layout judgments and also represented both layouts using a common reference frame. Overall, these findings highlight the flexibility of organizing information in spatial memory.

4.
Spatio-temporal interactions between simple geometrical shapes typically elicit strong impressions of intentionality. Recent research has started to explore the link between attentional processes and the detection of interacting objects. Here, we asked whether visual attention is biased toward such interactions. We investigated probe discrimination performance in algorithmically generated animations that involved two chasing objects and two randomly moving objects. In Experiment 1, we observed a pronounced attention capture effect for chasing objects. Because reduced interobject spacing is an inherent feature of interacting objects, in Experiment 2 we designed randomly moving objects that were matched to the chasing objects with respect to interobject spacing at probe onset. In this experiment, the capture effect disappeared completely. Therefore, we argue that reduced interobject spacing is an efficient cue that guides visual attention toward objects that interact intentionally.

5.
A current debate regarding face and object naming concerns whether they are equally vulnerable to semantic interference. Although some studies have shown similar patterns of interference, others have revealed different effects for faces and objects. In Experiment 1, we compared face naming to object naming when exemplars were presented in a semantically homogeneous context (grouped by their category) or in a semantically heterogeneous context (mixed) across four cycles. The data revealed significant slowing for both face and object naming in the homogeneous context. This semantic interference was explained as being due to lexical competition from the conceptual activation of category members. When focusing on the first cycle, a facilitation effect for objects but not for faces appeared. This result permits us to explain the previously observed discrepancies between face and object naming. Experiment 2 was identical to Experiment 1, with the exception that half of the stimuli were presented as face/object names for reading. Semantic interference was present for both face and object naming, suggesting that faces and objects behave similarly during naming. Interestingly, during reading, semantic interference was observed for face names but not for object names. This pattern is consistent with previous assumptions proposing the activation of a person identity during face name reading.

6.
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and “checkerboards” in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target-to-distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches in which target items contained fewer dots than distractor items were harder. Here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance. Efficient search now required ratios of above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli.

7.
Three experiments are reported that collectively show that listeners perceive speech sounds as contrasting auditorily with neighboring sounds. Experiment 1 replicates the well-established finding that listeners categorize more of a [d–g] continuum as [g] after [l] than after [r]. Experiments 2 and 3 show that listeners discriminate stimuli in which the energy concentrations differ in frequency between the spectra of neighboring sounds better than those in which they do not differ. In Experiment 2, [alga–arda] pairs, in which the energy concentrations in the liquid-stop sequences are H(igh) L(ow)–LH, were more discriminable than [alda–arga] pairs, in which they are HH–LL. In Experiment 3, [da] and [ga] syllables were more easily discriminated when they were preceded by lower and higher pure tones, respectively—that is, tones that differed from the stops’ higher and lower F3 onset frequencies—than when they were preceded by H and L pure tones with similar frequencies. These discrimination results show that contrast with the target’s context exaggerates its perceived value when energy concentrations differ in frequency between the target’s spectrum and its context’s spectrum. Because contrast with its context does more than merely shift the criterion for categorizing the target, it cannot be produced by neural adaptation. The finding that nonspeech contexts exaggerate the perceived values of speech targets also rules out compensation for coarticulation by showing that their values depend on the proximal auditory qualities evoked by the stimuli’s acoustic properties, rather than the distal articulatory gestures.

8.
An important, but as yet incompletely resolved, issue is whether spatial knowledge acquired during navigation differs significantly from that acquired by studying a cartographic map. This, in turn, is relevant to understanding the generalizability of the concept of a “cognitive map,” which is often likened to a cartographic map. On the basis of previous theoretical proposals, we hypothesized that route and cartographic map learning would produce differences in the dynamics of acquisition of landmark-referenced (allocentric) knowledge, relative to view-referenced (egocentric) knowledge. We compared this model with competing predictions from two other models linked to route versus map learning. To test these ideas, participants repeatedly performed a judgment of relative direction (JRD) and a scene- and orientation-dependent pointing (SOP) task while undergoing route and cartographic map learning of virtual spatial environments. In Experiment 1, we found that map learning led to significantly faster improvements in JRD pointing accuracy than did route learning. In Experiment 2, in contrast, we found that route learning led to more immediate and greater improvements overall in SOP accuracy, as compared to map learning. Comparing Experiments 1 and 2, we found a significant three-way interaction effect, indicating that improvements in performance differed for the JRD versus the SOP task as a function of route versus map learning. We interpreted these findings as suggesting that the learning modality differentially affects the dynamics of how we utilize primarily landmark-referenced versus view-referenced knowledge, which in turn points to potential differences in how we utilize spatial representations acquired from routes versus cartographic maps.

9.
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces would affect the perception of a target face’s race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how the face inversion effect would interact with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context would affect the perception of the target face by manipulating flanker faces’ shape and pigmentation, as well as their orientation. The results showed that flanker faces’ shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces’ shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation.

10.
Previous research has suggested that two color patches can be consolidated into visual short-term memory (VSTM) via an unlimited parallel process. Here we examined whether the same unlimited-capacity parallel process occurs for two oriented grating patches. Participants viewed two gratings that were presented briefly and masked. In blocks of trials, the gratings were presented either simultaneously or sequentially. In Experiments 1 and 2, the presentation of the stimuli was followed by a location cue that indicated the grating on which to base one’s response. In Experiment 1, participants responded whether the target grating was oriented clockwise or counterclockwise with respect to vertical. In Experiment 2, participants indicated whether the target grating was oriented along one of the cardinal directions (vertical or horizontal) or was obliquely oriented. Finally, in Experiment 3, the location cue was replaced with a third grating that appeared at fixation, and participants indicated whether either of the two test gratings matched this probe. Despite the fact that these responses required fairly coarse coding of the orientation information, across all methods of responding we found superior performance for sequential over simultaneous presentations. These findings suggest that the consolidation of oriented gratings into VSTM is severely limited in capacity and differs from the consolidation of color information.

11.
In five experiments, we extended the production effect—better memory for items said aloud than for items read silently—to paired-associate learning, the goal being to explore whether production enhances associative information in addition to enhancing item information. In Experiments 1 and 2, we used a semantic-relatedness task in addition to the production manipulation and found no evidence of a production effect, whether the measure was cued recall or item recognition. Experiment 3 showed that the semantic-relatedness task had overshadowed the production effect; the effect was present when the semantic-relatedness task was removed, again whether cued recall or item recognition was the measure. Experiments 4 and 5 provided further evidence that production can enhance recall for word pairs and, using an associate recognition test with intact versus rearranged pairs, indicated that production may also enhance associative information. That production boosts memory for both types of information is considered in terms of distinctive encoding.

12.
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247–279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that emotion and facial identity interact during face processing.
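As background to the independent-processing prediction tested in the abstract above: the standard formal test is Miller's (1982) race-model inequality, which bounds the cumulative distribution of redundant-target response times by the sum of the two single-target distributions. Written with this study's dimensions (emotion and identity) substituted in as the single targets, the bound is

P(\mathrm{RT} \le t \mid \text{emotion and identity}) \;\le\; P(\mathrm{RT} \le t \mid \text{emotion alone}) + P(\mathrm{RT} \le t \mid \text{identity alone}),

and redundant-target responses fast enough to violate this inequality, as reported here, indicate coactive (superadditive) integration rather than an independent parallel race.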

13.
Across many areas of study in cognition, the capacity of working memory (WM) is widely agreed to be roughly three to five items: three to five objects (i.e., bound collections of object features) in the literature on visual WM or three to five role bindings (i.e., objects in specific relational roles) in the literature on memory and reasoning. Three experiments investigated the capacity of observers’ WM for the spatial relations among objects in a visual display, and the results suggest that the “items” in WM are neither simply objects nor simply role bindings. The results of Experiment 1 are most consistent with a model that treats an “item” in visual WM as an object, along with the roles of all its relations to one other object. Experiment 2 compared observers’ WM for object size with their memory for relative size and provided evidence that observers compute and store objects’ relations per se (rather than just absolute size) in WM. Experiment 3 tested and confirmed several more nuanced predictions of the model supported by Experiment 1. Together, these findings suggest that objects are stored in visual WM in pairs (along with all the relations between the objects in a pair) and that, from the perspective of WM, a given object in one pair is not the same “item” as that same object in a different pair.

14.
A major issue in visual scene recognition involves the extraction of recurring chunks from a sequence of complex scenes. Previous studies have suggested that this kind of learning is accomplished according to Bayesian principles that constrain the types of extracted chunks. Here we show that perceptual grouping cues are also incorporated in this Bayesian model, providing additional evidence regarding the possible span of chunks. Experiment 1 replicates previous results showing that observers can learn three-element chunks without learning smaller, two-element chunks embedded within them. Experiment 2 shows that the very same embedded chunks are learned if they are grouped by perceptual cues, suggesting that perceptual grouping cues play an important role in chunk extraction from complex scenes.

15.
We investigated the plastic effect in picture perception, in which the apparent depth of a picture is increased when it is reflected by a mirror. The plastic effect was well known in the mid-18th century, but very few studies have elucidated its nature. In Experiment 1, we examined how often the plastic effect occurs in different ocular conditions. A group of 22 observers compared directly observed pictures and their mirror-reflected images in each of free-binocular, free-monocular, and restrictive-monocular conditions. When the observers were forced to choose the picture that appeared greater in depth, 73 % of them chose the reflected pictures, regardless of oculomotor condition. In Experiment 2, we examined how often the plastic effect is detected as a function of observation time. When 22 observers compared a directly watched movie and its mirror-reflected movie for 5 min, the number of observers who judged the reflected movie to be greater in depth was about 55 % at the onset of the trial but was 86 % at the end. In Experiment 3, we examined transfer of the plastic effect. Ten observers judged the change in apparent depth of directly observed pictures after prolonged exposure to the same reflected or actual pictures. Transfer was confirmed and was greater for pictures that represented greater depth (r = .88). We suggested that the plastic effect is mainly induced by the double apparent locations of a reflected picture. From the long incubation time and the transfer to real pictures, we also suggested that it involves perceptual learning regarding visual skill.

16.
Recalling information involves the process of discriminating between relevant and irrelevant information stored in memory. Not infrequently, the relevant information needs to be selected from among a series of related possibilities. This is likely to be particularly problematic when the irrelevant possibilities not only are temporally or contextually appropriate, but also overlap semantically with the target or targets. Here, we investigate the extent to which purely perceptual features that discriminate between irrelevant and target material can be used to overcome the negative impact of contextual and semantic relatedness. Adopting a distraction paradigm, it is demonstrated that when distractors are interleaved with targets presented either visually (Experiment 1) or auditorily (Experiment 2), a within-modality semantic distraction effect occurs; semantically related distractors impact upon recall more than do unrelated distractors. In the semantically related condition, the number of intrusions in recall is reduced, while the number of correctly recalled targets is simultaneously increased by the presence of perceptual cues to relevance (color features in Experiment 1 or speaker’s gender in Experiment 2). However, as is demonstrated in Experiment 3, even presenting semantically related distractors in a language and a sensory modality (spoken Welsh) distinct from that of the targets (visual English) is insufficient to eliminate false recalls completely or to restore correct recall to levels seen with unrelated distractors. Together, these experiments show how semantic and nonsemantic discriminability shape patterns of both erroneous and correct recall.

17.
Recognition without identification is the finding that, among recognition test items that go unidentified (as when a word is unidentified from a fragment), participants can discriminate those that were studied from those that were unstudied. In the present study, we extended this phenomenon to the more life-like situation of discriminating known from novel stimuli. Pictures of famous and nonfamous faces (Exp. 1), famous and nonfamous scenes (Exp. 2), and threatening and nonthreatening images (Exp. 3) were filtered in order to impede identification. As in list-learning recognition-without-identification paradigms, participants attempted to identify each image (e.g., whose face it was, what scene it was, or what was in the picture) and rated how familiar the image seemed on a scale of 0 (very unfamiliar) to 10 (very familiar). Among the unidentified stimuli, higher familiarity ratings were given to famous than to nonfamous faces (Exp. 1) and scenes (Exp. 2), and to threatening than to nonthreatening living/animate (but not to nonliving/nonanimate) images (Exp. 3). These findings suggest that even when a stimulus is too occluded to allow for conscious identification, enough information can be processed to allow a sense of familiarity or novelty with it, which appears also to be related to the sense of whether or not a living creature is a threat. That the sense of familiarity for unidentified stimuli may be related to threat detection for living or animate things suggests that it may be an adaptive aspect of human memory.

18.
Recognition of own-race faces is superior to recognition of other-race faces. In the present experiments, we explored the role of top-down social information in the encoding and recognition of racially ambiguous faces. Hispanic and African American participants studied and were tested on computer-generated ambiguous-race faces (composed of 50 % Hispanic and 50 % African American features; MacLin & Malpass, Psychology, Public Policy, and Law 7:98–118, 2001). In Experiment 1, the faces were randomly assigned to two study blocks. In each block, a group label was provided that indicated that those faces belonged to African American or to Hispanic individuals. Both participant groups exhibited superior memory for faces studied in the block with their own-race label. In Experiment 2, the faces were studied in a single block with no labels, but tested in two blocks in which labels were provided. Recognition performance was not influenced by the labeled race at test. Taken together, these results confirm the claim that purely top-down information can yield the well-documented cross-race effect in recognition, and additionally they suggest that the bias takes place at encoding rather than at test.

19.
Are we humans drawn to the forbidden? From jumbo-sized soft drinks to illicit substances, the influence of prohibited ownership on subsequent demand has made this question a pressing one. We know that objects that we ourselves own have a heightened psychological saliency, relative to comparable objects that are owned by others, but do these kinds of effects extend from self-owned to “forbidden” objects? To address this question, we developed a modified version of the Turk shopping paradigm in which “purchased” items were assigned to various recipients. Participants sorted everyday objects labeled as “self-owned,” “other-owned,” and either “forbidden to oneself” (Experiment 1) or “forbidden to everyone” (Experiment 2). Subsequent surprise recognition memory tests revealed that forbidden objects with high (Experiment 1) but not with low (Experiment 2) self-relevance were recognized as well as were self-owned objects, and better than other-owned objects. In a third and final experiment, we used event-related potentials (ERPs) to determine whether self-owned and self-forbidden objects, which showed a common memory advantage, are in fact treated the same at a neurocognitive–affective level. We found that both object types were associated with enhanced cognitive analysis, relative to other-owned objects, as measured by the P300 ERP component. However, we also found that self-forbidden objects uniquely triggered an enhanced response preceding the P300, in an ERP component (the N2) that is sensitive to more rapid, affect-related processing. Our findings thus suggest that, whereas self-forbidden objects share a common cognitive signature with self-owned objects, they are unique in being identified more quickly at a neurocognitive level.

20.
The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. As with the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3), and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors.
