Similar documents
 20 similar documents found
1.
Despite the expense of designing and the popularity of using logos to represent brands, there is a paucity of information on how such symbols are processed. This series of experiments used Repetition Blindness (RB) to measure implicit association of logos and names that varied in (1) the abstractness of the logo, and (2) the level of familiarity with the logo. RB is a perceptual phenomenon that occurs when two items, presented in a rapid serial visual presentation (RSVP), are encoded along repeated dimensions (e.g. visual, phonological, semantic), resulting in only one item being perceived (Bavelier, 1994; Buttle, Ball, Zhang, & Raymond, 2005; Kanwisher, 1987). Phonological RB was revealed for both abstract and figurative logos and occurred regardless of familiarity. The results suggest that as long as a consumer has the opportunity to be exposed to the name of a logo, then logo-name association learning is a rapid process. Copyright © 2006 John Wiley & Sons, Ltd.

2.
The irrelevant sound effect (ISE) describes the significant reduction in verbal serial recall during irrelevant sounds with distinct temporal-spectral variations (changing-state sound). Whereas the ISE is well documented for the serial recall of visual items accompanied by irrelevant speech and nonspeech sounds, an ISE caused by nonspeech sounds has not been reported for auditory items. Closing this empirical gap, Experiment 1 (n=90) verified that instrumental staccato music reduces auditory serial recall compared to legato music and silence. Its detrimental impact was not due to perceptual masking, disturbed encoding, or increased listening effort, as ensured by the experimental design and methods employed. This nonspeech ISE in auditory serial recall is corroborated by Experiment 1b (n=60), which, using the same experimental design and methods, replicated the well-known ISE during irrelevant changing-state speech compared to steady-state speech, pink noise, and silence.

3.
Teinonen, T., Aslin, R. N., Alku, P., & Csibra, G. (2008). Cognition, 108(3), 850–855.
Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237–247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347–357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204–220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, and not in the latter, group discriminated the /ba/–/da/ contrast. 
These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.
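The exposure regime described above can be sketched in a few lines: auditory tokens are drawn from a /ba/–/da/ continuum with a unimodal frequency distribution, and the two-category group pairs each token with a canonical visual /ba/ or /da/ depending on which side of the continuum midpoint it falls. The continuum steps, token counts, and function names below are illustrative assumptions, not taken from the paper.

```python
def build_exposure(token_counts):
    """Expand per-step token counts into a flat list of continuum steps."""
    stream = []
    for step, count in enumerate(token_counts, start=1):
        stream.extend([step] * count)
    return stream

def pair_visual(step, n_steps, group):
    """Return the visual syllable paired with an auditory token."""
    if group == "two-category":
        # canonical /ba/ face below the midpoint, /da/ face above it
        return "ba" if step <= n_steps / 2 else "da"
    return "ba"  # one-category group: the same face for every token

# unimodal distribution: most tokens fall near the continuum midpoint
counts = [1, 2, 4, 8, 8, 4, 2, 1]
stream = build_exposure(counts)
pairs = [(s, pair_visual(s, len(counts), "two-category")) for s in stream]
```

Under this pairing rule, only the two-category group receives visual evidence that correlates with which half of the continuum a token came from.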

4.
When two targets (T1 and T2) are inserted in a rapid stream of visual distractors (RSVP), detection/identification accuracy of T2 is impaired at intertarget lags shorter than about 500 msec. This phenomenon, the attentional blink (AB), has been regarded as a hallmark of the inability of the visual system to process multiple items. Yet, paradoxically, the AB is much reduced when T2 is presented directly after T1 (known as lag-1 sparing). Because lag-1 sparing is said to depend on observers’ spatial attention being set to process the first target, we predicted that if observers are set to monitor two RSVP streams, they could process more than two items; that is, two instances of lag-1 sparing would be obtained concurrently. The results of three experiments indicated that this was the case. When observers searched for two targets in each of two synchronized RSVP streams, lag-1 sparing occurred concurrently in both streams. These results suggest that the visual system can handle up to four items at one moment under RSVP circumstances.

5.
This study examined how the presentation position of repeated stimuli and the report method affect the repetition blindness (RB) effect in an RSVP task, testing the attentional-resource-optimization hypothesis and the final-position advantage effect proposed here. Experiment 1 manipulated stimulus type and the position of the repeated stimulus, and found an interaction between the two; Experiment 2 manipulated stimulus type and contextual information, and the interaction was not significant; Experiment 3 manipulated stimulus type and report method, and found an interaction between stimulus type and report method. The results indicate that (1) owing to optimized allocation of attentional resources, repeated stimuli show a final-position advantage; (2) repetition blindness arises not at the perceptual stage but at the report stage; and (3) the attentional-resource-optimization account explains the occurrence of repetition blindness better than the construction/attribution theory.

6.
Repetition blindness (RB) refers to the failure to detect the second occurrence of a repeated item in rapid serial visual presentation (RSVP). In two experiments using RSVP, the ability to report two critical characters was found to be impaired when these two characters were identical (Experiment 1) or similar by sharing one repeated component (Experiment 2), as opposed to when they were different characters with no common components. RB for the whole character occurred when the exposure duration was more than 50 ms with one intervening character between the two critical characters (lag=1), whereas RB for subcharacter components was more evident at exposure durations shorter than 50 ms with no intervening character (lag=0). These results provide support for the model of sublexical processing in Chinese character recognition.

7.
Working memory (WM) capacity limits have been extensively studied in the domains of visual and verbal stimuli. Previous studies have suggested a fixed WM capacity of typically about three or four items, on the basis of the number of items in working memory reaching a plateau after several items as the set size increases. However, the fixed WM capacity estimate appears to rely on categorical information in the stimulus set (Olsson & Poom, Proceedings of the National Academy of Sciences, 102, 8776–8780, 2005). We designed a series of experiments to investigate nonverbal auditory WM capacity and its dependence on categorical information. Experiments 1 and 2 used simple tones and revealed a capacity limit of up to two tones following a 6-s retention interval. Importantly, performance was significantly higher at set sizes 2, 3, and 4 when the frequency difference between target and test tones was relatively large. In Experiment 3, we added categorical information to the simple tones, and the effect of tone change magnitude decreased. Maximal capacity for each individual was just over three sounds, in the range of typical visual procedures. We propose that two types of information, categorical and detailed acoustic information, are kept in WM and that categorical information is critical for high WM performance.
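Item-capacity estimates of the kind reported above are conventionally computed with Cowan's K formula for single-probe change detection, K = N × (hit rate − false-alarm rate). The abstract does not say which estimator the authors used, so the sketch below simply shows the standard formula with made-up numbers.

```python
def cowan_k(set_size, hit_rate, fa_rate):
    """Cowan's K: estimated number of items held in working memory,
    from single-probe change-detection performance."""
    return set_size * (hit_rate - fa_rate)

# illustrative values only: 4 tones, 80% hits, 20% false alarms
k = cowan_k(4, 0.80, 0.20)  # about 2.4 tones retained
```

A plateau in K across increasing set sizes is what motivates the "fixed capacity" interpretation the abstract questions.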

8.
This study examined whether repetition blindness arises at the perceptual stage or at the memory stage. Using a rapid serial visual presentation (RSVP) task, participants immediately recalled the words presented in 80 word lists. We manipulated the number of repeated target words in a list (no repetition, single repetition [one repeated pair], or double repetition [two repeated pairs]) and the emotional valence of the target words (neutral vs. emotional). Recall accuracy for target words showed the following pattern: when both targets and non-targets were neutral words, i.e., matched in valence intensity (Experiment 1), accuracy did not differ between the no-repetition and double-repetition conditions, and both exceeded the single-repetition condition; when targets were negative words and non-targets were neutral, i.e., targets had higher valence intensity than non-targets (Experiment 2), accuracy did not differ between the no-repetition and single-repetition conditions, and both exceeded the double-repetition condition. The results indicate that (1) with double repetition in an RSVP list, a repetition advantage appears when stimuli are matched in valence intensity, whereas a repetition disadvantage appears when targets carry higher valence intensity than non-targets; and (2) people allocate more attentional resources to high-valence stimuli, and repetition blindness arises at the memory stage, supporting the attentional-resource-optimization hypothesis.

9.
Four experiments tested whether repetition blindness (RB; reduced accuracy reporting repetitions of briefly displayed items) is a perceptual or a memory-recall phenomenon. RB was measured in rapid serial visual presentation (RSVP) streams, with the task altered to reduce memory demands. In Experiment 1 only the number of targets (1 vs. 2) was reported, eliminating the need to remember target identities. Experiment 2 segregated repeated and nonrepeated targets into separate blocks to reduce bias against repeated targets. Experiments 3 and 4 required immediate "online" buttonpress responses to targets as they occurred. All 4 experiments showed very strong RB. Furthermore, the online response data showed clearly that the 2nd of the repeated targets is the one missed. The present results show that in the RSVP paradigm, RB occurs online during initial stimulus encoding and decision making. The authors argue that RB is indeed a perceptual phenomenon.

10.
Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

11.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

12.
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how auditory input affects 8- and 14-month-olds’ performance on individuation tasks. The results of the current study indicate that both unfamiliar sounds and words interfered with infants’ performance on an individuation task, with cross-modal interference effects being numerically stronger for unfamiliar sounds. The effects of auditory input on a variety of lexical tasks are discussed.

13.
In hybrid search, observers memorize a number of possible targets and then search for any of these in visual arrays of items. Wolfe (2012) has previously shown that the response times in hybrid search increase with the log of the memory set size. What enables this logarithmic search of memory? One possibility is a series of steps in which subsets of the memory set are compared to all items in the visual set simultaneously. In the present experiments, we presented single visual items sequentially in a rapid serial visual presentation (RSVP) display, eliminating the possibility of simultaneous testing of all items. We used a staircasing procedure to estimate the time necessary to effectively detect the target in the RSVP stream. Processing time increased in a log–linear fashion with the number of potential targets. This finding eliminates the class of models that require simultaneous comparison of some memory items to all (or many) items in the visual display. Experiment 3 showed that, similar to visual search, memory search efficiency in this paradigm is influenced by the similarity between the target set and the distractors. These results indicate that observers perform separate memory searches on each eligible item in the visual display. Moreover, it appears that memory search for one item can proceed while other items are being categorized as “eligible” or “not eligible.”
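The staircasing procedure mentioned above can be illustrated with a common adaptive rule. The abstract does not specify which staircase was used, so this sketch assumes a standard 2-down/1-up rule (which converges near 71% correct) that adjusts per-item presentation time; the step size and lower bound are chosen purely for illustration.

```python
def staircase_step(duration_ms, correct_streak, correct,
                   step_ms=10, floor_ms=20):
    """2-down/1-up staircase: shorten the per-item presentation time
    after two consecutive correct responses, lengthen it after any
    error. Returns the updated (duration, streak) state."""
    if correct:
        correct_streak += 1
        if correct_streak == 2:
            duration_ms = max(floor_ms, duration_ms - step_ms)
            correct_streak = 0
    else:
        duration_ms += step_ms
        correct_streak = 0
    return duration_ms, correct_streak

# two correct responses shorten the duration; one error lengthens it
d, streak = staircase_step(100, 0, True)     # (100, 1)
d, streak = staircase_step(d, streak, True)  # (90, 0)
d, streak = staircase_step(d, streak, False) # (100, 0)
```

Averaging the durations at the staircase's reversal points gives the threshold presentation time that the experiments compare across memory set sizes.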

14.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1–3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

15.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.

16.
Pattern redundancy is a key concept for representing the amount of internal mental load (encoding efficiency) needed for pattern perception/recognition. The present study investigated how pattern redundancy influences encoding and memory processes in the visual system using a rapid serial visual presentation (RSVP) paradigm. With RSVP, it is well known that participants often fail to detect repetitions of words (repetition blindness, RB). We used this phenomenon as an index of the encoding and storage of visual patterns. In three experiments, we presented patterns with higher and lower redundancy, as defined by Garner’s equivalent set size (ESS). The results showed that RB occurred more frequently for higher redundancy patterns when the temporal distance between the targets was less than 500 ms; this tendency was reversed with longer temporal distances of over 500 ms. Our results suggest that pattern redundancy modulates both the early encoding and subsequent memory processes of a representation.
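Garner's equivalent set size, used above to define redundancy, is the number of distinct patterns a figure produces under the eight rotations and reflections of a square: highly symmetric (more redundant) patterns map onto few distinct variants and so have small ESS. A minimal sketch for binary grid patterns, with example patterns that are illustrative rather than taken from the study:

```python
def transforms(grid):
    """All 8 dihedral variants (rotations and reflections) of a square
    binary pattern, given as a tuple of row tuples."""
    def rot(g):   # 90-degree clockwise rotation
        return tuple(zip(*g[::-1]))
    def flip(g):  # horizontal mirror
        return tuple(row[::-1] for row in g)
    out, g = [], grid
    for _ in range(4):
        out.extend([g, flip(g)])
        g = rot(g)
    return out

def equivalent_set_size(grid):
    """Garner's ESS: count of distinct patterns under rotation/reflection."""
    return len(set(transforms(grid)))

symmetric = ((1, 0, 1),
             (0, 1, 0),
             (1, 0, 1))    # fully symmetric: every transform is identical
asymmetric = ((1, 1, 0),
              (0, 1, 0),
              (0, 0, 0))   # no symmetry: all 8 transforms are distinct
```

`equivalent_set_size(symmetric)` returns 1 while `equivalent_set_size(asymmetric)` returns 8, so the first pattern is the more redundant one in Garner's sense.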

17.
The relative efficacy of auditory feedback, varying in the amount of information contained in the feedback signal, for the self-control of heart rate (HR) was determined by comparing groups of 10 Ss who received either: (a) continuous proportional feedback, (b) discontinuous proportional feedback, (c) binary feedback, (d) heart sounds, or (e) no feedback. At each of two sessions Ss were given eight trials in each direction on which they were to raise or lower their HR.

Without regard to the amount of information contained in the signal, presentation of auditory feedback aided Ss in raising HR relative to Ss who received no feedback; however, feedback did not yield an advantage in lowering it. These results suggest that perhaps the informing quality of feedback is multidimensional and also that perhaps the mechanisms involved in acceleration and deceleration of HR may be different.

18.
Experimental studies in nonhuman primates and functional imaging studies in humans have underlined the critical role played by the prefrontal cortex (PFC) in working memory. However, the precise organization of the frontal lobes with respect to the different types of information operated upon is a point of controversy, and several models of functional organizations have been proposed. One model, developed by Goldman-Rakic and colleagues, postulates a modular organization of working memory based on the type of information processing (the domain specificity hypothesis). Evidence to date has focused on the encoding of the locations of visual objects by the dorsolateral PFC, whereas the ventrolateral PFC is suggested to be involved in processing the features and identity of objects. In this model, domain should refer to any sensory modality that registers information relevant to that domain—for example, there would be visual and auditory input to a spatial information processing region and a feature analysis system. In support of this model, recent studies have described pathways from the posterior and anterior auditory association cortex that target dorsolateral spatial-processing regions and ventrolateral object-processing regions, respectively. In addition, physiological recordings from the ventrolateral PFC indicate that some cells in this region are responsive to the features of complex sounds. Finally, recordings in adjacent ventrolateral prefrontal regions have shown that the features of somatosensory stimuli can be discriminated and encoded by ventrolateral prefrontal neurons. These discoveries argue that two domains, differing with respect to the type of information being processed, and not with respect to the sensory modality of the information, are specifically localized to discrete regions of the PFC and embody the domain specificity hypothesis, first proposed by Patricia Goldman-Rakic.

19.
Repeated and orthographically similar words are vulnerable in RSVP, as observed using the repetition blindness (RB) paradigm. Prior researchers have claimed that RB is increased for emotion words, but the mechanism for this was unclear. We argued that RB should instead be reduced for words with properties that capture attention, such as emotion words. Using an orthographic RB design, we found that words with negative emotional valence had a report advantage when they were the second of two similar words (e.g., less RB occurred with HORSE curse than with HORSE purse). This makes RB for emotion words parallel to the advantage emotion words show in the attentional blink phenomenon. The findings demonstrate the neglected role of competition in the conscious recognition of multiple words under conditions of brief display and masking.

20.
Previous research suggests that understanding the gist of a scene relies on global structural cues that enable rapid scene categorization. This study used a repetition blindness (RB) paradigm to interrogate the nature of the scene representations used in such rapid categorization. When stimuli are repeated in a rapid serial visual presentation (RSVP) sequence (~10 items/sec), the second occurrence of the repeated item frequently goes unnoticed, a phenomenon that is attributed to a failure to consolidate two conscious episodes (tokens) for a repeatedly activated type. We tested whether RB occurs for different exemplars of the same scene category, which share conceptual and broad structural properties, as well as for identical and mirror-reflected repetitions of the same scene, which additionally share the same local visual details. Across two experiments, identical and mirror-image scenes consistently produced a repetition facilitation, rather than RB. There was no convincing evidence of either RB or repetition facilitation for different members of a scene category. These findings indicate that in the first 100–150 ms of processing, scenes are represented in terms of local visual features rather than more abstract category-general features, and that, unlike other kinds of stimuli (words or objects), scenes are not susceptible to token individuation failure.
