Similar articles
20 similar articles found
1.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

2.
This study tested the hypothesis that even the simplest cognitive tasks require the storage of information in working memory (WM), distorting any information that was previously stored in WM. Experiment 1 tested this hypothesis by requiring observers to perform a simple letter discrimination task while they were holding a single orientation in WM. We predicted that performing the task on the interposed letter stimulus would cause the orientation memory to become less precise and more categorical compared to when the letter was absent or when it was present but could be ignored. This prediction was confirmed. Experiment 2 tested the modality specificity of this effect by replacing the visual letter discrimination task with an auditory pitch discrimination task. Unlike the interposed visual stimulus, the interposed auditory stimulus produced little or no disruption of WM, consistent with the use of modality-specific representations. Thus, performing a simple visual discrimination task, but not a simple auditory discrimination task, distorts information about a single feature being maintained in visual WM. We suggest that the interposed task eliminates information stored within the focus of attention, leaving behind a WM representation outside the focus of attention that is relatively imprecise and categorical.

3.
This study investigated whether individual differences in cognitive functions, attentional abilities in particular, were associated with individual differences in the quality of phonological representations, resulting in variability in speech perception and production. To do so, we took advantage of a tone merging phenomenon in Cantonese, and identified three groups of typically developed speakers who could differentiate the two rising tones (high and low rising) in both perception and production [+Per+Pro], only in perception [+Per–Pro], or in neither modality [–Per–Pro]. Perception and production were reflected, respectively, by discrimination sensitivity d′ and acoustic measures of pitch offset and rise time differences. Components of the event-related potential (ERP), namely the mismatch negativity (MMN) and the ERPs to amplitude rise time, were taken to reflect the representations of the acoustic cues of tones. Components of attention and working memory in the auditory and visual modalities were assessed with published test batteries. The results show that individual differences in both perception and production are linked to how listeners encode and represent the acoustic cues (pitch contour and rise time), as reflected by ERPs. The present study advances our knowledge beyond previous work by integrating measures of perception, production, attention, and quality of representation to offer a comprehensive account of the cognitive factors underlying individual differences in speech processing. In particular, it is proposed that domain-general attentional switching affects the quality of perceptual representations of the acoustic cues, giving rise to individual differences in perception and production.
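To make the sensitivity measure above concrete: d′ is the standard signal-detection index computed from hit and false-alarm rates. The abstract does not describe the authors' exact scoring pipeline, so the sketch below is only the textbook formula, with a log-linear correction for extreme rates, applied to hypothetical trial counts.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(H) - z(FA).
    The +0.5 / +1.0 (log-linear) correction keeps the z-transform
    finite when an observed rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical tone-discrimination counts: 42/48 "different" trials detected,
# 9/48 "same" trials incorrectly called different.
print(d_prime(42, 6, 9, 39))  # roughly 2.0
```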

4.
The effects of feature identity in an operant serial feature-negative discrimination (F1 T1−, T1+) were examined in two experiments with rats. In Experiment 1, rats were trained with two operant serial feature-negative discriminations in which different operants were reinforced during two auditory target cues (T1 and T2). The features (F1 and F2) were two neutral cues (visual or auditory stimuli), two motivationally significant cues (flavored sucrose solutions, also used as the operant reinforcers), or one neutral and one motivationally significant cue. Experiment 1 showed that discrimination acquisition, transfer performance, and feature–target interval testing were facilitated with a flavored sucrose feature. Experiment 2 showed that flavored sucrose-alone presentations, more than flavored sucrose trained in a pseudodiscrimination (F1 T1+, T1+), shared several similarities with a standard flavored sucrose feature. The results suggest flavored sucrose rapidly acquires inhibitory properties, which facilitates operant serial feature-negative discrimination performance.

5.
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on the precision of visual working memory, specifically on the correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either the direction of motion of random dot kinematograms of different colour, or the orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall in the concurrent working memory task. A systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus, in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory, but not necessarily to other aspects of working memory.
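For readers unfamiliar with how misbinding is separated from random guessing in such delayed-reproduction tasks: report errors are commonly fitted with a three-component mixture model (Bays, Catalao, & Husain, 2009). The abstract does not name the model these authors fitted, so the following is only the standard formulation:

\[
p(\hat{\theta}) = \alpha\,\phi_{\kappa}(\hat{\theta}-\theta) + \beta\,\frac{1}{m}\sum_{i=1}^{m}\phi_{\kappa}(\hat{\theta}-\theta_{i}^{*}) + \frac{\gamma}{2\pi}, \qquad \alpha+\beta+\gamma=1,
\]

where \(\phi_{\kappa}\) is a von Mises (circular normal) density with concentration \(\kappa\), \(\theta\) is the target feature value, the \(\theta_{i}^{*}\) are the \(m\) nontarget values, and \(\alpha\), \(\beta\), \(\gamma\) are the proportions of target, misbinding (nontarget), and guess responses. On this reading, the reported increase in misbinding errors under attention load corresponds to a rise in \(\beta\).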

6.
Four experiments examined the effects of encoding multiple standards in a temporal generalization task in the visual and auditory modalities both singly and cross-modally, using stimulus durations ranging, across different experiments, from 100 to 1,400 ms. Previous work has shown that encoding and storing multiple auditory standards of different durations resulted in systematic interference with the memory of the standard, characterized by a shift in the location of peak responding, and this result, from Ogden, Wearden, and Jones (2008), was replicated in the present Experiment 1. Experiment 2 employed the basic procedure of Ogden et al. using visual stimuli and found that encoding multiple visual standards did not lead to performance deterioration or any evidence of systematic interference between the standards. Experiments 3 and 4 examined potential cross-modal interference. When two standards of different modalities and durations were encoded and stored together, there was also no evidence of interference between the two. Taken together, these results, and those of Ogden et al., suggest that, in humans, visual temporal reference memory may be more permanent than auditory reference memory and that auditory temporal information and visual temporal information do not mutually interfere in reference memory.

7.
The second of two targets is often missed when presented shortly after the first target, a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing. However, forcing participants to immediately process the auditory target, either by presenting interfering sounds during retrieval or by making the first target directly relevant for a speeded response to the second target, did not result in a return of a cross-modal AB. The findings argue against echoic memory as an explanation for efficient cross-modal processing. Instead, we hypothesized that a cross-modal AB may be observed when the different modalities use common representations, such as semantic representations. In support of this, a deficit for visual letter recognition returned when the auditory task required a distinction between spoken digits and letters.

8.
Using an attentional capture paradigm, behavioral and event-related potential (ERP) experiments examined how the precision demands placed on working memory representations affect attentional guidance. The behavioral results showed that under low-precision demands, only one working memory representation guided attention, and a representation in a high-activation state captured attention more strongly than one in a low-activation state; under high-precision demands, two working memory representations guided attention, and representations in high- and low-activation states did not differ in the attentional capture they produced. The ERP results showed that high-precision demands elicited larger NSW and LPC components than low-precision demands; under high-precision demands, distractors matching a memory item elicited a larger N2 and a smaller N2pc than nonmatching distractors, whereas under low-precision demands, matching and nonmatching distractors did not differ in the N2 or N2pc they elicited. These findings suggest that precision demands on working memory representations may influence attentional guidance because, under high-precision demands, the working memory representations consume more cognitive resources, fewer resources remain for the search target, and distractors therefore capture more attention.

9.
The equivalence of visual and auditory graphical displays was examined in two experiments. In Experiment 1, multidimensional scaling techniques were applied to paired comparison similarity judgments of both auditory and visual displays of simple periodic wave forms. In Experiment 2, a subset of perceptually similar pairs of wave forms was selected as the stimulus set for an AB-X discrimination task in both auditory and visual modalities. Results suggest much greater apparent visual-auditory equivalence for the similarity rating task than for the more difficult discrimination task, implying that one should consider the demands of the task when deciding whether auditory graphic displays are suitable alternatives to more traditional visual displays.
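As a minimal illustration of the scaling step described above (the abstract does not specify the MDS variant, dimensionality, or software the authors used), pairwise similarity judgments can be converted to dissimilarities and embedded with an off-the-shelf MDS routine; the data below are hypothetical:

```python
import numpy as np
from sklearn.manifold import MDS

# Averaged paired-comparison similarity ratings for three wave forms
# (hypothetical; 1 = judged identical, 0 = judged completely dissimilar).
similarity = np.array([[1.0, 0.8, 0.3],
                       [0.8, 1.0, 0.4],
                       [0.3, 0.4, 1.0]])
dissimilarity = 1.0 - similarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # 2-D configuration of the stimuli
print(coords)
print("stress:", mds.stress_)
```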

10.
Previous research has demonstrated a potent, stimulus-specific form of interference in short-term auditory memory. This effect has been interpreted in terms of interitem confusion and grouping, but the present experiments suggested that interference might be a feature-specific phenomenon. Participants compared standard and comparison tones over a 10-sec interval and were required to determine whether they differed in timbre. A single interfering distractor tone was presented either 50 msec or 8 sec after the offset of the standard (Experiment 1) or 2 sec prior to its onset (Experiment 2). The distractor varied in the number of features it shared with the standard and comparison, and this proved critical, since performance on the task was greatly impaired when the distractor either consisted of novel, unshared features (Experiment 1) or contained the distinguishing feature of the comparison tone (Experiments 1 and 2). These findings were incompatible with earlier accounts of forgetting but were fully explicable by the recent timbre memory model, which associates interference in short-term auditory memory with an "updating" process and feature overwriting. These results suggest similarities with the mechanisms that underlie forgetting in verbal short-term memory.

11.
Understanding how the human brain integrates features of perceived events calls for the examination of binding processes within and across different modalities and domains. Recent studies of feature-repetition effects have demonstrated interactions between shape, color, and location in the visual modality and between pitch, loudness, and location in the auditory modality: repeating one feature is beneficial if other features are also repeated, but detrimental if not. These partial-repetition costs suggest that co-occurring features are spontaneously bound into temporary event files. Here, we investigated whether these observations can be extended to features from different sensory modalities, combining visual and auditory features in Experiment 1 and auditory and tactile features in Experiment 2. The same types of interactions as for unimodal feature combinations were obtained, including interactions between stimulus and response features. However, the size of the interactions varied with the particular combination of features, suggesting that the salience of features and the temporal overlap between feature-code activations play a mediating role.

12.
Several lines of evidence suggest that during processing of events, the features of these events become connected via episodic bindings. Such bindings have been demonstrated for a large number of visual and auditory stimulus features, like color and orientation, or pitch and loudness. Importantly, most visual and auditory events typically also involve temporal features, like onset time or duration. So far, however, whether temporal stimulus features are also bound into event representations has never been tested directly. The aim of the present study was to investigate possible binding between stimulus duration and other features of auditory events. In Experiment 1, participants had to respond with one of two keys to a low- or high-pitch sine tone. Critically, the tones were presented with two different presentation durations. Sequential analysis of RT data indicated binding of stimulus duration into the event representation: at pitch repetitions, performance was better when both pitch and duration repeated, relative to when only pitch repeated and duration switched. This finding was replicated with loudness as the relevant stimulus feature in Experiment 2. In sum, the results demonstrate that temporal features are bound into auditory event representations. This finding is an important advancement for binding theory in general, and raises several new questions for future research.
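The sequential RT analysis sketched above is usually summarized as a partial-repetition cost, i.e., the 2 × 2 interaction of pitch repetition and duration repetition across consecutive trials. The abstract does not give the exact contrast the authors computed; a common form is:

\[
\text{Binding effect} = \bigl(RT_{\text{pitch rep., duration switch}} - RT_{\text{pitch rep., duration rep.}}\bigr) - \bigl(RT_{\text{pitch switch, duration switch}} - RT_{\text{pitch switch, duration rep.}}\bigr)
\]

A positive value indicates that a duration switch hurts more when the pitch repeats than when it switches, which is the pattern reported above and the usual signature of the two features being bound into one event representation.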

13.
Experiments 1 and 2 compared, with a single-stimulus procedure, the discrimination of filled and empty intervals in both auditory and visual modalities. In Experiment 1, in which intervals were about 250 msec, the discrimination was superior with empty intervals in both modalities. In Experiment 2, with intervals lasting about 50 msec, empty intervals showed superior performance with visual signals only. In Experiment 3, for the auditory modality at 250 msec, the discrimination was easier with empty intervals than with filled intervals with both the forced-choice (FC) and the single-stimulus (SS) modes of presentation, and the discrimination was easier with the FC than with the SS method. Experiment 4, however, showed that at 50 and 250 msec, with an FC-adaptive procedure, there were no differences between filled and empty intervals in the auditory mode; the differences observed with the visual mode in Experiments 1 and 2 remained significant. Finally, Experiment 5 compared differential thresholds for four marker-type conditions, filled and empty intervals in the auditory and visual modes, for durations ranging from .125 to 4 sec. The results showed (1) that the differential threshold differences among marker types are important for short durations but decrease with longer durations, and (2) that a generalized Weber’s law generally holds for these conditions. The results as a whole are discussed in terms of timing mechanisms.
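For reference, the threshold measure discussed above is usually expressed as a Weber fraction, and the "generalized Weber's law" invoked in the timing literature adds a duration-independent term to the classic proportional relation. The abstract does not give the authors' exact formulation, but a common form is:

\[
\text{WF} = \frac{\Delta T}{T}, \qquad \Delta T = \sqrt{k^{2}T^{2} + c},
\]

so that the Weber fraction approaches the constant \(k\) at long durations while the additive term \(c\) inflates it at short durations, which is consistent with the reported pattern that marker-type differences are largest for brief intervals.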

14.
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.

15.
It has been proposed that the perception of very short durations is governed by sensory mechanisms, whereas the perception of longer durations depends on cognitive capacities. Four duration discrimination tasks (modalities: visual, auditory; base duration: 100 ms, 1000 ms) were used to study the relation between time perception, age, sex, and cognitive abilities (alertness, visual and verbal working memory, general fluid reasoning) in 100 subjects aged between 21 and 84 years. Temporal acuity was higher (Weber fractions were lower) for longer stimuli and for the auditory modality. Age was related to performance in the visual 100 ms condition only, with lower temporal acuity in older participants. Alertness was significantly related to auditory and visual Weber fractions for shorter stimuli only. Additionally, visual working memory was a significant predictor for shorter visual stimuli. These results indicate that alertness, but also working memory, are associated with temporal discrimination of very brief durations.

16.
During social interaction, people represent information related to both themselves and others, forming socially shared (joint) representations. However, whether self- and other-related information is stored in an integrated or a separate form remains unclear. The present study addressed this question using a working memory Ponzo illusion as the index. In the experiments, two participants were each asked to memorize, and later recall, part of the line segments that together make up a Ponzo illusion. The results showed that, in both competitive and cooperative settings, individuals not only represented the other-related information but also integrated it with their own, producing a working memory Ponzo illusion effect. This indicates that under social interaction, self- and other-related information is automatically integrated into a socially shared representation.

17.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.
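"Coactivation" in this literature is typically diagnosed by testing Miller's (1982) race-model inequality on the redundant-target RT distributions; the abstract does not state which test was used here, so the inequality is given only as the standard criterion:

\[
P(RT \le t \mid \text{both ears}) \le P(RT \le t \mid \text{left ear only}) + P(RT \le t \mid \text{right ear only}) \quad \text{for all } t.
\]

A reliable violation of this bound at some \(t\) rules out a simple race between independent channels and is taken as evidence that the two inputs are pooled (coactivation).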

18.
Two experiments were conducted to determine whether the auditory and visual systems process simultaneously presented pairs of alphanumeric information differently. In Experiment 1, different groups of subjects were given extensive practice recalling pairs of superimposed visual or auditory digits in simultaneous order (the order of arrival) or successive order (one member of each digit pair in turn, followed by the other pair member). For auditory input, successive order of recall was more accurate, particularly for the last two of three pairs presented, whereas for visual input, simultaneous order of recall was more accurate. In Experiment 2, subjects were cued to recall in one or the other order either immediately before or after stimulus input. Recall order results were the same as for Experiment 1, and precuing did not facilitate recall in either order for either modality. These results suggest that processing in the auditory system can only occur successively across time, whereas in the visual system processing can only occur simultaneously in space.

19.
Dual-process accounts of working memory have suggested distinct encoding processes for verbal and visual information in working memory, but encoding for nonspeech sounds (e.g., tones) is not well understood. This experiment modified the sentence–picture verification task to include nonspeech sounds, with a complete factorial examination of all possible stimulus pairings. Participants studied simple stimuli (pictures, sentences, or sounds) and encoded the stimuli verbally, as visual images, or as auditory images. Participants then compared their encoded representations to verification stimuli (again pictures, sentences, or sounds) in a two-choice reaction time task. With some caveats, the encoding strategy appeared to be as important as, or more important than, the external format of the initial stimulus in determining the speed of verification decisions. Findings suggested that: (1) auditory imagery may be distinct from verbal and visuospatial processing in working memory; (2) visual perception but not visual imagery may automatically activate concurrent verbal codes; and (3) the effects of hearing a sound may linger for some time despite recoding in working memory. We discuss the role of auditory imagery in dual-process theories of working memory.

20.
Using a single-probe change detection paradigm, this study examined how figures defined by two feature dimensions are stored in visual object and visual spatial working memory, and estimated the corresponding capacities. Forty participants (mean age 20.56 ± 1.73 years) were randomly assigned to two equal groups, which completed Experiment 1 and Experiment 2, respectively. The stimuli in Experiment 1 were figures composed of two basic features, color and shape; the stimuli in Experiment 2 were Landolt rings differing in color and gap orientation. Both experiments showed that (1) memory performance in the feature-exchange change condition did not differ significantly from the worst performance observed in the single-feature change conditions; (2) performance on the spatial working memory task was significantly better than on the object working memory task; and (3) participants could store 2-3 objects and 3-4 spatial locations in visual working memory. These results indicate that figures composed of features from two different dimensions are stored in an integrated manner in both visual object and visual spatial working memory, and that the capacity of spatial working memory exceeds that of object working memory.
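Capacity estimates of the "2-3 objects / 3-4 locations" kind are usually derived from single-probe change-detection accuracy with Cowan's K. The abstract (translated above) does not state which estimator the authors used, so the sketch below is only the standard formula applied to hypothetical numbers:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K for single-probe change detection: K = N * (H - FA),
    where H is the hit rate on change trials and FA the false-alarm
    rate on no-change trials."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical: 6-item array, 75% hits, 25% false alarms -> K = 3.0 items
print(cowan_k(6, 0.75, 0.25))
```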
