Similar Articles
 20 similar articles retrieved (search time: 140 ms)
1.
To explore the relationship between auditory and visual processing styles and strategy training in children in grades 1-2, this study used a picture-sentence memory test to determine each child's dominant mode of processing and to evaluate the effects of training. Picture tasks were presented following auditory or visual interference: children who were more distracted by visual interference were classified as visual processors, and those more distracted by auditory interference were classified as auditory processors. Children of both types were assigned to one of three conditions: an internal imagery strategy group, which memorized the presented pictures by imagining them combined into a whole; a sentence strategy group, which memorized the pictures verbally; and a control group. The results showed that the sentence strategy was effective for both auditory and visual processors, whereas the internal imagery strategy was effective only for auditory processors.

2.
A Comparative Study of Memory and Memory Monitoring in Gifted and Normal Children   (Cited by 7: 0 self-citations, 7 by others)
施建农 《心理学报》1990,23(3):101-107
In this experiment, 20 gifted children (mean age 11 years 2 months) and 20 normal children (mean age 11 years 3 months) served as participants, with digits and figures as materials, in a preliminary comparative study of the characteristics of and differences in memory, memory organization, and memory monitoring between the two groups. The results were: (1) gifted children were superior to normal children not only in amount recalled and memory speed, but also showed better developed metamemory; (2) the relationship between children's memory and memory monitoring is fairly complex, but memory speed, as an important aspect of memory performance, correlated significantly with memory monitoring; (3) as components of metamemory, memory organization and memory monitoring were significantly correlated.

3.
A Comparative Study of Short-Term Memory Characteristics of Low- and High-Achieving Students   (Cited by 6: 1 self-citation, 5 by others)
徐芬  蒋锋 《心理科学》1999,22(5):411-414
This study compared the characteristics of picture, digit, and word memory in high- and low-achieving students in grades 1, 3, and 5 of primary school. The results showed: (1) the two groups did not differ in picture memory; in grades 1 and 3 the differences between high and low achievers lay mainly in memory for digits and concrete words, whereas in grade 5 the differences lay mainly in memory for digits and abstract words; (2) judging from the cued/uncued results and hits on lure stimuli, the memory differences between the two groups were partly due to differences in strategy use, and with cueing, the digit and word memory performance of the low-achieving children improved.

4.
谭亮 《美与时代》2013,(7):73-75
With the cross-development of art and science, the two media of image and sound will be combined ever more closely. Sound visualization is the result of the rational, integrated use of sound, image, and animation; it brings the advantages of multiple modes of information transmission into full play, combines the experiences of multiple senses, and enhances the audience's artistic experience. When the space of sound and the space of vision interweave and blend, they present the audience with dual sensory stimulation, from hearing to vision, thereby realizing the process of sound visualization.

5.
Using a 2×3 within-subjects design with attention condition and target stimulus type as experimental variables, this study examined how attention directed to different sensory modalities differentially affects audiovisual semantic integration. The results showed that only when participants attended to visual and auditory stimuli simultaneously did they respond fastest to semantically congruent audiovisual stimuli, i.e., a redundant-signals effect emerged. When attention was selectively directed to a single modality, semantically congruent audiovisual stimuli showed no processing advantage. Further analysis revealed that the processing advantage of semantically congruent audiovisual stimuli under simultaneous attention to vision and hearing arose from integration of their visual and auditory components. That is, semantically congruent audiovisual stimuli were integrated only when vision and hearing were attended simultaneously, and semantically incongruent audiovisual stimuli were not integrated. Under selective attention to a single modality, audiovisual stimuli were not integrated regardless of semantic congruency.

6.
Using simple figures as visual stimuli and brief pure tones as auditory stimuli, participants were instructed to attend to different modalities (attend vision, attend audition, attend both) so as to create different attention states (selective versus divided attention), and the effect of attention on multisensory integration was examined. Only under divided attention did participants respond fastest and most accurately to bimodal targets. Race-model analysis showed that this processing advantage for bimodal targets arose from integration of the audiovisual bimodal stimuli. These results indicate that multisensory integration occurs only under divided attention.

7.
Smith and Hunt (1998) first demonstrated the modality effect in false memory using the DRM paradigm: a visual study modality reduces false recognition or false recall relative to an auditory study modality. This finding not only triggered a series of studies on the modality effect in false memory, but also gave rise to many different views on its underlying mechanism. Using the DRM paradigm and ERP techniques, the present experiment examined old/new effects for false memory at retrieval in the visual and auditory modalities, in order to gain a deeper understanding of the internal processing mechanisms of false memory. ERP old/new effects were found in both modalities in the 300-500 ms and 500-700 ms time windows, but the modalities differed in scalp distribution and in the old/new effects for false memory. This indicates that the old/new effects at retrieval involve different neural activity in the visual and auditory modalities, and that differences between study modalities in monitoring processes at retrieval play a very important role in producing the modality effect in false memory.

8.
林云强 《心理科学》2014,37(2):349-356
Thirty children with autism spectrum disorder (ASD) served as participants. Using a visual search task with environmental pictures, a Tobii X120 eye tracker recorded fixation duration and fixation count to explore the characteristics of threat perception in children with ASD. The results showed: (1) compared with non-threat targets, children with ASD showed a perceptual advantage for threat targets, and their threat perception was affected by stimulus type and matrix size; (2) children with ASD showed attentional fixation on negative pictures of the fully threatening stimulus (snakes), reflected in significantly longer fixation times on environmental pictures containing that stimulus; (3) eye-tracking technology can be effectively applied to the study of threat perception in some children with ASD.

9.
潘禄  钱秀莹 《心理科学进展》2015,23(11):1910-1919
Rhythm perception is a cognitive phenomenon unique to humans. Audition has an advantage in rhythm processing, and auditory rhythms synchronize better with body movement, but the visual and tactile modalities also have their own characteristics in rhythm perception and interact with audition extensively. Rhythm perception in the visual modality is relatively weak and, when a visual rhythm is presented together with an auditory one, its temporal localization is pulled toward the auditory rhythm; it can, however, be strengthened by adding motion information and by accumulated experience. Rhythmic stimuli can modulate the temporal allocation of attention and synchronize it with the rhythm, and this modulation can occur within a single modality or across modalities. Touch and audition are closely linked, and people can perform high-level rhythm processing through audiotactile integration.

10.
The sound-induced flash illusion is a typical audiovisual integration illusion: when visual flashes and auditory beeps presented within 100 ms of each other differ in number, participants perceive the number of flashes as equal to the number of beeps. Factors influencing the illusion include within-subject factors, both bottom-up and top-down, as well as between-subject factors such as the degree of reliance on visual versus auditory stimuli, the development of audiovisual integration, and perceptual sensitivity to audiovisual stimuli. In terms of time course the illusion arises mainly at early processing stages, and in terms of brain regions it involves multiple cortical and subcortical areas. Future research should further examine how cognitive processes such as attention, reward, and the mode of audiovisual integration affect the illusion, should attend to its effects on memory and learning, and should combine computational modeling and neuroscientific methods to further explore its cognitive and neural mechanisms.

11.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

12.
Selective attention to visual and auditory stimuli and reflection-impulsivity were studied in normal and learning-disabled 8- and 12-year-old boys. Multivariate analyses, followed by univariate and paired-comparison tests, indicated that the normal children increased in selective attention efficiency with age to both visual and auditory stimuli. Learning-disabled children increased in selective attention efficiency with age to auditory, but not to visual, stimuli. Both groups increased with age in reflection as measured by Kagan's Matching Familiar Figures Test (MFF). The 8-year-old learning-disabled children were more impulsive than the 8-year-old normals on MFF error scores, but not on MFF latency scores. No difference occurred between the 12-year-old learning-disabled and normal children on either MFF error or MFF latency scores. Correlations between the selective attention scores and MFF error and latency scores were not significant. This research was supported in part by BEH grant G007507227. The authors are indebted to Eleanor McCandless for her assistance in securing the learning-disabled subjects and to James McLeskey and Michael Popkin for their assistance in collecting and analyzing data.

13.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants’ picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.

14.
The effects of rehearsing actions by source (slideshow vs. story) and of test modality (picture vs. verbal) on source monitoring were examined. Seven- to 8-year-old children (N = 30) saw a slideshow event and heard a story about a similar event. One to 2 days later, they recalled the events by source (source recall), recalled the events without reference to source (no-source-cue recall), or engaged in no recall. Seven to 8 days later, all children received verbal and picture source-monitoring tests. Children in the source recall group were less likely than children in the other groups to claim they saw actions merely heard in the story. No-source-cue recall impaired source identification of story actions. The picture test enhanced recognition, but not source monitoring, of slide actions. Increasing the distinctiveness of the target events (Experiment 2) allowed the picture test to facilitate slideshow action discrimination by children in the no-recall group.

15.
The aim of the present study was to determine how interhemispheric collaboration and visual attention in basic lexical tasks develop during early childhood. Two- to 6-year-old children were asked to name two different pictures presented simultaneously either one in each visual hemifield (bilateral condition) or both in a single hemifield (either right or left, unilateral condition). In the bilateral condition, children were overall more accurate in naming right visual field than left visual field pictures. This difference was significant for 2- and 3- to 4-year-old children, but not for 5- to 6-year-old children. These results show that the right and left cerebral hemispheres do not develop naming competencies equally well in early childhood. A second analysis, based on the order of report, showed that when 2- and 3- to 4-year-old children named both the left and the right visual field pictures, they named the right visual field picture first. In contrast, at the age of 5-6 years, children named the left visual field picture first and overall naming performance reached a ceiling level. Several interpretations are proposed to explain this shift of visual attention at the age of 5-6 years. In the unilateral condition, no difference was found between naming accuracy in the right and left visual fields, presumably because interhemispheric pathways are functional: visual stimuli presented to the right hemisphere can be processed by the most competent left hemisphere without degradation of information. This result confirms previous findings on the development of interhemispheric collaboration.

16.
The purpose of the study was to develop a battery of tests for use in evaluation of intra- and intersensory development of young children. A battery of 15 tests (4 visual, 4 auditory, 4 tactile-kinesthetic, and 3 intersensory) was administered to 109 normally developing 6- and 8-year-old and 32 slowly developing or learning disabled children. Interdependence of test items within each intrasensory and the intersensory category was determined; intercorrelations ranged from .00 to .78. Reliability estimates were also determined. Face validity was claimed for each item. The effects of age or developmental level on test performance were established. Based upon the interdependence of the tests, reliability estimates, and the capacity of the tests to discriminate among groups classified according to age or developmental level, a battery of 10 intra- and intersensory tests was proposed. The battery has 3 tests of visual perception: visual memory, dynamic depth perception, and size discrimination; 3 tests of auditory perception: auditory discrimination, auditory memory of related syllables, and auditory sequential memory of numbers; 2 tests of tactile-kinesthetic perception: tactile integration and movement awareness; and 2 tests of intersensory integration: auditory-tactile integration and auditory-visual integration.

17.
Purpose: Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Methods: Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development of CWS, were compared to those of TDC. Results: There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. Conclusions: The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. Educational objectives: The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter.

18.
A forced-choice reaction time (RT) task was used to assess developmental changes in filtering and the concomitant ability to narrow the focus of the attentional lens. Participants included 20 children in each of four age groups (4, 5, 7, and 9 years), as well as 20 adults between the ages of 21 and 29 years. Conditions varied with regard to the presence or absence of distractors and their proximity to a target stimulus, and the presence or absence of a visual window within which the target stimulus was presented. Age-related differences in the ability to filter task-irrelevant stimuli were found. The performance of 4-year-old children was adversely affected with the presence of distractors located at both 5.7° and 0.95° of visual angle from target stimuli, whereas that of children aged 5, 7, and 9 was negatively affected only with distractors 0.95° of visual angle from the target. Adults' performance was not adversely affected by the presence of distractors. Developmental differences in focusing attention were further highlighted by the finding that the presence of a visual window cue was only associated with faster RTs among 4-year-old children. These results are discussed in terms of the zoom-lens metaphor of visual attention, and the development of the ability to vary the size of an attentive zoom-lens in response to task requirements.

19.
In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded ‘Yes’ when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual ‘Morse-code’ sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities.

20.
Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
