Similar Literature
20 similar documents were found.
1.
Two experiments were performed to determine whether a physiological correlate of visual imagery could be measured from visually evoked responses (VERs). High- and low-imagery groups were used. There was no direct effect of imagery, although some differences between the groups emerged. These differences are thought to be due to factors associated with imaging, but not imagery per se.

2.
Visual imagery and hypnotic susceptibility. Cited by: 1 (self-citations: 0, citations by others: 1)

3.
Research in dream recall frequency has failed to isolate psychological variables which clearly and reliably differentiate frequent dream recallers from infrequent recallers. The present study tested the hypothesis that frequent recallers have a greater capacity for visual imagery than infrequent recallers. Subjects selected on the basis of reported dream recall frequency were administered a Paired-Associate Learning task designed to measure visual imagery, a rating scale of imagery clarity and vividness, and a subjective measure of imagery controllability. The results provide support for the hypothesis and, together with other evidence, suggest that a generalized capacity for visualization may contribute to the quality of the dreaming experience and, consequently, to its recallability.

4.
Speech imagery not only plays an important role in the brain's pre-processing mechanisms but is also a current focus of research in the brain-computer interface (BCI) field. Compared with normal speech production, speech imagery shows many similarities in its theoretical models, activated brain regions, and neural conduction pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and during imagery of meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future research could further explore tools for assessing speech-imagery quality and neural decoding paradigms, brain control circuits, activation pathways, speech-imagery mechanisms in people with speech disorders, and the neural signals underlying word and sentence imagery, in order to provide a basis for effectively improving BCI recognition rates and to facilitate communication for people with speech disorders.

5.
Observers viewed briefly presented target dot patterns, either at low contrast without a mask (no mask, or NM) or at high contrast and followed by a long-lasting patterned mask (backward masking, or BM). Experiment 1 demonstrated independent processing of NM target dots but limited capacity processing of BM target dots. Experiments 2 and 3 showed that visual images may radically change sensitivity (d′) in BM but not in NM. Results suggest that d′ is reduced if the image suppresses dots relevant for the detection task, but that d′ is raised if the image suppresses dots that compete for processing with those the observer must detect.
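For reference, the sensitivity index d′ cited in this abstract is the standard signal-detection measure for a yes/no detection task; the abstract itself does not define it, but it is conventionally computed from the hit rate H and the false-alarm rate F as

d' = z(H) - z(F)

where z(·) is the inverse of the standard normal cumulative distribution function. A change in d′ therefore reflects a change in detectability that is separable from any shift in response bias.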

6.
Although it is natural to suppose that visual mental imagery is important in human deductive reasoning, the evidence is equivocal. This article argues that reasoning studies have not distinguished between ease of visualization and ease of constructing spatial models. Rating studies show that these factors can be separated. Their results yielded four sorts of relations: (1) visuospatial relations that are easy to envisage visually and spatially, (2) visual relations that are easy to envisage visually but hard to envisage spatially, (3) spatial relations that are hard to envisage visually but easy to envisage spatially, and (4) control relations that are hard to envisage both visually and spatially. Three experiments showed that visual relations slow down the process of reasoning in comparison with control relations, whereas visuospatial and spatial relations yield inferences comparable with those of control relations. We conclude that irrelevant visual detail can be a nuisance in reasoning and can impede the process.

7.
This paper describes the performance of a subject who, when presented with a word or a sentence, is abnormally proficient at spelling this material in reverse order. She reports that she does this by visualizing this material and reading off from this visual image. Her tachistoscopic performance is also abnormally good. It is suggested that her superiority in these two tasks is achieved principally because her internal visual representations are extremely resistant to disruption by other mental activities.

8.
Two experiments, designed to compare the perception and retention of tachistoscopic displays of four block capital letters and four simple “nonsense” figures, are described. The results show that the letters were much better remembered and suggest that this was mainly due to the ease with which they were verbalized. The nonsense figures usually gave rise either to rapidly fading “iconic” images, or to an unstable kind of mixed imagery which was difficult to describe or remember, but in which inadequate verbalization was often a source of error. Subsidiary experiments illustrate the importance, not only of verbalization, but also of symmetry and simplicity, in remembering visual displays.

9.
People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.
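The reported link between head motion and voice acoustics is, in essence, a frame-by-frame correlation between two time-aligned tracks. As a minimal sketch of how such a relation could be estimated (the variable names, sampling assumptions, and preprocessing below are illustrative and not taken from the paper):

import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two time-aligned, equally sampled 1-D tracks
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical use: head_pitch_deg and f0_hz are time-aligned tracks sampled
# once per video frame; frames where F0 is undefined (voiceless portions of
# speech) would need to be excluded before computing the correlation.
# r = pearson_r(head_pitch_deg, f0_hz)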

10.
Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

11.
Speech perception is audiovisual, as demonstrated by the McGurk effect in which discrepant visual speech alters the auditory speech percept. We studied the role of visual attention in audiovisual speech perception by measuring the McGurk effect in two conditions. In the baseline condition, attention was focused on the talking face. In the distracted attention condition, subjects ignored the face and attended to a visual distractor, which was a leaf moving across the face. The McGurk effect was weaker in the latter condition, indicating that visual attention modulated audiovisual speech perception. This modulation may occur at an early, unisensory processing stage, or it may be due to changes at the stage where auditory and visual information is integrated. We investigated this issue by conventional statistical testing, and by fitting the Fuzzy Logical Model of Perception (Massaro, 1998) to the results. The two methods suggested different interpretations, revealing a paradox in the current methods of analysis.
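For context, the Fuzzy Logical Model of Perception referred to here combines independent auditory and visual evaluations multiplicatively. In its commonly cited two-alternative form (generic symbols, not notation taken from this abstract), the probability of the response supported by auditory degree a_i and visual degree v_j is

P(response | A_i, V_j) = a_i v_j / (a_i v_j + (1 - a_i)(1 - v_j))

Fitting this equation to identification data and comparing its fit across attention conditions is one way to ask whether an attentional manipulation acts on the unisensory inputs or at the integration stage itself.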

12.
Dias JW, Rosenblum LD. Perception, 2011, 40(12): 1457-1466.
Speech alignment describes the unconscious tendency to produce speech that shares characteristics with perceived speech (e.g., Goldinger, 1998, Psychological Review, 105, 251-279). In the present study we evaluated whether seeing a talker enhances alignment over just hearing a talker. Pairs of participants performed an interactive search task which required them to repeatedly utter a series of keywords. Half of the pairs performed the task while only hearing each other, while the other half could see and hear each other. Alignment was assessed by naive judges rating the similarity of interlocutors' keywords recorded before, during, and after the interactive task. Results showed that interlocutors aligned more when able to see one another, suggesting that visual information enhances speech alignment.

13.
An experiment is reported, the results of which confirm and extend an earlier observation that visual information for the speaker's lip movements profoundly modifies the auditory perception of natural speech by normally hearing subjects. The effect is most pronounced when there is auditory information for a bilabial utterance combined with visual information for a nonlabial utterance. However, the effect is also obtained with the reverse combination, although to a lesser extent. These findings are considered for their relevance to auditory theories of speech perception.

14.
We studied the correlation of one measure of imagery ability, the Visual Elaboration Scale, with two others, absorption of image and effort required to form a mental image. Significant correlations were obtained between the Visual Elaboration Scale and the other scales, with the exception of Absorption for women.

15.
16.
17.
18.
Groups of vivid and poor visualizers were given a picture memory task, and horizontal and vertical components of the electro-oculogram were recorded. This allowed a detailed investigation of each subject's eye movements in the perception, imagery, and recall phases of the task. The vivid visualizers gave a higher accuracy of recall. Eye movement rate was lower in visual imagery than it was in perception, especially in the group of vivid visualizers. There was some evidence of scanning activity prior to recall, but only if positional cues were provided or if recall was incorrect. No scanning occurred prior to accurate recall unprompted by a positional cue. These results provide no support for the theories of image construction proposed by Hebb (1949, 1968) and Neisser (1967). As suggested by Singer (1966), an absence of eye movement may be a necessary condition for vivid visual imagery.

19.
20.
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.
