Similar Literature (20 results)
1.
2.
Inconsistency in the spelling-to-sound mapping hurts visual word perception and reading aloud (i.e., the traditional consistency effect). In the present experiment, we found a consistency effect in auditory word perception: Words with phonological rimes that could be spelled in multiple ways produced longer auditory lexical decision latencies and more errors than did words with rimes that could be spelled only one way. This finding adds strong support to the claim that orthography affects the perception of spoken words. This effect was predicted by a model that assumes a coupling between orthography and phonology that is functional in both visual and auditory word perception.

3.
A theory of veridical space perception based on the principles of movement parallax is proposed. Since optical changes are ambiguous with regard to veridical distances, it is suggested that veridicality is obtained on the basis of an interaction between optical information and information from the body-state system. The optical system generates size and shape constancy on the basis of proximal common motions described by a vector-derivative model. The body-state system is thought to register information in a similar manner, and the interaction between the two subsystems is assumed to function according to the vector-derivative principle.

4.
5.
6.
Using different warning signals and threshold stimuli, the thresholds, as determined by a method of limits, were found to rise monotonically as the interval between warning signal and threshold stimulus increased from 1 to 9 sec. It was found that the variability of the threshold did not increase as the threshold increased. Similar results were obtained for phosphene and auditory thresholds and with visual and auditory warnings; therefore the effect was considered to be central. Motokawa's finding of a minimum in the phosphene threshold 2 sec. after a flash of white light was not repeated. The rise in threshold was not obtained when the warning intervals were randomized and so seemed to depend on the use of fixed warning intervals. A model was developed relating threshold level to accuracy of anticipation of the end of the warning interval.
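For reference, the method of limits mentioned above estimates a threshold by stepping the stimulus level up (or down) in small increments until the observer's report changes, then averaging the transition points across alternating ascending and descending series. The following is a minimal illustrative sketch only, not the procedure used in the paper; the step size, starting levels, and simulated observer are all assumed values.

```python
# Minimal method-of-limits sketch (illustrative only; step sizes, starting
# levels, and the simulated observer are assumed, not taken from the paper).
import random

def run_series(respond, start, step, ascending=True, max_steps=200):
    """Step intensity until the observer's yes/no report changes; return that level."""
    level = start
    last = respond(level)
    for _ in range(max_steps):
        level += step if ascending else -step
        now = respond(level)
        if now != last:
            return level
        last = now
    return level

def method_of_limits(respond, n_pairs=5):
    """Average transition points over alternating ascending and descending series."""
    transitions = []
    for _ in range(n_pairs):
        transitions.append(run_series(respond, start=0.0, step=0.5, ascending=True))
        transitions.append(run_series(respond, start=10.0, step=0.5, ascending=False))
    return sum(transitions) / len(transitions)

# Simulated observer with a true threshold of 5.0 and some response noise.
observer = lambda level: level + random.gauss(0.0, 0.3) > 5.0
print(method_of_limits(observer))
```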

7.
To comprehend speech in most environments, listeners must combine some but not all sounds from across a wide range of frequencies. Three experiments were conducted to examine the role of amplitude comodulation in performing an essential part of this function: the grouping together of the simultaneous components of a speech signal. Each of the experiments used time-varying sinusoidal (TVS) sentences (Remez, Rubin, Pisoni, & Carrell, 1981) as base stimuli because their component tones are acoustically unrelated. The independence of the three tones reduced the number of confounding grouping cues available compared with those found in natural or computer-synthesized speech (e.g., fundamental frequency and simultaneity of harmonic onset). In each of the experiments, the TVS base stimuli were amplitude modulated to determine whether this modulation would lead to appropriate grouping of the three tones as reflected by sentence intelligibility. Experiment 1 demonstrated that amplitude comodulation at 100 Hz did improve the intelligibility of TVS sentences. Experiment 2 showed that the component tones of a TVS sentence must be comodulated (as opposed to independently modulated) for improvements in intelligibility to be found. Experiment 3 showed that the comodulation rates that led to intelligibility improvements were consistent with the effective rates found in experiments that examined the grouping of complex nonspeech sounds by common temporal envelopes (e.g., comodulation masking release; Hall, Haggard, & Fernandes, 1984). The results of these experiments support the claim that certain basic temporal-envelope processing capabilities of the human auditory system contribute to the perception of fluent speech.
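The comodulated versus independently modulated contrast described above can be illustrated with a short signal-processing sketch. This is not the authors' stimulus code; the tone frequencies, sample rate, and modulation depth are assumed values chosen for demonstration. Three unrelated sinusoids stand in for the three TVS tones, and either one shared 100 Hz amplitude envelope or independent envelopes are applied.

```python
# Illustrative sketch (not the original stimulus code): comodulated vs.
# independently modulated three-tone complexes. Frequencies, sample rate,
# and modulation depth are assumed values for demonstration only.
import numpy as np

fs = 22050                      # sample rate (Hz), assumed
dur = 1.0                       # duration (s)
t = np.arange(int(fs * dur)) / fs

# Three acoustically unrelated tones standing in for the TVS components.
tones = [np.sin(2 * np.pi * f * t) for f in (500.0, 1500.0, 2500.0)]

def am_envelope(rate_hz, phase=0.0, depth=1.0):
    """Sinusoidal amplitude envelope at the given modulation rate."""
    return 1.0 + depth * np.sin(2 * np.pi * rate_hz * t + phase)

# Comodulated condition: one shared 100 Hz envelope applied to every tone.
shared_env = am_envelope(100.0)
comodulated = sum(tone * shared_env for tone in tones)

# Independent-modulation condition: each tone gets its own envelope phase,
# so the temporal envelopes no longer coincide across components.
rng = np.random.default_rng(0)
independent = sum(tone * am_envelope(100.0, phase=rng.uniform(0, 2 * np.pi))
                  for tone in tones)

# Normalize both conditions to a common peak level before presentation.
comodulated /= np.abs(comodulated).max()
independent /= np.abs(independent).max()
```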

8.
9.
To comprehend speech in most environments, listeners must combine some but not all sounds from across a wide range of frequencies. Three experiments were conducted to examine the role of amplitude comodulation in performing an essential part of this function: the grouping together of the simultaneous components of a speech signal. Each of the experiments used time-varying sinusoidal (TVS) sentences (Remez, Rubin, Pisoni, & Carrell, 1981) as base stimuli because their component tones are acoustically unrelated. The independence of the three tones reduced the number of confounding grouping cues available compared with those found in natural or computer-synthesized speech (e.g., fundamental frequency and simultaneity of harmonic onset). In each of the experiments, the TVS base stimuli were amplitude modulated to determine whether this modulation would lead to appropriate grouping of the three tones as reflected by sentence intelligibility. Experiment 1 demonstrated that amplitude comodulation at 100 Hz did improve the intelligibility of TVS sentences. Experiment 2 showed that the component tones of a TVS sentence must be comodulated (as opposed to independently modulated) for improvements in intelligibility to be found. Experiment 3 showed that the comodulation rates that led to intelligibility improvements were consistent with the effective rates found in experiments that examined the grouping of complex nonspeech sounds by common temporal envelopes (e.g., comodulation masking release; Hall, Haggard, & Fernandes, 1984). The results of these experiments support the claim that certain basic temporal-envelope processing capabilities of the human auditory system contribute to the perception of fluent speech.

10.
11.
A controlled experiment under genuine therapy conditions demonstrates that auditory group feedback once a week improved the effectiveness of the therapy. The individual's experience of his own speech behavior depended on the phase of therapy. In general, members of the group adopted a more critical attitude toward themselves after listening to the recording. The possibility of generalizing from these results is considered.

12.
We investigated plane rotation effects on the minimum presentation duration that is required in order to recognize pictures of familiar objects, using the method of ascending limits. Subjects made unspeeded verification responses, selecting from 126 written alternatives. Replicating similar identification studies in which brief, masked pictures (Lawson & Jolicoeur, 1998) were presented, disorientation reduced the efficiency of recognition. Mirroring the findings in speeded picture naming studies (e.g., Jolicoeur, 1985; Jolicoeur & Milliken, 1989), but in contrast to those of Lawson and Jolicoeur (1998), orientation effects were found over a wide range of views and were attenuated but not eliminated with experience with a given object. The results bridge the findings from unspeeded verification and speeded naming tasks. They suggest that the same orientation-sensitive processes are tapped in both cases, and that practice effects on these processes are object specific.

13.
When head-movement parallax functioned as the sole veridical distance cue during exposure to spectacles that altered the eyes' oculomotor adjustments, sizable adaptation was obtained. This result showed that a discrimination of the distances of 60 and 30 cm can be based on head-movement parallax. Using adaptation in demonstrating that head-movement parallax can serve as a distance cue circumvents the problem that the presence of accommodation normally presents when such a demonstration is attempted. The usual contamination of head-movement parallax with accommodation is avoided, because accommodation is altered by the spectacles and does not function as a veridical cue along with head-movement parallax.
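For orientation, the geometric relation that makes head-movement parallax informative about distance can be written down directly: for a lateral head translation b, a stationary target at distance d changes its visual direction by roughly arctan(b / d), so the 30 cm and 60 cm targets used here differ by about a factor of two in parallactic angle. The sketch below is an illustrative approximation, not the authors' vector-derivative model, and the 10 cm head shift is an assumed value.

```python
# Illustrative geometry of head-movement parallax as a distance cue
# (not the authors' model). Angles assume a target straight ahead and a
# purely lateral head translation.
import math

def parallax_angle_deg(head_shift_m, target_distance_m):
    """Change in visual direction (degrees) produced by a lateral head shift."""
    return math.degrees(math.atan2(head_shift_m, target_distance_m))

def distance_from_parallax_m(head_shift_m, angle_deg):
    """Invert the relation: recover target distance from the parallactic angle."""
    return head_shift_m / math.tan(math.radians(angle_deg))

head_shift = 0.10  # 10 cm lateral head movement, an assumed value
for d in (0.30, 0.60):  # the 30 cm and 60 cm viewing distances from the study
    angle = parallax_angle_deg(head_shift, d)
    print(d, round(angle, 2), round(distance_from_parallax_m(head_shift, angle), 2))
```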

14.
When a formant transition and the remainder of a syllable are presented to subjects' opposite ears, most subjects perceive two simultaneous sounds: a syllable and a nonspeech chirp. It has been demonstrated that, when the remainder of the syllable (base) is kept unchanged, the identity of the perceived syllable will depend on the kind of transition presented at the opposite ear. This phenomenon, called duplex perception, has been interpreted as the result of the independent operation of two perceptual systems or modes, the phonetic and the auditory mode. In the present experiments, listeners were required to identify and discriminate such duplex syllables. In some conditions, the isolated transition was embedded in a temporal sequence of capturing transitions sent to the same ear. This streaming procedure significantly weakened the contribution of the transition to the perceived phonetic identity of the syllable. It is likely that the sequential integration of the isolated transition into a sequence of capturing transitions affected its fusion with the contralateral base. This finding contrasts with the idea that the auditory and phonetic processes are operating independently of each other. The capturing effect seems to be more consistent with the hypothesis that duplex perception occurs in the presence of conflicting cues for the segregation and the integration of the isolated transition with the base.

15.
In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different nature and reducing the variability of response. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in their memory a representation of the environment that later improves the perception of distance.

16.
Both psychological stress and predictive signals relating to expected sensory input are believed to influence perception, an influence which, when disrupted, may contribute to the generation of auditory hallucinations. The effect of stress and semantic expectation on auditory perception was therefore examined in healthy participants using an auditory signal detection task requiring the detection of speech from within white noise. Trait anxiety was found to predict the extent to which stress influenced response bias, resulting in more anxious participants adopting a more liberal criterion, and therefore experiencing more false positives, when under stress. While semantic expectation was found to increase sensitivity, its presence also generated a shift in response bias towards reporting a signal, suggesting that the erroneous perception of speech became more likely. These findings provide a potential cognitive mechanism that may explain the impact of stress on hallucination-proneness, by suggesting that stress has the tendency to alter response bias in highly anxious individuals. These results also provide support for the idea that top-down processes such as those relating to semantic expectation may contribute to the generation of auditory hallucinations.
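The sensitivity and response-bias measures referred to here come from standard signal detection theory: sensitivity d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2, where H and F are the hit and false-alarm rates and a more negative c indicates a more liberal bias toward reporting a signal. The sketch below uses these standard formulas as an illustration only; it is not the study's own analysis code, and the example counts are invented.

```python
# Illustrative signal-detection computation (standard formulas, not the
# study's own analysis code). The example hit/false-alarm counts are made up.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from a 2x2 yes/no detection table.

    A log-linear correction (+0.5) guards against rates of exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, criterion

# Example: a more liberal criterion (more false positives) under stress.
print(dprime_and_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45))
print(dprime_and_criterion(hits=42, misses=8, false_alarms=15, correct_rejections=35))
```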

17.
冯杰, 徐娟, 伍新春 (Feng Jie, Xu Juan, & Wu Xinchun). 《心理科学进展》 (Advances in Psychological Science), 2021, 29(12): 2131-2146
Auditory word recognition involves complex cognitive processing. Blind individuals, whose visual channel is unavailable, show a degree of auditory compensation advantage in auditory word processing; however, owing to the lack of visual experience, their semantic processing and comprehension of some visually related words (e.g., color words) is weaker than that of sighted individuals. Future research should distinguish words by their degree of visual relatedness; examine the levels of sound, form, and meaning and their neurophysiological mechanisms in depth; develop auditory word-processing models suited to the perceptual characteristics of blind people; and extend developmental studies across age groups. Ultimately, this should reveal the full picture of how the absence of visual experience shapes auditory word recognition in the blind.

18.
Vibrotactile thresholds were determined at 250 and 400 Hz in the presence of (1) the sounds emitted by the vibrator, (2) continuous tonal or narrow-band masking noise, or (3) a pulsed tone synchronized with the vibrator signal. The measure of a cross-modality effect was the threshold shift occurring between each condition and the control condition, in which earmuff silencers eliminated the vibrator sounds. Continuous tones or noise had no effect upon vibrotactile thresholds. However, auditory signals synchronized with the vibrator signals did significantly elevate vibrotactile thresholds.

19.
20.