Similar Documents
20 similar documents found.
1.
The principle of arbitrariness in language assumes that there is no intrinsic relationship between linguistic signs and their referents. However, a growing body of sound-symbolism research suggests the existence of some naturally-biased mappings between phonological properties of labels and perceptual properties of their referents (Maurer, Pathman, & Mondloch, 2006). We present new behavioural and neurophysiological evidence for the psychological reality of sound symbolism. In a categorisation task that captures the processes involved in natural language interpretation, participants were faster to identify novel objects when label–object mappings were sound-symbolic than when they were not. Moreover, early negative EEG waveforms indicated a sensitivity to sound-symbolic label–object associations (within 200 ms of object presentation), highlighting the non-arbitrary relation between the objects and the labels used to name them. This sensitivity to sound-symbolic label–object associations may reflect a more general process of auditory–visual feature integration, where properties of auditory stimuli facilitate a mapping to specific visual features.

2.
Sound symbolism refers to non-arbitrary mappings between the sounds of words and their meanings and is often studied by pairing auditory pseudowords such as “maluma” and “takete” with rounded and pointed visual shapes, respectively. However, it is unclear what auditory properties of pseudowords contribute to their perception as rounded or pointed. Here, we compared perceptual ratings of the roundedness/pointedness of large sets of pseudowords and shapes to their acoustic and visual properties using a novel application of representational similarity analysis (RSA). Representational dissimilarity matrices (RDMs) of the auditory and visual ratings of roundedness/pointedness were significantly correlated crossmodally. The auditory perceptual RDM correlated significantly with RDMs of spectral tilt, the temporal fast Fourier transform (FFT), and the speech envelope. Conventional correlational analyses showed that ratings of pseudowords transitioned from rounded to pointed as vocal roughness (as measured by the harmonics-to-noise ratio, pulse number, fraction of unvoiced frames, mean autocorrelation, shimmer, and jitter) increased. The visual perceptual RDM correlated significantly with RDMs of global indices of visual shape (the simple matching coefficient, image silhouette, image outlines, and Jaccard distance). Crossmodally, the RDMs of the auditory spectral parameters correlated weakly but significantly with those of the global indices of visual shape. Our work establishes the utility of RSA for analysis of large stimulus sets and offers novel insights into the stimulus parameters underlying sound symbolism, showing that sound-to-shape mapping is driven by acoustic properties of pseudowords and suggesting audiovisual cross-modal correspondence as a basis for language users' sensitivity to this type of sound symbolism.
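As a concrete illustration of the RSA pipeline described above (build a representational dissimilarity matrix, or RDM, for each data type, then correlate the RDMs), here is a minimal sketch in Python. The ratings and spectral features are invented placeholders, not the authors' stimuli or measures:

```python
# Sketch of representational similarity analysis (RSA) on invented data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50

# Placeholder data: a 1-D roundedness/pointedness rating per pseudoword
# and a multi-dimensional acoustic feature vector per pseudoword.
auditory_ratings = rng.normal(size=(n_stimuli, 1))
spectral_features = rng.normal(size=(n_stimuli, 12))

# Representational dissimilarity matrices (RDMs): pairwise distances
# between stimuli within each representational space (condensed form).
rdm_ratings = pdist(auditory_ratings, metric="euclidean")
rdm_spectral = pdist(spectral_features, metric="euclidean")

# Correlate the RDMs; Spearman is conventional in RSA because only the
# rank order of pairwise dissimilarities matters.
rho, p = spearmanr(rdm_ratings, rdm_spectral)
print(f"RDM correlation: rho = {rho:.3f}, p = {p:.3g}")
```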

3.
We evaluated whether synaesthetic colour experiences (i.e., photisms) guide or attract attention. An alphanumeric-colour synaesthete, J, and seven non-synaesthetes searched for target digits presented against backgrounds that were either congruent or incongruent with the colours of J's photisms for those digits. For J, the slope of the search function for detecting the target digits was shallower on incongruent trials than on congruent trials. In contrast, for the seven non-synaesthetes, the slopes of the search functions on congruent and incongruent trials were equivalent. These findings suggest that synaesthetic colour experiences influence the efficiency of visual search by guiding or attracting attention.
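For readers unfamiliar with search slopes: the slope is the cost in response time per additional display item, estimated by regressing RT on set size. A minimal sketch, with invented RTs chosen to mirror the reported pattern (shallower slope on incongruent trials):

```python
# Sketch: estimating visual-search slopes (ms per item). Data invented.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])                 # items per display
rt_congruent = np.array([620.0, 700.0, 785.0, 860.0])    # mean RTs, ms
rt_incongruent = np.array([605.0, 640.0, 672.0, 710.0])

# Search slope = ms of RT added per extra display item.
slope_c, _ = np.polyfit(set_sizes, rt_congruent, deg=1)
slope_i, _ = np.polyfit(set_sizes, rt_incongruent, deg=1)

# A shallower slope on incongruent trials (as reported for J) means each
# added distractor costs less time, i.e., more efficient search.
print(f"congruent slope:   {slope_c:.1f} ms/item")
print(f"incongruent slope: {slope_i:.1f} ms/item")
```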

4.
Rhythmic auditory stimuli presented before a goal-directed movement have been found to improve temporal and spatial movement outcomes. However, little is known about the mechanisms mediating these benefits. The present experiment used three types of auditory stimuli to probe how improved scaling of movement parameters, temporal preparation, and an external focus of attention may contribute to changes in movement performance. Each auditory stimulus was presented for 1200 ms before movement initiation: three metronome beats (RAS), a tone that stayed the same (tone-same), or a tone that increased in pitch (tone-change). Together with a no-sound control, these were presented with and without visual feedback, for a total of eight experimental conditions. The sound was presented before a visual go-signal, and participants were instructed to reach quickly and accurately to one of two targets randomly identified in left and right hemispace. Twenty-two young adults completed 24 trials per blocked condition in a counterbalanced order. Movements were captured with an Optotrak 3D Investigator, and a 4 (sound) × 2 (vision) repeated-measures ANOVA was used to analyze the dependent variables. All auditory conditions had shorter reaction times than no sound. Tone-same and tone-change conditions had shorter movement times and higher peak velocities, with no change in trajectory variability or endpoint error. Therefore, rhythmic and non-rhythmic auditory stimuli impacted movement performance differently. Based on the pattern of results, we propose that multiple mechanisms impact movement planning processes when rhythmic auditory stimuli are present.
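For illustration, a 4 (sound) × 2 (vision) repeated-measures ANOVA of this kind could be run as below with statsmodels; the long-format data frame is simulated, and the column names are placeholders rather than the authors' variables:

```python
# Sketch: 4 (sound) x 2 (vision) repeated-measures ANOVA on simulated RTs.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
sounds = ["none", "RAS", "tone-same", "tone-change"]
visions = ["feedback", "no-feedback"]

# Simulated long-format data: one row per participant x condition cell.
rows = [
    {"subject": s, "sound": snd, "vision": vis,
     "rt": 320 + rng.normal(scale=25)}   # placeholder RTs in ms
    for s in range(1, 23)                # 22 participants
    for snd in sounds
    for vis in visions
]
df = pd.DataFrame(rows)

# Both factors are within-subject, so both go in `within`.
result = AnovaRM(df, depvar="rt", subject="subject",
                 within=["sound", "vision"]).fit()
print(result)
```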

5.
Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.

6.
Common and modality-specific processes in the mental lexicon
Eight experiments were conducted to resolve: (1) empirical inconsistencies in repetition effects under intermodality conditions in word identification and lexical decision, and (2) an associated theoretical conflict concerning lexical organization. The results demonstrated that although more facilitation occurs under visual-prime/visual-test (VV) conditions than under auditory-prime/visual-test (AV) conditions, significant repetition facilitation also occurs under AV conditions. The results also indicated that the repetition effects observed for the VV and AV conditions apply to high- as well as to low-frequency words; that they are insensitive to a variety of encoding tasks designed to emphasize different properties of words; and that they are unaffected by differences in the ease of encoding of isolated auditory and visual words. The results are consistent with the existence of both modality-specific and common (modality-free) processes in word recognition, in which word-frequency effects are restricted to the second and, by implication, lexical stage.

7.
The mappings from grapheme to phoneme are much less consistent in English than they are for most other languages. Therefore, the differences found between English-speaking dyslexics and controls on sensory measures of temporal processing might be related more to the irregularities of English orthography than to a general deficit affecting reading ability in all languages. However, here we show that poor readers of Norwegian, a language with a relatively regular orthography, are less sensitive than controls to dynamic visual and auditory stimuli. Consistent with results from previous studies of English readers, detection thresholds for visual motion and auditory frequency modulation (FM) were significantly higher in 19 poor readers of Norwegian than in 22 control readers of the same age. Over two-thirds (68.4%) of the children identified as poor readers were less sensitive than controls to either or both of the visual coherent-motion and auditory 2 Hz FM stimuli.
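The abstract does not describe how the thresholds were obtained, but detection thresholds of this kind are commonly estimated with an adaptive staircase. A purely illustrative sketch of a 2-down/1-up staircase run against a simulated observer (every parameter value here is invented):

```python
# Sketch: 2-down/1-up adaptive staircase (tracks ~70.7% correct).
import numpy as np

rng = np.random.default_rng(3)
true_threshold = 0.12        # hypothetical FM-depth detection threshold
level, step = 0.50, 0.04     # starting stimulus level and step size
run_of_correct, reversals = 0, []
last_move = 0                # -1 = last change made the task harder

for trial in range(120):
    # Simulated observer: detection probability rises with stimulus level.
    p_detect = 1.0 / (1.0 + np.exp(-(level - true_threshold) / 0.03))
    if rng.random() < p_detect:          # "correct" trial
        run_of_correct += 1
        if run_of_correct == 2:          # two in a row -> decrease level
            run_of_correct = 0
            if last_move == +1:
                reversals.append(level)  # direction flipped: a reversal
            level = max(level - step, 0.0)
            last_move = -1
    else:                                # "incorrect" trial -> increase level
        run_of_correct = 0
        if last_move == -1:
            reversals.append(level)
        level += step
        last_move = +1

# Conventional estimate: average the levels at the last few reversals.
print(f"estimated threshold ~= {np.mean(reversals[-6:]):.3f}")
```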

8.
唐晓雨, 孙佳影, 彭姓. 《心理学报》 (Acta Psychologica Sinica), 2020, 52(3): 257-268
Using a cue–target paradigm, this study manipulated two independent variables, target type (visual, auditory, audiovisual) and cue validity (valid cue, neutral, invalid cue), across three experiments to examine how dividing attention across modalities affects audiovisual inhibition of return (IOR). Experiment 1 (auditory stimuli presented on the left/right) found that, under divided attention across modalities, visual targets produced a significant IOR effect whereas audiovisual targets did not. Experiment 2 (auditory stimuli presented on the left/right) and Experiment 3 (auditory stimuli presented centrally) found that, under selective attention to the visual modality, both visual and audiovisual targets produced significant IOR effects that did not differ from each other. The results indicate that dividing attention across modalities attenuates the audiovisual IOR effect.

9.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

10.
The effects of sensory signal characteristics on the duration discrimination of intermodal intervals were investigated in three experiments. Temporal intervals were marked by either the successive presentation of a visual then an auditory signal (VA) or the successive presentation of an auditory then a visual signal (AV). The results indicated that (1) VA intervals are generally easier to discriminate than AV intervals, but this effect depends on the range of durations studied; (2) AV intervals are perceived as longer than VA intervals for durations ranging from 250 to 750 msec; (3) the intensity of the visual markers for both AV and VA intervals does not affect discrimination; and (4) the perceived duration of an intermodal interval is influenced by the length of the first and second markers. The results are interpreted mainly in terms of (1) a sensory trace left by visual and auditory signals and (2) the detection of these signals.

11.
Load theory predictions about the effects of task coordination between and within sensory modalities (vision and hearing, or vision only) on the level of distraction were tested. Response-competition effects in a visual flanker task were compared between single-task conditions and conditions in which the flanker task was coordinated with an auditory discrimination task (between-modality conditions) or a visual discrimination task (within-modality conditions). In the between-modality conditions, response-competition effects were greater in the two-task (vs. single-task) conditions irrespective of the level of discrimination-task difficulty. In the within-modality conditions, response-competition effects were greater in the two-task (vs. single-task) conditions only when they involved a more difficult visual discrimination task. The results support the load theory prediction that executive control load leads to greater distractor interference, while highlighting the effects of task modality.

12.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

13.
Three experiments investigated cross-modal links between touch, audition, and vision in the control of covert exogenous orienting. In the first two experiments, participants made speeded discrimination responses (continuous vs. pulsed) for tactile targets presented randomly to the index finger of either hand. Targets were preceded at a variable stimulus onset asynchrony (150, 200, or 300 msec) by a spatially uninformative cue that was either auditory (Experiment 1) or visual (Experiment 2) on the same or opposite side as the tactile target. Tactile discriminations were more rapid and accurate when cue and target occurred on the same side, revealing cross-modal covert orienting. In Experiment 3, spatially uninformative tactile cues were presented prior to randomly intermingled auditory and visual targets requiring an elevation discrimination response (up vs. down). Responses were significantly faster for targets in both modalities when presented ipsilateral to the tactile cue. These findings demonstrate that the peripheral presentation of spatially uninformative auditory and visual cues produces cross-modal orienting that affects touch, and that tactile cues can also produce cross-modal covert orienting that affects audition and vision.

14.
Recent research has revealed that the presence of irrelevant visual information during retrieval of long-term memories diminishes recollection of task-relevant visual details. Here, we explored the impact of irrelevant auditory information on remembering task-relevant visual details by probing recall of the same previously viewed images while participants were in complete silence, exposed to white noise, or exposed to ambient sounds recorded at a busy café. The presence of auditory distraction diminished objective recollection of goal-relevant details, relative to the silence and white noise conditions. Critically, a comparison with results from a previous study using visual distractors showed equivalent effects for auditory and visual distraction. These findings suggest that disruption of recollection by external stimuli is a domain-general phenomenon produced by interference between resource-limited, top-down mechanisms that guide the selection of mnemonic details and control processes that mediate our interactions with external distractors.

15.
We investigated whether functional brain networks differ in coloured-hearing synaesthetes compared with non-synaesthetes. Based on resting-state electroencephalographic (EEG) activity, graph-theoretical analysis was applied to functional connectivity data obtained from different frequency bands (theta, alpha1, alpha2, and beta) of 12 coloured-hearing synaesthetes and 13 non-synaesthetes. The analysis of functional connectivity was based on intracerebral sources of brain activation estimated using standardized low-resolution brain electromagnetic tomography. These intracerebral sources were subjected to graph-theoretical analysis yielding measures representing small-world network characteristics (cluster coefficients and path length). In addition, brain regions with strong interconnections were identified (so-called hubs), and the interconnectedness of these hubs was quantified using degree as a measure of connectedness. Our analysis was guided by the two-stage model proposed by Hubbard and Ramachandran (2005), in which the parietal lobe is thought to play a pivotal role in binding together the synaesthetic perceptions (hyperbinding). In addition, we hypothesized that the auditory cortex and the fusiform gyrus would qualify as strong hubs in synaesthetes. Although synaesthetes and non-synaesthetes demonstrated a similar small-world network topology, the parietal lobe turned out to be a stronger hub in synaesthetes than in non-synaesthetes, supporting the two-stage model. The auditory cortex was also identified as a strong hub in these coloured-hearing synaesthetes (for the alpha2 band). Thus, our a priori hypotheses receive strong support. Several additional hubs (for which no a priori hypothesis had been formulated) differed in terms of the degree measure, with synaesthetes demonstrating stronger degree measures, indicating stronger interconnectedness. These hubs were found in brain areas known to be involved in controlling memory processes (alpha1: hippocampus and retrosplenial area), executive functions (alpha1 and alpha2: ventrolateral prefrontal cortex; theta: inferior frontal cortex), and the generation of perceptions (theta: extrastriate cortex; beta: subcentral area). Taken together, this graph-theoretical analysis of the resting-state EEG supports the two-stage model in demonstrating that the left-sided parietal lobe is a strong hub region that is more strongly functionally interconnected in synaesthetes than in non-synaesthetes. The right-sided auditory cortex is also a strong hub, supporting the idea that the auditory cortex operates distinctively in coloured-hearing synaesthetes. A further important point is that these hub regions operate differently even at rest, supporting the idea that these hub characteristics are predetermining factors of coloured-hearing synaesthesia.
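As a toy illustration of the graph-theoretic quantities named above (clustering coefficient, characteristic path length, and degree-based hub identification), the sketch below applies networkx to a simulated connectivity matrix; the threshold and all values are arbitrary assumptions, not the study's pipeline:

```python
# Sketch: small-world metrics and hub detection on simulated connectivity.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_nodes = 30                     # e.g., one node per intracerebral source

# Simulated symmetric "functional connectivity" matrix, zero diagonal.
w = rng.random((n_nodes, n_nodes))
conn = (w + w.T) / 2
np.fill_diagonal(conn, 0.0)

# Threshold into a binary undirected graph (the cutoff is arbitrary here).
graph = nx.from_numpy_array((conn > 0.75).astype(int))

# Small-world characteristics: mean clustering coefficient and
# characteristic path length (the latter requires a connected graph).
print(f"clustering coefficient: {nx.average_clustering(graph):.3f}")
if nx.is_connected(graph):
    print(f"characteristic path length: "
          f"{nx.average_shortest_path_length(graph):.3f}")

# Hubs: nodes whose degree clearly exceeds the network average.
degrees = dict(graph.degree())
mean_degree = np.mean(list(degrees.values()))
hubs = sorted(n for n, d in degrees.items() if d > mean_degree + 2)
print("candidate hub nodes:", hubs)
```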

16.
To explore the mechanisms of musical emotion processing under bimodal audiovisual presentation, and how emotion type and musical training background affect those mechanisms, this study used videos of musical performances expressing happiness and sadness. Musically trained and untrained participants rated the emotions under three conditions (auditory only, visual only, and audiovisual), and rating speed, accuracy, and rated intensity were compared. The results showed that (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition, and (2) untrained participants rated sadness more accurately than trained participants but rated happiness less accurately. These findings suggest that the audiovisual integration advantage in musical emotion processing exists only relative to the visual-only channel; untrained listeners are more sensitive to changes in emotional information in the visual channel, whereas trained listeners rely more on musical experience. Adding congruent visual emotional information to musical performances may therefore help listeners without musical training.

17.
Jeesun Kim. Visual Cognition, 2013, 21(7): 1017-1033
The study examined the effect that auditory information (speaker language/accent: Japanese or French) had on the processing of visual information (the speaker's race: Asian or Caucasian) in two forced-choice tasks: classification and perceptual judgement of animated talking characters. Two (male and female) sets of facial morphs were constructed such that a 3-D head of Caucasian appearance was gradually morphed (in 11 steps) into one of Asian appearance. Each facial morph was animated in association with spoken French/Japanese or English with a French/Japanese accent. To examine the auditory effect, each animation was played with or without sound. Experiment 1 used an Asian-or-Caucasian classification task. Results showed that faces heard in conjunction with Japanese or a Japanese accent were more likely to be classified as Asian than those presented without sound. Experiment 2 used a same-or-different judgement task. Results showed that accuracy was improved by hearing a Japanese accent compared with no sound. These results are discussed in terms of the voice information acting as a cue that assists in organizing and attending to face features.

18.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as crossmodal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the crossmodal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the crossmodal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as, or after, the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

19.
Vatakis, A., and Spence, C. (in press; Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli, Perception & Psychophysics) recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object actions or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object-action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance was observed for matched versus mismatched video clips. Reasons for this discrepancy are discussed.
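TOJ data of this kind are typically summarized by fitting a psychometric function to the proportion of, say, "visual first" responses across SOAs, yielding the point of subjective simultaneity (PSS) and a just-noticeable difference (JND). A minimal sketch with invented proportions, not the study's data:

```python
# Sketch: cumulative-Gaussian fit to temporal-order-judgment (TOJ) data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOA in ms (negative = auditory led); invented "visual first" proportions.
soas = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_visual_first = np.array([0.05, 0.18, 0.35, 0.52, 0.68, 0.85, 0.96])

def cum_gauss(soa, pss, sigma):
    # pss: point of subjective simultaneity; sigma indexes temporal precision.
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=(0.0, 100.0))
jnd = sigma * norm.ppf(0.75)     # 75% point as a JND estimate
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```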

20.
The authors examined potential mechanisms underlying motor coordination in children with developmental coordination disorder (DCD). Because children with DCD experience difficulty processing visual, auditory, and vibrotactile information, the authors explored patterns of choice reaction time (RT) in young (6-7 years) and older (9-10 years) children with and without DCD by using a compatibility-incompatibility paradigm and different sensory modalities. Young children responded more slowly than older children to visual, auditory, and vibrotactile stimuli. Children with DCD took longer than typical children to process visual and vibrotactile stimuli under more complex stimulus-response mappings. Young children with DCD responded more slowly than typical children to visual and vibrotactile information under incompatible conditions. Children with DCD responded faster than unaffected children to auditory stimuli. The results suggest that there is a developmental nature in the processing of visual and auditory input and imply that the vibrotactile sensory modality may be key to the motor coordination difficulties of children with DCD.

