Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
Autonomous sensory meridian response (ASMR) is a sensory-emotional phenomenon in which specific sensory stimuli (“ASMR triggers”) reliably elicit feelings of relaxation and tingling sensations on the head, neck, and shoulders. However, there are individual differences in which stimuli elicit ASMR and in the intensity of these responses. In the current research, we used resting-state fMRI to examine the functional connectivity associated with these differences. Fifteen individuals with self-reported ASMR completed the ASMR Checklist, which measures sensitivity to different ASMR triggers, and a resting-state fMRI scan. Checklist scores were entered as covariates to determine whether the functional connectivity of eight resting-state networks differed as a function of participants’ sensitivity to five categories of triggers. The results indicated unique patterns of functional connectivity associated with sensitivity to each ASMR trigger category. Sensitivity to two trigger categories was positively correlated with the dorsal attention network, suggesting that ASMR may involve atypical attentional processing.

2.
Autonomous Sensory Meridian Response (ASMR) is a pleasurable, head-oriented tingling sensation, typically induced by exposure to audiovisual triggers, producing feelings of relaxation and euphoria. This article explores the induction of ASMR experiences in a laboratory setting amongst non-specialised participants, as well as the relationship between ASMR and frisson, or ‘musical chills’. In previous work, the ASMR-15 was found to be a reliable measure of ASMR propensity; however, the predictive validity of the measure has yet to be determined. The aim of this study was to assess whether ASMR-15 scores predict greater ASMR induction in an experimental setting. To address this, N = 100 undergraduate psychology students completed the ASMR-15 and a measure of frisson, before viewing ASMR stimuli under controlled conditions. Mixed-methods analyses indicated the successful induction of ASMR amongst some participants, convergence between ASMR-15 scores and video ratings, as well as divergence between ASMR and frisson scores.

3.
Autonomous sensory meridian response (ASMR) refers to a phenomenon in which, under specific audiovisual stimulation, certain individuals (ASMR-sensitive individuals) experience an intensely pleasurable and relaxing tingling sensation at the back of the scalp, on the neck, and even across the whole body. The tingling sensation may arise from strong activation of brain regions responsible for sensation and muscle movement, while strong activation of brain regions related to emotion and reward, together with decreases in heart rate and respiratory rate, may be important sources of the feelings of pleasure and relaxation. Compared with typical individuals, ASMR-sensitive individuals show higher neuroticism, empathy, sensory suggestibility, and trait mindfulness. This may indicate that ASMR-sensitive individuals have higher sensory sensitivity and weaker emotional stability, and that they attend more closely to internal and external bodily sensations. These personality traits may make ASMR-sensitive individuals more sensitive, and more strongly responsive, to the sensory and emotional information contained in certain audiovisual stimuli. ASMR has already been applied in the clinical treatment of depression, stress, insomnia, and chronic pain, as well as in commercial advertising. However, ASMR may interfere with executive function, so exposure to ASMR stimuli should be avoided in situations with high cognitive-control demands.

4.
The main purpose of the present study was to investigate the functional plasticity of sensorimotor representations for dominant versus non-dominant hands following short-term upper-limb sensorimotor deprivation. All participants were right-handed. A splint was placed either on the right hand or on the left hand of the participants during a brief period of 48 h and was used for the input/output signal restrictions. The participants were divided into 3 groups: right hand immobilization, left hand immobilization and control (without immobilization). The immobilized participants performed the hand laterality task before (pre-test) and immediately after (post-test) splint removal. The pre-/post-test procedure was similar for the control group. The main results showed a significant response time improvement when judging the laterality of hand stimuli in the control group. In contrast, the results showed a weaker response time improvement for the left-hand immobilization group and no significant improvement for the right-hand immobilization group. Overall, these results revealed that immobilization-induced effects were lower for the non-dominant hand and also suggested that 48 h of upper-limb immobilization led to an inter-limb transfer phenomenon regardless of the immobilized hand. The immobilization-induced effects were highlighted by the slowdown of the sensorimotor processes related to manual actions, probably due to an alteration in a general cognitive representation of hand movements.

5.
Ongoing debate in the literature concerns whether there is a link between contagious yawning and the human mirror neuron system (hMNS). One way of examining this issue is with the use of the electroencephalogram (EEG) to measure changes in mu activation during the observation of yawns. Mu oscillations are seen in the alpha bandwidth of the EEG (8–12 Hz) over sensorimotor areas. Previous work has shown that mu suppression is a useful index of hMNS activation and is sensitive to individual differences in empathy. In two experiments, we presented participants with videos of either people yawning or control stimuli. We found greater mu suppression for yawns than for controls over right motor and premotor areas, particularly for those scoring higher on traits of empathy. In a third experiment, auditory recordings of yawns were compared against electronically scrambled versions of the same yawns. We observed greater mu suppression for yawns than for the controls over right lateral premotor areas. Again, these findings were driven by those scoring highly on empathy. The results from these experiments support the notion that the hMNS is involved in contagious yawning, emphasise the link between contagious yawning and empathy, and stress the importance of good control stimuli.
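The mu suppression index used in studies like this one is commonly computed as the log ratio of alpha-band (8–12 Hz) power during stimulus observation to power during a baseline period, with negative values indicating suppression. A minimal sketch of that computation; the signals, sampling rate, and periodogram-based power estimate here are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean power in the [lo, hi] Hz band via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def mu_suppression(observe, baseline, fs):
    """Log ratio of mu-band power; negative values indicate suppression."""
    return np.log(band_power(observe, fs) / band_power(baseline, fs))

# Synthetic demo: a 10 Hz oscillation whose amplitude drops during observation.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
observe = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
print(mu_suppression(observe, baseline, fs))  # negative value -> mu suppression
```

In practice the index would be computed per electrode over sensorimotor sites, which is how the scalp-topography contrasts reported above are obtained.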

6.
Contrasting the traditional focus on alcohol‐related visual images, this study examined the impact of both alcohol‐related auditory cues and visual stimuli on inhibitory control (IC). Fifty‐eight participants completed a Go/No‐Go Task, with alcohol‐related and neutral visual stimuli presented with or without short or continuous auditory bar cues. Participants performed worse when presented with alcohol‐related images and auditory cues. Problematic alcohol consumption and higher effortful control (EC) were associated with better IC performance for alcohol images. It is postulated that those with higher EC may be better able to ignore alcohol‐related stimuli, while those with problematic alcohol consumption are unconsciously less attuned to these. This runs contrary to current dogma and highlights the importance of examining both auditory and visual stimuli when investigating IC.

7.
The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA) displayed more accurate discriminations, faster response time, larger P300 amplitude, and shorter P300 and mismatch negativity (MMN) latency than lower ability participants (LA). Task difficulty effects demonstrated with variation in mask type indicate that the mask does not interfere with the detection of the deviant target stimulus, but rather that the target and mask are integrated as a single compound stimulus. The temporal effects suggest that the speed of accessing short-term memory is faster for HA than LA and, on the basis of the MMN latency, the effect is accomplished automatically, without focused attention. Moreover, the pattern of results obtained with these data support the view that the accuracy effects are determined by processing speed rather than discrimination ability. Comparator models that accommodate these effects are discussed.

8.
To examine whether the auditory mismatch negativity (MMN) reflects automatic processing, this experiment improved on the intermodal selective-attention paradigm in which visual and auditory stimuli are presented simultaneously, providing better control over the unattended-auditory condition. Results showed that auditory deviant stimuli elicited MMN under both attended and unattended auditory conditions. When auditory stimuli were attended, the mean amplitude of the deviance-related negativity at 140–180 ms did not differ significantly from that of the negativity in the same time window under the unattended condition, whereas the mean amplitude of the deviance-related negativity at 180–220 ms was larger under attention than in the corresponding unattended window. Under the unattended auditory condition, the mean amplitude and peak latency of the MMN were unaffected by the difficulty of the visual-channel task. These results provide further evidence for the view that the auditory MMN reflects automatic processing.

9.
10.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

11.
When people synchronize taps with isochronously presented stimuli, taps usually precede the pacing stimuli [negative mean asynchrony (NMA)]. One explanation of NMA [sensory accumulation model (SAM), Aschersleben in Brain Cogn 48:66–79, 2002] is that more time is needed to generate a central code for kinesthetic-tactile information than for auditory or visual stimuli. The SAM predicts that raising the intensity of the pacing stimuli shortens the time for their sensory accumulation, thereby increasing NMA. This prediction was tested by asking participants to synchronize finger force pulses with target isochronous stimuli with various intensities. In addition, participants performed a simple reaction-time task, for comparison. Higher intensity led to shorter reaction times. However, intensity manipulation did not affect NMA in the synchronization task. This finding is not consistent with the predictions based on the SAM. Discrepancies in sensitivity to stimulus intensity between sensorimotor synchronization and reaction-time tasks point to the involvement of different timing mechanisms in these two tasks.
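The SAM prediction tested above can be made concrete with a toy accumulator in which the time to form a central code is inversely related to stimulus intensity. The linear-accumulation form, the parameter values, and the fixed tactile intensity below are illustrative assumptions, not the model's actual equations:

```python
def accumulation_time(intensity, threshold=10.0):
    """Toy accumulator: time for a sensory trace to reach a central-coding
    threshold, shrinking as stimulus intensity grows."""
    return threshold / intensity

def predicted_asynchrony(pacing_intensity, tactile_intensity=1.0):
    """SAM-style prediction: the central codes of the pacing stimulus and the
    tactile tap feedback are aligned, so the physical tap must lead by the
    difference in accumulation times (negative = tap precedes the stimulus)."""
    return accumulation_time(pacing_intensity) - accumulation_time(tactile_intensity)

# Raising pacing intensity shortens its accumulation, so the predicted
# asynchrony becomes more negative -- the "increasing NMA" prediction
# that the study failed to confirm.
quiet, loud = predicted_asynchrony(2.0), predicted_asynchrony(4.0)
print(quiet, loud)  # -5.0 -7.5
```

The study's null intensity effect on NMA, alongside a clear intensity effect on reaction time, is what argues against this kind of accumulation account for synchronization.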

12.
The effects of auditory stimuli in the form of synthetic speech output on the learning of graphic symbols were evaluated. Three adults with severe to profound mental retardation and communication impairments were taught to point to lexigrams when presented with words under two conditions. In the first condition, participants used a voice output communication aid to receive synthetic speech as antecedent and consequent stimuli. In the second condition, with a nonelectronic communications board, participants did not receive synthetic speech. A parallel treatments design was used to evaluate the effects of the synthetic speech output as an added component of the augmentative and alternative communication system. All 3 participants reached criterion when provided with the auditory stimuli. Although 2 participants also reached criterion when not provided with the auditory stimuli, the addition of auditory stimuli resulted in more efficient learning and a decreased error rate. Maintenance results, however, indicated no differences between conditions. Findings suggest that auditory stimuli in the form of synthetic speech contribute to the efficient acquisition of graphic communication symbols.

13.
Prior work has shown that judgments of learning (JOLs) are prone to an auditory metacognitive illusion such that loud words are given higher predictions than quiet words despite no differences in recall as a function of auditory intensity. The current study investigated whether judgments of remembering and knowing (JORKs)—judgments that focus participants on whether or not recollective details will be remembered—are less susceptible to such an illusion. In Experiment 1, participants studied single words, making item-by-item JOLs or JORKs immediately after study. Indeed, although increased volume elevated judgment magnitude for both JOLs and JORKs, the effect was significantly attenuated when JORKs were elicited. Experiment 2 replicated this finding and additionally demonstrated that participants making JORKs were less likely than participants making JOLs to choose to restudy quiet words relative to loud words. Taken together, these results suggest that JORKs are impacted less—in terms of both metacognitive monitoring and control—by irrelevant perceptual information than JOLs. More generally, these data support the contention that metacognitive illusions can be attenuated by simply changing the way metacognitive judgments are solicited, an important finding given that subjective experiences guide self-regulated learning.

14.
An experiment is reported comparing the effectiveness of auditory and visual stimuli in eliciting the tip-of-the-tongue phenomenon. 30 participants were asked to name the titles of 27 television shows. Half of the participants were given segments of the theme song for each show (auditory cue), and half were shown the cast photographs for each show (visual cue). Participants were asked to report whenever they experienced the tip-of-the-tongue state. There were no significant differences between the auditory and visual stimuli in terms of the incidence rate for the tip-of-the-tongue state, the amount of partial information that participants provided in their responses, or the frequency of interlopers (alternative responses that persistently come to mind). These findings suggest that the characteristics of the tip-of-the-tongue state are determined more by the nature of the response set than by the type of stimuli used as cues. The results are inconsistent with inferential theories of the tip-of-the-tongue phenomenon, such as the cue familiarity hypothesis and, instead, tend to support direct-access hypotheses.

15.
While anxiety is typically thought to increase distractibility, this notion mostly derives from studies using emotionally loaded distractors presented in the same modality as the target stimuli and tasks involving crosstalk interference. We examined whether pathological anxiety might also increase distractibility for emotionally neutral irrelevant sounds presented prior to target stimuli in a task where these stimuli do not compete for selection. Patients with anxiety and control participants categorized visual digits preceded by task-irrelevant sounds that changed on rare trials (auditory deviance). Both groups exhibited an equivalent increase in response times following a deviant sound but patients showed a reduction of response accuracy, which was entirely due to an increase in response omissions. We conclude that the involuntary capture of attention by unexpected stimuli may, in patients with anxiety, result in a temporary suspension of cognitive activity.

16.
Sensorimotor models suggest that understanding the emotional content of a face recruits a simulation process in which a viewer partially reproduces the facial expression in their own sensorimotor system. An important prediction of these models is that disrupting simulation should make emotion recognition more difficult. Here we used electroencephalogram (EEG) and facial electromyogram (EMG) to investigate how interfering with sensorimotor signals from the face influences the real-time processing of emotional faces. EEG and EMG were recorded as healthy adults viewed emotional faces and rated their valence. During control blocks, participants held a conjoined pair of chopsticks loosely between their lips. During interference blocks, participants held the chopsticks horizontally between their teeth and lips to generate motor noise on the lower part of the face. This noise was confirmed by EMG at the zygomaticus. Analysis of EEG indicated that faces expressing happiness or disgust—lower face expressions—elicited larger amplitude N400 when they were presented during the interference than the control blocks, suggesting interference led to greater semantic retrieval demands. The selective impact of facial motor interference on the brain response to lower face expressions supports sensorimotor models of emotion understanding.

17.
Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound–Sound, Word–Sound, Sound–Word and Word–Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times were longer and/or error rates were higher, and the N400 component was larger, for ambiguous targets than for conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the construction of virtual environments that need to convey meaning without words.

18.
19.
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking.
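The MLE account referenced above has a standard closed form: two noisy cues are combined with inverse-variance weights, and the combined estimate has lower variance than either cue alone. A minimal numerical sketch; the variance values below are illustrative, not the study's fitted estimates:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Inverse-variance weighted (maximum likelihood) combination of an
    auditory and a visual cue estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight tracks reliability
    w_v = 1 - w_a
    combined_est = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)  # always < min(var_a, var_v)
    return combined_est, combined_var

# Illustrative case: auditory timing is more reliable than visual timing.
est, var = mle_combine(est_a=0.0, var_a=4.0, est_v=10.0, var_v=16.0)
print(est, var)  # estimate pulled toward the reliable (auditory) cue
```

The model's key behavioral signature, which Experiment 2 tested, is exactly this variance reduction: audiovisual synchronization should be less variable than synchronization with the better unimodal cue.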

20.
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target–distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
