Similar Articles
20 similar articles found (search time: 15 ms)
1.
This study was designed to assess the potential benefits of using spatial auditory warning signals in a simulated driving task. In particular, the authors assessed the possible facilitation of responses (braking or accelerating) to potential emergency driving situations (the rapid approach of a car from the front or from behind) seen through the windshield or the rearview mirror. Across 5 experiments, the authors assessed the efficacy of nonspatial-nonpredictive (neutral), spatially nonpredictive (50% valid), and spatially predictive (80% valid) car horn sounds, as well as symbolic predictive and spatially presented symbolic predictive verbal cues (the words "front" or "back") in directing the participant's visual attention to the relevant direction. The results suggest that spatially predictive semantically meaningful auditory warning signals may provide a particularly effective means of capturing attention.

2.
Metacognitive evaluations refer to the processes by which people assess their own cognitive operations with respect to their current goal. Little is known about whether this process is susceptible to social influence. Here we investigate whether nonverbal social signals spontaneously influence metacognitive evaluations. Participants performed a two-alternative forced-choice task, which was followed by a face randomly gazing towards or away from the response chosen by the participant. Participants then provided a metacognitive evaluation of their response by rating their confidence in their answer. In Experiment 1, the participants were told that the gaze direction was irrelevant to the task purpose and were advised to ignore it. The results revealed an effect of implicit social information on confidence ratings even though the gaze direction was random and therefore unreliable for task purposes. In addition, nonsocial cues (car) did not elicit this effect. In Experiment 2, the participants were led to believe that cue direction (face or car) reflected a previous participant's response to the same question—that is, the social information provided by the cue was made explicit, yet still objectively unreliable for the task. The results showed a similar social influence on confidence ratings, observed with both cues (car and face) but with an increased magnitude relative to Experiment 1. We additionally showed in Experiment 2 that social information impaired metacognitive accuracy. Together our results strongly suggest an involuntary susceptibility of metacognitive evaluations to nonverbal social information, even when it is implicit (Experiment 1) and unreliable (Experiments 1 and 2).

3.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

4.
Multiple studies have shown an increased accident risk due to telephoning while driving. On the other hand, driving with passengers leads to a decreased accident risk. One explanation is that passengers in cars modulate the conversation, leading to a conversation pattern that is less detrimental to driving than the pattern typical of phone calls. A driving simulator study was conducted in order to examine this conversation modulation more closely and to identify the factors involved, especially the role of the visual information available to the passenger. In a within-subject design, the conversational patterns of 33 drivers and passengers in different in-car settings (passenger as usual, passenger without front view, or passenger without view of the driver) were compared to a hands-free cell phone and to a hands-free cell phone with additional visual information about either the driving situation or the driver. Participants were instructed to have naturalistic small talk with a friend. Analysis of the drivers’ speaking behavior showed a reduction of speaking while driving. Compared to a conversation partner on the cell phone, a passenger in the car varies the speaking rhythm by speaking more often but in shorter utterances. Further analyses showed that this effect is also found with a cell phone when the conversation partner is provided with additional visual information about either the driving situation or the driver. This latter finding supports the idea that conversation modulation is triggered not by being in the car but by visual information about the driver’s state and the driving situation.

5.
A haptic pedal has been designed to emulate the behaviour of a common vehicle pedal and render superimposed vibrations with different characteristics. It was installed in a driving simulator as an accelerator pedal with the secondary function of a vibrotactile Frontal Collision Warning (FCW). The efficacy and subjective feel of this solution were tested with 30 subjects using vibrotactile signals of 0.50, 1.05, and 1.60 Nm at 2.5, 5, and 10 Hz, against a baseline visual FCW. Participants had to match the speed of a leading vehicle when the FCW was triggered. Their braking response was evaluated in terms of brake reaction time, speed-matching time, control of velocity, and headway reduction. Drivers’ feelings were assessed with Kansei methodologies. Haptic stimuli were found to be more effective than visual signals, and the characteristics of the vibration also influenced the results. The best performance was achieved at the maximum amplitude and in the range between 5 and 10 Hz. The perceived functionality and discomfort followed a trend consistent with the objective measurements. The conclusions of this study may be applied to develop effective and safe warning systems in vehicles, limiting the annoyance that they might cause to drivers.

6.
7.
Spence C, Walton M. Acta Psychologica, 2005, 118(1-2): 47-70
We investigated the extent to which people can selectively ignore distracting vibrotactile information when performing a visual task. In Experiment 1, participants made speeded elevation discrimination responses (up vs. down) to a series of visual targets presented from one of two eccentricities on either side of central fixation, while simultaneously trying to ignore task-irrelevant vibrotactile distractors presented independently to the finger (up) vs. thumb (down) of either hand. Participants responded significantly more slowly, and somewhat less accurately, when the elevation of the vibrotactile distractor was incongruent with that of the visual target than when they were presented from the same (i.e., congruent) elevation. This crossmodal congruency effect was significantly larger when the visual and tactile stimuli appeared on the same side of space than when they appeared on different sides, although the relative eccentricity of the two stimuli within the hemifield (i.e., same vs. different) had little effect on performance. In Experiment 2, participants who crossed their hands over the midline showed a very different pattern of crossmodal congruency effects to participants who adopted an uncrossed hands posture. Our results suggest that both the relative external location and the initial hemispheric projection of the target and distractor stimuli contribute jointly to determining the magnitude of the crossmodal congruency effect when participants have to respond to vision and ignore touch.

8.
This study addressed the role of proprioceptive and visual cues to body posture during the deployment of tactile spatial attention. Participants made speeded elevation judgments (up vs. down) to vibrotactile targets presented to the finger or thumb of either hand, while attempting to ignore vibrotactile distractors presented to the opposite hand. The first two experiments established the validity of this paradigm and showed that congruency effects were stronger when the target hand was uncertain (Experiment 1) than when it was certain (Experiment 2). Varying the orientation of the hands revealed that these congruency effects were determined by the position of the target and distractor in external space, and not by the particular skin sites stimulated (Experiment 3). Congruency effects increased as the hands were brought closer together in the dark (Experiment 4), demonstrating the role of proprioceptive input in modulating tactile selective attention. This spatial modulation was also demonstrated when a mirror was used to alter the visually perceived separation between the hands (Experiment 5). These results suggest that tactile, spatially selective attention can operate according to an abstract spatial frame of reference, which is significantly modulated by multisensory contributions from both proprioception and vision.

9.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant’s two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.

10.
A remarkable ability of the human visual system is the implementation of attentional control settings (ACSs) that govern what stimuli capture or hold attention. We provide evidence that ACSs can be specified by episodic long-term memory representations. In all experiments, participants memorized 30 images of objects that they then monitored for in an attention task, inducing an episodic-based ACS. In Experiments 1a and 1b, only studied cues in a cueing task captured attention. We confirmed these cueing effects reflect capture by testing for inhibition of return in Experiment 2a, and controlled for perceptual masking by cues in Experiment 2b. In Experiment 3 we determined that ACSs are specifically supported by episodic retrieval, by dividing studied images into two sets and designating one as the targets in a rapid serial visual presentation task: Only target-set matching distractors produced a spatial blink (captured attention). These results extend our understanding of the representations specifying ACSs.

11.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, while auditory cues did not have direct effects in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, as a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues.

12.
Seeing one's own body (either directly or indirectly) can influence visuotactile crossmodal interactions. Recently, it has been shown that even viewing a simple line drawing of a hand can also modulate such crossmodal interactions, as if the picture of the hand somehow corresponds to (or primes) the participants' own hand. Alternatively, however, it could be argued that the modulatory effects of viewing the picture of a hand on visuotactile interactions might simply be attributed to cognitive processes such as the semantic referral to the relevant body part or to the orientation cues provided by the hand picture instead. In the present study, we evaluated these various different interpretations of the hand picture effect. Participants made speeded discrimination responses to the location of brief vibrotactile targets presented to either the tip or base of their forefinger, while trying to ignore simultaneously-presented visual distractors presented to either side of central fixation. We compared the modulatory effect of the picture of a hand with that seen when the visual distractors were presented next to words describing the tip and base of the forefinger (Experiment 1), or were superimposed over arrows which provided another kind of directional cue (Experiment 2). Tactile discrimination performance was modulated in the hand picture condition, but not in the word or arrow conditions. These results therefore suggest that visuotactile interactions are specifically modulated by the image of the hand rather than by cognitive cues such as simple semantic referral to the relevant body sites and/or any visual orientation cues provided by the picture of a hand.

13.
The present study combined exogenous spatial cueing with masked repetition priming to study attentional influences on the processing of subliminal stimuli. Participants performed an alphabetic decision task (letter versus pseudo-letter classification) with central targets and briefly presented peripherally located primes that were either cued or not cued by an abrupt onset. A relatively long delay between cue and prime was used to investigate the effect of inhibition of return (IOR) on the processing of subliminal masked primes. Primes presented to the left visual field showed standard effects of Cue Validity and no IOR (significant priming with valid cues only). Primes presented to the right visual field showed no priming from valid cues (an IOR effect), and priming with invalid cues that depended on hand of response to letter targets (right-hand in Experiment 1, left-hand in Experiment 2). The results are interpreted in terms of a differential speed of engagement and disengagement of attention to the right and left visual fields for alphabetic stimuli, coupled with a complex interaction that arises between Prime Relatedness and response-hand.

14.
In 3 experiments, we investigate how anxiety influences interpretation of ambiguous facial expressions of emotion. Specifically, we examine whether anxiety modulates the effect of contextual cues on interpretation. Participants saw ambiguous facial expressions. Simultaneously, positive or negative contextual information appeared on the screen. Participants judged whether each expression was positive or negative. We examined the impact of verbal and visual contextual cues on participants' judgements. We used 3 different anxiety induction procedures and measured levels of trait anxiety (Experiment 2). Results showed that high state anxiety resulted in greater use of contextual information in the interpretation of the facial expressions. Trait anxiety was associated with mood-congruent effects on interpretation, but not greater use of contextual information.

15.
Delvenne JF, Holt JL. Cognition, 2012, 122(2): 258-263
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In two experiments, we show that attention can also be split between the left and right sides of internal representations held in VSTM. Participants were asked to remember several colors, while cues presented during the delay instructed them to orient their attention to a subset of memorized colors. Experiment 1 revealed that orienting attention to one or two colors equally strengthened participants' memory for those colors, but only when they were from separate hemifields. Experiment 2 showed that in the absence of attentional cues the distribution of the items in the visual field per se had no effect on memory. These findings strongly suggest the existence of independent attentional resources in the two hemifields for selecting and/or consolidating information in VSTM.

16.
To date, numerosity judgments have been studied only under conditions of unimodal stimulus presentation. It is therefore unclear whether the same limitations on correctly reporting the number of unimodal visual or tactile stimuli presented in a display might be expected under conditions in which participants have to count stimuli presented simultaneously in two or more different sensory modalities. In Experiment 1, we investigated numerosity judgments using both unimodal and bimodal displays consisting of one to six vibrotactile stimuli (presented over the body surface) and one to six visual stimuli (seen on the body via mirror reflection). Participants had to count the number of stimuli regardless of their modality of presentation. Bimodal numerosity judgments were significantly less accurate than predicted on the basis of an independent modality-specific resources account, thus showing that numerosity judgments might rely on a unitary amodal system instead. The results of a second experiment demonstrated that divided attention costs could not account for the poor performance in the bimodal conditions of Experiment 1. We discuss these results in relation to current theories of cross-modal integration and to the cognitive resources and/or common higher order spatial representations possibly accessed by both visual and tactile stimuli.

17.
18.
Cvejic E, Kim J, Davis C. Cognition, 2012, 122(3): 442-453
Prosody can be expressed not only by modification to the timing, stress and intonation of auditory speech but also by modifying visual speech. Studies have shown that the production of visual cues to prosody is highly variable (both within and across speakers); however, behavioural studies have shown that perceivers can effectively use such visual cues. The latter result suggests that people are sensitive to the type of prosody expressed despite cue variability. The current study investigated the extent to which perceivers can match visual cues to prosody from different speakers and from different face regions. Participants were presented with two pairs of sentences (consisting of the same segmental content) and were required to decide which pair had the same prosody. Experiment 1 tested visual and auditory cues from the same speaker and Experiment 2 from different speakers. Experiment 3 used visual cues from the upper and the lower face of the same talker and Experiment 4 from different speakers. The results showed that perceivers could accurately match prosody even when signals were produced by different speakers. Furthermore, perceivers were able to match the prosodic cues both within and across modalities regardless of the face area presented. This ability to match prosody from very different visual cues suggests that perceivers cope with variation in the production of visual prosody by flexibly mapping specific tokens to abstract prosodic types.

19.
Are spatial and temporal attention independent?
Participants searched for one of two target letters in a rapid serial visual presentation (RSVP) sequence of 17 successive frames, each containing four letters arranged into a box around a central fixation point. In control trial blocks, the participants had no information about when or where one of the target letters would appear. In other trial blocks, visual cues were given to indicate with 100% validity either the spatial location of the target, the time at which it would be presented, or both where and when it would appear. The results showed that both types of cues were effective on their own in speeding target identification, and their effects combined additively when the cues were presented and used together. These results support a growing body of evidence indicating that early attentional selection of information in vision is independently attuned to spatial and temporal properties of the environment.

20.
In this study, users’ acceptance of an on-bike system that warns about potential collisions with motorized vehicles, as well as its influence on cyclists’ behavior, was evaluated. Twenty-five participants took part in a field study that consisted of three different experimental tasks. All participants also completed a follow-up questionnaire at the end of the three-task series to elicit information about their acceptance of the on-bike system. In the experiment phase, participants were asked to ride the bicycle around a circuit and to interact with a car at an intersection. Participants completed three laps of the circuit. The first lap involved no interaction with the car and served the purpose of habituation. In the second and third laps participants experienced a conflict with an incoming car at an intersection. In the second lap the on-bike device was not activated, while in the third lap participants received a warning message signaling the imminent conflict with the car. We compared the difference in users’ behavior between the second lap (conflict with a car without the warning of the on-bike system) and the third lap (conflict with a car with the warning of the on-bike system). Results showed that, when entering the crossroad, participants were more likely to decrease their speed when warned by the on-bike system. Further, the on-bike system was relatively well accepted by the participants. In particular, participants did not report negative emotions when using the system, trusted it, and believed that using such technology would be free of effort. Participants were willing to spend on average €57.83 for the system. This study highlights the potential of the on-bike system for promoting bicycle safety.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号