Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Persuasive health messages can be framed to emphasize the benefits of adopting a health behavior (gains) or the risks of not adopting it (losses). This study examined the effects of message framing on beliefs, attitudes, and behaviors relevant to cigarette smoking. In video presentations about tobacco smoking, visual images and auditory voiceover content were framed either as gains or losses, yielding 4 message conditions. Undergraduates (N = 437) attending a public university in New England were assigned randomly to view one of these messages. Gain‐framed messages about smoking in visual and auditory modalities shifted smoking‐related beliefs, attitudes, and behaviors in the direction of avoidance and cessation. Health‐communication experts, when promoting prevention behaviors like smoking avoidance or cessation, may wish to diverge from the tradition of using loss‐framed messages and fear appeals in this domain, and instead consider using gain‐framed appeals that present the advantages of not smoking.

2.
The sensitivity of the scalp-recorded, auditory evoked potential to selective attention was examined while subjects monitored one of two dichotically presented speech passages for content. Evoked potentials were elicited to irrelevant probe stimuli (vowel sounds) embedded in both the right-ear and the left-ear messages. The amplitude of the evoked potential was larger to probe stimuli embedded in the attended message than to probe stimuli in the unattended message. Recall performance was unaffected by the presence of the probes. The results are interpreted as supporting the hypothesis that this evoked potential sensitivity reflects an initial “input selection” stage of attention.

3.
If a S is asked to monitor two simultaneous auditory speech messages and to report only on the occurrence of target words appearing at random in either message, then it is shown that he will fail to detect all of them but will detect significantly more than half. The targets used in these experiments were immediate repeats of text words. The results reject theories that part of the sensory input is blocked or that all is recognized. Detection performance was a function of rate of speech and of intertarget interval; there was a small, nonsignificant effect of instruction to recognize message content.

4.
In humans, direct gaze typically signals a deliberate attempt to communicate with an observer. An auditory signal with similar signal value is calling someone's name. We investigated whether the presence of this personally relevant signal in the auditory modality would influence perception of another individual's gaze. Participants viewed neutral faces displaying different gaze deviations while hearing someone call their own name or the name of another person. Results were consistent with our predictions, as participants judged faces with a wider range of gaze deviations as looking directly at them when they simultaneously heard their own name. The influence of this personally relevant signal was present only at ambiguous gaze deviations; thus, an overall response bias to categorize gaze as direct when hearing one's own name cannot account for the results. This study provides the first evidence that communicative intent signaled via the auditory modality influences the perception of another individual's gaze.

5.
Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus ‘look!’. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.

6.
Two experiments with 5- and 7-year-old children tested the hypotheses that auditory attention is used to (a) monitor a TV program for important visual content, and (b) semantically process program information through language to enhance comprehension and visual attention. A direct measure of auditory attention was the latency of the child's restoration of gradually degraded sound quality. Restoration of auditory clarity did not vary as a function of looking. Restoration of visual clarity was faster when looking than when not looking. Restoration was faster for visual than auditory degrades, but audiovisual degrades were restored most rapidly of all, suggesting that dual modality presentation maximizes children's attention. Narration enhanced visual attention and comprehension including comprehension of visually presented material. Auditory comprehension did not depend on looking, suggesting that children can semantically process verbal content without looking at the TV. Auditory attention did not differ with the presence or absence of narration, but did predict auditory comprehension best while visual attention predicted visual comprehension best. In the absence of narration, auditory attention predicted visual comprehension, suggesting its monitoring function. Visual attention indexed overall interest and appeared to be most critical for comprehension in the absence of narration.

7.
Orienting to a target by looking and pointing is examined for parallels between the control of the two systems and interactions due to movement of the eyes and limb to the same target. Parallels appear early in orienting and may be due to common processing of spatial information for the ocular and manual systems. The eyes and limb both have shorter response latency to central visual and peripheral auditory targets. Each movement also has shorter latency and duration when the target presentation is short enough (200 msec) that no analysis of feedback of the target position is possible during the movement. Interactions appear at many stages of information processing for movement. Latency of ocular movement is much longer when the subject also points, and the eye and limb movement latencies are highly correlated for orienting to auditory targets. Final positions of the eyes and limb are significantly correlated only when target duration is short (200 msec). This illustrates that sensory information obtained before the movement begins is an important, but not the only, source of input about target position. Additional information that assists orienting may be passed from one system to another, since visual information gained by looking aided pointing to lights and proprioceptive information from the pointing hand seemed to assist the eyes in looking to sounds. Thus the production of this simple set of movements may be partly described by a cascade-type process of parallel analysis of spatial information for eye and hand control, but is also, later in the movement, assisted by cross-system interaction.

8.
Important social information can be gathered from the direction of another person's gaze, such as their intentions and aspects of the environment that are relevant to those intentions. Previous work has examined the effect of gaze on attention through the gaze-cueing effect: an enhancement of performance in detecting targets that appear where another person is looking. The present study investigated whether the physical self-similarity of a face could increase its impact on attention. Self-similarity was manipulated by morphing participants' faces with those of strangers. The effect of gaze direction on target detection was strongest for faces morphed with the participant's face. The results support previous work suggesting that self-similar faces are processed differently from dissimilar faces. The data also demonstrate that a face's similarity to one's own face influences the degree to which that face guides our attention in the environment.

9.
This study was designed to assess the potential benefits of using spatial auditory warning signals in a simulated driving task. In particular, the authors assessed the possible facilitation of responses (braking or accelerating) to potential emergency driving situations (the rapid approach of a car from the front or from behind) seen through the windshield or the rearview mirror. Across 5 experiments, the authors assessed the efficacy of nonspatial-nonpredictive (neutral), spatially nonpredictive (50% valid), and spatially predictive (80% valid) car horn sounds, as well as symbolic predictive and spatially presented symbolic predictive verbal cues (the words "front" or "back") in directing the participant's visual attention to the relevant direction. The results suggest that spatially predictive semantically meaningful auditory warning signals may provide a particularly effective means of capturing attention.

10.
There are ample data suggesting that a spatial difference between two competing auditory messages provides a better basis for selective attention and listening than other differences. The present experiments show that the spatial difference (left vs. right ear) between two auditory inputs leads to their faster discrimination than even large differences in frequency (in the same ear). These results might explain the aforementioned privileged status of the spatial dimension as a basis of selective attention: spatial differences lead to faster identification of relevant or attended input than differences in other dimensions.

11.
The ability to process simultaneously presented auditory and visual information is a necessary component underlying many cognitive tasks. While this ability is often taken for granted, there is evidence that under many conditions auditory input attenuates processing of corresponding visual input. The current study investigated infants' processing of visual input under unimodal and cross-modal conditions. Results of the three reported experiments indicate that different auditory inputs had different effects on infants' processing of visual information. In particular, unfamiliar auditory input slowed down visual processing, whereas more familiar auditory input did not. These results elucidate mechanisms underlying auditory overshadowing in the course of cross-modal processing and have implications for a variety of cognitive tasks that depend on cross-modal processing.

12.
We evaluated the importance of early visual input for the later development of expertise in face processing by studying 17 patients, aged 10 to 38 years, treated for bilateral congenital cataracts that deprived them of patterned visual input for the first 7 weeks or more after birth. We administered five computerized tasks that required matching faces on the basis of identity (with changed facial expression or head orientation), facial expression, gaze direction and lip reading. Compared to an age-matched control group, patients’ recognition of facial identity was impaired significantly when there was a change in head orientation (e.g. from frontal to tilted up), and tended to be impaired when there was a change in facial expression (e.g. from happy to surprised). Patients performed normally when matching facial expression and direction of gaze (e.g. looking left or right), and in reading lips (e.g. pronouncing ‘u’ or ‘a’). The results indicate that visual input during early infancy is necessary for the normal development of some aspects of face processing, and are consistent with theories postulating the importance of early visual experience (de Schonen & Mathivet, 1989; Johnson & Morton, 1991) and separate neural mediation of different components of face processing (Bruce & Young, 1986).

13.
Communicators' tuning of a message about a social target to their audience's evaluation can shape their representation of the target. This audience‐tuning effect has been demonstrated with ambiguous text passages as input material. We examined whether the effect also occurs when communicators learn about the target's behaviours from visual (nonverbal) input material. In Experiment 1, participants watched a soundless video depicting ambiguous behaviours of a target, described the video to an audience who liked (vs. disliked) the target, and subsequently recalled the video. Both message and recall were biased towards the audience's judgement. In Experiment 2, the video depicted a forensically relevant event, specifically ambiguous behaviours of two persons involved in a bar brawl. Participants tuned their event retellings to their audience's responsibility judgement and remembered the event accordingly. In both experiments, the effect of the audience's judgement on recall was statistically mediated by the extent to which the message was tuned to the audience. The more participants experienced a shared reality with their audience, the stronger the message‐recall correlation (Experiment 2). We conclude that the audience‐tuning effect for visually perceived information depends on the communicators' creation of a shared reality with their audience.

14.
The aim of this article is to find neural correlates of attention allocated to processing mediated messages. Event-related potentials (ERPs) for auditory distractors were recorded while subjects were engaged in watching a movie telling a short story (audio-video condition) or listening to a radio program describing the same events (audio condition). The amplitudes of the N1 and P3a components for distractors were larger in the audio than in the audio-video condition. The results indicate a stronger orienting response to auditory distractors when listening to the radio than when listening to and watching television. This finding confirms predictions of the limited capacity model of motivated mediated message processing (LC4MP), which assumes that the less complex the encoded message, the more attentional resources are left for additional tasks. The P3a amplitude was largest during the first stage of encoding the message and declined over subsequent stages; the P3a amplitude to repeated auditory distractors thus seems to be a strong indicator of habituation. Results are discussed in the context of the LC4MP and the perceptual load theory of attention.

15.
When Ss attend to one auditory message, they have no permanent memory for a second auditory message received simultaneously. Generally, it has been argued that a similar effect would occur crossmodally. This hypothesis was tested in the present experiment for messages presented to visual and auditory modalities. All Ss were tested for recognition of information presented either while shadowing or while hearing but not shadowing a passage of prose presented to one ear. One group heard a list of concrete nouns in their other ear. Three other groups received (1) printed words, (2) pictures of objects easily labeled, or (3) pictures of objects difficult to label. The shadowing task produced a decrement in recognition scores for the first three groups but not for the group receiving pictures of objects difficult to label. Further, the shadowing task interfered more with information received auditorily than with any form of visual information. These results suggest that information received visually is stored in a long-term modality-specific memory that may operate independently of the auditory modality.

16.
Do readers “see” the words that story characters read and “hear” the words that they hear? Just as priming effects are reduced when stimuli are presented cross-modally on two different occasions, we found reduced transfer effects when story characters were described as experiencing stimuli cross-modally. In Experiment 1, a repeated phrase was described as being part of a spoken message in both Story A and Story B, and transfer effects were found. In Experiment 2, in contrast, when the phrase was described as a written note in one story and a spoken message in the other, reading-time results indicated that readers did not retrieve the meaning of the repeated phrase. The results are consistent with findings indicating that visual imagery simulates visual processing and that auditory imagery simulates auditory processing. We conclude that readers mentally simulate the perceptual details involved in story characters’ linguistic exchanges.

17.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.

18.
Three experiments examined the effect of gaze shifts on overall performance and ear differences in dichotic listening. In the first two experiments, lights were switched on and off so as to induce rightward, leftward, or upward gaze during dichotic stimulation. The dichotic material consisted of musical passages in Experiment 1 and two kinds of verbal material in Experiment 2. Vertical eye movements enhanced the accuracy of identification of music but not verbal material. The lateral direction of eye movements affected subjects' ability to localize targets in Experiment 1: localization was more accurate in the direction toward which subjects were looking. In the third experiment it was found that optokinetic nystagmus (OKN) influenced the asymmetry of performance on a dichotic consonant-vowel (CV) test. The right-ear advantage was greatest when the OKN drum rotated from left to right and least when it rotated from right to left. The effect was due to corresponding variation in left-ear scores. Possible mechanisms through which shifts of gaze affect auditory identification and localization are proposed.

19.
Single neuron recording studies have demonstrated the existence of hippocampal spatial view neurons which encode information about the spatial location at which a primate is looking in the environment. These neurons are able to maintain their firing even in the absence of visual input. The standard neuronal network approach to model networks with memory that represent continuous spaces is that of continuous attractor networks. It has recently been shown how idiothetic (self-motion) inputs could update the activity packet of neuronal firing for a one-dimensional case (head direction cells), and for a two-dimensional case (place cells which represent the place where a rat is located). In this paper, we describe three models of primate hippocampal spatial view cells, which not only maintain their spatial firing in the absence of visual input, but can also be updated in the dark by idiothetic input. The three models presented in this paper represent different ways in which a continuous attractor network could integrate a number of different kinds of velocity signal (e.g., head rotation and eye movement) simultaneously. The first two models use velocity information from head angular velocity and from eye velocity cells, and make use of a continuous attractor network to integrate this information. A fundamental feature of the first two models is their use of a 'memory trace' learning rule which incorporates a form of temporal average of recent cell activity. Rules of this type are able to build associations between different patterns of neural activities that tend to occur in temporal proximity, and are incorporated in the model to enable the recent change in the continuous attractor to be associated with the contemporaneous idiothetic input. The third model uses positional information from head direction cells and eye position cells to update the representation of where the agent is looking in the dark. In this case the integration of idiothetic velocity signals is performed in the earlier layer of head direction cells.
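For readers less familiar with continuous attractor networks, a toy sketch may help. The Python/NumPy fragment below implements a generic one-dimensional ring attractor whose activity packet persists without external input and is shifted by a velocity signal, loosely analogous to the idiothetic updating described in this abstract. It is a minimal illustration only: it hard-wires the velocity-driven shift rather than learning it with the 'memory trace' rule, and all names and parameter values are illustrative assumptions, not taken from the paper's three models.

import numpy as np

# Preferred "viewing directions" of N cells arranged on a ring
# (illustrative stand-in for spatial view cells).
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def recurrent_weights(sigma=0.3, inhibition=0.05):
    # Local excitation plus uniform inhibition: the classic weight
    # profile that lets a ring network sustain a single activity packet.
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    return np.exp(-d ** 2 / (2 * sigma ** 2)) - inhibition

W = recurrent_weights()

def step(r, velocity, dt=0.01, tau=0.05, gain=0.1):
    # Adding a term proportional to the spatial derivative of the
    # recurrent drive approximates the packet-shifting effect of a
    # velocity (idiothetic) signal; the sign sets the direction.
    drive = W @ r
    shifted = drive + gain * velocity * np.gradient(drive)
    r_new = r + (dt / tau) * (-r + np.maximum(shifted, 0.0))
    return r_new / (r_new.max() + 1e-9)  # crude gain normalization

# A brief "visual" cue initializes the packet; thereafter the network
# runs "in the dark", driven only by a constant velocity signal.
r = np.exp(-np.angle(np.exp(1j * (theta - np.pi))) ** 2 / 0.1)
for _ in range(500):
    r = step(r, velocity=0.5)
print("packet centre (radians):", theta[np.argmax(r)])

In the paper's actual models, the mapping from velocity-cell firing to the shift of the packet is learned through trace-rule-modified synapses rather than hard-wired as in this sketch.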

20.
We report the case of a patient, LEW, who presents with modality-specific naming deficits. He is seriously impaired in naming pictures of both objects and actions. His naming to auditory verbal definitions and of actions carried out by the experimenter is, however, relatively well preserved. He has no visual perceptual deficits, and his access to the semantics of pictures is as good as that to the semantics of spoken words. While LEW is not an optic aphasic patient, his pattern of performance is relevant to the debate that has taken place over the organization of the semantic system. We discuss his case from this perspective and argue that LEW's selective deficits support the multiple-semantics position. We also argue that the "preverbal message" level in the speech production model of Levelt (1989) is the equivalent of "verbal semantics." We provide additional constraints and principles for the concept of the preverbal message, and we term the system so constrained the "restricted preverbal message."
