Similar Articles
20 similar articles retrieved.
1.
Over the last decade, researchers have debated whether anchoring effects are the result of semantic or numeric priming. The present study tested both hypotheses. In four experiments involving a sensory detection task, participants first made a relative confidence judgment by deciding whether they were more or less confident than an anchor value in the correctness of their decision. Subsequently, they expressed an absolute level of confidence. In two of these experiments, the relative confidence anchor values represented the midpoints between the absolute confidence scale values, which were either explicitly numeric or semantic, nonnumeric representations of magnitude. In two other experiments, the anchor values were drawn from a scale modally different from that used to express the absolute confidence (i.e., nonnumeric and numeric, respectively, or vice versa). Regardless of the nature of the anchors, the mean confidence ratings revealed anchoring effects only when the relative and absolute confidence values were drawn from identical scales. Together, the results of these four experiments limit the conditions under which both numeric and semantic priming would be expected to lead to anchoring effects.

2.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments imply that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
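In scalar expectancy theory terms, the two mechanisms can be stated schematically (the notation here is illustrative, not the authors'): subjective duration reflects pulses accumulated by a pacemaker, so a context-induced change in clock rate $r$ rescales perceived duration multiplicatively,

$$\hat{T} \propto r \cdot T,$$

whereas temporal ventriloquism instead shifts the perceived on- and offsets that delimit the timed interval $T$ itself.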

3.
The context effect in implicit memory is the finding that presentation of words in meaningful context reduces or eliminates repetition priming compared to words presented in isolation. Virtually all of the research on the context effect has been conducted in the visual modality but preliminary results raise the question of whether context effects are less likely in auditory priming. Context effects in the auditory modality were systematically examined in five experiments using the auditory implicit tests of word-fragment and word-stem completion. The first three experiments revealed the classical context effect in auditory priming: Words heard in isolation produced substantial priming, whereas there was little priming for the words heard in meaningful passages. Experiments 4 and 5 revealed that a meaningful context is not required for the context effect to be obtained: Words presented in an unrelated audio stream produced less priming than words presented individually and no more priming than words presented in meaningful passages. Although context effects are often explained in terms of the transfer-appropriate processing (TAP) framework, the present results are better explained by Masson and MacLeod's (2000) reduced-individuation hypothesis.

4.
Confidence-accuracy calibration was examined for both absolute (recognizing single faces as old or new) and relative (selecting which of pairs of faces is old) judgments, using both full- (0%-100%) and half-range (50%-100%) confidence scales. The half-range confidence scale demonstrated superior calibration to the full-range scale, for which a confidence-accuracy association was evident only for the upper half (i.e., 50%-100%) of the scale. Good calibration was observed for the absolute judgment conditions, but the relative judgment conditions evidenced marked underconfidence. Also, in the absolute judgment conditions, good calibration was observed for positive recognition decisions and poorer calibration for negative decisions. These results are discussed in the context of theories of confidence and accuracy in face recognition memory and also of eyewitness identification research.
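For readers unfamiliar with calibration analysis, the sketch below (hypothetical data and function naming, not the study's procedure) bins (confidence, correctness) pairs and compares mean confidence with the proportion correct in each bin; calibration is good when the two coincide, and overconfidence appears as confidence exceeding accuracy.

    from collections import defaultdict

    def calibration_table(responses):
        # responses: iterable of (confidence in [0.5, 1.0], correct as bool).
        bins = defaultdict(list)
        for conf, correct in responses:
            bins[round(conf, 1)].append((conf, correct))
        table = {}
        for b, items in sorted(bins.items()):
            mean_conf = sum(c for c, _ in items) / len(items)
            accuracy = sum(ok for _, ok in items) / len(items)
            table[b] = (mean_conf, accuracy)  # well calibrated when roughly equal
        return table

    # Hypothetical half-range data: the .9 bin pairs confidence ~.90 with
    # accuracy ~.67 (overconfidence); underconfidence is the reverse pattern.
    data = [(0.9, True), (0.9, False), (0.9, True), (0.6, True), (0.6, False)]
    print(calibration_table(data))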

5.
Research has shown that judgments tend to assimilate to irrelevant "anchors." We extend anchoring effects to show that anchors can even operate across modalities by, apparently, priming a general sense of magnitude that is not moored to any unit or scale. An initial study showed that participants drawing long "anchor" lines made higher numerical estimates of target lengths than did those drawing shorter lines. We then replicated this finding, showing that a similar pattern was obtained even when the target estimates were not in the dimension of length. A third study showed that an anchor's length relative to its context, and not its absolute length, is the key to predicting the anchor's impact on judgments. A final study demonstrated that magnitude priming (priming a sense of largeness or smallness) is a plausible mechanism underlying the reported effects. We conclude that the boundary conditions of anchoring effects may be much looser than previously thought, with anchors operating across modalities and dimensions to bias judgment.

6.
It is often of theoretical interest to know if implicit memory (repetition priming) develops across childhood under a given circumstance. Methodologically, however, it is difficult to determine whether development is present when baseline performance for unstudied items improves with age. Calculation of priming in absolute (priming = studied - unstudied) or relative-to-baseline terms can lead to different conclusions. In first noting this problem, Parkin (1993) suggested using the Snodgrass (1989a) calculation of relative priming [priming = (studied - unstudied)/(maximum - unstudied)], and most developmental studies have since adopted this procedure. Here, we question the Snodgrass method because its results are not replicated in the picture identification task when baselines are equated experimentally across age groups. Instead, results support an absolute measure of priming. Theoretically, we argue against the Snodgrass method's core assumption, namely that children and adults always lie on the same learning curve, with an equal maximum performance level and an equal rate of learning.
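To make the contrast concrete, here is a minimal sketch of the two measures (the proportion-correct values are hypothetical, not data from the study): two groups with the same absolute gain receive different Snodgrass scores whenever their baselines differ.

    def absolute_priming(studied, unstudied):
        # Absolute priming: studied performance minus the unstudied baseline.
        return studied - unstudied

    def snodgrass_priming(studied, unstudied, maximum=1.0):
        # Snodgrass (1989a) relative priming: the gain scaled by the room
        # left above the unstudied baseline.
        return (studied - unstudied) / (maximum - unstudied)

    # Hypothetical proportions correct: both groups gain .10 absolutely,
    # but the higher adult baseline inflates the relative measure.
    for group, studied, unstudied in [("children", 0.40, 0.30), ("adults", 0.70, 0.60)]:
        print(group,
              round(absolute_priming(studied, unstudied), 2),
              round(snodgrass_priming(studied, unstudied), 2))
    # children 0.1 0.14
    # adults 0.1 0.25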

7.
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking.

8.
Frequency of behaviour is often assessed by scales using relative frequencies such as 'often' or 'rarely', so-called vague quantifiers. Previous research showed that respondents calibrate such scales according to subjective standards. Here, it is argued that respondents follow conversational norms and, if possible, try to figure out which calibration the researcher had in mind and adapt their responses accordingly. They may use the survey context to infer a relevant anchor for such vague quantifiers. In the present study, although respondents did not differ in absolute behaviour frequencies, their reports of relative frequencies were influenced by information about the target population and the topic of the survey. Apparently, respondents anchored the scale according to the estimated frequency in the target population and the frequency of other behaviours relevant to the topic of the survey.

9.
The authors report 4 studies exploring a self-report strategy for measuring explicit attitudes that uses "relative" ratings, in which respondents indicate how favorable or unfavorable they are compared with other people. Results consistently showed that attitudes measured with relative scales predicted relevant criterion variables (self-report of behavior, measures of knowledge, peer ratings of attitudes, peer ratings of behavior) better than did attitudes measured with more traditional "absolute" scales. The obtained pattern of differences in prediction by relative versus absolute measures of attitudes did not appear to be attributable to differential variability, social desirability effects, the clarity of scale-point meanings, the number of scale points, or overlap with subjective norms. The final study indicated that relative measures induce respondents to consider social comparison information and behavioral information when making their responses more than do absolute measures, which may explain the higher correlations between relative measures of attitudes and relevant criteria.

10.
The notion of a mental time-line (i.e., past corresponds to left and future corresponds to right) supports the conceptual metaphor view assuming that abstract concepts like "time" are grounded in cognitively more accessible concepts like "space." In five experiments, we further investigated the relationship between temporal and spatial representations and examined whether or not the spatial correspondents of time are unintentionally activated. We employed a priming paradigm, in which visual or auditory prime words (i.e., temporal adverbs such as yesterday, tomorrow) preceded a colored square. In all experiments, participants discriminated the color of this square by responding with the left or the right hand. Although the temporal reference of the priming adverb was task irrelevant in Experiment 1, visually presented primes facilitated responses to the square in correspondence with the direction of the mental time-line. This priming effect was absent in Experiments 2, 3, and 5, in which the primes were presented auditorily and the temporal reference of the words could be ignored. The effect, however, emerged when attention was oriented to the temporal content of the auditory prime words in Experiment 4. The results suggest that task demands differentially modulate the activation of the mental time-line within the visual and auditory modality and support a flexible association between conceptual codes.

11.
Recent studies on the conceptualization of abstract concepts suggest that the concept of time is represented along a left-right horizontal axis, such that left-to-right readers represent past on the left and future on the right. Although it has been demonstrated with strong consistency that the localization (left or right) of visual stimuli could modulate temporal judgments, results obtained with auditory stimuli are more puzzling, with both failures and successes at finding the effect in the literature. The present study supports an account based on the relative relevance of visual versus auditory-spatial information in the creation of a frame of reference to map time: The auditory location of words interacted with their temporal meaning only when auditory information was made more relevant than visual spatial information by blindfolding participants.

12.
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within and across modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study facilitates performance only on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

13.
Implicit memory in amnesic patients
Amnesic patients generally exhibit spared priming effects on implicit memory tasks despite poor explicit memory. In a previous study, we demonstrated normal auditory priming in amnesic patients on an identification-in-noise test in which the magnitude of priming is independent of whether the speaker's voice is the same or different at study and test. In the present experiment, we examined auditory priming on a filter identification test in which the magnitude of priming in control subjects is higher when the speaker's voice is the same at study and test than when it is different. Amnesic patients, by contrast, failed to exhibit more priming in a same-voice condition than in a different-voice condition. Voice-specific priming may depend on a memory system that is impaired in amnesia.

14.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

15.
Previous research with words read in context at encoding showed little if any long-term repetition priming. In Experiment 1, 96 Spanish–English bilinguals translated words in isolation or in sentence contexts at encoding. At test, they translated words or named pictures corresponding to words produced at encoding and control words not previously presented. Repetition priming was reliable in all conditions, but priming effects were generally smaller for contextualized than for isolated words. Repetition priming in picture naming indicated priming from production in context. A componential analysis indicated priming from comprehension in context, but only in the less fluent language. Experiment 2 was a replication of Experiment 1 with auditory presentation of the words and sentences to be translated. Repetition priming was reliable in all conditions, but priming effects were again smaller for contextualized than for isolated words. Priming in picture naming indicated priming from production in context, but the componential analysis indicated no detectable priming for auditory comprehension. The results of the two experiments taken together suggest that repetition priming reflects the long-term learning that occurs with comprehension and production exposures to words in the context of natural language.

16.
Integrating face and voice in person perception
Integration of information from face and voice plays a central role in our social interactions. It has been mostly studied in the context of audiovisual speech perception: integration of affective or identity information has received comparatively little scientific attention. Here, we review behavioural and neuroimaging studies of face-voice integration in the context of person perception. Clear evidence for interference between facial and vocal information has been observed during affect recognition or identity processing. Integration effects on cerebral activity are apparent both at the level of heteromodal cortical regions of convergence, particularly bilateral posterior superior temporal sulcus (pSTS), and at 'unimodal' levels of sensory processing. Whether the latter reflects feedback mechanisms or direct crosstalk between auditory and visual cortices is as yet unclear.

17.
Contextual similarity between learning and test phase has been shown to be beneficial for memory retrieval. Negative priming is known to be caused by multiple processes, one of which is episodic retrieval. Therefore, the contextual similarity of prime and probe presentations should influence the size of the negative priming effect. This has been shown for the visual modality. In Experiment 1, an auditory four-alternative forced choice reaction time task was used to test the influence of prime-probe contextual similarity on negative priming and the processes underlying the modulation by context. The negative priming effect was larger when the auditory context was repeated than when it was changed from prime to probe. The modulation by context was exclusively caused by an increase in prime response retrieval errors in ignored repetition trials with context repetition, whereas repeating only the context but not the prime distractor did not lead to an increase in prime response retrieval. This exact pattern of results was replicated in Experiment 2. The findings suggest that contextual information is integrated with prime distractor and response information. Retrieval of the previous episode, including prime distractor, prime response, and context (event file), can be triggered when the former prime distractor is repeated, whereas a context cue alone does not retrieve the event file. This suggests an event file structure that is more complicated than its usually assumed binary structure.

18.
Auditory, visual, and cross-modal negative priming was investigated using a task in which participants judged whether stimuli were animals or musical instruments. Negative priming was observed, but only if the attended and the ignored primes evoked different responses. This pattern, negative priming after conflict primes but not after nonconflict primes, was demonstrated with visual stimuli and replicated with auditory stimuli, as well as across modalities, both auditory to visual and visual to auditory. Implications for theories of negative priming are discussed.

19.
On the cross-modal perception of intensity
Are cross-modality matches based on absolute equivalences between the intensities of perceptual experiences in different senses, or are they based on relative positions within the respective sets of stimuli? To help answer this question, we conducted a series of three experiments; in each, the levels of stimulus magnitude in one modality stayed constant while the levels in the other changed from session to session. Results obtained by two methods, magnitude matching and cross-modal difference estimation, agreed in revealing the following: First, the cross-modality matches seem to represent in all cases a compromise between absolute equivalence and relative (contextual) comparison, the compromise being about 50-50 both for auditory loudness versus vibratory loudness and for auditory loudness versus visual brightness, but more nearly, though not wholly, absolute for perceived auditory duration versus visual duration. Second, individual variations abounded, with some subjects evidencing totally absolute matching and others totally relative matching (with little consistency, however, between tasks or between comparisons of different pairs of modalities). Third, the judgments of cross-modal difference were consistent with a model of linear subtraction, and in the case of loudness, the underlying scale was roughly compatible with Stevens's sone scale. Finally, a model designed to describe sequential dependencies in response can account for at least part of the context-induced changes in cross-modal equivalence.
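In schematic form (the notation is illustrative, not the authors'), the compromise finding says the obtained match $M$ behaves like a weighted mixture of the absolute-equivalence match $M_{\text{abs}}$ and the context-relative match $M_{\text{rel}}$,

$$M = w\,M_{\text{abs}} + (1 - w)\,M_{\text{rel}}, \qquad w \approx 0.5 \text{ for the loudness pairings},$$

while the difference estimates are consistent with linear subtraction of scale values, $\hat{D} = \psi_1(s) - \psi_2(t)$.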

20.
When making decisions as to whether or not to bind auditory and visual information, temporal and stimulus factors both contribute to the presumption of multimodal unity. In order to study the interaction between these factors, we conducted an experiment in which auditory and visual stimuli were placed in competitive binding scenarios, whereby an auditory stimulus was assigned to either a primary or a secondary anchor in a visual context (VAV) or a visual stimulus was assigned to either a primary or a secondary anchor in an auditory context (AVA). Temporal factors were manipulated by varying the onset of the to-be-bound stimulus in relation to the two anchors. Stimulus factors were manipulated by varying the magnitudes of the visual (size) and auditory (intensity) signals. The results supported the dominance of temporal factors in auditory contexts, in that effects of time were stronger in AVA than in VAV contexts, and of stimulus factors in visual contexts, in that effects of magnitude were stronger in VAV than in AVA contexts. These findings indicate an overall precedence of temporal factors, with particular reliance on stimulus factors when the to-be-assigned stimulus was temporally ambiguous. Stimulus factors seem to be driven by high-magnitude presentation rather than by cross-modal congruency. The interactions between temporal and stimulus factors, modality weighting, discriminability, and object representation highlight some of the factors that contribute to audio–visual binding.
