Similar Articles
 20 similar articles found (search time: 15 ms)
1.
One possible way of haptically perceiving length is to trace a path with one’s index finger and estimate the distance traversed. Here, we present an experiment in which observers judge the lengths of paths across cylindrically curved surfaces. We found that convex and concave surfaces had qualitatively different effects: convex lengths were overestimated, whereas concave lengths were underestimated. In addition, we observed that the index finger moved more slowly across the convex surface than across the concave one. As a result, movement times for convex lengths were longer. The considerable correlation between movement times and length estimates suggests that observers take the duration of movement as their primary measure of perceived length, but disregard movement speeds. Several mechanisms that could underlie observers’ failure to account for speed differences are considered.
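As a rough illustration of the relationship the abstract describes, the sketch below computes a Pearson correlation between per-trial movement times and length estimates. It is not the authors' analysis code, and the data are invented for illustration only.

```python
# Hypothetical sketch: correlating per-trial movement time with estimated
# path length, as the abstract's reasoning implies. Data are invented.
import numpy as np

movement_time_s = np.array([1.1, 1.4, 1.2, 1.8, 1.5, 1.3])           # example times (s)
estimated_length_cm = np.array([9.8, 11.9, 10.4, 13.2, 12.1, 10.9])  # example estimates (cm)

r = np.corrcoef(movement_time_s, estimated_length_cm)[0, 1]
print(f"Pearson r between movement time and length estimate: {r:.2f}")
```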

2.
In two experiments we studied how motor responses affect stimulus encoding when stimuli and responses are functionally unrelated and merely overlap in time. Such R-S effects across S-R assignments have been reported by Schubö, Aschersleben, and Prinz (2001), who found that stimulus encoding was affected by concurrent response execution in the sense of a contrast (i.e., emphasizing differences). The present study aimed at elucidating the mechanisms underlying this effect. Experiment 1 studied the time course of the R-S effect. Contrast was only obtained for short intertrial intervals (ITIs). With long ITIs contrast turned into assimilation (i.e., emphasizing similarities). Experiment 2 excluded an interpretation of the assimilation effect in terms of motor repetition. Our findings support the notion of a shared representational domain for perception and action control, and suggest that contrast between stimulus and response codes emerges when two S-R assignments compete with each other in perception. When perceptual competition is over, assimilation emerges in memory.

3.
In a two-dimensional display, identical visual targets moving toward and across each other with equal, constant speed can be perceived either to reverse their motion directions at the coincidence point (bouncing percept) or to stream through one another (streaming percept). Although there is a strong tendency to perceive streaming, various factors have been reported to induce the bouncing percept, such as a sound or a visual flash at the moment of the visual target coincidence. By changing the duration of the postcoincidence trajectory (PCT), we investigated how long it would take for such bounce-inducing factors to be maximally effective after the visual coincidence. With bounce-inducing factors, the percentage of bouncing percepts did not reach its maximal level immediately after the coincidence but increased as a function of PCT duration up to 150-200 msec. The results clearly rule out a cognitive-bias account of the bounce-inducing effect and suggest instead that the bounce-inducing factors must interact with the PCT for some period after the coincidence to be maximally effective.

4.
Emotional facial expressions are powerful social cues. Here we investigated how emotional expression affects the interpretation of eye gaze direction. Fifty-two observers judged where faces were looking by moving a slider on a measuring bar to the respective position. The faces displayed an angry, happy, fearful, or neutral expression and either looked straight at the observer or were rotated 2°, 4°, 6°, or 8° to the left or right. We found that happy faces were interpreted as directed closer to the observer, while fearful and angry faces were interpreted as directed further away. Judgments were most accurate for neutral faces, followed by happy, angry, and fearful faces. These findings are discussed against the background of the “self-referential positivity bias”, which suggests that happy faces are preferentially interpreted as directed towards the self while negative emotions are interpreted as directed further away.

5.
Psychonomic Bulletin & Review - An accurate perception of the space surrounding us is central for effective and safe everyday functioning. Understanding the factors influencing spatial...

6.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.

7.
8.
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making.

9.
The ability to make accurate audiovisual synchrony judgments is affected by the "complexity" of the stimuli: We are much better at making judgments when matching single beeps or flashes as opposed to video recordings of speech or music. In the present study, we investigated whether the predictability of sequences affects whether participants report that auditory and visual sequences appear to be temporally coincident. When we reduced their ability to predict both the next pitch in the sequence and the temporal pattern, we found that participants were increasingly likely to report that the audiovisual sequences were synchronous. However, when we manipulated pitch and temporal predictability independently, the same effect did not occur. By altering the temporal density (items per second) of the sequences, we further determined that the predictability effect occurred only in temporally dense sequences: If the sequences were slow, participants' responses did not change as a function of predictability. We propose that reduced predictability affects synchrony judgments by reducing the effective pitch and temporal acuity in perception of the sequences.

10.
In the present study, we investigated whether reading an action word can influence the subsequent visual perception of biological motion. The participants' task was to judge whether a human action, conveyed by the biological motion of a point-light display embedded in a high-density mask, was present in the visual sequence, which lasted 633 ms on average. Prior to the judgement task, participants were exposed for 500 ms to an abstract verb or to an action verb that was either congruently or incongruently related semantically to the human action. Data analysis showed that judgement accuracy was not affected by the action verbs, whereas a facilitation effect on response time (49 ms on average) was observed when a congruent action verb primed the judgement of biological movements. In line with the existing literature, this finding suggests that the perception, planning, and linguistic coding of motor actions are subserved by common motor representations.

11.
Children’s early word production is influenced by the statistical frequency of speech sounds and combinations. Three experiments asked whether this production effect can be explained by a perceptual learning mechanism that is sensitive to word-token frequency and/or variability. Four-year-olds were exposed to nonwords that were either frequent (presented 10 times) or infrequent (presented once). When the frequent nonwords were spoken by the same talker, children showed no significant effect of perceptual frequency on production. When the frequent nonwords were spoken by different talkers, children produced them with fewer errors and shorter latencies. The results implicate token variability in perceptual learning.

12.
It is commonly assumed that perceived distance in full-cue, ecologically valid environments is redundantly specified and approximately veridical. However, recent research has called this assumption into question by demonstrating that distance perception varies in different types of environments even under full-cue viewing conditions. We report five experiments that demonstrate an effect of environmental context on perceived distance. We measured perceived distance in two types of environments (indoors and outdoors) with two types of measures (perceptual matching and blindwalking). We found effects of environmental context for both egocentric and exocentric distances. Across conditions, within individual experiments, all viewer-to-target depth-related variables were kept constant. The differences in perceived distance must therefore be explained by variations in the space beyond the target.

13.
This study investigates the relative contributions of different body parts to a whole-body egocentric attraction phenomenon previously observed during earth-based judgments. This was addressed through an earth-based task in which participants, imagining a forward horizontal displacement, estimated whether they could pass under a projected line. Different postural configurations were tested, involving whole-body tilt, trunk tilt alone, or head tilt alone, and two positions of the legs relative to the trunk were manipulated. Results showed systematic deviations of the subjective “passability” toward the tilt, linearly related to the tilt magnitude. For each postural configuration, the egocentric influence was highly dependent on the positions of the trunk and head axes, whereas the position of the legs appeared irrelevant. Compared with the whole-body tilt condition, tilting the trunk alone consistently reduced the deviation toward the tilt, whereas tilting the head alone consistently increased it. Our results suggest that several specific effects from multiple body parts can account for the global deviation of the estimates observed during whole-body tilt. Most importantly, we propose that the relative contribution of the body segments depends mainly on a reweighting process, probably based on the reliability of the sensory information available for a particular postural set.
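The reweighting process mentioned in this abstract is reminiscent of standard reliability-based cue combination, in which each signal is weighted by the inverse of its variance. The sketch below is a hypothetical illustration of that general scheme, not an implementation of the study's analysis; the segment names and all numbers are assumptions.

```python
# Hypothetical illustration of reliability-based reweighting (not from the paper):
# each body segment's tilt signal is weighted by the inverse of its variance.
import numpy as np

def reliability_weighted_estimate(signals_deg, variances):
    """Combine segment tilt signals, weighting each by 1 / variance."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, signals_deg))

# Made-up example: trunk and head tilt signals (deg) with assumed variances.
combined = reliability_weighted_estimate(signals_deg=[20.0, 30.0], variances=[4.0, 9.0])
print(f"Reliability-weighted tilt estimate: {combined:.1f} deg")
```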

14.
J. R. Lackner & P. DiZio (1988). Perception, 17(1), 71-80.
When a limb is used for locomotion, patterns of afferent and efferent activity related to its own motion are present along with visual, vestibular, and other proprioceptive information about motion of the whole body. A study is reported that asked whether visual stimulation present during whole-body motion can influence the perception of the leg movements propelling the body. Subjects were tested in conditions in which the stepping movements they made were identical but the amount of body displacement relative to inertial space and to the visual surround varied. These test conditions were created by having the subjects walk on a rotatable platform centered inside a large, independently rotatable optokinetic drum. In each test condition, subjects, without looking at their legs, compared their speed of body motion, their stride length and stepping rate, the direction of their steps, and the perceived force they exerted during stepping against a standard condition in which the floor and drum were both stationary. When visual surround motion was incompatible with the motion normally associated with the stepping movements being made, changes in apparent body motion and in the awareness of the frequency, extent, and direction of the voluntary stepping movements resulted.

15.
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
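For readers unfamiliar with the measure, perceptual sensitivity in a signal detection analysis is typically summarized as d' = z(hit rate) - z(false-alarm rate). The sketch below is a minimal, hypothetical illustration of that computation; the trial counts are invented and are not the study's data.

```python
# Minimal sketch of a d' (sensitivity) computation of the kind a signal
# detection analysis uses; the trial counts below are invented.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: verb direction congruent vs. incongruent with the motion signal.
print(f"congruent d'  : {d_prime(40, 10, 12, 38):.2f}")
print(f"incongruent d': {d_prime(32, 18, 15, 35):.2f}")  # lower d' = impaired sensitivity
```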

16.
We introduce a novel physiologically based methodology for consumer research: using the glycoprotein miraculin to manipulate the ability to sense and perceive specific taste elements in gustatory experiences. We apply this approach to exploring how information extrinsic to a product's inherent sensory facets (e.g., product reviews) influences reported consumption experiences and experienced utility. Results from two experiments suggest that extrinsic information distorts the basic sensory and perceptual character of consumption experiences, rather than simply biasing self-reports of the experiences or serving as an independent input to overall taste and utility evaluations. Such evaluations are distorted in the direction of extrinsic product information only when the ability to actually perceive the experience as consistent with the extrinsic signal is not disrupted by miraculin. Conversely, disruption by miraculin of the ability to perceive an experience as consistent with an extrinsic signal ablates or reverses such effects. Implications, applications to brands and branding, and other possible research directions for the miraculin taste-manipulation methodology are also discussed.

17.
Knowing a word affects the fundamental perception of the sounds within it
Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.

18.
19.
We quantitatively investigated the halt and recovery of illusory motion perception in static images. With steady fixation, participants viewed images causing four different motion illusions. The results showed that the time courses of the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion (i.e., "Rotating Snakes") were very similar, while the Ouchi and Enigma illusions showed quite a different trend. When participants viewed images causing the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion, they typically experienced disappearance of the illusory motion within several seconds. After a variable interstimulus interval (ISI), the images were presented again in the same retinal position. The magnitude of the illusory motion from the second image presentation increased as the ISI became longer. This suggests that the same adaptation process either directly causes or attenuates both the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion.

20.
Research has shown that auditory speech recognition is influenced by the appearance of a talker's face, but the actual nature of this visual information has yet to be established. Here, we report three experiments that investigated visual and audiovisual speech recognition using color, gray-scale, and point-light talking faces (which allowed comparison with the influence of isolated kinematic information). Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent, and incongruent audiovisual speech stimuli. Visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for color and gray-scale faces and were much greater than for point-light faces. These results indicate that luminance, rather than color, underlies visual and audiovisual speech perception and that this information is more than the kinematic information provided by point-light faces. Implications for processing visual and audiovisual speech are discussed.
