Similar Articles
20 similar articles found (search time: 0 ms)
1.
The idea that faces are represented within a structured face space (Valentine, Quarterly Journal of Experimental Psychology, 43, 161–204, 1991) has gained considerable experimental support, from both physiological and perceptual studies. Recent work has also shown that faces can even be recognized haptically—that is, from touch alone. Although some evidence favors congruent processing strategies in the visual and haptic processing of faces, the question of how similar the two modalities are in terms of face processing remains open. Here, this question was addressed by asking whether there is evidence for a haptic face space, and if so, how it compares to visual face space. For this, a physical face space was created, consisting of six laser-scanned individual faces, their morphed average, 50%-morphs between two individual faces, as well as 50%-morphs of the individual faces with the average, resulting in a set of 19 faces. Participants then rated either the visual or haptic pairwise similarity of the tangible 3-D face shapes. Multidimensional scaling analyses showed that both modalities extracted perceptual spaces that conformed to critical predictions of the face space framework, hence providing support for similar processing of complex face shapes in haptics and vision. Despite the overall similarities, however, systematic differences also emerged between the visual and haptic data. These differences are discussed in the context of face processing and complex-shape processing in vision and haptics.
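For readers unfamiliar with the analysis, here is a minimal sketch of how multidimensional scaling recovers a low-dimensional perceptual space from pairwise similarity ratings. The face labels and the ratings matrix below are hypothetical placeholders, not the study's data.

```python
# Minimal MDS sketch: recover a 2-D perceptual "face space" from
# pairwise similarity ratings (hypothetical data, not the study's).
import numpy as np
from sklearn.manifold import MDS

labels = ["face_A", "face_B", "face_C", "average"]  # hypothetical stimuli
# Symmetric similarity ratings on a 1-7 scale (diagonal = max similarity).
similarity = np.array([
    [7.0, 4.2, 3.1, 5.0],
    [4.2, 7.0, 3.8, 5.3],
    [3.1, 3.8, 7.0, 4.9],
    [5.0, 5.3, 4.9, 7.0],
])
dissimilarity = similarity.max() - similarity  # convert to distances

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one 2-D point per face
for name, (x, y) in zip(labels, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```

If the face-space account holds, the morphed average should land near the centroid of the individual faces in the recovered configuration, which is one of the "critical predictions" the abstract refers to.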

2.
It is known that stimuli near the hands receive preferential processing. In the present study, we explored changes in early vision near the hands. Participants were more sensitive to low-spatial-frequency information and less sensitive to high-spatial-frequency information for stimuli presented close to the hands. This pattern suggests enhanced processing in the magnocellular visual pathway for such stimuli, and impaired processing in the parvocellular pathway. Consistent with that possibility, we found that the effects of hand proximity in several tasks were eliminated by illumination with red diffuse light—a manipulation known to impair magnocellular processing. These results help clarify how the hands affect vision.

3.
Stimulus orientation discrimination was investigated in visual and haptic modalities under conditions of simultaneous matching and memory. Discrimination of vertical and horizontal was significantly more accurate than discrimination of oblique stimulus orientations (45°, 135°, 225°, and 315°) for both modalities; haptic errors, however, were significantly greater at each orientation. While subjects were reliably more accurate in visually matching oblique stimulus orientations to a standard than producing them from memory, for the haptic modality, differences between memory and matching conditions were less consistent across the orientations sampled.

4.
5.
Hand position in the visual field influences performance in several visual tasks. Recent theoretical accounts have proposed that hand position either (a) influences the allocation of spatial attention, or (b) biases processing toward the magnocellular visual pathway. Comparing these accounts is difficult because some studies manipulate the distance of one hand in the visual field while others vary the distance of both hands, and it is unclear whether single- and dual-hand manipulations have the same impact on perception. We asked whether hand position affects the spatial distribution of attention, with a broader distribution of attention when both hands are near a visual display and a narrower distribution when one hand is near a display. We examined the effects of four hand positions near the screen (left hand, right hand, both hands, no hands) on both temporal and spatial discrimination tasks. Placing both hands near the display, compared with placing both hands distant, improved sensitivity in the temporal task and reduced sensitivity in the spatial task, replicating previous results. However, the single-hand manipulations showed the opposite pattern of results. Together these results suggest that visual attention is focused on the graspable space for a single hand, and expanded when two hands frame an area of the visual field.

6.
Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching, most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study, allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames.

7.
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.

8.
Three experiments establish the size-weight illusion as a primarily haptic phenomenon, despite its having been more traditionally considered an example of vision influencing haptic processing. Experiment 1 documents, across a broad range of stimulus weights and volumes, the existence of a purely haptic size-weight illusion, equal in strength to the traditional illusion. Experiment 2 demonstrates that haptic volume cues are both sufficient and necessary for a full-strength illusion. In contrast, visual volume cues are merely sufficient, and produce a relatively weaker effect. Experiment 3 establishes that congenitally blind subjects experience an effect as powerful as that of blindfolded sighted observers, thus demonstrating that visual imagery is also unnecessary for a robust size-weight illusion. The results are discussed in terms of their implications for both sensory and cognitive theories of the size-weight illusion. Applications of this work to human factors design and to sensor-based systems for robotic manipulation are also briefly considered.

9.
10.
We examined how closely the underlying cognitive processing in a visual search task guides eye movements by comparing two different search tasks. In the extended search task, participants searched for an O in eight clusters of Landolt Cs with varying gap widths (four characters per cluster, arranged to look like words in text). In the single-cluster task, participants searched a single cluster (identical to the ones in the extended search). The key manipulation was gap size; although gap orientation for the distractors varied within a cluster, gap size was constant within a cluster but differed from cluster to cluster. The principal findings were that (1) gaze durations in the extended search were almost completely a function of the difficulty of the cluster (i.e., the gap size of the Cs) and (2) the effect of gap size on gaze durations in the extended search was very similar to its effect on response times in the single-cluster search. Thus, it appears that eye movements in the search task are determined almost exclusively by the ongoing cognitive processing on that cluster.

11.
12.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

13.
Visual information provided by a talker’s mouth movements can influence the perception of certain speech features. Thus, the “McGurk effect” shows that when the syllable /bi/ is presented auditorily in synchrony with a visual presentation of the syllable /gi/, a person perceives the talker as saying /di/. Moreover, studies have shown that interactions occur between place and voicing features in phonetic perception when information is presented audibly. In our first experiment, we asked whether feature interactions occur when place information is specified by a combination of auditory and visual information. Members of an auditory continuum ranging from /ibi/ to /ipi/ were paired with a video display of a talker saying /igi/. The auditory tokens were heard as ranging from /ibi/ to /ipi/, but the auditory-visual tokens were perceived as ranging from /idi/ to /iti/. The results demonstrated that the voicing boundary for the auditory-visual tokens was located at a significantly longer VOT value than the voicing boundary for the auditory continuum presented without the visual information. These results demonstrate that place-voice interactions are not limited to situations in which place information is specified audibly. In three follow-up experiments, we show that (1) the voicing boundary is not shifted in the absence of a change in the global percept, even when discrepant auditory-visual information is presented; (2) the number of response alternatives provided to the subjects does not affect the categorization or the VOT boundary of the auditory-visual stimuli; and (3) the original VOT boundary shift is not replicated when subjects are forced by instruction to “relabel” the /b/–/p/ auditory stimuli as /d/ or /t/. The subjects successfully relabeled the stimuli, but no shift in the VOT boundary was observed.
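The "voicing boundary" here is the voice onset time (VOT) at which listeners are equally likely to report a voiced or a voiceless consonant. A hedged sketch of one common way to estimate such a boundary, fitting a logistic psychometric function to categorization proportions; the VOT values and response rates below are hypothetical:

```python
# Estimate a voicing boundary by fitting a logistic psychometric
# function to categorization data (all values are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

vot_ms = np.array([0, 10, 20, 30, 40, 50, 60])           # VOT continuum
p_voiceless = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

def logistic(x, boundary, slope):
    # Probability of a "voiceless" response as a function of VOT.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[30.0, 0.2])
print(f"Voicing boundary ≈ {boundary:.1f} ms VOT")  # the 50% crossover point
```

A boundary shift of the kind the abstract describes would appear as a change in the fitted `boundary` parameter between the auditory-only and auditory-visual conditions.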

14.
Modeling the role of parallel processing in visual search (total citations: 6; self-citations: 0; citations by others: 6)
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
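A toy sketch of the core Guided Search idea may help: a parallel stage assigns each display element a noisy activation based on how many target features it shares, and a serial stage then inspects elements in decreasing order of activation. The activation rule and all numbers below are illustrative assumptions, not the published simulation.

```python
# Toy Guided Search sketch: a noisy parallel activation map determines
# the order in which a serial stage inspects items. Illustrative only.
import random

def guided_search(items, target, noise=0.5, rng=random):
    """Return how many serial inspections were needed to find the target.

    items  -- list of feature tuples, e.g. ("red", "vertical")
    target -- the feature tuple to find
    """
    # Parallel stage: activation = shared target features + Gaussian noise.
    activations = [
        sum(f == tf for f, tf in zip(item, target)) + rng.gauss(0, noise)
        for item in items
    ]
    # Serial stage: inspect items from highest to lowest activation.
    order = sorted(range(len(items)), key=lambda i: -activations[i])
    for n_inspected, i in enumerate(order, start=1):
        if items[i] == target:
            return n_inspected
    return len(items)

# Conjunction search: a red vertical target among red horizontal
# and green vertical distractors.
display = [("red", "horizontal")] * 10 + [("green", "vertical")] * 10
display.append(("red", "vertical"))
random.shuffle(display)
print(guided_search(display, ("red", "vertical")))
```

Because the target shares more features with the guidance signal than any distractor does, it is usually inspected early, which is the sense in which the parallel stage "guides" the serial stage and makes some conjunction searches efficient.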

15.
16.
The relationship between saccadic eye movements and covert orienting of visual spatial attention was investigated in two experiments. In the first experiment, subjects were required to make a saccade to a specified location while also detecting a visual target presented just prior to the eye movement. Detection accuracy was highest when the location of the target coincided with the location of the saccade, suggesting that subjects use spatial attention in the programming and/or execution of saccadic eye movements. In the second experiment, subjects were explicitly directed to attend to a particular location and to make a saccade to the same location or to a different one. Superior target detection occurred at the saccade location regardless of attention instructions. This finding shows that subjects cannot move their eyes to one location and attend to a different one. The results of these experiments suggest that visuospatial attention is an important mechanism in generating voluntary saccadic eye movements.

17.
In order to assess the mode and sequence of interaction of visual-geometric illusion mechanisms, responses to simple and composite overestimation configurations were measured in 72 observers. Analysis suggests that some illusion mechanisms add their distortions to the outputs of others and that other mechanisms average. A logarithmic or near-logarithmic transformation seems to occur when illusory effects combine. A path analysis of the results suggests that some illusory mechanisms combine in a serial manner and that others operate on separate parallel channels. Notions based on simple addition of illusory effects and on serial linear processing are not supported by these analyses.

18.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½–5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

19.
Social events can be described from the perspective of either a person in the situation in which the event occurs (e.g., “John came into…”) or that of an outside observer (“John went into…”). We find that when individuals are disposed to form visual images, they have difficulty comprehending both verbal statements and pictures when the perspective from which the event is described differs from the perspective from which they have encountered similar events in daily life. Furthermore, the disposition to form visual images increases the intensity of emotional reactions to an event when the event is described from the perspective of someone in the situation in which it occurs. These effects are not evident, however, among individuals who typically process information semantically without forming visual images.

20.
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 813–839]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world, and they provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by graded semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on evidence that may help adjudicate between different theoretical accounts of psychological semantics.
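A hedged sketch of the kind of corpus-based measure at issue: words are represented as high-dimensional vectors, and the cosine of the angle between two vectors serves as a graded similarity score that can be regressed against fixation proportions. The toy vectors below are made up; real models derive them from co-occurrence statistics in large corpora.

```python
# Graded semantic similarity in a vector space (toy vectors, not a
# trained model): cosine similarity yields a continuous predictor of
# fixation likelihood, unlike an all-or-none category match.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical co-occurrence-style vectors for a heard word and two
# candidate visual referents.
vectors = {
    "piano":   np.array([0.9, 0.1, 0.4, 0.0]),
    "trumpet": np.array([0.8, 0.2, 0.5, 0.1]),  # related referent
    "carrot":  np.array([0.1, 0.9, 0.0, 0.6]),  # unrelated referent
}
heard = vectors["piano"]
for word in ("trumpet", "carrot"):
    print(word, round(cosine(heard, vectors[word]), 2))
```

On the account the abstract favors, fixation probability should track this continuous score rather than jumping between "same category" and "different category".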
