Similar Literature
 20 similar records found (search time: 46 ms)
1.
We investigated the effect of unseen hand posture on cross-modal, visuo-tactile links in covert spatial attention. In Experiment 1, a spatially nonpredictive visual cue was presented to the left or right hemifield shortly before a tactile target on either hand. To examine the spatial coordinates of any cross-modal cuing, the unseen hands were either uncrossed or crossed so that the left hand lay to the right and vice versa. Tactile up/down (i.e., index finger/thumb) judgments were better on the same side of external space as the visual cue, for both crossed and uncrossed postures. Thus, which hand was advantaged by a visual cue in a particular hemifield reversed across the different unseen postures. In Experiment 2, nonpredictive tactile cues now preceded visual targets. Up/down judgments for the latter were better on the same side of external space as the tactile cue, again for both postures. These results demonstrate cross-modal links between vision and touch in exogenous covert spatial attention that remap across changes in unseen hand posture, suggesting a modulatory role for proprioception.

2.
Hemispheric predominance has been well documented in the visual perception of alphabetic words. However, the hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance are far from clear. In the divided visual field paradigm, participants were required to judge the orthography, phonology, or semantics of Chinese characters, which were presented randomly in the left or right visual field. The results showed a right visual field/left hemispheric superiority in the phonological judgment task, but no hemispheric advantage in the orthographic or semantic task was found. In addition, reaction times in the right visual field for phonological and semantic tasks were significantly correlated with the reading test score. These results suggest that both hemispheres are involved in the orthographic and semantic processing of Chinese characters, and that left-lateralized phonological processing is important for fluent Chinese reading.

3.
We examined the hypothesis that angular errors in visually directed pointing, in which an unseen target is pointed to after its direction has been seen, can be attributed to the difference between the locations of the visual and kinesthetic egocentres. Experiment 1 showed that in three of four cases, angular errors in visually directed pointing equaled those in kinesthetically directed pointing, in which a visual target was pointed to after its direction had been felt. Experiment 2 confirmed the results of Experiment 1 for targets at two different egocentric distances. Experiment 3 showed that when the kinesthetic egocentre was used as the reference of direction, angular errors in visually directed pointing equaled those in visually directed reaching, in which an unseen target is reached for after its location has been seen. These results suggest that in both visually and kinesthetically directed pointing, egocentric directions represented in visual space are transferred to kinesthetic space and vice versa.

4.
The way in which the semantic information associated with people is organised in the brain is still unclear. Most evidence suggests either bilateral or left hemisphere lateralisation. In this paper we use a lateralised semantic priming paradigm to further examine this neuropsychological organisation. A clear semantic priming effect was found with greater priming occurring when semantically related prime faces were presented to the left visual field than when presented to the right visual field. Possible explanations for this finding are discussed in terms of the bilateral distribution of different classes of semantic information, a possible role of associative processes within semantic priming and interhemispheric transfer.

5.
The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both the co-occurrence/no shared semantics and the no co-occurrence/shared semantics conditions than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.

6.
Developmental coordination disorder (DCD) is a neurodevelopmental condition affecting motor coordination in children and adults. Here, EEG signals elicited by visual and tactile stimuli were recorded while adult participants with and without probable DCD (pDCD) performed a motor task. The task cued reaching movements towards a location in visible peripersonal space as well as an area of unseen personal space. Event-related potentials elicited by visual and tactile stimuli revealed that visual processing was strongly affected by movement preparation in the pDCD group, even more than in controls. However, in contrast to the controls, tactile processing in unseen space was unaffected by movement preparation in the pDCD group. The selective use of sensory information from vision and proprioception is fundamental for the adaptive control of movements, and these findings suggest that this is impaired in DCD. Additionally, the pDCD group showed attenuated motor rhythms (beta: 13–30 Hz) over sensorimotor regions following cues to prepare movements towards unseen personal space. The results reveal that individuals with pDCD exhibit differences in the neural mechanisms of spatial selection and action preparation compared to controls, which may underpin the sustained difficulties they experience. These findings provide new insights into the neural mechanisms potentially disrupted in this highly prevalent disorder.

7.
In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these findings regarding both simple geometrical shapes and more complex, novel shape classes. The latter stimulus set enabled us to control for the physical properties of the shapes, establishing that the effects are solely due to the positions of the particular stimuli in a particular shape space (i.e., more extreme versus more central in shape space) and not to specific shape features. The results indicate that finding an atypical instance of a shape class among more prototypical ones is easier and faster than the other way around. The prototypical status of a shape in our experiment could change very quickly, that is, within minutes, depending on the subset of shapes that was shown to the participants. Manipulating the degree of familiarity toward the shapes by selectively increasing familiarity for the extreme shapes did not influence our results. In general, we show that the prototypical status of a stimulus in visual search is a highly dynamic property, depending on the distribution of stimuli within a shape space but not on familiarity with the prototype.

8.
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.

9.
Abstract

The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the ‘target’ item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

10.
Vector-space word representations obtained from neural network models have been shown to enable semantic operations based on vector arithmetic. In this paper, we explore the existence of similar information in vector representations of images. For that purpose we define a methodology to obtain large, sparse vector representations of image classes, and generate vectors through the state-of-the-art deep learning architecture GoogLeNet for 20K images obtained from ImageNet. We first evaluate the resultant vector-space semantics through its correlation with WordNet distances, and find vector distances to be strongly correlated with linguistic semantics. We then explore the location of images within the vector space, finding elements close in WordNet to be clustered together, regardless of significant visual variances (e.g., 118 dog types). More surprisingly, we find that the space separates complex classes (e.g., living things) without supervision or prior knowledge. Afterwards, we consider vector arithmetic. Although we are unable to obtain meaningful results in this regard, we discuss the various problems we encountered and how we intend to solve them. Finally, we discuss the impact of our research for cognitive systems, focusing on the role of the architecture being used.
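The distance-based evaluation described in this abstract can be illustrated with a minimal cosine-similarity sketch. The vectors below are made-up toy values, not the paper's actual sparse GoogLeNet activations, and the names `dog`, `cat`, and `truck` are illustrative stand-ins for ImageNet class vectors:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two class vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional class vectors (hypothetical; the paper derives
# high-dimensional sparse vectors from GoogLeNet over ImageNet images).
dog = np.array([1.0, 0.9, 0.1, 0.0])
cat = np.array([0.9, 1.0, 0.2, 0.1])    # semantically close to dog
truck = np.array([0.0, 0.1, 1.0, 0.9])  # semantically distant from dog

# The paper's evaluation checks that such vector distances correlate
# with WordNet distances: nearby concepts should have higher similarity.
assert cosine(dog, cat) > cosine(dog, truck)
```

The same `cosine` function is what one would use to probe the word2vec-style arithmetic the abstract mentions (e.g., comparing `a - b + c` against candidate class vectors), which is the step the authors report as not yet yielding meaningful results.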

11.
The double dissociation between noun and verb processing, well documented in the neuropsychological literature, has not been supported in imaging studies. Recent imaging studies, in fact, suggest that once confounding with semantics is eliminated, grammatical class effects only emerge as a consequence of building frames. Here we assess this hypothesis behaviorally in two visual word recognition experiments. In Experiment 1, participants made lexical decisions on verb targets. We manipulated the grammatical class of the prime words (either nouns or verbs, always introduced in a minimal phrasal context, i.e., “the + N” or “to + V”) and their semantic similarity to the target (related vs. unrelated). We found reliable effects of grammatical class, and no interaction with semantic similarity. Experiment 2 further explored this grammatical class effect, using verb targets preceded by semantically unrelated verb vs. noun primes. In one condition, prime words were presented as bare words; in the other, they were presented in the minimal phrasal context used in Experiment 1. Grammatical class effects arose in the latter but not in the former condition, thus providing evidence that word recognition does not recruit grammatical class information unless it is provided to the system.

12.
Much of the reading that we do occurs near our hands. Previous research has revealed that spatial processing is enhanced near the hands, potentially benefiting several processes involved in reading; however, it is unknown whether semantic processing—another critical aspect of reading—is affected near the hands. While holding their hands either near to or far from a visual display, our subjects performed two tasks that drew on semantic processing: evaluation of the sensibleness of sentences, and the Stroop color-word interference task. We found evidence for impoverished semantic processing near the hands in both tasks. These results suggest a trade-off between spatial processing and semantic processing for the visual space around the hands. Readers are encouraged to be aware of this trade-off when choosing how to read a text, since both kinds of processing can be beneficial for reading.

13.
Recent research has shown that reward influences visual perception and cognition in a way that is distinct from the well-documented goal-directed mechanisms. In the current study we explored how task-irrelevant stimulus-reward associations affect processing of stimuli when attention is constrained and reward is no longer delivered. During a training phase, participants performed a choice game, exposing them to different reward schedules for different semantic categories of natural scene pictures. In a separate test session in which no additional reward was provided, the differentially rewarded scene categories were used as task-irrelevant non-targets in a rapid serial visual presentation (RSVP) task. Participants performed a detection task on a previously non-rewarded target category. Results show that participants' sensitivity index (d′) for target detection decreased depending on the reward previously coupled to the distractor category during training: the semantic category of natural scenes associated with high reward caused more interference in target detection than the semantic category associated with low reward. The same was found when new, unseen distractor pictures of the same semantic categories were used. The present findings suggest that reward can be selectively associated with high-level scene semantics. We argue that learned stimulus-reward associations persistently bias perceptual processing independently of spatial shifts in attention and the immediate prospect of reward.

14.
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.

15.
Previous research has suggested that visual short-term memory has a fixed capacity of about four objects. However, we found that capacity varied substantially across the five stimulus classes we examined, ranging from 1.6 for shaded cubes to 4.4 for colors (estimated using a change detection task). We also estimated the information load per item in each class, using visual search rate. The changes we measured in memory capacity across classes were almost exactly mirrored by changes in the opposite direction in visual search rate (r² = .992 between search rate and the reciprocal of memory capacity). The greater the information load of each item in a stimulus class (as indicated by a slower search rate), the fewer items from that class one can hold in memory. Extrapolating this linear relationship reveals that there is also an upper bound on capacity of approximately four or five objects. Thus, both the visual information load and number of objects impose capacity limits on visual short-term memory.
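The reported relation is a linear one between search rate and the reciprocal of memory capacity. A minimal sketch of that analysis, using invented per-class numbers (only the capacity range 1.6–4.4 comes from the abstract; the search rates here are illustrative):

```python
import numpy as np

# Hypothetical per-class estimates. Capacities span the abstract's reported
# range (1.6 for shaded cubes to 4.4 for colors); search rates are made up.
capacity = np.array([1.6, 2.2, 2.9, 3.7, 4.4])          # items held in VSTM
search_rate = np.array([95.0, 70.0, 52.0, 41.0, 35.0])  # ms per item

# The paper correlates search rate with 1/capacity, not capacity itself:
# slower search (higher information load per item) -> fewer items stored.
r = np.corrcoef(search_rate, 1.0 / capacity)[0, 1]
r_squared = r ** 2  # the abstract reports r² = .992 for the real data
```

With data constructed to be near-proportional, as above, `r_squared` comes out close to 1, mirroring the form of the reported result.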

16.
Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity.

17.
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than that of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of the word stimuli to which readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition.

18.
Symes E, Ellis R, Tucker M. Acta Psychologica, 2007, 124(2): 238–255
Five experiments systematically investigated whether orientation is a visual object property that affords action. The primary aim was to establish the existence of a pure physical affordance (PPA) of object orientation, independent of any semantic object-action associations or visually salient areas towards which visual attention might be biased. Taken together, the data from these experiments suggest, first, that PPAs of object orientation do exist and, second, that the behavioural effects that reveal them are larger and more robust when the object appears to be graspable and is oriented in depth (rather than just frontally), such that its leading edge appears to point outwards in space towards a particular hand of the viewer.

19.
Abstract: We examined the effects of stimulus type and semantic categorization of the unexpected stimulus on sustained inattentional blindness (IB). Results showed that observers could establish an attentional set based on a higher level of semantic categorization, which tuned attention to prioritize particular semantic content over others. An unexpected stimulus congruent in semantic category with the attended objects was more likely to be noticed, whereas a semantically incongruent stimulus tended to go unseen. A category-level semantic attentional set thus played a crucial role in breaking through IB. A semantically congruent Chinese-character stimulus was detected and recognized more often than a semantically congruent picture stimulus, indicating that Chinese characters were more powerful than pictures at attracting attention and escaping sustained IB. Presumably, Chinese characters break through IB more easily because they look more distinct from pictures, rather than because they are processed more easily. Further research is needed to compare the efficiency of semantic processing for pictures and Chinese characters in sustained IB.

20.
Advances in Cognitive-Psychology Theories of the Selectivity of Visual Attention   Cited by: 10 (self-citations: 2, citations by others: 8)
The study of attentional selectivity and its function also has great methodological value: it bears on many major psychological theories of attention-dependent perception, decision-making, learning, and memory, as well as on the evaluation of these theories and the understanding of a range of related research methods. This paper briefly reviews current cognitive-psychology research from the two major theoretical schools, “space-based” and “object-based” attention.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号