Similar articles
 20 similar articles found; search took 15 ms
1.
According to perceptual symbol systems, sensorimotor simulations underlie the representation of concepts. It follows that sensorimotor phenomena should arise in conceptual processing. Previous studies have shown that switching from one modality to another during perceptual processing incurs a processing cost. If perceptual simulation underlies conceptual processing, then verifying the properties of concepts should exhibit a switching cost as well. For example, verifying a property in the auditory modality (e.g., BLENDER-loud) should be slower after verifying a property in a different modality (e.g., CRANBERRIES-tart) than after verifying a property in the same modality (e.g., LEAVES-rustling). Only words were presented to subjects, and there were no instructions to use imagery. Nevertheless, switching modalities incurred a cost, analogous to the cost of switching modalities in perception. A second experiment showed that this effect was not due to associative priming between properties in the same modality. These results support the hypothesis that perceptual simulation underlies conceptual processing.
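The switch-cost logic in this paradigm reduces to simple arithmetic over trial-level reaction times: mean RT on trials whose property modality differs from the previous trial's, minus mean RT on modality-repeat trials. A minimal sketch, using hypothetical trial data and field names (not taken from the original study):

```python
# Illustrative sketch (hypothetical data): computing a modality switch cost
# from property-verification reaction times. A trial counts as a "switch"
# trial when its property modality differs from the previous trial's
# modality, and as a "repeat" trial otherwise.
trials = [
    {"modality": "auditory", "rt_ms": 980},    # e.g., BLENDER-loud
    {"modality": "gustatory", "rt_ms": 1010},  # e.g., CRANBERRIES-tart (switch)
    {"modality": "auditory", "rt_ms": 1120},   # switch back to auditory
    {"modality": "auditory", "rt_ms": 940},    # repeat (auditory -> auditory)
]

def switch_cost(trials):
    """Mean RT on modality-switch trials minus mean RT on repeat trials."""
    switch_rts, repeat_rts = [], []
    for prev, curr in zip(trials, trials[1:]):
        bucket = switch_rts if curr["modality"] != prev["modality"] else repeat_rts
        bucket.append(curr["rt_ms"])
    return sum(switch_rts) / len(switch_rts) - sum(repeat_rts) / len(repeat_rts)

print(switch_cost(trials))  # a positive value indicates a switching cost
```

On this toy data the two switch trials average 1065 ms against 940 ms for the one repeat trial, so the function returns a positive cost, mirroring the direction of the effect the abstract reports.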

2.
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category relations (e.g., vehicles that can be seen on the road). In Experiment 1 (visual modality), prominent responses based on conceptually close objects (e.g., objects included in a schema category) were observed. These responses were also favored when within-category objects were perceptually similar. In Experiment 2 (auditory modality), schema category responses depended on age and were influenced by both within- and between-category perceptual similarity relations. Experiment 3 examined whether these results could be explained in terms of sensory modality specializations or rather in terms of information processing constraints (sequential vs. simultaneous processing).

3.
In two experiments the effect of object category on event-related potentials (ERPs) was assessed while subjects performed superordinate categorizations with pictures and words referring to objects from natural (e.g., animal) and artifactual (e.g., tool) categories. First, a category probe was shown, presented as a name in Experiment 1 and as a picture in Experiment 2. Thereafter, the target stimulus was displayed. In both experiments, analyses of the ERPs to the targets revealed effects of category at about 160 msec after target onset in the pictorial modality, which can be attributed to category-specific differences in perceptual processing. Later, between about 300 and 500 msec, natural and artifactual categories elicited similar ERP effects across target and category modalities. These findings suggest that perceptual as well as semantic sources contribute to category-specific effects. They support the view that semantic knowledge associated with different categories is represented in multiple subsystems that are similarly accessed by pictures and words.

4.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

5.
Recent models of the conceptual system hold that concepts are grounded in simulations of actual experiences with instances of those concepts in sensory-motor systems (e.g., Barsalou, 1999, 2003; Solomon & Barsalou, 2001). Studies supportive of such a view have shown that verifying a property of a concept in one modality, and then switching to verify a property of a different concept in a different modality, generates temporal processing costs similar to the cost of switching modalities in perception. In addition to non-emotional concepts, the present experiment investigated switching costs in verifying properties of positive and negative (emotional) concepts. Properties of emotional concepts were taken from vision, audition, and the affective system. Parallel to switching costs in neutral concepts, the study showed that for positive and negative concepts, verifying properties from different modalities produced processing costs such that reaction times were longer and error rates were higher. Importantly, this effect was observed when switching from the affective system to sensory modalities, and vice versa. These results support the embodied cognition view of emotion in humans.

6.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a "what" task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a "where" task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a "where" interference task was embedded in a "what" primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

7.
Theories of embodied cognition hold that the conceptual system uses perceptual simulations for the purposes of representation. A strong prediction is that perceptual phenomena should emerge in conceptual processing, and, in support, previous research has shown that switching modalities from one trial to the next incurs a processing cost during conceptual tasks. However, to date, such research has been limited by its reliance on the retrieval of familiar concepts. We therefore examined concept creation by asking participants to interpret modality-specific compound phrases (i.e., conceptual combinations). Results show that modality switching costs emerge during the creation of new conceptual entities: People are slower to simulate a novel concept (e.g., auditory jingling onion) when their attention has already been engaged by a different modality in simulating a familiar concept (e.g., visual shiny penny). Furthermore, these costs cannot be accounted for by linguistic factors alone. Rather, our findings support the embodied view that concept creation, as well as retrieval, requires situated perceptual simulation.

8.
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between neural circuitry mediating the different tasks; more effective task preparation (and by extension smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched versus when it repeated. This was the case irrespective of whether the pre-trial cue informed participants only of the upcoming task, but not sensory modality (Experiment 1), or whether the pre-trial cue was informative about both the upcoming task and sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials and not when it switched. The collective evidence supports the independence of control processes mediating task switching and modality switching and also the hypothesis that switch costs reflect competitive interference between neural circuits.

9.
Language processing requires the combination of compatible (auditory-vocal and visual-manual) or incompatible (auditory-manual and visual-vocal) sensory-motor modalities, and switching between these sensory-motor modality combinations is very common in everyday life. Sensory-motor modality compatibility is defined as the similarity of stimulus modality and the modality of response-related sensory consequences. We investigated the influence of sensory-motor modality compatibility during performance of language-related cognitive operations on different linguistic levels. More specifically, we used a variant of the task-switching paradigm, in which participants had to switch between compatible or between incompatible sensory-motor modality combinations during a verbal semantic categorization (Experiment 1) or during a word-form decision (Experiment 2). The data show higher switch costs (i.e., higher reaction times and error rates in switch trials compared to repetition trials) in incompatible sensory-motor modality combinations than in compatible sensory-motor modality combinations. This was true for every language-related cognitive operation, regardless of the individual linguistic level. Taken together, the present study demonstrates that sensory-motor modality compatibility plays an important role in modality switching during language processing.

10.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

11.
Parr LA. Animal Cognition, 2004, 7(3): 171-178
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

12.
Manipulating inattentional blindness within and across sensory modalities
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (i.e., Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing for unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attention resources are to a certain extent shared across sensory modalities.

13.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly affected visual dominance: in Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, the visual dominance effect was further reduced but still present. These results support the biased competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and hold a processing advantage during multisensory integration.

14.
Two experiments investigated the effect of test modality (visual or auditory) on source memory and event-related potentials (ERPs). Test modality influenced source monitoring such that source memory was better when the source and test modalities were congruent. Test modality had less of an influence when alternative information (i.e., cognitive operations) could be used to inform source judgments in Experiment 2. Test modality also affected ERP activity. Variation in parietal ERPs suggested that this activity reflects activation of sensory information, which can be attenuated when the sensory information is misleading. Changes in frontal ERPs support the hypothesis that frontal systems are used to evaluate source-specifying information present in the memory trace.

15.
We explored the functional organization of semantic memory for music by comparing priming across familiar songs both within modalities (Experiment 1, tune to tune; Experiment 3, category label to lyrics) and across modalities (Experiment 2, category label to tune; Experiment 4, tune to lyrics). Participants judged whether or not the target tune or lyrics were real (akin to lexical decision tasks). We found significant priming, analogous to linguistic associative-priming effects, in reaction times for related primes as compared to unrelated primes, but primarily for within-modality comparisons. Reaction times to tunes (e.g., “Silent Night”) were faster following related tunes (“Deck the Hall”) than following unrelated tunes (“God Bless America”). However, a category label (e.g., Christmas) did not prime tunes from within that category. Lyrics were primed by a related category label, but not by a related tune. These results support the conceptual organization of music in semantic memory, but with potentially weaker associations across modalities.

16.
Theories of grounded cognition propose that modal simulations underlie cognitive representation of concepts [Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577-660; Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645]. Based on recent evidence of modality-specific resources in perception, we hypothesized that verifying properties of concepts encoded in different modalities is hindered more by a perceptual short-term memory load in the same versus a different sensory modality as that used to process the property. We manipulated load to visual and auditory modalities by having participants store one or three items in short-term memory during property verification. In the high (but not low) load condition, property verification took longer when the property (e.g., yellow) involved the same modality as that used by the memory load (e.g., pictures). Interestingly, similar interference effects were obtained on the conceptual verification and on the memory task. These findings provide direct support for the view that conceptual processing relies on simulation in modality-specific systems.

17.
Sensory systems are essential for perceiving and conceptualizing our semantic knowledge about the world and the way we interact with it. Despite studies reporting neural changes that compensate for the absence of a given sensory modality, studies assessing semantic processing reveal poorer performance by deaf individuals when compared with hearing individuals. However, the majority of those studies were not performed in the linguistic modality considered most adequate to their sensory capabilities (i.e., sign language). Therefore, this exploratory study was developed to focus on linguistic modality effects during semantic retrieval in deaf individuals in comparison with their hearing peers through a category fluency task. Results show a difference in deaf individuals' performance between the two linguistic modalities, as well as in the type of linguistic clusters most chosen by participants, suggesting a complex clustering tendency by deaf individuals.

18.
The brain can process and integrate information from different sensory modalities. Compared with a single sensory modality, individuals respond faster to target signals presented simultaneously in different sensory modalities. A leading theoretical account of this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions, such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortex. The integrated signal is stronger and can trigger a response more quickly, but at which stage of cognitive processing this integration occurs has not yet been clearly established. When individuals process task switches that occur between different sensory modalities, the cost of a modality-related task switch is smaller than the sum of the crossmodal switch cost and the task switch cost, providing evidence that modality-related switch costs stem from inertia of, and interference with, the task set. When switching occurs between unimodal and multimodal conditions, the crossmodal switch cost shrinks or even disappears, because concurrent multisensory integration offsets part of the loss; this phenomenon supports the coactivation model. However, how multisensory signal integration affects the neural processing of task switching remains unclear; future research could combine multisensory integration paradigms with the classic task-switching paradigm to determine the processing mechanism of crossmodal switching and the stage at which multisensory signal integration occurs.

19.
Semantic facilitation with pictures and words
The present experiments explored the role of processing level and strategic factors in cross-form (word-picture and picture-word) and within-form (picture-picture and word-word) semantic facilitation. Previous studies have produced mixed results. The findings presented in this article indicate that semantic facilitation depends on the task and on the subjects' strategies. When the task required semantic processing of both picture and word targets (e.g., category verification), equivalent facilitation was obtained across all modality combinations. When the task required name processing (e.g., name verification, naming), facilitation was obtained for the picture targets. In contrast, with word targets, facilitation was obtained only when the situation emphasized semantic processing. The results are consistent with models that propose a common semantic representation for both pictures and words but that also include assumptions regarding differential order of access to semantic and phonemic features for these stimulus modalities.

20.
In the present experiments, participants had to verify properties of concepts but, depending on the trial condition, concept-property pairs were presented via headphones or on the screen. The results showed that participants took longer and were less accurate at verifying conceptual properties when the channel used to present the CONCEPT-property pair and the type of property matched in sensory modality (e.g., LEMON-yellow on screen; BLENDER-loud in headphones) compared to when properties and channel did not match (e.g., LEMON-yellow in headphones; BLENDER-loud on screen). Such interference is consistent with theories of embodied cognition holding that knowledge is grounded in modality-specific systems (Barsalou in Behav Brain Sci 22:577–660, 1999). When the resources of one modality are burdened during the task, processing costs are incurred in a conceptual task (Vermeulen et al. in Cognition 109:287–294, 2008).
