Similar documents
20 similar documents found (search time: 31 ms)
1.
According to perceptual symbol systems, sensorimotor simulations underlie the representation of concepts. It follows that sensorimotor phenomena should arise in conceptual processing. Previous studies have shown that switching from one modality to another during perceptual processing incurs a processing cost. If perceptual simulation underlies conceptual processing, then verifying the properties of concepts should exhibit a switching cost as well. For example, verifying a property in the auditory modality (e.g., BLENDER-loud) should be slower after verifying a property in a different modality (e.g., CRANBERRIES-tart) than after verifying a property in the same modality (e.g., LEAVES-rustling). Only words were presented to subjects, and there were no instructions to use imagery. Nevertheless, switching modalities incurred a cost, analogous to the cost of switching modalities in perception. A second experiment showed that this effect was not due to associative priming between properties in the same modality. These results support the hypothesis that perceptual simulation underlies conceptual processing.
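The switch-cost logic described in this abstract reduces to a simple reaction-time contrast: mean RT on trials whose property modality differs from the previous trial, minus mean RT on same-modality trials. A minimal sketch follows; the trial records, modality labels, and RT values are invented for illustration and are not data from the study.

```python
# Hypothetical trial records from a property-verification task.
# Each trial stores the modality of the verified property and the RT in ms.
trials = [
    {"modality": "auditory",  "rt": 1180},
    {"modality": "auditory",  "rt": 1105},  # same modality as previous: no switch
    {"modality": "gustatory", "rt": 1150},  # modality switch
    {"modality": "auditory",  "rt": 1290},  # modality switch
    {"modality": "visual",    "rt": 1240},  # modality switch
    {"modality": "visual",    "rt": 1120},  # no switch
]

def switch_cost(trials):
    """Mean RT on switch trials minus mean RT on no-switch trials (ms)."""
    switch, stay = [], []
    for prev, cur in zip(trials, trials[1:]):
        (stay if cur["modality"] == prev["modality"] else switch).append(cur["rt"])
    return sum(switch) / len(switch) - sum(stay) / len(stay)

print(round(switch_cost(trials), 1))  # → 114.2 for this toy data
```

A positive cost, as in the toy data above, is the signature the abstract describes: different-modality trials are verified more slowly than same-modality trials.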

2.
According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.

3.
Theories of grounded cognition propose that modal simulations underlie cognitive representation of concepts [Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577-660; Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645]. Based on recent evidence of modality-specific resources in perception, we hypothesized that verifying a property of a concept is hindered more by a perceptual short-term memory load in the same sensory modality as the property than by a load in a different modality. We manipulated load to the visual and auditory modalities by having participants store one or three items in short-term memory during property verification. In the high (but not low) load condition, property verification took longer when the property (e.g., yellow) involved the same modality as the memory load (e.g., pictures). Interestingly, similar interference effects were obtained on the conceptual verification task and on the memory task. These findings provide direct support for the view that conceptual processing relies on simulation in modality-specific systems.

4.
Previous studies demonstrated that the sequential verification of properties from different sensory modalities for concepts (e.g., BLENDER-loud; BANANA-yellow) brings about a processing cost, known as the modality-switch effect (MSE). We report an experiment designed to assess the influence of the mode of presentation (i.e., visual, aural) of stimuli on the modality-switch effect in a property verification and lexical decision priming paradigm. Participants were required to perform a property verification or a lexical decision task on a target sentence (e.g., “a BEE buzzes”, “a DIAMOND glistens”) presented either visually or aurally, after having been presented with a prime sentence (e.g., “the LIGHT is flickering”, “the SOUND is echoing”) that could share both, one, or none of the target’s mode of presentation and content modality. Results show that the mode of presentation of stimuli affects the conceptual modality-switch effect. Furthermore, the depth of processing required by the task modulates the complex interplay of perceptual and semantic information. We conclude that the MSE is a task-related, multilevel effect which can occur at two different levels of information processing (i.e., perceptual and semantic).

5.
In two experiments, subjects identified temporal patterns. The patterns consisted of eight dichotomous (left-right) elements, e.g. LLRRLRLR, continuously repeated until the subject was able to identify the pattern. In Experiment 1, a pattern was presented either in a single modality (auditory, tactual, or visual) or simultaneously in two modalities (compatible presentation). In Experiment 2, a pattern was presented in one modality while its complement (the complement of LLRRLRLR is RRLLRLRL) was simultaneously presented in a second modality, so that opposite spatial elements appeared in each modality (incompatible presentation).

The results indicated that the rate of pattern identification was the same for compatible and incompatible presentation. Both methods produced better performance than individual-modality presentation at fast presentation rates (2 elements/sec and faster), although individual-modality presentation was better at slower rates. This suggests that when a pattern is presented in two modalities, it is the pattern in each modality that is integrated, not the particular spatial elements in each modality. Furthermore, the rate of pattern identification using individual modalities did not predict the difficulty of using pairs of modalities. These results demonstrate the Gestalt nature of pattern perception; the pattern is perceptually salient, and the performance of pairs of modalities depends on the inherent properties of the individual modalities.

6.
Two experiments evaluated change in the perception of an environmental property (object length) in each of 3 perceptual modalities (vision, audition, and haptics) when perceivers were provided with the opportunity to experience the same environmental property by means of an additional perceptual modality (e.g., haptics followed by vision, vision followed by audition, or audition followed by haptics). Experiment 1 found that (a) posttest improvements in perceptual consistency occurred in all 3 perceptual modalities, regardless of whether practice included experience in an additional perceptual modality, and (b) posttest improvements in perceptual accuracy occurred in haptics and audition, but only when practice included experience in an additional perceptual modality. Experiment 2 found that learning curves in each perceptual modality could be accommodated by a single function in which auditory perceptual learning occurred over short time scales, haptic perceptual learning occurred over middle time scales, and visual perceptual learning occurred over long time scales. Analysis of trial-to-trial variability revealed patterns of long-term correlations in all perceptual modalities, regardless of whether practice included experience in an additional perceptual modality.

7.
Recent models of the conceptual system hold that concepts are grounded in simulations of actual experiences with instances of those concepts in sensory-motor systems (e.g., Barsalou, 1999, 2003; Solomon & Barsalou, 2001). Studies supportive of such a view have shown that verifying a property of a concept in one modality, and then switching to verify a property of a different concept in a different modality, generates temporal processing costs similar to the cost of switching modalities in perception. In addition to non-emotional concepts, the present experiment investigated switching costs in verifying properties of positive and negative (emotional) concepts. Properties of emotional concepts were taken from vision, audition, and the affective system. Parallel to switching costs in neutral concepts, the study showed that for positive and negative concepts, verifying properties from different modalities produced processing costs, such that reaction times were longer and error rates were higher. Importantly, this effect was observed when switching from the affective system to sensory modalities, and vice versa. These results support the embodied cognition view of emotion in humans.

8.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
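The transitional probabilities this abstract refers to are conditional probabilities estimated from bigram counts in a stimulus stream: P(next | current) = count(current, next) / count(current). A minimal sketch, using an invented toy stream rather than the study's materials:

```python
from collections import Counter

def transitional_probs(seq):
    """Estimate P(next | current) from bigram counts in a sequence."""
    pairs = Counter(zip(seq, seq[1:]))   # counts of each adjacent bigram
    firsts = Counter(seq[:-1])           # counts of each first element
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy statistically structured stream built from two "words", AB and CD:
# within-word transitions are perfectly predictable, between-word ones are not.
stream = list("ABCDABABCDCDAB")
tp = transitional_probs(stream)
print(tp[("A", "B")], tp[("C", "D")])  # within-word transitions: 1.0 1.0
```

Learners are said to track exactly this asymmetry: high transitional probability within a unit, lower probability at unit boundaries.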

9.
Theories of embodied cognition hold that the conceptual system uses perceptual simulations for the purposes of representation. A strong prediction is that perceptual phenomena should emerge in conceptual processing, and, in support, previous research has shown that switching modalities from one trial to the next incurs a processing cost during conceptual tasks. However, to date, such research has been limited by its reliance on the retrieval of familiar concepts. We therefore examined concept creation by asking participants to interpret modality-specific compound phrases (i.e., conceptual combinations). Results show that modality switching costs emerge during the creation of new conceptual entities: People are slower to simulate a novel concept (e.g., auditory jingling onion) when their attention has already been engaged by a different modality in simulating a familiar concept (e.g., visual shiny penny). Furthermore, these costs cannot be accounted for by linguistic factors alone. Rather, our findings support the embodied view that concept creation, as well as retrieval, requires situated perceptual simulation.

10.
We present modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision). A comparison with previous norms showed that noun concepts are more multimodal than adjective concepts, as nouns tend to subsume multiple adjectival property concepts (e.g., perceptual experience of the concept baby involves auditory, haptic, olfactory, and visual properties, and hence leads to multimodal perceptual strength). To show the value of these norms, we then used them to test a prediction of the sound symbolism hypothesis: Analysis revealed a systematic relationship between strength of perceptual experience in the referent concept and surface word form, such that distinctive perceptual experience tends to attract distinctive lexical labels. In other words, modality-specific norms of perceptual strength are useful for exploring not just the nature of grounded concepts, but also the nature of form–meaning relationships. These norms will be of benefit to those interested in the representational nature of concepts, the roles of perceptual information in word processing and in grounded cognition more generally, and the relationship between form and meaning in language development and evolution.

11.
Recent work has shown that people routinely use perceptual information during language comprehension and conceptual processing, from single-word recognition to modality-switching costs in property verification. In investigating such links between perceptual and conceptual representations, the use of modality-specific stimuli plays a central role. To aid researchers working in this area, we provide a set of norms for 423 adjectives, each describing an object property, with mean ratings of how strongly that property is experienced through each of five perceptual modalities (visual, haptic, auditory, olfactory, and gustatory). The data set also contains estimates of modality exclusivity—that is, a measure of the extent to which a particular property may be considered unimodal (i.e., perceived through one sense alone). Although a number of sets of word and object norms already exist, we provide the first set to categorize words describing object properties along the dimensions of the five perceptual modalities. We hope that the norms will be of use to researchers working at the interface between linguistic, conceptual, and perceptual systems. The modality exclusivity norms may be downloaded as supplemental materials for this article from brm.psychonomic-journals.org/content/supplemental.
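One common way to quantify the modality exclusivity described in abstracts 10 and 11 is the range of a property's perceptual-strength ratings divided by their sum: 0 means the property is rated equally strong in every modality (fully multimodal), 1 means it loads on a single modality (fully unimodal). The sketch below assumes this range-over-sum formulation; the ratings for "shiny" are invented, not values from the published norms.

```python
def modality_exclusivity(ratings):
    """Range of perceptual-strength ratings divided by their sum.

    0.0 = equal strength in all modalities (fully multimodal);
    1.0 = strength concentrated in one modality (fully unimodal).
    """
    vals = list(ratings.values())
    return (max(vals) - min(vals)) / sum(vals)

# Hypothetical 0-5 perceptual strength ratings across the five modalities.
shiny = {"visual": 4.8, "haptic": 0.6, "auditory": 0.1,
         "olfactory": 0.0, "gustatory": 0.0}
print(round(modality_exclusivity(shiny), 2))  # → 0.87: strongly visual
```

A strongly unimodal adjective such as this hypothetical "shiny" scores near 1, whereas a noun concept experienced through many senses at once would score near 0, which is the nouns-versus-adjectives contrast abstract 10 reports.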

12.
If people represent concepts with perceptual simulations, two predictions follow in the property verification task (e.g., Is face a property of GORILLA?). First, perceptual variables such as property size should predict the performance of neutral subjects, because these variables determine the ease of processing properties in perceptual simulations (i.e., perceptual effort). Second, uninstructed neutral subjects should spontaneously construct simulations to verify properties and therefore perform similarly to imagery subjects asked explicitly to use images (i.e., instructional equivalence). As predicted, neutral subjects exhibited both perceptual effort and instructional equivalence, consistent with the assumption that they construct perceptual simulations spontaneously to verify properties. Notably, however, this pattern occurred only when highly associated false properties prevented the use of a word association strategy. In other conditions that used unassociated false properties, the associative strength between concept and property words became a diagnostic cue for true versus false responses, so that associative strength became a better predictor of verification than simulation. This pattern indicates that conceptual tasks engender mixtures of simulation and word association, and that researchers must deter word association strategies when the goal is to assess conceptual knowledge.

13.
Classical cognitive accounts of verbal short-term memory (STM) invoke an abstract, phonological level of representation which, although it may be derived differently via different modalities, is itself amodal. Key evidence for this view is that serial recall of phonologically similar verbal items (e.g., the letter sounds b, c, g, and d) is worse than that of dissimilar items, regardless of modality of presentation. Here we show that the effect of such phonological similarity in STM can be fully accounted for by the joint action of articulatory similarity, leading to errors in speech planning processes, and acoustic similarity within auditorily presented lists, which modulates their perceptual organization. The results indicate that key evidence used to argue for the existence of abstract phonological representation can in fact be fully accounted for by reference to modality-specific perceptual and motor planning mechanisms.

14.
This paper examined conceptual versus perceptual priming in the identification of incomplete pictures, using a short-term priming paradigm in which information that may be useful in identifying a fragmented target is presented just prior to the target’s presentation. The target was a picture that slowly and continuously became complete, and participants were required to press a key as soon as they knew what it was. Each target was preceded by a visual prime. The nature of this prime varied from very conceptual (e.g., the name of the picture’s category) to very perceptual (e.g., a similar-shaped pictorial prime from a different category). Primes also included those that combined perceptual and conceptual information (e.g., names or images of the target picture). Across three experiments, conceptual primes were effective while the purely perceptual primes were not. Accordingly, we conclude that pictures in this type of task are identified primarily by conceptual processing, with perceptual processing contributing relatively little.

15.
According to theories of grounded cognition, conceptual representation and perception share processing mechanisms. We investigated whether this overlap is due to conscious perceptual imagery. Participants filled out questionnaires to assess the vividness of their imagery (Questionnaire on Mental Imagery) and the extent to which their imagery was object oriented and spatially oriented (Object-Spatial Imagery Questionnaire), and they performed a mental rotation task. One week later, they performed a verbal property verification task. In this task, involvement of modality-specific systems is evidenced by the modality-switch effect, the finding that performance on a target trial (e.g., apple—green) is better after a same-modality trial (e.g., diamond—sparkle) than after a different-modality trial (e.g., airplane—noisy). Results showed a modality-switch effect, but there was no systematic relation between imagery scores and modality switch. We conclude that conscious mental imagery is not fundamental to conceptual representation.

16.
Individuals often describe objects in their world in terms of perceptual dimensions that span a variety of modalities: the visual (e.g., brightness: dark–bright), the auditory (e.g., loudness: quiet–loud), the gustatory (e.g., taste: sour–sweet), the tactile (e.g., hardness: soft–hard), and the kinaesthetic (e.g., speed: slow–fast). We ask whether individuals use perceptual dimensions to differentiate emotions from one another. Participants in two studies (one in which respondents reported on abstract emotion concepts, and a second in which they reported on specific emotion episodes) rated the extent to which features anchoring 29 perceptual dimensions (e.g., temperature, texture, and taste) are associated with 8 emotions (anger, fear, sadness, guilt, contentment, gratitude, pride, and excitement). Results revealed that in both studies perceptual dimensions differentiate positive from negative emotions and high-arousal from low-arousal emotions. They also differentiate among emotions that are similar in arousal and valence (e.g., high-arousal negative emotions such as anger and fear). Specific features that anchor particular perceptual dimensions (e.g., hot vs. cold) are also differentially associated with emotions.

17.
Two experiments are reported that addressed the relative involvement and nature of perceptual and conceptual priming in a semantically complex task. Both experiments investigated facilitation from repeated semantic comparison trials in which subjects decided whether two words had the same meaning (e.g., moist–damp). The first experiment compared the magnitude and persistence of perceptual and conceptual priming components. Perceptual priming effects were modest and, contrary to some previous evidence, did not appear to be more persistent than nonperceptual priming effects. The second experiment investigated the memory processes involved when perceptual priming was eliminated through a modality change between prime and target trials. Evidence suggested that conceptual priming primarily involved memory for the meaning comparison processes, rather than better access to existing memory for the stimulus words.

18.
In two experiments, the effect of object category on event-related potentials (ERPs) was assessed while subjects performed superordinate categorizations with pictures and words referring to objects from natural (e.g., animal) and artifactual (e.g., tool) categories. First, a category probe was shown, presented as a name in Experiment 1 and as a picture in Experiment 2. Thereafter, the target stimulus was displayed. In both experiments, analyses of the ERPs to the targets revealed effects of category at about 160 msec after target onset in the pictorial modality, which can be attributed to category-specific differences in perceptual processing. Later, between about 300 and 500 msec, natural and artifactual categories elicited similar ERP effects across target and category modalities. These findings suggest that perceptual as well as semantic sources contribute to category-specific effects. They support the view that semantic knowledge associated with different categories is represented in multiple subsystems that are similarly accessed by pictures and words.

19.
Blind and blindfolded sighted observers were presented with auditory stimuli specifying target locations. The stimulus was either sound from a loudspeaker or spatial language (e.g., "2 o'clock, 16 ft"). On each trial, an observer attempted to walk to the target location along a direct or indirect path. The ability to mentally keep track of the target location without concurrent perceptual information about it (spatial updating) was assessed in terms of the separation between the stopping points for the 2 paths. Updating performance was very nearly the same for the 2 modalities, indicating that once an internal representation of a location has been determined, subsequent updating performance is nearly independent of the modality used to specify the representation.

20.
Language processing always involves a combination of sensory (auditory or visual) and motor (vocal or manual) modalities. In line with embodied cognition theories, we additionally assume a semantically implied modality (SIM), arising from the modality references of the underlying concept. Understanding ear-related words (e.g. “noise”), for example, should activate the auditory SIM. In the present study, we investigated the influence of the SIM on sensory-motor modality switching (e.g. switching between the auditory-vocal and visual-manual combination). During modality switching, participants categorised words with regard to their SIM (e.g. ear-related versus eye-related words). Overall performance was improved and switch costs were reduced whenever there was concordance between the SIM and the sensory-motor modalities (e.g. an auditory presentation of ear-related words). Thus, the present study provides the first evidence for semantic effects during sensory-motor modality switching, in the form of facilitation effects whenever the SIM was in concordance with the sensory-motor modalities.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号