Similar Articles
20 similar articles found (search time: 15 ms)
1.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system is also expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
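To make the idea concrete, here is a minimal numeric sketch of a metric shape space with a learned category boundary; the 1-D morph parameter, the boundary location, and the noise levels are hypothetical illustrations, not the authors' stimuli or analysis. The point it shows: if the representation is shared, a boundary learned in one modality produces matching choice curves in both.

```python
import numpy as np

# Toy 1-D "metric shape space": each object is a morph parameter in [0, 1].
# A shared multisensory representation implies that a boundary learned in one
# modality also governs categorization in the other. All values are assumed.
BOUNDARY = 0.55                     # hypothetical learned category boundary
rng = np.random.default_rng(0)

def categorize(shape_param, noise_sd):
    """Noisy internal estimate of the morph parameter, split at the boundary."""
    estimate = shape_param + rng.normal(0.0, noise_sd)
    return "A" if estimate < BOUNDARY else "B"

# Cross-modal transfer: the same boundary yields matching choice curves for
# visual and haptic probes (only the assumed sensory noise differs).
probes = np.linspace(0.0, 1.0, 11)
for noise in (0.05, 0.08):          # assumed visual vs. haptic noise levels
    p_b = [np.mean([categorize(x, noise) == "B" for _ in range(500)])
           for x in probes]
    print(np.round(p_b, 2))
```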

2.
Although some studies have shown that haptic and visual identification seem to rely on similar processes, few studies have directly compared the two. We investigated haptic and visual object identification by asking participants to learn to recognize (Experiments 1 and 3), or to match (Experiment 2), novel objects that varied only in shape. Participants explored objects haptically, visually, or bimodally, and were then asked to identify objects haptically and/or visually. We demonstrated that patterns of identification errors were similar across identification modality, independently of learning and testing condition, suggesting that the haptic and visual representations in memory were similar. We also demonstrated that identification performance depended on both learning and testing conditions: visual identification surpassed haptic identification only when participants explored the objects visually or bimodally. When participants explored the objects haptically, haptic and visual identification were equivalent. Interestingly, when participants were simultaneously presented with two objects (one was presented haptically, and one was presented visually), object similarity only influenced performance when participants were asked to indicate whether the two objects were the same, or when participants had learned about the objects visually—without any haptic input. The results suggest that haptic and visual object representations rely on similar processes, that they may be shared, and that visual processing may not always lead to the best performance.

3.
Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using objects with both high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content–context bindings in WM.

4.
People often learn multiple classification systems that are relevant to some goal or use. We compared conditions in which subclassification within a category hierarchy was predicted by values on either the same (alignable) or different (nonalignable) dimensions between category hierarchies. The results indicated that learning in alignable conditions occurred in fewer blocks and with fewer errors than did learning in nonalignable conditions. This facilitation was not the result of differences between conditions in the representations learned by the participants, the number of dimensions needed for subclassification (Experiment 1), or the objective complexity of the learning task (Experiment 2). The facilitated learning in the alignable conditions appears to reflect a commitment on the part of the learner to alignment: the belief that the structure relevant to the use of one category system will also be relevant to the use of a comparable system.

5.
Previous research on category learning has found that classification tasks produce representations that are skewed toward diagnostic feature dimensions, whereas feature inference tasks lead to richer representations of within-category structure. Yet, prior studies often measure category knowledge through tasks that involve identifying only the typical features of a category. This neglects an important aspect of a category's internal structure: how typical and atypical features are distributed within a category. The present experiments tested the hypothesis that inference learning results in richer knowledge of internal category structure than classification learning. We introduced several new measures to probe learners' representations of within-category structure. Experiment 1 found that participants in the inference condition learned and used a wider range of feature dimensions than classification learners. Classification learners, however, were more sensitive to the presence of atypical features within categories. Experiment 2 provided converging evidence that classification learners were more likely to incorporate atypical features into their representations. Inference learners were less likely to encode atypical category features, even in a “partial inference” condition that focused learners' attention on the feature dimensions relevant to classification. Overall, these results are contrary to the hypothesis that inference learning produces superior knowledge of within-category structure. Although inference learning promoted representations that included a broad range of category-typical features, classification learning promoted greater sensitivity to the distribution of typical and atypical features within categories.

6.
The visual system is remarkably efficient at extracting regularities from the environment through statistical learning. While such extraction has extensive consequences on cognition, it is unclear how statistical learning shapes the representations of the individual objects that comprise the regularities. Here we examine how statistical learning alters object representations. In three experiments, participants were exposed to either random arrays containing objects in a random order, or structured arrays containing object pairs where two objects appeared next to each other in fixed spatial or temporal configurations. After exposure, one object in each pair was briefly presented and participants judged the location or the orientation of the object without seeing the other object in the pair. We found that when an object reliably appeared next to another object in space, it was judged as being closer to the other object in space even though the other object was never presented (Experiments 1 and 2). Likewise, when an object reliably preceded another object in time, its orientation was biased toward the orientation of the other object even though the other object was never presented (Experiment 3). These results demonstrated that statistical learning fundamentally shapes how individual objects are represented in visual memory, by biasing the representation of one object toward its co-occurring partner. Importantly, participants in all experiments were not explicitly aware of the regularities. Thus, the bias in object representations was implicit. The current study reveals a novel impact of statistical learning on object representation: spatially co-occurring objects are represented as being closer in space, and temporally co-occurring objects are represented as having more similar features.
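One simple way to express the reported attraction effect, as a toy equation rather than the authors' model: the remembered value of an object is pulled a small way toward the value of its co-occurring partner. The weight w below is a hypothetical parameter.

```python
# Toy model of the reported bias: the remembered location (or orientation) of
# an object is attracted toward the value of its co-occurring partner.
# The weight w is a hypothetical parameter, not an estimate from the paper.
def biased_report(true_value: float, partner_value: float, w: float = 0.15) -> float:
    """Weighted pull of the stored representation toward the partner's value."""
    return (1 - w) * true_value + w * partner_value

# An object at x = 10 that reliably appeared next to a partner at x = 14
# is remembered as slightly closer to the partner:
print(biased_report(10.0, 14.0))  # 10.6
```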

7.
A model proposing error-driven learning of associations between representations of stimulus properties and responses can account for many findings in the literature on object categorization by nonhuman animals. Furthermore, the model generates predictions that have been confirmed in both pigeons and people, suggesting that these learning processes are widespread across distantly related species. The present work reports evidence of a category-overshadowing effect in pigeons' categorization of natural objects, a novel behavioral phenomenon predicted by the model. Object categorization learning was impaired when a second category of objects provided redundant information about correct responses. The same impairment was not observed when single objects provided redundant information, but the category to which they belonged was uninformative, suggesting that this effect is different from simple overshadowing, arising from competition among stimulus categories rather than individual stimuli during learning.
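The error-driven account described here belongs to the Rescorla-Wagner family, so the predicted competition between cues can be sketched in a few lines; the learning rate, asymptote, and cue names below are illustrative assumptions, not the authors' implementation.

```python
# Rescorla-Wagner-style error-driven learning: cues presented together share
# a common prediction error, so a redundant second cue acquires associative
# strength at the expense of the target cue (overshadowing). Parameters are
# illustrative.
ALPHA, LAMBDA, N_TRIALS = 0.1, 1.0, 100

def train(cue_sets):
    """cue_sets: list of tuples of cue names presented together on each trial."""
    v = {}
    for _ in range(N_TRIALS):
        for cues in cue_sets:
            error = LAMBDA - sum(v.get(c, 0.0) for c in cues)  # shared error
            for c in cues:
                v[c] = v.get(c, 0.0) + ALPHA * error
    return v

alone = train([("target_category",)])
compound = train([("target_category", "redundant_category")])
print(round(alone["target_category"], 2))     # ~1.0: full associative strength
print(round(compound["target_category"], 2))  # ~0.5: overshadowed by the redundant cue
```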

8.
Human infants display complex categorization abilities. Results from studies of visual preference, object examination, conditioned leg-kicking, sequential touching, and generalized imitation reveal different patterns of category formation, with different levels of exclusivity in the category representations formed by infants at different ages. We suggest that differences in levels of exclusivity reflect the degree to which the various tasks specify the relevant category distinction to be drawn by the infant. Performance in any given task might reflect prior learning or within-task learning, or both. The extent to which either form of learning is deployed could be determined by task context.

9.
We describe moral cognition as a process occurring in a distinctive cognitive space, wherein moral relationships are defined along several morally relevant dimensions. After identifying candidate dimensions, we show how moral judgments can emerge in this space directly from object perception, without any appeal to moral rules or abstract values. Our reductive “minimal model” (Batterman & Rice, 2014) elaborates Beal’s (2020) claim that moral cognition is determined, at the most basic level, by “ontological frames” defining subjects, objects, and the proper relation between them. We expand this claim into a set of formal hypotheses that predict moral judgments based on how objects are “framed” in the relevant dimensions of “moral space.”
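One way to read the proposed "formal hypotheses" is as a scoring function over coordinates in moral space. The sketch below is a hypothetical stand-in: the dimension names and weights are illustrative, not the authors' proposal.

```python
# Hypothetical stand-in for a "minimal model" of moral judgment: an act is
# scored from how its object is framed on candidate moral dimensions.
# Dimension names and weights are illustrative assumptions.
WEIGHTS = {"harm": -0.8, "agency_of_object": 0.5, "proper_relation": 0.6}

def moral_judgment(framing: dict) -> float:
    """Weighted sum over the object's coordinates in 'moral space';
    positive = judged permissible, negative = judged wrong."""
    return sum(WEIGHTS[dim] * framing.get(dim, 0.0) for dim in WEIGHTS)

print(moral_judgment({"harm": 1.0, "agency_of_object": 1.0}))  # -0.3
```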

10.
Much of our learning comes from interacting with objects. Two experiments investigated whether arbitrary actions used during category learning with objects might be incorporated into object representations and influence later recognition judgments. In a virtual-reality chamber, participants used distinct arm movements to make different classification responses. During a recognition test phase, these same objects required arm movements that were consistent or inconsistent with the classification movement. In both experiments, consistent movements were facilitated relative to inconsistent movements, suggesting that arbitrary action information is incorporated into the representations.

11.
Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning's dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions.
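The entropy measure at the heart of such an analysis is straightforward to compute. A sketch (with hypothetical posteriors) shows the kind of trial-level uncertainty regressor that would be correlated with BOLD activity in a model-based GLM.

```python
import numpy as np

def shannon_entropy(posterior) -> float:
    """Entropy (in bits) of a model's posterior over category labels; high
    values mark uncertain trials, which the motivated-learning account
    expects to drive anterior hippocampal and ventral striatal activity."""
    p = np.asarray(posterior, dtype=float)
    p = p[p > 0]                         # avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical trial-by-trial regressor for a GLM: one value per trial.
posteriors = [np.array([0.5, 0.5]),     # maximally uncertain trial
              np.array([0.9, 0.1])]     # confident trial
entropy_regressor = [shannon_entropy(p) for p in posteriors]
print(entropy_regressor)                # [1.0, ~0.47]
```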

12.
We investigated the role of connectedness in the use of part-relation conjunctions for object category learning. Participants learned categories of two-part objects defined by the shape of one part and its location relative to the other (part-relation conjunctions). The topological relationship between the parts (connected, separated, or embedded) varied between participants but was invariant for any given participant. In Experiment 1, category learning was faster and more accurate when an object’s parts were connected than when they were either separated or embedded. Subsequent experiments showed that this effect is not due to conscious strategies, differences in the salience of the individual attributes, or differences in the integrality/separability of dimensions across stimuli. The results suggest that connectedness affects the integration of parts with their relations in object category learning.

13.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
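ARTSCAN is defined by dynamical equations, but its control logic can be caricatured procedurally. The sketch below is a loose, assumed paraphrase of the loop described above (shroud competition, view linking, reset), not the published model.

```python
# Highly simplified sketch of ARTSCAN's control logic (not the published
# dynamical equations): surfaces compete winner-take-all for a spatial
# attention "shroud"; the persisting shroud gates learning that links each
# view-specific category to one view-invariant category; shroud collapse
# resets the invariant category and frees attention for the next object.
from collections import defaultdict

def artscan_sketch(scene):
    """scene: list of (surface_id, views), views being view labels scanned
    by eye movements while that surface holds the shroud."""
    invariant_links = defaultdict(set)
    for surface_id, views in scene:      # one shroud at a time (competition)
        for view in views:               # eye movements scan the shrouded object;
            invariant_links[surface_id].add(view)  # views link to the invariant category
        # shroud collapse here -> reset signal, next surface wins the competition
    return dict(invariant_links)

print(artscan_sketch([("objA", ["left", "front", "right"]),
                      ("objB", ["top", "side"])]))
```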

14.
Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored representations are involved in generating visual impressions of forces and causality in object motion and interactions. Kinematic information provided by vision is matched to kinematic features of stored representations, and the information about forces and causality in those representations then forms part of the perceptual interpretation. I apply this account to the perception of interactions between objects and to motions of objects that do not have perceived external causes, in which motion tends to be perceptually interpreted as biological or internally caused. I also apply it to internal simulations of events involving mental imagery, such as mental rotation, trajectory extrapolation and judgment, visual memory for the location of moving objects, and the learning of perceptual judgments and motor skills. Simulations support more accurate judgments when they represent the underlying dynamics of the event simulated. Mechanoreception gives us whatever limited ability we have to perceive interactions and object motions in terms of forces and resistances; it supports our practical interventions on objects by enabling us to generate simulations that are guided by inferences about forces and resistances, and it helps us learn novel, visually based judgments about object behavior.

15.
Subjects in a darkroom saw an array of five phosphorescent objects on a circular table and, after a short delay, indicated which object had been moved. During the delay, the subject, the table, or a phosphorescent landmark external to the array was moved (a rotation about the centre of the table), either alone or together. A fully factorial design was used to detect the use of three types of representations of object location: (i) visual snapshots; (ii) egocentric representations updated by self-motion; and (iii) representations relative to the external cue. Improved performance was seen whenever the test array was oriented consistently with any of these stored representations. The influence of representations (i) and (ii) replicates previous work. The influence of representation (iii) is a novel finding which implies that allocentric representations play a role in spatial memory, even over short distances and times. The effect of the external cue was greater when it was initially experienced as stable. Females outperformed males except when the array was consistent with self-motion but not visual snapshots. These results enable a simple egocentric model of spatial memory to be extended to address large-scale navigation, including the effects of allocentric knowledge, landmark stability and gender.

16.
Infants as young as 5 months of age view familiar actions such as reaching as goal-directed (Woodward, 1998), but how do they construe the goal of an actor's reach? Six experiments investigated whether 12-month-old infants represent reaching actions as directed to a particular individual object, to a narrowly defined object category (e.g., an orange dump truck), or to a more broadly defined object category (e.g., any truck, vehicle, artifact, or inanimate object). The experiments provide evidence that infants are predisposed to represent reaching actions as directed to categories of objects at least as broad as the basic level, both when the objects represent artifacts (trucks) and when they represent people (dolls). Infants do not use either narrower category information or spatiotemporal information to specify goal objects. Because spatiotemporal information is central to infants' representations of inanimate object motions and interactions, the findings are discussed in relation to the development of object knowledge and action representations.

17.
D. H. Warren & E. R. Strelow, Perception, 1984, 13(3), 331-350
The relationship between sensory aid research and several areas of perceptual learning has been explored with five experiments on learning the use of the Binaural Sensory Aid, an electronic sensor in which pitch specifies distance and interaural amplitude difference (IAD) specifies direction. The training task required reaching to objects in near space, with tactile error feedback. Perceptual learning for both dimensions was demonstrated within 72 trials, giving a level of performance comparable to the use of a natural sound source, although performance with the direction cue did not reach asymptote until a second training session. Training was unaffected by various kinds of regularity in the spatial target sequences, or by a reduction in the number of spatial target locations until only two locations were used, at which point directional accuracy declined. Training only one dimension at a time did not produce additional improvement of performance on that dimension, but did impair generalization of the direction cue. Learning of the pitch-distance dimension was generally better than that of the IAD dimension, possibly because of its greater discriminability with this device. Generally, the pattern of results indicates that in learning to use such devices subjects readily determine the sensory dimensions of the codes and have considerable ability to generalize to new locations.
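The device's display code as described (pitch for distance, IAD for direction) can be sketched as a simple mapping; the particular functions and constants below are assumptions for illustration, not the aid's actual specifications.

```python
# Sketch of the Binaural Sensory Aid's display code as described in the
# abstract: pitch encodes distance, interaural amplitude difference (IAD)
# encodes direction. The mapping functions and constants are assumptions.
def encode(distance_m: float, azimuth_deg: float):
    # Assumed inverse mapping: nearer objects -> higher pitch.
    pitch_hz = 4000.0 / max(distance_m, 0.1)
    # Assumed linear IAD: positive = louder in the right ear.
    iad_db = 0.3 * azimuth_deg
    left_gain_db, right_gain_db = -iad_db / 2, iad_db / 2
    return pitch_hz, (left_gain_db, right_gain_db)

# An object 0.5 m away, 20 degrees to the right of the midline:
print(encode(0.5, 20.0))  # (8000.0, (-3.0, 3.0))
```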

18.
Functional neuroimaging studies in which the cortical organization for semantic knowledge has been addressed have revealed interesting dissociations in the recognition of different object categories, such as faces, natural objects, and manufactured objects. The present paper critically reviews these studies and performs a meta-analysis of stereotactic coordinates to determine whether category membership predicts patterns of brain activation across different studies. This meta-analysis revealed that, in the ventral temporal cortex, recognition of manufactured objects activates more medial aspects of the fusiform gyrus, as compared with natural object or face recognition. Face recognition activates more inferior aspects of the ventral temporal cortex, as compared with manufactured object recognition. The recognition task used—viewing, matching, or naming—also predicted brain activation patterns. Specifically, matching tasks recruit more inferior occipital regions than do either naming or viewing tasks, whereas naming tasks recruit more anterior ventral temporal sites than do either viewing or matching tasks. These findings indicate that the cognitive demands of a particular recognition task are as predictive of cortical activation patterns as is category membership.

19.
The aim of the present research was to study the processes involved in knowledge emergence. In a short-term priming paradigm, participants had to categorize pictures of objects as either "kitchen objects" or "do-it-yourself tools". The primes and targets represented objects belonging to either the same semantic category or different categories (object category similarity), and their use involved gestures that were either similar or very different (gesture similarity). The condition with an SOA of 100 ms revealed additive effects of motor similarity and object category similarity, whereas the condition with an SOA of 300 ms showed an interaction between motor and category similarity. These results were interpreted in terms of the activation and integration processes involved in the emergence of mental representations.

20.
R. E. Phinney & R. M. Siegel, Perception, 1999, 28(6), 725-737
Object recognition was studied in human subjects to determine whether visual objects are stored as two-dimensional or three-dimensional representations. Novel motion-based and disparity-based stimuli were generated in which three-dimensional and two-dimensional form cues could be manipulated independently. Subjects were required to generate internal representations from motion stimuli that lacked explicit two-dimensional cues. These stored internal representations were then matched against internal three-dimensional representations constructed from disparity stimuli. These new stimuli were used to confirm prior studies that indicated the primacy of two-dimensional cues for view-based object storage. However, under tightly controlled conditions in which only three-dimensional cues were available, human subjects were also able to match an internal representation derived from motion to one derived from disparity. This last finding suggests that object representations are stored internally in three dimensions, a tenet that has been rejected by view-based theories. Thus, any complete theory of object recognition that is based on primate vision must incorporate three-dimensional stored representations.
