Similar documents
1.
In a series of experiments, we investigated the matching of objects across visual and haptic modalities at different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than to changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.

2.
Viewpoint dependence in visual and haptic object recognition
On the whole, people recognize objects best when they see the objects from a familiar view and worse when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.

3.
The present study represents a further investigation of an imagery training procedure previously developed by the authors (Yuille & Catchpole, 1973). Subjects given a brief training session in the use of an interacting imaginal mnemonic showed significantly higher performance than nontrained subjects on tests of immediate, delayed, and second learning set memory. While both recognition and recall were equally affected by the procedure, with recognition showing its usual superiority, the type of immediate memory test did not affect delayed recall. The findings are interpreted as supporting the preference of young children for visual coding.

4.
We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.

5.
Hand movements: a window into haptic object recognition

6.
This paper reviews evidence from neuropsychological patient studies relevant to two questions concerning the functions of the medial temporal lobe in humans. The first is whether the hippocampus and the adjacent perirhinal cortex make different contributions to memory. Data are discussed from two patients with adult-onset bilateral hippocampal damage who show a sparing of item recognition relative to recall and certain types of associative recognition. It is argued that these data are consistent with Aggleton and Brown's (1999) proposal that familiarity-based recognition memory is not dependent on the hippocampus but is mediated by the perirhinal cortex and dorso-medial thalamic nucleus. The second question is whether the recognition memory deficit observed in medial temporal lobe amnesia can be explained by a deficit in perceptual processing and representation of objects rather than a deficit in memory per se. The finding that amnesics were impaired at recognizing, after short delays, patterns that they could successfully discriminate suggests that their memory impairment did not result from an object-processing deficit. The possibility remains, however, that the human perirhinal cortex plays a role in object processing, as well as in recognition memory, and data are presented that support this possibility.

8.
Given a specific view of a simple symmetrical object, participants were asked whether a certain imaginary transformation could result in a second viewed image. An experiment was conducted in which the participants had either to mentally rotate an object or to imagine themselves looking at the object from another position (i.e., the object-based condition and the viewer-based condition, respectively). In the experiment, combinations of these imagery tasks (i.e., the combined conditions) were also included. The symmetrical objects could be oriented horizontally or vertically. The performance in the object-based conditions was generally equal to or better than the performance in the viewer-based conditions. In addition, there were more confusions for shapes with a horizontal orientation, especially when viewer-based upside-down rotations were involved, with an apparent mediating role of object rotation in the combined conditions.

9.
This experiment examined the effects of memory load over three periods of delay. Following presentation of 20, 35, or 50 targets, subjects were required to select these from an equal number of distractors 10 min, 1 week, or 2 weeks later. Increased target load (independent of increases in recognition load) decreased accuracy mainly by decreasing hit rate. Increasing delay decreased accuracy largely as a result of an increased false alarm rate. Most individual ROC curves, plotted on z-coordinates, had slopes < 1. Implications of this finding for the use of d' are discussed.

10.
This study investigated the effect of forgetting of the standard duration on temporal discrimination in a generalization task. In two experiments, participants were given a temporal generalization task with or without a retention delay between the learning of the standard duration and the testing of the comparison durations. During this delay, they either performed or did not perform an interference task. Results failed to reveal any effect of 15-min and 24-h retention delays on time judgments (Experiment 1). However, when an interference task was performed during the 15-min delay (Experiment 2), there was a subjective shortening effect, indicating that the standard duration was judged shorter with than without an interference task. These findings suggest that when an interference task occurs immediately after initial temporal encoding, it affects the process of consolidation in reference memory.

11.
The purpose of the present investigation was to determine whether the orientation between an object's parts is coded categorically for object recognition and physical discrimination. In three experiments, line drawings of novel objects in which the relative orientation of object parts varied by steps of 30 degrees were used. Participants performed either an object recognition task, in which they had to determine whether two objects were composed of the same set of parts, or a physical discrimination task, in which they had to determine whether two objects were physically identical. For object recognition, participants found it more difficult to compare the 0 degrees and 30 degrees versions and the 90 degrees and 60 degrees versions of an object than to compare the 30 degrees and 60 degrees versions, but only at an extended interstimulus interval (ISI). Categorical coding was also found in the physical discrimination task. These results suggest that relative orientation is coded categorically for both object recognition and physical discrimination, although metric information appears to be coded as well, especially at brief ISIs.

12.
In this study, we evaluated observers' ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers' matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.

13.
We review the organization of the neural networks that underlie haptic object processing and compare that organization with the visual system. Haptic object processing is separated into at least two neural pathways, one for geometric properties or shape, and one for material properties, including texture. Like vision, haptic processing pathways are organized into a hierarchy of processing stages, with different stages represented by different brain areas. In addition, the haptic pathway for shape processing may be further subdivided into different streams for action and perception. These streams may be analogous to the action and perception streams of the visual system and represent two points of neural convergence for vision and haptics.

14.
The distinctiveness of a face has been found to be an important factor in face recognition. We investigated the effect of the distinctiveness of a face upon subjects' speed and accuracy of recognition following different presentation times and retention intervals. It was found that (1) hits decreased with increasing delay; (2) false alarms increased and d′ decreased with a presentation time of 1 sec compared with 5 sec; (3) distinctive faces received more hits and higher d′ than non-distinctive faces; and (4) response latencies were shorter for distinctive targets than for distinctive distractors or non-distinctive targets or distractors. These results are discussed in terms of the literature on the distinctiveness effect in face recognition.

15.
Haptic picture recognition was tested in 36 undergraduate students to determine whether haptic representations are visual or multi-modal. Participants explored target haptic pictures and were asked to recognize the target from three alternatives. Each participant completed eight control and eight interference trials in visual, lexical, and tactile recognition modalities. For interference, participants were assigned to either an articulatory suppression (repeat "the" at a constant rate) or a visual interference (watch a visual display) condition, to prevent recoding into a verbal or visual code. The results showed that accuracy was higher on control than on interference trials and higher for tactile than for lexical recognition. Additionally, lexical and tactile recognition decreased with articulatory suppression, whereas only tactile recognition decreased with visual interference. These findings suggest that haptic pictures may be represented by coordinating visual and verbal codes.

16.
Humans and other primates are highly attuned to temporal consistencies and regularities in their sensory environment and learn to predict such statistical structure. Moreover, in several instances, the presence of temporal structure has been found to facilitate procedural learning and to improve task performance. Here we extend these findings to visual object recognition and to presentation sequences in which mutually predictive objects form distinct clusters or “communities.” Our results show that temporal community structure accelerates recognition learning and affects the order in which objects are learned (“onset of familiarity”).

Our understanding of the world is grounded in sensory experience. Typically, this experience consists of contiguous streams of sensations that are richly structured in both time and space (Schapiro and Turk-Browne 2015). Such statistical structure may involve simple correlations of pairs of sensory events or, more generally, clusters of correlations between mutually predictive events forming a “temporal community” (Schapiro et al. 2013). Both humans and other primates (Miyashita 1988) can learn to predict such statistical regularities in space and time (Fiser and Aslin 2001, 2002). Moreover, statistical structure can be exploited explicitly or implicitly to enhance task performance. For example, predictable presentation order can facilitate motor learning (Kahn et al. 2018), language learning (Saffran et al. 1996), visual search (Chun and Jiang 1998; Jiang and Wagner 2004; Sisk et al. 2019), and conditional associative learning (Hamid et al. 2010).

In general, implicit (unsupervised) learning of temporal structure is thought to provide a biological basis for important cognitive functions, including the formation of episodic memories, learning of task-sets, model-based planning, and structural learning (e.g., Kemp and Tenenbaum 2008; Rigotti et al. 2010; Gershman 2017; Russek et al. 2017). To improve experimental access to these phenomena, we sought behavioral evidence for interactions between learning at different hierarchical levels, namely, learning of individual objects and learning of the temporal context in which such objects are experienced.

Sequences of visual presentations may exhibit different kinds of temporal structure arising from sequential dependencies. A simple kind of structure is sequential dependency between consecutively presented items (i.e., an increased probability of item X, given preceding item Y). A more complex kind of structure arises when sequential dependencies are clustered within subsets of items.
This leads to longer-term dependencies (i.e., an increased probability of item X, given recent item Z) and extended sequences of items that are mutually predictive (Schapiro et al. 2013; Karuza et al. 2017; Kahn et al. 2018).

The mechanisms of visual object recognition have been studied extensively (Wallis and Bülthoff 1999), with considerable evidence supporting “feature-based mechanisms” that represent three-dimensional objects in terms of multiple two-dimensional features/views (plus interpolations) (Bülthoff and Edelman 1992). Presumably, temporal regularities arise naturally in handling three-dimensional objects and help associate distinct two-dimensional views and/or features (Wallis and Bülthoff 1999). For example, when nonhuman primates learn to categorize initially unfamiliar objects, they readily form neural representations for arbitrary two-dimensional features that are diagnostic for category (Sigala and Logothetis 2002; Sigala et al. 2002). Interestingly, such representations automatically encompass predictive sequential dependencies between successive trials, even when this diagnostic information is redundant (Miyashita 1988; Wallis 1998).

The effect of sequential dependencies between successive trials on visual object recognition was investigated by two previous studies, which found a reaction time advantage (Barakat et al. 2013) and a recognition memory advantage (Otsuka and Saiki 2016) for target objects that consistently follow particular objects, compared with target objects that follow varying objects.
Here we extended these findings in two ways: First, we monitored the formation of recognition memory more closely and comprehensively (every presentation of every object), and second, we considered the effect of clustered dependencies creating “temporal communities” of objects (which are typically experienced for nine successive presentations).

We investigated the performance of observers in a visual object recognition learning task under three conditions: (1) “strongly structured” sequences comprising distinct temporal communities (clusters of mutually predictive objects), (2) “weakly structured” sequences with uniform sequential dependence, and (3) “random” or “unstructured” sequences without sequential dependence. All sequences were generated as random walks on graphs of n = 15 distinct objects (Fig. 1A), in which nodes represented distinct objects and edges represented possible transitions (in both directions). As one sequence comprised 180 object presentations, each graph was traversed multiple times (∼11.3 times). Graphs were either modular and sparsely connected (“strongly structured” sequences), nonmodular and sparsely connected (“weakly structured” sequences), or nonmodular and fully connected (“unstructured” or “random” sequences). In “strongly structured” sequences, approximately 9.2 ± 0.1 successive presentations (mean ± SEM) featured objects of the same temporal community.

Figure 1. Presentation sequence and trial structure. (A) Presentation sequences were generated as (nearly) random walks on three types of graphs, with nodes representing distinct objects and edges representing possible transitions (in both directions). A sparsely connected, modular graph generated “strongly structured” sequences with distinct community structure (left), a sparsely connected, nonmodular graph generated “weakly structured” sequences (middle), and a fully connected graph generated “unstructured” or “random” sequences (right).
(B) Presentation sequences consisted of 180 complex, three-dimensional objects (shown rotating for 2 sec about a randomly oriented axis in the frontal plane). Of these, 170 ± 0.04 (mean ± SEM) objects were recurring, and 9.2 ± 0.04 objects were nonrecurring. Observers categorized each object as “familiar” or “unfamiliar.” Over the four sessions of 1 wk, observers performed 24 runs and viewed 4320 presentations, with every recurring object appearing at least 250 times.

One presentation sequence (“run”) comprised exactly 180 objects and on average included 9.2 ± 0.04 (mean ± SEM) nonrecurring objects, each appearing exactly once during the entire experiment. Nonrecurring objects were spaced 14–19 presentations apart. The remaining 170 ± 0.04 objects were recurring and were selected by performing a pseudorandom walk on a graph (Fig. 1A), albeit with some restrictions: no direct repeats or returns were permitted (e.g., X–X or X–Y–X), and all n = 15 objects were repeated comparably often (11.4 ± 0.04 repetitions). The repetition latency for any given object ranged from three to >60 presentations. Very short latencies (of three to five presentations) were far more common in strongly structured sequences than in weakly structured or unstructured sequences (Supplemental Fig. S8).

To control the difficulty of shape recognition, ensure initial unfamiliarity of all objects, and minimize interference from semantic associations, we generated complex three-dimensional objects by convolving two closed Bezier curves in a plane. Complexity was controlled by the number and position of random seeds for the two curves. The pairwise dissimilarity of the resulting complex objects was statistically unrelated to their pairwise distance in the presentation sequence (see Supplemental Fig. S1).
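The pseudorandom-walk generation with these restrictions can be sketched as follows. The exact graph topology here (three communities of five mutually interconnected objects joined by single linking edges) and all function names are illustrative assumptions, not the paper's exact construction:

```python
import random

def make_modular_graph(n_comm=3, comm_size=5):
    """Adjacency dict for a modular graph: each community is fully
    interconnected internally, with one linking edge to the next community.
    Illustrative topology (the paper's graphs have exactly four links per
    node; here the linking nodes have five)."""
    adj = {}
    for c in range(n_comm):
        nodes = [c * comm_size + i for i in range(comm_size)]
        for v in nodes:
            adj[v] = [u for u in nodes if u != v]
    for c in range(n_comm):
        a = c * comm_size + comm_size - 1    # last node of community c
        b = ((c + 1) % n_comm) * comm_size   # first node of next community
        adj[a].append(b)
        adj[b].append(a)
    return adj

def random_walk(adj, length=180, seed=0):
    """Pseudorandom walk with the restrictions described above: no direct
    repeats (impossible here, since nodes are not self-adjacent) and no
    immediate returns (X-Y-X), enforced by excluding the second-to-last node."""
    rng = random.Random(seed)
    seq = [rng.choice(sorted(adj))]
    while len(seq) < length:
        prev2 = seq[-2] if len(seq) >= 2 else None
        options = [v for v in adj[seq[-1]] if v != prev2]
        seq.append(rng.choice(options))
    return seq

seq = random_walk(make_modular_graph())
```

Because every node keeps at least three admissible neighbors after excluding the second-to-last node, the walk can never get stuck.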
To assess this, dissimilarity was quantified in terms of the vector distance between depth maps (of resolution 64 × 64 × 64) obtained from six viewing directions along the three principal component axes.

Objects were presented for 2 sec, rotating with an angular velocity of 144 deg/sec about an axis in the frontal plane. Starting angle and axis orientation were randomized for each trial, forcing observers to become familiar with the full three-dimensional shape (rather than just certain features). Presentation periods were separated by 0.5-sec transition periods, during which the previous object disappeared toward a distant location on the right, while the next object approached from a distant location on the left. This was intended to encourage observers to imagine a spatially extended sequence of distinct objects (Supplemental Movie S1).

Twenty healthy observers (8 males and 12 females, aged 25–34 yr) participated in three experiments. Two experiments compared “strongly structured” and “unstructured” sequences, and one experiment compared “strongly structured” and “weakly structured” sequences. All observers had normal or corrected-to-normal vision and were paid for their participation. Ethical guidelines of the Centre for Neuroscientific Innovation and Technology, Magdeburg, were followed.

In order to monitor the progress of recognition learning as closely as possible, observers were required to classify every object presented as either “familiar” (seen previously) or “unfamiliar” (never seen previously). For each observer, a fresh set of 30 pairwise dissimilar objects was generated. The set was divided arbitrarily into two subsets of 15 objects, one used for “structured” sequences and the other for “unstructured” sequences. In addition, we generated a larger number (∼500) of nonrecurring objects, which appeared exactly once during the entire experiment.
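The depth-map dissimilarity measure described at the top of this passage (vector distance between depth maps from several viewing directions) might be implemented along these lines; the data layout, with each object represented by a list of flat depth maps, is a hypothetical simplification:

```python
import math

def dissimilarity(maps_a, maps_b):
    """Vector (Euclidean) distance between two objects' depth maps.
    Each argument is a list of depth maps (one per viewing direction),
    each a flat list of depth values. Hypothetical data layout; the
    paper does not specify the exact encoding."""
    va = [d for m in maps_a for d in m]  # concatenate all views of object A
    vb = [d for m in maps_b for d in m]  # concatenate all views of object B
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))
```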
During each trial, the observer categorized the current object as “familiar,” “unfamiliar,” or “not sure” by pressing a key. No feedback was provided. Observers performed this task on four different days within 1 wk, with six sequences per day (24 sequences overall). Accordingly, observers viewed 4320 presentations, during which every recurring object appeared at least 250 times. After pausing for a week, observers repeated the experiment with entirely new objects and with sequences generated from another graph (Fig. 1B). Observers were told that each condition used new objects that had never been shown before. To further emphasize this point, object color changed between conditions. The order of conditions (structured or unstructured) was counterbalanced between observers. Observer instructions did not mention presentation order (sequence structure).

At the end of each week of testing, observers were required to additionally perform a validation task, to assess the extent to which objects had become familiar (Supplemental Movie S2; Supplemental Material). In this task, observers viewed for 30 sec an array of 12 simultaneously rotating objects, of which three were randomly selected from the 15 “recurring” objects and nine were entirely new (never seen before). Observers were asked to pick out the three most “familiar” objects and received binary feedback (“all correct” or “one or more incorrect”). All observers approached ceiling performance (proportion correct >0.95) in all conditions (all sequence structures), confirming that almost all recurring objects had become familiar.

To establish the progress of recognition learning, we analyzed 250 repetitions (over four sessions and 24 sequences) of every recurring object. To this end, we considered “sliding windows” of Nw = 5 successive presentations of a given object (for details, see Supplemental Fig. S3). Note that some windows bridged successive presentation sequences and/or sessions.
For each window and “recurring” object, we computed the proportion of “familiar” responses (“hit rate”) (Fig. 2A). As “familiar” objects were common, some false positives were to be expected. To take this into account, we also established a “false alarm rate” for each session, as the fraction of “nonrecurring” objects not categorized as “unfamiliar” (Fig. 2B). Combining the hit rate (of a window) with the false alarm rate (of the concomitant session), we performed a simplified sensitivity analysis (Macmillan and Creelman 2004) to obtain a corrected classification performance ρ and decision bias b for each window and “familiar” object (see the Supplemental Material). Alternative sensitivity analyses and performance measures (A′, d′; Stanislaw and Todorov 1999) did not materially alter the results.

Figure 2. Time course of recognition learning. (A) Average hit rate (recurring objects categorized as familiar, per window) increases with the number of presentations of a given object. (B) Average false alarm rate (nonrecurring objects not categorized as unfamiliar, per session) decreases with the number of presentations. (C) Average corrected performance ρ increases nearly monotonically with presentation number. It was consistently larger for strongly structured sequences (with temporal community structure) than for unstructured sequences. (D) Average criterion bias b, as a function of presentation number. Green regions indicate the transition between sessions (20%–80% of objects in previous session).

The resulting corrected performance ρ (mean and SEM, assuming binomial variability) is shown in Figure 2C. Performance increased nearly monotonically, but was consistently superior when objects were presented in “strongly structured” sequences with “temporal community structure” than when they were presented in unstructured sequences. This difference was significant after ∼60 presentations.
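The windowed hit rates and the bias correction described above can be sketched as follows. The paper's “simplified sensitivity analysis” is not spelled out in this excerpt, so a standard d′ computation (z-transformed hit rate minus z-transformed false alarm rate, with rates clipped away from 0 and 1) stands in for it; all names are assumptions:

```python
from statistics import NormalDist

def windowed_hit_rates(responses, nw=5):
    """Hit rate over sliding windows of nw successive presentations of one
    recurring object (responses: 1 = categorized 'familiar', 0 = otherwise)."""
    return [sum(responses[i:i + nw]) / nw
            for i in range(len(responses) - nw + 1)]

def d_prime(hit_rate, fa_rate, eps=0.01):
    """Sensitivity index d' = z(hit) - z(false alarm). Rates are clipped
    away from 0 and 1 so the inverse-normal transform stays finite."""
    z = NormalDist().inv_cdf
    def clip(p):
        return min(max(p, eps), 1 - eps)
    return z(clip(hit_rate)) - z(clip(fa_rate))
```

Pairing each window's hit rate with the false alarm rate of the enclosing session, as the paper does, then yields one sensitivity value per window and object.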
As expected, observers rapidly developed a liberal bias (favoring “familiar” responses), which weakened somewhat over subsequent sessions (Fig. 2D).

We also analyzed the time course of average response times (RTs). Consistent with the performance results, RTs decreased faster for strongly structured sequences than for unstructured sequences (Supplemental Material; Supplemental Fig. S2).

In addition to the gradual increase in the probability of recognizing recurring objects, we also sought to determine the point in time at which individual objects became familiar (“onset of familiarity”). We defined this point in two alternative ways: (1) as the first window in which corrected performance exceeded a threshold of ρ ≥ 0.875 (high-threshold approach) or (2) as the window in which the entropy Hρ = −[ρ·log2(ρ) + (1 − ρ)·log2(1 − ρ)] of corrected performance reached its peak value (low-threshold approach). Note that entropy peaks at the transition from exclusively “unfamiliar” to exclusively “familiar” responses.

After establishing the “onset of familiarity” for each object, we ranked all objects by order of onset and established the “onset separation” between object pairs in terms of onset rank (Δn) and presentation rank (Δk). The median separation of successive onsets (defined by threshold or entropy) was 9 or 16 presentations, respectively. Interestingly, the median separation of successive onsets in the same cluster was roughly three times as long, at 24 and 50 presentations, respectively, implying that successive onsets occurred during separate visits to a given community.

In strongly structured sequences, one may distinguish object pairs XY that are “adjacent” [follow each other with P(Y|X) = 0.25] or “nonadjacent” [never follow each other, P(Y|X) = 0]. In addition, one may distinguish object pairs within the same community (either adjacent or nonadjacent) and between different communities (also either adjacent or nonadjacent).
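The two “onset of familiarity” definitions can be expressed directly; `rho` below is an assumed series of corrected-performance values, one per sliding window, and the function names are illustrative:

```python
import math

def entropy(p):
    """Binary entropy H(p) = -[p*log2(p) + (1-p)*log2(1-p)], with H = 0
    at p = 0 or p = 1; H peaks at p = 0.5, i.e. mid-transition."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def onset_high_threshold(rho, thresh=0.875):
    """First window in which corrected performance reaches the threshold."""
    for i, p in enumerate(rho):
        if p >= thresh:
            return i
    return None

def onset_entropy_peak(rho):
    """Window at which the entropy of corrected performance peaks, i.e.
    where responses transition from mostly 'unfamiliar' to mostly 'familiar'."""
    return max(range(len(rho)), key=lambda i: entropy(rho[i]))
```

For a performance series rising from near 0 toward 1, the entropy peak sits near the ρ ≈ 0.5 crossing, which is why it acts as the low-threshold onset.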
Note that the objects linking different communities (“linking objects”) contribute both “adjacent” pairs in different communities and “nonadjacent” pairs in the same community (Fig. 3B). We analyzed the “onset of familiarity” for different object pairs (as defined above), specifically, the probability that the two members of a pair exhibit successive onsets (Δn = 1) or nearly successive (Δn = 2) onsets. Interestingly, the probability of successive onsets was significantly higher than chance for objects in the same community (null hypothesis H0: “onsets” are ordered randomly) (Fig. 3A). Moreover, we found the probability of successive “onsets” to be significantly elevated for “adjacent” objects in the same cluster, insignificantly elevated for “adjacent” objects in different clusters (“linking objects”), and significantly reduced for “nonadjacent” objects in different clusters (P < 0.05; corrected for false discovery rate of multiple comparisons) (Fig. 3B; Benjamini and Hochberg 1995).

Figure 3. Analysis of the onset of familiarity with individual objects. (A) Successive onsets of familiarity (Δn = 1) are far more likely ([**] P < 0.005) for objects within the same cluster than would be expected by chance (dashed line). For nearly successive onsets (Δn = 2), this effect was not observed. (B) Frequency of successive onsets, compared with chance level, for object pairs either in the same cluster (outlined blue and cyan) or in different clusters (green and red), which are either adjacent (blue and green) or nonadjacent on the graph (cyan and red).
Frequency is significantly elevated ([*] P < 0.05, FDR corrected) for adjacent objects in the same cluster (blue) and suppressed for nonadjacent objects in different clusters (red).

We conclude that temporal community structure had a significant effect on the order of recognition learning, in the sense that familiarity with one object in a community facilitated familiarity with another object in the same community, provided the latter was “adjacent” [i.e., sometimes followed the former, P(Y|X) = 0.25]. Interestingly, no such “domino effect” was observed for the objects linking two different communities (i.e., adjacent objects in different communities).

The results presented in Figures 2 and 3 were replicated with an additional eight observers in a second experiment of almost identical design (Supplemental Figs. S4, S6).

To dissociate the effects of cluster membership and adjacency, we also conducted a third experiment, in which six further observers viewed either “weakly structured” presentation sequences (during 1 wk) or “strongly structured” sequences (during another week). To generate “weakly structured” sequences without temporal communities, we generated sparsely connected graphs with exactly four links per node, but without any triangular link formations (Maslov and Sneppen 2002; Rubinov and Sporns 2010). Recognition learning was faster for “strongly structured” sequences than for “weakly structured” ones. The “domino” effect described above was again observed for “strongly structured” sequences (with both “onset” definitions), but to some extent also for “weakly structured” sequences (for one “onset” definition). Thus, the ordering of “onsets” of familiarity may be affected both by community membership and by adjacency in the presentation sequence (Supplemental Figs. S5, S7).

In this study, we investigated the effect of temporal community structure by comparing more or less structured presentation sequences.
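The “weakly structured” graphs above have exactly four links per node and no triangles; the paper obtains them by degree-preserving rewiring (Maslov and Sneppen 2002). As an illustrative shortcut only, not the paper's procedure, a circulant graph on 15 nodes with offsets ±1 and ±3 happens to satisfy both constraints:

```python
def circulant_graph(n=15, offsets=(1, 3)):
    """4-regular circulant graph: node i connects to i±1 and i±3 (mod n).
    For n = 15 and these offsets the graph is triangle-free and nonmodular,
    i.e. a 'weakly structured' topology. Illustrative construction only."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in offsets:
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def has_triangle(adj):
    """True if any three nodes are mutually connected (a 'triangular
    link formation' in the paper's terms)."""
    return any(w in adj[u]
               for u in adj for v in adj[u] for w in adj[v] if w != u)
```

Triangle-freeness follows because no three offsets drawn from {±1, ±3} sum to 0 modulo 15; the test below also checks it by brute force.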
First, in “weakly structured” sequences, the sparse connectivity of the generative graph ensured that each object predicted the next object with 25% probability (one of four possibilities). Second, in “strongly structured” sequences, the (equally sparse) generative graph was clustered into three communities of five objects, so that each object predicted the community membership of the next object with 90% probability (18 of 20 possibilities).

Previous studies of statistical learning did not aim to closely follow the learning of individual items (Siegelman et al. 2018). Here we sought to monitor the degree of familiarity of each individual object over successive presentations (Fig. 2). Whereas classification performance improved monotonically with presentation number for all sequences, a significant performance advantage developed quickly (over 60 to 70 presentations) for “strongly structured” sequences compared with either “unstructured” or “weakly structured” sequences (Supplemental Fig. S5). Note that recognition performance improved comparably over time, with or without prior practice of the stimulus–response mapping in a separate training session (experiments 2 and 3). Accordingly, we do not believe that motor learning contributed appreciably to these results.

Thanks to this close monitoring, we could almost always determine the onset of familiarity for an individual object. Interestingly, the ordering of onsets did not appear to be fully random, in that objects of the same community (“temporal community”) tended to become familiar one after another more often than expected by chance. Interestingly, this “domino effect” typically did not occur within one “extended visit” to a community but over subsequent visits to a given community. The effect was particularly pronounced for adjacent objects in the same community, but was not observed for adjacent objects in different communities.
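The two transition statistics (25% and 90%) follow directly from the generative graphs. Below is a sketch of one standard wiring for the clustered case that reproduces the numbers quoted above (15 objects, 3 communities of 5, exactly four links per node, 18 of 20 transitions within community); the exact construction, in the spirit of Schapiro et al. (2013), is our assumption, as are the function names.

```python
import itertools
import random

def community_graph():
    """15 objects in 3 clusters of 5; every node has exactly 4 links.
    Within a cluster all pairs are linked except the two boundary
    nodes to each other; each boundary node instead links to a
    boundary node of a neighbouring cluster."""
    adj = {n: set() for n in range(15)}
    clusters = [list(range(c * 5, c * 5 + 5)) for c in range(3)]
    for cl in clusters:
        for u, v in itertools.combinations(cl, 2):
            if {u, v} != {cl[0], cl[-1]}:   # boundary nodes stay unlinked
                adj[u].add(v)
                adj[v].add(u)
    for c in range(3):                       # bridge cluster c to cluster c + 1
        u, v = clusters[c][-1], clusters[(c + 1) % 3][0]
        adj[u].add(v)
        adj[v].add(u)
    return adj, clusters

def presentation_sequence(adj, n_steps, seed=0):
    """Random walk on the graph: each presentation is followed by a
    uniformly chosen neighbour of the current object."""
    rng = random.Random(seed)
    seq = [0]
    for _ in range(n_steps):
        seq.append(rng.choice(sorted(adj[seq[-1]])))
    return seq
```

With this wiring, 54 of the 60 directed transitions stay within a cluster (18 of 20 per community, i.e., 90%); a triangle-free 4-regular graph would analogously yield the 25% "weakly structured" case.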
As a similar effect was observed for adjacent objects in “weakly structured” sequences without communities, frequent temporal proximity appears to contribute as well.

At the end of training, all objects had become familiar and could be retrieved explicitly from long-term memory, for both structured and unstructured sequences. The reason for the observed difference in learning rates remains unclear. One possibility is that structured sequences pose a reduced working-memory load, facilitating encoding and accelerating learning. When large sets of items are divided (“chunked”) into subsets, both chunked and nonchunked items benefit and are learned more readily. Presumably, chunking reduces the dimensionality of the classification problem presented by each item (just as chunking the search array in an odd-man-out task reduces the dimensionality of target detection). This reduced dimensionality could then lower working-memory load and facilitate classification by comparison with long-term memory, for both familiar (chunked) and unfamiliar (nonchunked) items. Another important factor might be that temporal communities reduce repetition latencies (Supplemental Fig. S8). There is evidence that timely repetitions help consolidate memories, whereas delayed repetitions leave memories prone to disruption (Thalmann et al. 2019).

Previous studies of the effect of “temporal community structure” have shown that cluster borders are detectable (Schapiro et al. 2013) and that such borders elevate reaction times (Kahn et al. 2018; Karuza et al. 2019). As border items are thought to facilitate encoding/retrieval (Swallow et al. 2009), one might have expected accelerated recognition learning for the “linking objects” that join two different clusters. However, in our paradigm, neither the learning rate nor the ordering of onsets of familiarity distinguished “linking objects” from other objects. In fact, our results suggest that any chunking benefits (Thalmann et al. 2019) apply more to objects within clusters than to objects that “link” clusters.

In summary, we showed that the presence of temporal communities of mutually predictive objects accelerates recognition learning for complex, three-dimensional objects and alters the order of recognition learning, such that members of a group are often learned one after another (though separated by many intervening presentations).  相似文献
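As a side note on the repetition-latency argument above: the quantity in question is easy to measure directly from a presentation sequence. A minimal sketch (function names are ours, not from the study's analysis code); a walk that lingers inside a community revisits its five objects at short intervals, which shows up as shorter gaps between repetitions.

```python
from collections import defaultdict

def repetition_latencies(sequence):
    """For every object, the gaps (in presentations) between its
    successive occurrences; short gaps mean timely repetitions."""
    last_seen = {}
    gaps = defaultdict(list)
    for i, obj in enumerate(sequence):
        if obj in last_seen:
            gaps[obj].append(i - last_seen[obj])
        last_seen[obj] = i
    return dict(gaps)

def median_latency(sequence):
    """Median repetition latency pooled over all objects."""
    pooled = sorted(g for gs in repetition_latencies(sequence).values() for g in gs)
    n = len(pooled)
    return (pooled[(n - 1) // 2] + pooled[n // 2]) / 2
```

Comparing this statistic between community-structured and rewired sequences is one way to reproduce the contrast reported in Supplemental Figure S8.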

17.
Animal Cognition - The spontaneous object recognition (SOR) task is a versatile and widely used memory test that was only recently established in nonhuman primates (marmosets). Here, we extended...  相似文献   

18.
These experiments investigated the involvement of several temporal lobe regions in the consolidation of recognition memory. Anisomycin, a protein synthesis inhibitor, was infused into the hippocampus, perirhinal cortex, insular cortex, or basolateral amygdala of rats immediately after the sample phase of object or object-in-context recognition memory training. Anisomycin infused into the perirhinal or insular cortices blocked long-term (24 h), but not short-term (90 min), object recognition memory. Infusions into the hippocampus or amygdala did not impair object recognition memory. Anisomycin infused into the hippocampus blocked long-term, but not short-term, object-in-context recognition memory, whereas infusions into the perirhinal cortex, insular cortex, or amygdala did not affect object-in-context recognition memory. These results clearly indicate that distinct regions of the temporal lobe are differentially involved in long-term object and object-in-context recognition memory. Whereas the perirhinal and insular cortices are required for consolidation of the memory of familiar objects, the hippocampus is necessary for consolidation of the contextual information of recognition memory. Altogether, these results suggest that temporal lobe structures are differentially involved in recognition memory consolidation.  相似文献

19.
Responses to targets that appear at a noncued position within the same object (invalid–same) compared to a noncued position at an equidistant different object (invalid–different) tend to be faster and more accurate. These cueing effects have been taken as evidence that visual attention can be object based (Egly, Driver, & Rafal, Journal of Experimental Psychology: General, 123, 161–177, 1994). Recent findings, however, have shown that the object-based cueing effect is influenced by object orientation, suggesting that the cueing effect might be due to a more general facilitation of attentional shifts across the horizontal meridian (Al-Janabi & Greenberg, Attention, Perception, & Psychophysics, 1–17, 2016; Pilz, Roggeveen, Creighton, Bennet, & Sekuler, PLOS ONE, 7, e30693, 2012). The aim of this study was to investigate whether the object-based cueing effect is influenced by object similarity and orientation. According to the object-based attention account, objects that are less similar to each other should elicit stronger object-based cueing effects independent of object orientation, whereas the horizontal meridian theory would not predict any effect of object similarity. We manipulated object similarity by using a color (Exp. 1, Exp. 2A) or shape change (Exp. 2B) to distinguish two rectangles in a variation of the classic two-rectangle paradigm (Egly et al., 1994). We found that the object-based cueing effects were influenced by the orientation of the rectangles and strengthened by object dissimilarity. We suggest that object-based cueing effects are strongly affected by the facilitation of attention along the horizontal meridian, but that they also have an object-based attentional component, which is revealed when the dissimilarity between the presented objects is accentuated.  相似文献   

20.
Face recognition was investigated in a successive comparison task. Subjects were required to make same/different judgments about pairs of Photo-fit faces that were either identical or differed by a single feature. Picture information extraction and retention were examined by manipulating stimulus delay and exposure duration. Results indicated that overall performance was better for the top of the face. The eyes and mouth were more vulnerable than the rest of the face to recognition decrement after a delay, possibly due to their role in facial expression. When features were ranked in order of processing difficulty for each subject, it appeared that features were processed serially and that delay affected a retrieval stage, while short exposure affected a visual comparison stage of processing. For the feature ranks, a single dimension of “salience” appeared to be both perceptual and mnemonic.  相似文献
