Similar documents
 20 similar documents found (search time: 31 ms)
1.
In this study the ability of newborn infants to learn arbitrary auditory–visual associations in the absence versus presence of amodal (redundant) and contingent information was investigated. In the auditory-noncontingent condition 2-day-old infants were familiarized to two alternating visual stimuli (differing in colour and orientation), each accompanied by its ‘own’ sound: when the visual stimulus was presented the sound played continuously, regardless of whether the infant looked at the visual stimulus. In the auditory-contingent condition the auditory stimulus was presented only when the infant looked at the visual stimulus: thus, presentation of the sound was contingent upon infant looking. On the post-familiarization test trials attention recovered strongly to a novel auditory–visual combination in the auditory-contingent condition, but remained low, and indistinguishable from attention to the familiar combination, in the auditory-noncontingent condition. These findings clearly demonstrate that newborn infants’ learning of arbitrary auditory–visual associations is constrained and guided by the presence of redundant (amodal) contingent information, and they lend strong support to Bahrick’s theory of early intermodal perception.

2.
To further the understanding of the relations among sociocognitive abilities and social behavior, the current study examined theory of mind (ToM), social information processing (SIP), and prosocial behavior in 116 preschoolers (M age = 58.88 months) in Turkey. False belief tasks were used to test ToM, and cartoons were used to assess SIP patterns. Prosocial behavior was measured with mother reports and individual assessments. ToM was not related to the attribution of intent and was the only sociocognitive predictor of prosocial behavior, but only in boys. Results also pointed to sex differences in levels of sociocognitive development: girls showed greater ToM and more non-hostile attribution. The findings imply that SIP patterns may be less closely related to positive than to antisocial behaviors, and that understanding others’ minds may be less necessary for positive acts in Turkish girls, who may learn to engage in such behavior more strongly as part of their gender role.

3.
A chimpanzee acquired an auditory–visual intermodal matching-to-sample (AVMTS) task in which, following the presentation of a sample sound, the subject had to select from two alternatives a photograph that corresponded to the sample. The acquired AVMTS performance might shed light on intermodal cognition, one of the least understood aspects of chimpanzee cognition. The first aim of this paper was to describe the training process for the task. The second aim was to describe, through a series of experiments, the features of the chimpanzee's AVMTS performance in comparison with results obtained in a visual intramodal matching task, in which a visual stimulus alone served as the sample. The results show that the acquisition of AVMTS was facilitated by the alternation of auditory presentation and audio-visual presentation (i.e., the sample sound together with a visual presentation of the object producing that sound). Once AVMTS performance was established for a limited number of stimulus sets, the subject showed rapid transfer of the performance to novel sets. However, the subject showed a steep decay of matching performance as a function of the delay interval between the sample and the choice-alternative presentations when the sound alone, but not the visual stimulus alone, served as the sample. This might suggest a cognitive limitation of the chimpanzee in auditory-related tasks. Accepted after revision: 11 September 2001.

4.
The aim of this paper was twofold: (1) to display the various competencies of the infant's hands for processing information about the shape of objects; and (2) to show that the infant's haptic mode shares some common mechanisms with the visual mode. Several experiments on infants from birth up to five months of age, using a habituation/dishabituation procedure, an intermodal transfer task between touch and vision, and various cognitive tasks, revealed that infants may perceive and understand the physical world through their hands without visual control. From birth, infants can habituate to shape and detect discrepancies between shapes. But information exchanges between vision and touch are partial in cross-modal transfer tasks. Plausibly, modal specificities such as discrepancies in information gathering between the two modalities and the different functions of the hands (perceptual and instrumental) limit the links between the visual and haptic modes. In contrast, when infants abstract information from an event not totally felt or seen, amodal mechanisms underlie haptic and visual knowledge in early infancy. Despite various discrepancies between the sensory modes, conceiving the world is possible with hands as with eyes.

5.
The present study investigated whether training on an identity match-to-sample task followed by non-reinforced matching probes with complex stimuli leads to the emergence of multiple arbitrary matching performances and arbitrary stimulus classes in preschool children. In Experiment 1, eight subjects were trained on a colour-matching task (A-A). Then they received tests with complex AB and AC colour-form stimuli (AB-A, B-AB; AC-A, C-AC). These tasks were designed to help subjects respond to both elements of each complex stimulus. Subsequent B-A, C-A, A-B, A-C, B-C, and C-B tests revealed that all subjects had acquired class-consistent colour-form and form-form relations. Experiment 2 examined whether these results could be replicated when subjects were encouraged to respond to the colour elements of some (AB) complex stimuli and to the form elements of other (AC) stimuli. The procedures were the same as in Experiment 1 except that during the first test only A-AB and C-AC tasks were used. Six of eight subjects demonstrated all tested relations.

6.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
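The switching cost reported in this abstract reduces to a simple contrast: mean response latency on trials where the matching pair's modalities switched versus trials where they repeated. A minimal sketch of that contrast, with entirely hypothetical latencies for illustration:

```python
from statistics import mean

def switch_cost(trials):
    """trials: list of (rt_ms, switched) pairs, where `switched`
    marks trials on which the modality of the matching location
    changed from the penultimate to the final target.
    Returns mean RT on switch trials minus mean RT on repeat trials."""
    switch = [rt for rt, s in trials if s]
    repeat = [rt for rt, s in trials if not s]
    return mean(switch) - mean(repeat)

# Hypothetical latencies (ms): modality switches tend to be slower
data = [(520, True), (540, True), (470, False), (480, False)]
print(switch_cost(data))  # switch trials are slower on average
```

A positive value indicates a switching cost, as in the abstract's mixed-modality trials.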

7.
Kim, J., Davis, C., & Krins, P. (2004). Cognition, 93(1), B39–B47.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

8.
Ninety-six infants of 3 1/2 months were tested in an infant-control habituation procedure to determine whether they could detect three types of audio-visual relations in the same events. The events portrayed two amodal invariant relations, temporal synchrony and temporal microstructure specifying the composition of the objects, and one modality-specific relation, that between the pitch of the sound and the color/shape of the objects. Subjects were habituated to two events accompanied by their natural, synchronous, and appropriate sounds and then received test trials in which the relation between the visual and the acoustic information was changed. Consistent with Gibson's increasing specificity hypothesis, it was expected that infants would differentiate amodal invariant relations prior to detecting arbitrary, modality-specific relations. Results were consistent with this prediction, demonstrating significant visual recovery to a change in temporal synchrony and temporal microstructure, but not to a change in the pitch-color/shape relations. Two subsequent discrimination studies demonstrated that infants' failure to detect the changes in pitch-color/shape relations could not be attributed to an inability to discriminate the pitch or the color/shape changes used in Experiment 1. Infants showed robust discrimination of the contrasts used.
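The infant-control habituation procedure used here (and in several of the studies above) typically ends familiarization once looking time falls below some fraction of its initial level. A minimal sketch of one such criterion, where the 50% threshold and the three-trial window are illustrative assumptions rather than these authors' exact parameters:

```python
def habituated(looking_times, window=3, ratio=0.5):
    """Return True once mean looking time over the most recent
    `window` trials falls below `ratio` times the mean of the
    first `window` trials (an illustrative habituation criterion)."""
    if len(looking_times) < 2 * window:
        return False  # not enough trials to compare baseline vs recent
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < ratio * baseline

# Example: looking times (s) declining across familiarization trials
trials = [12.0, 10.5, 11.0, 7.0, 5.0, 4.0]
print(habituated(trials))
```

In an infant-control procedure this check would be run after every trial, with test trials beginning once it returns True.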

9.
Five groups of Ss were tested under conditions of intra- and intermodal equivalence matching for free-form unfamiliar shapes originally designed by Gibson. Findings indicated that visual intramodal matching was superior to intermodal matching, a result consistent with previous research. The order of accuracy in forming equivalence was: (1) intramodal visual, (2) intramodal haptic, (3) haptic to visual, (4) visual to haptic. A difference in accuracy, though not a significant one, occurred for intramodal haptic matching between conditions in which Ss wore goggles and conditions in which they did not.

10.
The ability to recognize familiar individuals with different sensory modalities plays an important role in animals living in complex physical and social environments. Individual recognition of familiar individuals was studied in a female chimpanzee named Pan. In previous studies, Pan learned an auditory–visual intermodal matching task (AVIM) consisting of matching vocal samples with the facial pictures of corresponding vocalizers (humans and chimpanzees). The goal of this study was to test whether Pan was able to generalize her AVIM ability to new sets of voice and face stimuli, including those of three infant chimpanzees. Experiment 1 showed that Pan performed intermodal individual recognition of familiar adult chimpanzees and humans very well. However, individual recognition of infant chimpanzees was poorer relative to recognition of adults. A transfer test with new auditory samples (Experiment 2) confirmed the difficulty in recognizing infants. A remaining question was what kind of cues were crucial for the intermodal matching. We tested the effect of visual cues (Experiment 3) by introducing new photographs representing the same chimpanzees in different visual perspectives. Results showed that only the back view was difficult to recognize, suggesting that facial cues can be critical. We also tested the effect of auditory cues (Experiment 4) by shortening the length of auditory stimuli, and results showed that 200 ms vocal segments were the limit for correct recognition. Together, these data demonstrate that auditory–visual intermodal recognition in chimpanzees might be constrained by the degree of exposure to different modalities and limited to specific visual cues and thresholds of auditory cues.

11.
Stimulus equivalence‐based instruction (EBI) was used to teach young children of typical development three 4‐member equivalence classes containing contact information from three caregivers (e.g., mother, father, and grandmother). Each class comprised the caregiver's (a) photograph, (b) printed name, (c) printed phone number, and (d) printed name of employer. A pretest–train–posttest–maintenance design with a nontreatment control group comparison was used. Pretests and posttests assessed the degree to which class‐consistent responding occurred across both visual–visual matching tasks and intraverbals. Intraverbal responding was also probed with a novel instructor. Overall, EBI participants scored significantly higher during the posttests than the control participants across both the derived relations and intraverbal tests. These differences were maintained 2 weeks later. Thus, responding generalized to (a) a different topography (i.e., intraverbal), (b) auditory versions of the stimuli, and (c) the presence of a novel instructor. How such procedures may benefit lost children is discussed.

12.
Until now, the equivalence property of reflexivity—matching physically identical stimuli to themselves after training on a set of arbitrary matching relations—has not been demonstrated in any animal, human or nonhuman. Previous reports of reflexivity have either implicitly or explicitly involved reinforced training on other identity matching relations. Here we demonstrate reflexivity without prior identity matching training. Pigeons received concurrent successive matching training on three arbitrary matching tasks: AB (hue–form), BC (form–hue), and AC (hue–hue with different hues in the A and C sets). Afterwards, pigeons were tested for BB (form–form) reflexivity. Consistent with the predictions of Urcuioli's (2008) theory, pigeons preferentially responded to B comparison stimuli that matched the preceding B sample stimuli in testing (i.e., BB reflexivity). A separate experiment showed that a slightly different set of arbitrary matching baseline relations yielded a theoretically predicted "anti-reflexivity" (or emergent oddity) effect in two of five pigeons. Finally, training on just two arbitrary successive matching tasks (AB and BC) did not yield any differential BB responding in testing for five of eight pigeons, with two others showing reflexivity and one showing anti-reflexivity. These data complement previous findings of symmetry and transitivity (the two other properties of equivalence) in pigeons.
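The emergent relations tested across these equivalence studies (reflexivity, symmetry, transitivity, and combined relations such as BD or CE) can be formalized as the closure of the trained sample–comparison pairs under those three properties. A hypothetical sketch using a plain pair-set representation:

```python
def equivalence_closure(trained):
    """Close a set of trained (sample, comparison) pairs under
    reflexivity, symmetry, and transitivity -- the three defining
    properties of stimulus equivalence."""
    rel = set(trained)
    stimuli = {s for pair in trained for s in pair}
    changed = True
    while changed:
        changed = False
        new = {(s, s) for s in stimuli}             # reflexivity
        new |= {(b, a) for (a, b) in rel}           # symmetry
        new |= {(a, c) for (a, b) in rel            # transitivity
                       for (b2, c) in rel if b == b2}
        if not new <= rel:
            rel |= new
            changed = True
    return rel

# Trained baseline: A->B and B->C (a typical two-task baseline)
derived = equivalence_closure({("A", "B"), ("B", "C")})
print(("B", "A") in derived)  # symmetry
print(("A", "C") in derived)  # transitivity
```

On this formalization, a subject that forms an equivalence class responds class-consistently on every pair in the closure, not just the trained ones.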

13.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

14.
15.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code—visual for unfamiliar objects but visual and verbal for familiar objects; or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

16.
The development of generalized conditional discrimination skills was examined in adults with retardation. Two subjects with histories of failure to acquire arbitrary matching under trial-and-error procedures were successful under procedures that trained one or more prerequisite skills. The successive discrimination between the sample stimuli was established by training the subjects to name the stimuli. The simultaneous discrimination between the comparison stimuli was established using either (a) standard simple discrimination training with reversals or (b) a procedure in which each of the two sample-comparison relations in the conditional discrimination was presented in blocks of trials, with the size of the blocks decreasing gradually until sample presentation was randomized. The amount of prerequisite training required varied across subjects and across successive conditional discriminations. After acquiring either two or three conditional discriminations with component training, both subjects learned new conditional discriminations under trial-and-error procedures. In general, each successive conditional discrimination was acquired more rapidly. Tests showed that conditional responding had become a generalized skill. Symmetry was shown for almost all trained relations. Symmetry trial samples were ultimately named the same as the stimuli to which they were related in training.


18.
This study examined the development of infants' ability to perceive, learn, and remember the unique face-voice relations of unfamiliar adults. Infants of 2, 4, and 6 months were habituated to the faces and voices of 2 same-gender adults speaking and then received test trials where the faces and voices were synchronized yet mismatched. Results indicated that 4- and 6-month-olds, but not 2-month-olds, detected the change in face-voice pairings. Two-month-olds did, however, discriminate among the faces and voices in a control study. Results of a subsequent intermodal matching procedure indicated that only the 6-month-olds showed matching and memory for the face-voice relations. These findings suggest that infants' ability to detect the arbitrary relations between specific faces and voices of unfamiliar adults emerges between 2 and 4 months of age, whereas matching and memory for these relations emerges somewhat later, perhaps between 4 and 6 months of age.

19.
Research reported here concerns neural processes relating to stimulus equivalence class formation. In Experiment 1, two types of word pairs were presented successively to normally capable adults. In one type, the words had related usage in English (e.g., uncle, aunt). In the other, the two words were not typically related in their usage (e.g., wrist, corn). For pairs of both types, event-related cortical potentials were recorded during and immediately after the presentation of the second word. The obtained waveforms differentiated these two types of pairs. For the unrelated pairs, the waveforms were significantly more negative about 400 ms after the second word was presented, thus replicating the "N400" phenomenon of the cognitive neuroscience literature. In addition, there was a strong positive-tending waveform difference post-stimulus presentation (peaking at about 500 ms) that also differentiated the unrelated from the related stimulus pairs. In Experiment 2, the procedures were extended to study arbitrary stimulus–stimulus relations established via matching-to-sample training. Participants were experimentally naïve adults. Sample stimuli (Set A) were trigrams, and comparison stimuli (Sets B, C, D, E, and F) were nonrepresentative forms. Behavioral tests evaluated potentially emergent equivalence relations (i.e., BD, DF, CE, etc.). All participants exhibited classes consistent with the arbitrary matching training. They were also exposed to an event-related potential procedure like that used in Experiment 1. Some received the ERP procedure before the equivalence tests and some after. Only those participants who received the ERP procedure after the equivalence tests exhibited robust N400 differentiation initially. The positivity observed in Experiment 1 was absent for all participants. These results support speculations that equivalence tests may provide contextual support for the formation of equivalence classes, including those that emerge gradually during testing.

20.
The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号