Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
Implicit statistical learning (ISL) is exclusive to neither a particular sensory modality nor a single domain of processing. Even so, differences in perceptual processing may substantially affect learning across modalities. In three experiments, statistically equivalent auditory and visual familiarizations were presented under different timing conditions that either facilitated or disrupted temporal processing (fast or slow presentation rates). We find an interaction of rate and modality of presentation: At fast rates, auditory ISL was superior to visual. However, at slow presentation rates, the opposite pattern of results was found: Visual ISL was superior to auditory. Thus, we find that changes to presentation rate differentially affect ISL across sensory modalities. Additional experiments confirmed that this modality-specific effect was not due to cross-modal interference or attentional manipulations. These findings suggest that ISL is rooted in modality-specific, perceptually based processes.
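The familiarization streams typical of ISL studies are built from fixed subsequences (e.g., syllable triplets) whose internal transitions are far more predictable than the transitions between them. The sketch below illustrates this structure with a hypothetical syllable inventory (not the study's actual stimuli): it concatenates triplets into a stream and computes the transitional probabilities that learners are assumed to track.

```python
import random
from collections import Counter

# Hypothetical triplet inventory (illustrative only; not the study's stimuli).
TRIPLETS = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]

def make_stream(n_triplets=300, seed=0):
    """Concatenate randomly ordered triplets into a familiarization stream,
    avoiding immediate triplet repetition as is common in SL designs."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_triplets):
        t = rng.choice(TRIPLETS)
        while t is prev:
            t = rng.choice(TRIPLETS)
        prev = t
        stream.extend(t)
    return stream

def transitional_probabilities(stream):
    """P(next | current) for each adjacent element pair in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

tps = transitional_probabilities(make_stream())
# Within-triplet transitions are deterministic (TP = 1.0), while
# between-triplet transitions are split across continuations (TP < 1.0);
# this contrast is the statistical regularity available to learners.
```

The same stream-construction logic applies whether the elements are tones, syllables, or visual shapes, which is what makes statistically equivalent auditory and visual familiarizations possible.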

3.
Abstract

Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.

4.
《Acta psychologica》2013,143(1):58-64
The current study investigates whether probabilistic categorization on the Weather Prediction task involves a single, modality/domain-general learning mechanism or whether there are modality/domain differences. The same probabilistic categorization task was used in three modalities/domains and two modes of presentation. Cues consisted of visual, auditory-verbal or auditory-nonverbal stimuli, and were presented either sequentially or simultaneously. Results show that while there was no general difference in performance across modalities/domains, the mode of presentation affected them differently. In the visual modality, simultaneous presentation had a general advantage over sequential presentation, while in the auditory conditions, there was an initial advantage of simultaneous presentation, which disappeared and, in the non-verbal condition, gave way to a sequential advantage in the later stages of learning. Data suggest that there are strong peripheral modality effects; however, there are no signs of the modality/domain of the stimuli centrally affecting categorization.
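The defining feature of the Weather Prediction task is that each cue pattern predicts the outcome only probabilistically, so even an ideal learner cannot reach perfect accuracy. A minimal sketch, with hypothetical cue patterns and probabilities (the actual task uses 14 patterns of four cues with its own probability table):

```python
import random

# Hypothetical cue-pattern -> P(outcome = "rain") table (illustrative values).
PATTERN_P_RAIN = {
    (1, 0, 0, 0): 0.8,
    (0, 1, 0, 0): 0.6,
    (0, 0, 1, 0): 0.4,
    (0, 0, 0, 1): 0.2,
}

def run_trials(n_trials=200, seed=1):
    """Simulate trials and score an ideal observer that always predicts
    the more likely outcome for the presented cue pattern."""
    rng = random.Random(seed)
    patterns = list(PATTERN_P_RAIN)
    correct = 0
    for _ in range(n_trials):
        pattern = rng.choice(patterns)
        p_rain = PATTERN_P_RAIN[pattern]
        outcome = "rain" if rng.random() < p_rain else "sun"
        prediction = "rain" if p_rain >= 0.5 else "sun"
        correct += prediction == outcome
    return correct / n_trials

accuracy = run_trials()
# With these illustrative probabilities, the ideal observer's expected
# accuracy is about 0.7, well below ceiling, because outcomes are stochastic.
```

Because the probabilistic structure lives entirely in the cue-outcome table, the same task can be instantiated with visual, auditory-verbal, or auditory-nonverbal cues, presented sequentially or simultaneously, as in the study above.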

5.
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10–12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

6.
ABSTRACT

The testing effect refers to improved memory after retrieval practice and has been researched primarily with visual stimuli. In two experiments, we investigated whether the testing effect can be replicated when the to-be-learned information is presented auditorily, or visually + auditorily. Participants learned Swahili-English word pairs in one of three presentation modalities – visual, auditory, or visual + auditory. This was manipulated between-participants in Experiment 1 and within-participants in Experiment 2. All participants studied the word pairs during three study trials. Half of the participants practiced recalling the English translations in response to the Swahili cue word twice before the final test, whereas the other half simply studied the word pairs twice more. Results indicated an improvement in final test performance in the repeated test condition, but only in the visual presentation modality (Experiments 1 and 2) and in the visual + auditory presentation modality (Experiment 2). This suggests that the benefits of practiced retrieval may be limited to information presented in a visual modality.

7.
Parr LA 《Animal cognition》2004,7(3):171-178
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

8.

How do people automatize their dual-task performance through bottleneck bypassing (i.e., accomplish parallel processing of the central stages of two tasks)? In the present work we addressed this question, evaluating the impact of sensory–motor modality compatibility—the similarity in modality between the stimulus and the consequences of the response. We hypothesized that incompatible sensory–motor modalities (e.g., visual–vocal) create conflicts within modality-specific working memory subsystems, and therefore predicted that tasks producing such conflicts would be performed less automatically after practice. To probe for automaticity, we used a transfer psychological refractory period (PRP) procedure: Participants were first trained on a visual task (Exp. 1) or an auditory task (Exp. 2) by itself, which was later presented as Task 2, along with an unpracticed Task 1. The Task 1–Task 2 sensory–motor modality pairings were either compatible (visual–manual and auditory–vocal) or incompatible (visual–vocal and auditory–manual). In both experiments we found converging indicators of bottleneck bypassing (small dual-task interference and a high rate of response reversals) for compatible sensory–motor modalities, but indicators of bottlenecking (large dual-task interference and few response reversals) for incompatible sensory–motor modalities. Relatedly, the proportion of individuals able to bypass the bottleneck was high for compatible modalities but very low for incompatible modalities. We propose that dual-task automatization is within reach when the tasks rely on codes that do not compete within a working memory subsystem.

9.
Language processing always involves a combination of sensory (auditory or visual) and motor modalities (vocal or manual). In line with embodied cognition theories, we additionally assume a semantically implied modality (SIM) due to modality references of the underlying concept. Understanding ear-related words (e.g. “noise”), for example, should activate the auditory SIM. In the present study, we investigated the influence of the SIM on sensory-motor modality switching (e.g. switching between the auditory-vocal and visual-manual combination). During modality switching, participants categorised words with regard to their SIM (e.g. ear- versus eye-related words). Overall performance was improved and switch costs were reduced whenever there was concordance between SIMs and sensory-motor modalities (e.g. an auditory presentation of ear-related words). Thus, the present study provides the first evidence for semantic effects during sensory-motor modality switching, in terms of facilitation effects whenever the SIM was in concordance with the sensory-motor modalities.

10.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

11.
Analogous auditory and visual central-incidental learning tasks were administered to 24 second-, fourth-, and sixth-grade and college-age subjects to study the effects of modality of presentation on memory for central and incidental stimulus materials. There was no strong evidence to indicate that modality of presentation was an important factor in the development of selective attention. Central task learning increased with age for both auditory and visual presentations; incidental learning declined at the oldest age level for both auditory and visual tasks. The serial position analysis revealed that the observed developmental increase in recall performance was due primarily to differences in the initial serial positions. The use of active strategies for focusing attention on the relevant stimulus materials seemed to be the crucial determinant of level of performance.

12.
Repeating temporal patterns were presented in the auditory and visual modalities so that: (a) all elements were of equal intensity and were equally spaced in time (uniform presentation); (b) the intensity of one element was increased (accent presentation); or (c) the interval between two elements was increased (pause presentation). Intensity and interval patterning serve to segment the element sequence into repeating patterns.

For uniform presentation, pattern organization was by pattern structure, with auditory identification being faster. For pause presentation, organization was by the pauses; both auditory and visual identification were twice as fast as for uniform presentation. For auditory accent presentation, organization was by pattern structure and identification was slower than for uniform presentation. In contrast, the organization of visual accent presentation was by accents and identification was faster than for uniform presentation. These results suggest that complex stimuli, in which elements are patterned along more than one sensory dimension, are perceptually unique and therefore their identification rests on the nature of each modality.

13.
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames (1988) model, where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that there is weaker learning in the visual modality than in the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change over this age range. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, we find that auditory SL precedes the development of visual SL, consistent with recent work comparing SL across modalities in older children.

14.
The present study investigated the effects of presentation modality on the verbal learning performance of 26 older adults and 26 younger counterparts. A multitrial free-recall paradigm was implemented incorporating three modalities: auditory, visual, and simultaneous auditory plus visual. Older subjects learned fewer words than younger subjects, but their rate of learning was similar to that of the younger group. The visual presentation of objects (with or without the simultaneous auditory presentation of names) resulted in better learning, recall, and retrieval of information than the auditory presentation alone.

15.
The retention interval hypothesis (Izawa, 1972) was formulated on the basis of retention interval distributions given in random order to account for inconsistent performance differences between anticipation and study-test procedures, which have been regarded as a puzzle for nearly two decades. Under the anticipation procedure, a presentation of the stimulus term of a paired associate (a test event) is immediately followed by a presentation of both stimulus and response terms of the pair (a study event), whereas under the study-test procedure, study and test events are given on separate cycles. The retention interval hypothesis was specifically tested in a situation without any random distributions of the retention intervals, i.e., under a constant item presentation order from cycle to cycle, in addition to random presentation orders, in a paired-associate learning experiment using 80 college students. As predicted by the theory, significantly superior performance was obtained for the study-test method vis-à-vis the anticipation method also under a constant item presentation order. The constant presentation-order arrangements significantly outperformed the random ones. Directions for further theoretical elaborations are suggested.
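The difference between the two procedures is purely one of event scheduling, which a short sketch makes concrete (item labels are hypothetical): under anticipation, each item's test is immediately followed by its study event, while under study-test, a full study cycle is followed by a full test cycle, so each item's retention interval spans the other items.

```python
def anticipation_schedule(pairs, cycles=2):
    """Anticipation: test (stimulus alone) immediately followed by
    study (stimulus + response) for each item, every cycle."""
    events = []
    for _ in range(cycles):
        for stim, resp in pairs:
            events.append(("test", stim))
            events.append(("study", stim, resp))
    return events

def study_test_schedule(pairs, cycles=2):
    """Study-test: study events and test events occur on separate cycles,
    so other items intervene between an item's study and its test."""
    events = []
    for _ in range(cycles):
        for stim, resp in pairs:
            events.append(("study", stim, resp))
        for stim, _ in pairs:
            events.append(("test", stim))
    return events

pairs = [("A", "1"), ("B", "2")]  # hypothetical paired associates
```

Holding the item order constant from cycle to cycle, as in the experiment above, fixes each item's retention interval instead of letting it vary with random reordering.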

16.
Abstract

The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the ‘target’ item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

17.
Methodological biases may help explain the modality effect, which is superior recall of auditory recency (end-of-list) items relative to visual recency items. In 1985, Nairne and McNabb used a counting procedure to reduce methodological biases, and they produced modality-like effects, such that recall of tactile recency items was superior to recall of visual recency items. The present study extended Nairne and McNabb's counting procedure and controlled several variables which may have enhanced recall of tactile end items or disrupted recall of visual end items in their study. Although the results of the present study indicated general serial position effects across tactile, visual, and auditory presentation modalities, the tactile condition showed lower recall for the initial items in the presentation list than the other two conditions. Moreover, recall of the final list item did not differ across the three presentation modalities; modality effects were not found. These results did not replicate the findings of Nairne and McNabb, or much of the past research showing superior recall of auditory recency items. Implications of these findings are discussed.

18.
Conway and Gathercole [(1990). Writing and long-term memory: Evidence for a “translation” hypothesis. The Quarterly Journal of Experimental Psychology, 42, 513–527] proposed a translation account to explain why certain types of encoding produce benefits in memory: Switching modalities from what is presented to what is encoded enhances item distinctiveness. We investigated this hypothesis in a recognition experiment in which the presentation modality of a study list (visual vs. auditory) and the encoding activity (speaking vs. typing vs. passive encoding) were manipulated between-subjects. Manipulating encoding activity between-subjects ruled out any potential influence of the relationally distinct processing that can occur in a within-subject manipulation (in which all previous translation effects have been demonstrated). We found no overall difference in memory for words presented auditorily vs. visually, nor for visual vs. auditory encoding, but critically, presentation modality and encoding activity did interact. Translating from one modality to another – particularly from auditory presentation to visual encoding (typing) – led to the best memory discrimination. This was largely because of reduced false alarms, not increased hits, consistent with the distinctiveness heuristic.

19.
We investigated the effects of learning schedule and multi-modality stimulus presentation on foreign language vocabulary learning. In Experiment 1, participants learned German vocabulary words utilizing three learning methods that were organized either in a blocked or interleaved fashion. We found interleaving with the keyword mnemonic and rote study advantageous over blocking, but retrieval practice was better served by a blocked schedule. It is likely that the excessively delayed feedback for retrieval practice in the interleaved practice schedule impeded learning, while the spacing involved in the interleaved schedule enhanced learning in the keyword mnemonic and rote study. In Experiment 2, we examined whether a multi-modality stimulus presentation from visual and auditory channels is better suited for aiding learning than a visual presentation condition. We found benefits of multi-modality presentation only for the keyword mnemonic condition, presumably because the nature of the keyword mnemonic, involving sound and visualization, was particularly relevant with the multi-modality presentation. The present study suggests that optimal foreign language learning environments should incorporate learning schedules and multimedia presentations based on specific learning methods and materials.

20.
邢强  吴潇  王家慰  张忠炉 《心理学报》2021,53(10):1059-1070
Proficient Cantonese-Mandarin bidialectal participants with different perceptual learning styles completed a stimulus-naming task under different modality presentation conditions, in order to examine how the match between perceptual learning style and presentation modality affects the language-switching cost of proficient bidialectals. Results showed that switch costs were smaller under visual-cue presentation than under auditory-cue presentation, and that switch costs were lower when perceptual learning style matched the presentation modality. These findings indicate that the match between perceptual learning style and modality of presentation modulates bidialectal switching costs.
