Similar Articles
20 similar articles found (search time: 31 ms)
1.
An experiment is reported in which the subject read visually presented lists with four different degrees of vocalization; immediately after reading each list he was required to reproduce it either aloud or in writing. Each list consisted of eight consonants and presentation rates were varied between 1 and 4 letters per sec. For any given series of lists, the subject was asked either to read the letters silently, or to mouth them silently, or to whisper them, or to say them aloud while reading.

At the fastest presentation-rate immediate recall improved monotonically with the degree of vocalization during reading of the lists; at slower rates this generalization held less well, especially for the lower degrees of vocalization. Vocalization was most helpful at the highest presentation-rate.

The overall amount correctly recalled was better for more slowly presented lists and for written as opposed to spoken recall. Analysis of the errors suggested that acoustic confusions were affected by the conditions of presentation, and that serial order intrusions were independent of presentation or recall conditions. An apparent variation of transpositions with voicing and recall method failed to reach statistical significance. Theoretical implications of the experiment are discussed, including reference to Broadbent's theory of short-term memory (1958).

2.
A dual-task paradigm is used to investigate whether the auditory input logogen is distinct from the articulatory output logogen. In the first two experiments it is shown that the task of detecting an unspecified name in an auditory input stream can be combined with reading aloud visually presented words with relatively little single- to dual-task decrement. The stimuli for both tasks are independent streams of random words presented at rapid rates. A series of control experiments suggests that the first task places a considerable information processing load on the auditory input logogen, the second a considerable load on the phonological output logogen, and that subjects do not switch between the two tasks. The fact that the two tasks can be combined with ease is therefore interpreted as supporting the view that the systems underlying reading aloud and listening are separate. The ease of performance when the input streams are in different modalities, compared to the difficulties when they are in the same, has implications for general models of attention.

3.
Since the 19th century, it has been known that response latencies are longer for naming pictures than for reading words aloud. While several interpretations have been proposed, a common general assumption is that this difference stems from cognitive word-selection processes and not from articulatory processes. Here we show that, contrary to this widely accepted view, articulatory processes are also affected by the task performed. To demonstrate this, we used a procedure that to our knowledge had never been used in research on language processing: response-latency fractionating. Along with vocal onsets, we recorded the electromyographic (EMG) activity of facial muscles while participants named pictures or read words aloud. On the basis of these measures, we were able to fractionate the verbal response latencies into two types of time intervals: premotor times (from stimulus presentation to EMG onset), mostly reflecting cognitive processes, and motor times (from EMG onset to vocal onset), related to motor execution processes. We showed that premotor and motor times are both longer in picture naming than in reading, although articulation is already initiated in the latter measure. Future studies based on this new approach should bring valuable clues for a better understanding of the relation between the cognitive and motor processes involved in speech production.
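The fractionation procedure described in this abstract reduces to simple arithmetic over three timestamps per trial. A minimal sketch of that split, assuming per-trial timestamps in milliseconds (the function name and the trial values are illustrative, not taken from the study):

```python
def fractionate(stim_t, emg_onset_t, vocal_onset_t):
    """Split a vocal response latency into its two components.

    premotor: stimulus presentation -> EMG onset (mostly cognitive processes)
    motor:    EMG onset -> vocal onset (motor execution processes)
    """
    premotor = emg_onset_t - stim_t
    motor = vocal_onset_t - emg_onset_t
    return premotor, motor

# Hypothetical trial: stimulus at 0 ms, EMG onset at 450 ms, vocal onset at 600 ms
premotor, motor = fractionate(0, 450, 600)
```

The two components necessarily sum to the conventional vocal response latency, which is why the technique is a fractionation of that latency rather than a new dependent measure.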

4.
It is well known that an acoustic-sensory code supports retention of linguistic materials whose storage is particularly based on phonological information (e.g., unrelated word lists). The present study investigates whether such a code also contributes to the retention of sentences. It has been shown that short-term sentence recall particularly depends on propositional and lexicosemantic information, which are assumed to be supplied independently of modality influences. We employed the intrusion paradigm of Potter and Lombardi (1990) and manipulated the availability of acoustic-sensory information. Participants were instructed to read sentences either silently or aloud. Since these two reading conditions also differ with respect to articulatory information, a further condition that provided articulatory but not acoustic-sensory information was introduced (i.e., silent mouthing). Our data suggest that acoustic-sensory information is used, if available, even in sentence recall.

5.
We present sentence reading data from a large-scale study with children (N = 632), focusing on three key research questions. (1) What are the trajectories of reading development in oral as compared to silent reading? (2) How are word frequency effects developing and are changes differentially affected by reading mode? (3) Are there systematic differences between better and weaker comprehenders when reading silently vs. aloud? Results illuminate a number of differences between reading modes, including more and prolonged fixations in oral reading, along with fewer inter-word regressions and attenuated effects of word frequency. Weaker comprehenders were slower, especially in oral reading, and showed less flexibility in the allocation of word processing time. Differences between reading modes can be explained by additional processing demands imposed by concurrent articulation and eye–voice coordination when reading aloud.

6.
This study was conducted to provide an indirect test of Wingate's “modified vocalization” hypothesis. In this formulation, the improved fluency that stutterers experience in various novel conditions is attributed to changes in the key correlates of stress, namely, fundamental frequency, vocal SPL, and rate. Normal speakers and stutterers read aloud in an habitual condition following instructions to read at higher- and lower-than-normal pitches. Objective measures were taken of subjects' fundamental frequency, fundamental frequency deviation, vocal SPL, and fluent reading rate. Disfluencies were also counted. Findings showed that both stutterers and normals altered several features of voicing from the habitual to the two experimental conditions. Significant condition main effects emerged for fundamental frequency deviation, vocal SPL, fluent reading rate, and disfluency. The only meaningful between-group difference noted showed that the stutterers were more disfluent than the normals across all conditions. Results were interpreted as supporting Wingate's “modified vocalization” position and were discussed relative to how the vocal changes observed might act to promote fluency.

7.
In the present study, the authors examined the extent to which familiarity and feedback (auditory and/or articulatory) might be beneficial to proofreading. Participants proofread unfamiliar and familiar (repeated) passages while (a) concurrently reading either aloud or silently, (b) concurrently listening to the passages being read to them, or (c) reading without either auditory or articulatory feedback. Errors were one-letter changes that transformed function words into contextually inappropriate words. Familiarity improved reading times largely irrespective of feedback, and it enhanced error detection only when auditory feedback was available to participants. Proofreaders' enhanced error detection in familiar text reflected a change in their sensitivity to errors rather than any change in the placement of the response criterion for reporting errors. These findings suggest that familiarity can produce two kinds of functional fluency, one involving speed of processing, which is largely independent of feedback, and the other concerned with accuracy of processing, which relies on feedback.
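The contrast drawn here between sensitivity to errors and placement of the response criterion is the standard distinction from signal detection theory. A minimal sketch of the two indices, d′ (sensitivity) and c (criterion), computed from hit and false-alarm rates; the numeric rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection indices from a detection task.

    d' = z(H) - z(FA): discriminability of errors from error-free text.
    c  = -(z(H) + z(FA)) / 2: bias toward or against reporting an error.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: familiar passages raise hits without raising false alarms
d_fam, c_fam = dprime_and_criterion(0.80, 0.10)
d_unf, c_unf = dprime_and_criterion(0.60, 0.10)
```

On this analysis, a familiarity benefit that raises d′ while leaving c essentially unchanged is a genuine gain in sensitivity, not merely a more liberal willingness to report errors.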

9.
Short-term forgetting and the articulatory loop
Two experiments explored the role of subvocal articulatory rehearsal in the Peterson short-term forgetting task. In the first of these, subjects recalled consonant trigrams after an interval of 0, 5 or 15 s during which they either counted backwards in threes, suppressed articulation by continuously uttering the word “the”, or in a third control condition continuously tapped on the table. While counting backwards caused the usual dramatic forgetting, tapping caused no forgetting, and articulatory suppression only minimal forgetting at the longest delay. A second study used the same procedure but included only two conditions, articulatory suppression during the retention interval and articulatory suppression during both input and retention. Neither showed evidence of forgetting over the 15 s delay. These results suggest that covert speech is not necessary for rehearsal in short-term verbal memory. As such they call for a re-evaluation of the nature and function of rehearsal.

10.
Two experiments showed that articulating “blah” repeatedly aloud or silently interfered with the speed and accuracy of judging whether pairs of words rhymed. Articulation from another voice affected only accuracy, and foot tapping had no effect. This suggests that it is mainly the articulatory component of articulatory suppression that interferes with this task and that rhyme judgments of words depend on an articulatory code or an acoustic code accessed via articulation.

A third experiment confirmed these effects on speed of judgment, but there were no significant effects on errors in this experiment. Non-verbal use of the articulatory musculature in chewing also slowed performance. Similar results were obtained for word and non-word rhyme judgments, though the latter effects were somewhat weaker. It is argued that the results fail to support the theory that there are different methods of accessing phonology for words or non-words and indicate that access to phonology was via articulation, rather than a means to achieving articulation. If the task incorporated mechanisms employed in normal reading, the results refute suggestions that conversion of print to sound, whether for words or non-words, occurs prior to retrieval of an articulatory code.

11.
Contrary to the received view that reading aloud reflects processes that are "automatic," recent evidence suggests that some of these processes require a form of attention. This issue was investigated further by examining the effect of a prior presentation of exception words (words whose spelling-sound translations are atypical, such as pint as compared with mint, hint, or lint) and pseudohomophones (nonwords that sound identical to words, such as brane from brain) on reading aloud in the context of the psychological refractory period paradigm. For exception words, the joint effects of repetition and stimulus onset asynchrony (SOA) yielded an underadditive interaction on the time to read aloud, replicating previous work: a short SOA between Task 1 and Task 2 increased reaction time (RT) and reduced the magnitude of the repetition effect relative to the long SOA. For pseudohomophones, in contrast, the joint effects of repetition and SOA were additive on RT. These results provide converging evidence for the conclusion that (a) processing up to and including the orthographic input lexicon does not require central attention when reading aloud, whereas (b) translating lexical and sublexical spelling to sound requires the use of central attention.

12.
The current research uses a novel methodology to examine the role of semantics in reading aloud. Participants were trained to read aloud 2 sets of novel words (i.e., nonwords such as bink): some with meanings (semantic) and some without (nonsemantic). A comparison of reading aloud performance between these 2 sets of novel words was used to provide an indicator of the importance of semantic information in reading aloud. In Experiment 1, in contrast to expectations, reading aloud performance was not better for novel words in the semantic condition. In Experiment 2, the training of novel words was modified to reflect more realistic steps of lexical acquisition: Reading aloud performance became faster and more accurate for novel words in the semantic condition, but only for novel words with inconsistent pronunciations. This semantic advantage for inconsistent novel words was again observed when a subset of participants from Experiment 2 was retested 6-12 months later (in Experiment 3). These findings provide support for a limited but significant role for semantics in the reading aloud process.

13.
A word from a dense neighborhood is often read aloud faster than a word from a sparse neighborhood. This advantage is usually attributed to orthography, but orthographic and phonological neighbors are typically confounded. Two experiments investigated the effect of neighborhood density on reading aloud when phonological density was varied while orthographic density was held constant, and vice versa. A phonological neighborhood effect was observed, but not an orthographic one. These results are inconsistent with the predominant role ascribed to orthographic neighbors in accounts of visual word recognition and reading aloud. Consistent with this interpretation, 6 different computational models of reading aloud failed to simulate this pattern of results. The results of the present experiments thus provide a new understanding of some of the processes underlying reading aloud, and new challenges for computational models.

14.
Although mind-wandering during silent reading is well documented, to date no research has investigated whether similar processes occur during reading aloud. In the present study, participants read a passage either silently or aloud while periodically being probed about mind-wandering. Although their comprehension accuracies were similar for both reading conditions, participants reported more mind-wandering while they were reading aloud. These episodes of mindless reading were associated with nearly normal prosody, but were nevertheless distinguished by subtle fluctuations in volume that were predictive of both overall comprehension accuracy and individual sentence comprehension. Together, these findings reveal that previously hidden within the common activity of reading aloud lies: (1) a demonstration of the remarkable automaticity of speech, (2) a situation that is surprisingly conducive to mind-wandering, (3) subtle vocal signatures of mind-wandering and comprehension accuracy, and (4) the promise of developing useful interventions to improve reading.

15.
Normal individual differences are rarely considered in the modelling of visual word recognition – with item response time effects and neuropsychological disorders being given more emphasis – but such individual differences can inform and test accounts of the processes of reading. We thus had 100 participants read aloud words selected to assess theoretically important item response time effects on an individual basis. Using two major models of reading aloud – DRC and CDP+ – we estimated numerical parameters to best model each individual’s response times to see if this would allow the models to capture the effects, individual differences in them and the correlations among these individual differences. It did not. We therefore created an alternative model, the DRC-FC, which successfully captured more of the correlations among individual differences, by modifying the locus of the frequency effect. Overall, our analyses indicate that (i) even after accounting for individual differences in general speed, several other individual differences in reading remain significant; and (ii) these individual differences provide critical tests of models of reading aloud. The database thus offers a set of important constraints for future modelling of visual word recognition, and is a step towards integrating such models with other knowledge about individual differences in reading.

16.
Levy (1977) reported a series of experiments in which a distracting task (counting aloud) interfered more with reading than with listening. The results were interpreted as evidence of the importance of phonological recoding during reading. In a similar experiment we varied the nature of the distracting task, using one task related to speech (counting aloud) and one task not related to speech (manual response to a threshold shock). Both distracting tasks led to similar results, namely, more interference with reading than listening. On the basis of our results and a consideration of related literature, we ascribe the selective interference effect to the relative difficulty of reading over listening rather than to the importance of speech recoding in reading.

17.
Participants read aloud nonword letter strings, one at a time, which varied in the number of letters. The standard result is observed in two experiments; the time to begin reading aloud increases as letter length increases. This result is standardly understood as reflecting the operation of a serial, left-to-right translation of graphemes into phonemes. The novel result is that the effect of letter length is statistically eliminated by a small number of repetitions. This elimination suggests that these nonwords are no longer always being read aloud via a serial left-to-right sublexical process. Instead, the data are taken as evidence that new orthographic and phonological lexical entries have been created for these nonwords and are now read at least sometimes by recourse to the lexical route. Experiment 2 replicates the interaction between nonword letter length and repetition observed in Experiment 1 and also demonstrates that this interaction is not seen when participants merely classify the string as appearing in upper or lower case. Implications for existing dual-route models of reading aloud and Share's self-teaching hypothesis are discussed.

18.
Two experiments were carried out to test the hypothesis that verbal recoding of visual stimuli in short-term memory influences long-term memory encoding and impairs subsequent mental image operations. Easy and difficult-to-name stimuli were used. When rotated 90 degrees counterclockwise, each stimulus revealed a new pattern consisting of two capital letters joined together. In both experiments, subjects first learned a short series of stimuli and were then asked to rotate mental images of the stimuli in order to detect the hidden letters. In Experiment 1, articulatory suppression was used to prevent subjects from subvocal rehearsal when learning the stimuli, whereas in Experiment 2, verbal labels were presented with each stimulus during learning to encourage a reliance on the verbal code. As predicted, performance in the imagery task was significantly improved by suppression when the stimuli were easy to name (Experiment 1) but was severely disrupted by labeling when the stimuli were difficult to name (Experiment 2). We concluded that verbal recoding of stimuli in short-term memory during learning disrupts the ability to generate veridical mental images from long-term memory.

19.
A series of experiments was conducted to determine if linguistic representations accessed during reading include auditory imagery for characteristics of a talker's voice. In 3 experiments, participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate, and one spoke at a slow speaking rate. Each talker was identified by name. At test, participants were asked to either read aloud (Experiment 1) or silently (Experiments 1, 2, and 3) a passage that they were told was written by either the fast or the slow talker. Reading times, both silent and aloud, were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. Reading times differed as a function of passage author more for difficult than for easy texts, and individual differences in general auditory imagery ability were related to reading times. These results suggest that readers engage in a type of auditory imagery while reading that preserves the perceptual details of an author's voice.

20.
The priming of new associations has been a controversial topic, with some studies finding significant effects but others failing to replicate these results. Three studies investigated the priming of new associations in a reading time task, presenting lists of word pairs that were read aloud as quickly as possible. In Experiment 1, significant priming of new associations was found after two study presentations, replicating similar results by Moscovitch, Winocur, and MacLachan (1986). In Experiment 2, reading time was facilitated for intact pairs when word positions remained constant relative to when word positions were reversed. This suggested that the associative priming effect was related to specific lower level features of the word pairs rather than to abstract associations. In Experiment 3, the insertion of the word "and" between test words eliminated the pairing-specific effect, placing the locus of new association priming at the transition between words within pairs. These findings demonstrate that the knowledge that supports priming of new associations in the reading time task involves perceptual or articulatory information about the transitions between words rather than abstract associative knowledge.
