Similar articles
20 similar articles found (search time: 62 ms)
1.
Acoustic similarity is known to impair short-term memory (STM) for letter sequences. The present series of experiments investigated the effects of acoustic similarity on long-term retention. In the first experiment, subjects were asked to learn one of two lists of 8 letters, the letters being either of high or low acoustic similarity. Lists were visually presented for three trials, with subjects responding after each trial. Then subjects participated in an immediate memory task for digits which lasted for 20 min. Finally, subjects tried to recall the list of letters they had learned previously. Lists having items of high acoustic similarity were more difficult to recall on the first trial, but were better recalled on the delayed retention test. In a second experiment, groups of subjects were again asked to learn one of two lists of 8 letters differing in acoustic similarity, using different orders of the letters used previously. The procedures were identical except that in two groups, a STM task for digits intervened between the presentation and test of the letters. This intervening task minimized the effects of STM and eliminated the differences in retention found previously. In a third experiment, better long-term retention for material having high acoustic similarity was also obtained when subjects used a backward recall procedure. In the last experiment 14 item lists were learned to a criterion of two correct trials, and retention was tested after each trial and at a delay of 20 min. and 23 hr. No effect of acoustic similarity was found and little retention loss occurred. These results suggest that reducing the STM component by introducing a STM control or by lengthening the list caused the effect of acoustic similarity to disappear.  相似文献   

2.
A variety of experimental findings have indicated that a system of precategorical acoustic storage is responsible for the recency effect obtained in the immediate serial recall of sequences of digits, consonants, or syllables. This study investigated whether such findings could be generalized to the recall of sequences of words. Experiment 1 showed that phonemic similarity among a sequence of words failed to reduce the modality effect or the recency effect. Experiment 2 demonstrated that this finding was not attributable to a failure to control the phonemic properties of the stimulus material. Experiment 3 showed that the stimulus suffix effect obtained with sequences of words was not affected by the acoustic similarity between the list items and the stimulus suffix. Finally, Experiment 4 demonstrated that phonemic similarity among a sequence of words failed to reduce the stimulus suffix effect. These results were explained by extending the original model of short-term memory to incorporate a system of postcategorical lexical storage.  相似文献   

3.
The nature of acoustic memory and its relationship to the categorizing process in speech perception is investigated in three experiments on the serial recall of lists of syllables. The first study confirms previous reports that sequences comprising the syllables, bah, dah, and gah show neither enhanced retention when presented auditorily rather than visually, nor a recency effect—both occurred with sequences in which vowel sounds differed (bee, bih, boo). This was found not to be a simple vowel-consonant difference since acoustic memory effects did occur with consonant sequences that were acoustically more discriminable (sha, ma, ga and ash, am, ag). Further experiments used the stimulus suffix effect to provide evidence of acoustic memory, and showed (1), increasing the acoustic similarity of the set grossly impairs acoustic memory effects for vowels as well as consonants, and (2) such memory effects are no greater for steady-state vowels than for continuously changing diphthongs. It is concluded that the usefulness of the information that can be retrieved from acoustic memory depends on the acoustic similarity of the items in the list rather than on their phonetic class or whether or not they have “encoded” acoustic cues. These results question whether there is any psychological evidence for “encoded” speech sounds being categorized in ways different from other speech sounds.  相似文献   

4.
This study attempts to discover why items which are similar in sound are hard to recall in a short-term memory situation. The input, storage, and retrieval stages of the memory system are examined separately. Experiments I, II and III use a modification of the Peterson and Peterson technique to plot short-term forgetting curves for sequences of acoustically similar and control words. If acoustically similar sequences are stored less efficiently, they should be forgotten more rapidly. All three experiments show a parallel rate of forgetting for acoustically similar and control sequences, suggesting that the acoustic similarity effect does not occur during storage. Two input hypotheses are then examined, one involving a simple sensory trace, the other an overloading of a system which must both discriminate and memorize at the same time. Both predict that short-term memory for spoken word sequences should deteriorate when the level of background noise is increased. Subjects performed both a listening test and a memory test in which they attempted to recall sequences of five words. Noise impaired performance on the listening test but had no significant effect on retention, thus supporting neither of the input hypotheses. The final experiments studied two retrieval hypotheses. The first of these, Wickelgren's phonemic-associative hypothesis, attributes the acoustic similarity effect to inter-item associations. It predicts that, when sequences comprising a mixture of similar and dissimilar items are recalled, errors should follow acoustically similar items. The second hypothesis attributes the effect to the overloading of retrieval cues which consequently do not discriminate adequately among available responses. It predicts maximum error rate on, not following, similar items. Two experiments were performed, one involving recall of visually presented letter sequences, the other of auditorily presented word sequences. Both showed a marked tendency for errors to coincide with acoustically similar items, as the second hypothesis would predict. It is suggested that the acoustic similarity effect occurs at retrieval and is due to the overloading of retrieval cues.  相似文献

5.
Three experiments examined short-term encoding processes of deaf signers for different aspects of signs from American Sign Language. Experiment 1 compared short-term memory for lists of formationally similar signs with memory for matched lists of random signs. Just as acoustic similarity of words interferes with short-term memory for word sequences, formational similarity of signs had a marked debilitating effect on the ordered recall of sequences of signs. Experiment 2 evaluated the effects of the semantic similarity of the signs on short-term memory: Semantic similarity had no significant effect on short-term ordered recall of sequences of signs. Experiment 3 studied the role that the iconic (representational) value of signs played in short-term memory. Iconicity also had no reliable effect on short-term recall. These results provide support for the position that deaf signers code signs from American Sign Language at one level in terms of linguistically significant formational parameters. The semantic and iconic information of signs, however, seems to have little effect on short-term memory.  相似文献

6.
Four lists of Chinese words in a 2 × 2 factorial design of visual and acoustic similarity were used in a short-term memory experiment. In addition to a strong acoustic similarity effect, a highly significant visual similarity effect was also obtained. This was particularly pronounced in the absence of acoustic similarity in the words used. The results not only confirm acoustic encoding to be a basic process in short-term recall of verbal stimuli in a language other than English but also lend support to the growing evidence of visual encoding in short-term memory as the situation demands.  相似文献   

7.
A statistical procedure that separates storage from retrieval was used to study the acoustic similarity effect as a function of retention interval in the Brown-Peterson paradigm. Both storage and retrieval components showed reliable and independent changes with retention interval, but only the storage component was affected by acoustic similarity. Hence, acoustic similarity affects trace durability and retrieval plays no essential role in the similarity effect. This finding is inconsistent with the address hypothesis (cf. Baddeley, 1968). It is argued that acoustic similarity induces subjects to encode the target item in a confused fashion, particularly in regard to order information.  相似文献   

8.
Since both acoustic and semantic similarity exert an influence on memory, the role of memory in concept identification (CI) was investigated by varying the acoustic and semantic similarity of the stimuli used in the CI task. Varying acoustic similarity had no effect on CI, but CI was significantly impaired when the dimensions of a CI task were semantically similar.  相似文献   

9.
Ss judged pairs of words “same” or “different” under semantic, acoustic, and visual criteria. RTs were compared for each criterion, and the effects of different kinds of confusability, such as acoustic similarity in the semantic matching task, or semantic similarity in the acoustic matching task, were also studied.  相似文献

10.
It has been shown that short-term memory (STM) for word sequences is grossly impaired when acoustically similar words are used, but is relatively unaffected by semantic similarity. This study tests the hypothesis that long-term memory (LTM) will be similarly affected. In Experiment I subjects attempted to learn one of four lists of 10 words. The lists comprised either acoustically or semantically similar words (A and C) or control words of equal frequency (B and D). Lists were learned for four trials, after which subjects spent 20 min. on a task involving immediate memory for digits. They were then asked to recall the word list. The acoustically similar list was learned relatively slowly, but unlike the other three lists showed no forgetting. Experiment II showed that this latter paradox can be explained by assuming the learning score to depend on both LTM and STM, whereas the subsequent retest depends only on LTM. Experiment III repeats Experiment I but attempts to minimize the effects of STM during learning by interposing a task to prevent rehearsal between the presentation and testing of the word sequences. Unlike STM, LTM proved to be impaired by semantic similarity but not by acoustic similarity. It is concluded that STM and LTM employ different coding systems.  相似文献   

11.
Contrary to previous indications, retroactive interference in long-term paired associate learning was found to be a function of acoustic similarity. Experimental groups were exposed to the A-B, A'-C paradigm where corresponding stimuli were homophones. Their retention scores were substantially and significantly lower than control groups run with an A-B, C-D paradigm. The failure of previous studies to reveal effects of acoustic similarity in this way is attributed to the use of an insufficiently high degree of similarity.  相似文献   

12.
Given sequences of digits with temporally equidistant acoustic onsets, listeners do not perceive them as isochronous (Morton, Marcus, & Frankish, 1976). In order for the sequences to be perceptually isochronous, systematic departures from acoustic isochrony must be introduced. These acoustic departures are precisely those that talkers generate when asked to produce an isochronous sequence (Fowler, 1979), suggesting that listeners judge isochrony based on acoustic information about articulatory timing. The present experiment was an attempt to test directly whether perceptually isochronous sequences have isochronous articulatory correlates. Electromyographic potentials were recorded from the orbicularis oris muscle when speakers produced sequences of monosyllables “as if speaking in time to a metronome.” Sequences were devised so that lip-muscle activity was related to the syllable-initial consonant, the stressed vowel, or the stressed vowel and final consonant. Results indicate that isochronous muscular activity accompanies both isochronous and anisochronous acoustic signals produced under instructions to generate isochronous sequences. These results support an interpretation of the perceptual phenomenon reported by Morton et al. to the effect that listeners judge isochrony of the talker’s articulations as they are reflected in the acoustic signal.  相似文献   

13.
Morton, Marcus, and Frankish (1976) report that listeners hear acoustically isochronous digit sequences as anisochronous. Moreover, given a chance to adjust intervals in the sequences until they are perceptually isochronous, the listeners introduce systematic deviations from isochrony. The present series of studies investigates these phenomena further. They indicate that when asked to produce isochronous sequences, talkers generate precisely the acoustic anisochronies that listeners require in order to hear a sequence as isochronous. The acoustic anisochronies that talkers produce are expected if talkers initiate the articulation of successive items in the sequence at temporally equidistant intervals. Items whose initial consonants differ in respect to manner class will have acoustic consequences (other than silence) at different lags with respect to their articulatory onsets, thereby generating the observed acoustic anisochronies. The findings suggest that listeners judge isochrony based on acoustic information about articulatory timing rather than on some articulation-free acoustic basis.  相似文献

14.
The question of whether repetition avoidance in sequential response production depends on the phonetic or the semantic encoding of previous responses was investigated by varying the acoustic and semantic similarity among the response alternatives. The results indicated that acoustic similarity affected repetition avoidance with six alternative words and a production rate of one per second, but not with four alternative letters and a rate of one per 2 sec. Semantic similarity between words was also studied, and was not seen to affect repetition avoidance. Results were explained by means of a model in which comparisons between a memory set of admissible responses and a memory set of recent responses are made at a phonetic level of response representation.  相似文献

15.
Discrimination of natural, sustained vowels was studied in 5 budgerigars. The birds were trained using operant conditioning procedures on a same-different task, which was structured so that response latencies would provide a measure of stimulus similarity. These response latencies were used to construct similarity matrices, which were then analyzed by multidimensional scaling (MDS) procedures. MDS produced spatial maps of these speech sounds where perceptual similarity was represented by spatial proximity. The results of the three experiments suggest that budgerigars perceive natural, spoken vowels according to phonetic categories, find the acoustic differences among different talkers less salient than the acoustic differences among vowel categories, and use formant frequencies in making these complex discriminations.  相似文献   
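The latency-to-similarity MDS step described in this abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: the vowel labels and latency values are invented placeholders, and it assumes the common convention that slower "different" responses indicate greater perceptual similarity, embedding a precomputed dissimilarity matrix with scikit-learn's MDS so that more confusable items land closer together on the map.

```python
# A minimal sketch (not the authors' code) of the analysis described above:
# treat longer "different"-response latencies as greater perceptual similarity,
# convert them to dissimilarities, and embed the items with MDS.
# The vowel labels and latency values are invented placeholders.
import numpy as np
from sklearn.manifold import MDS

vowels = ["i", "I", "u", "a", "ae"]  # hypothetical vowel tokens
# Hypothetical mean "different"-response latencies (ms) for each pair;
# slower responses are taken to indicate greater perceptual similarity.
latency = np.array([
    [  0., 420., 310., 300., 390.],
    [420.,   0., 320., 310., 400.],
    [310., 320.,   0., 340., 330.],
    [300., 310., 340.,   0., 320.],
    [390., 400., 330., 320.,   0.],
])

dissim = latency.max() - latency      # similarity (latency) -> dissimilarity
np.fill_diagonal(dissim, 0.0)         # an item is identical to itself
dissim = (dissim + dissim.T) / 2.0    # enforce symmetry

# Metric MDS on the precomputed dissimilarities gives a 2-D "perceptual map"
# in which more confusable items lie closer together.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for v, (x, y) in zip(vowels, coords):
    print(f"{v:>2}: ({x:+.2f}, {y:+.2f})")
```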

16.
Similarity and categorization of environmental sounds
Four experiments investigated the acoustical correlates of similarity and categorization judgments of environmental sounds. In Experiment 1, similarity ratings were obtained from pairwise comparisons of recordings of 50 environmental sounds. A three-dimensional multidimensional scaling (MDS) solution showed three distinct clusterings of the sounds, which included harmonic sounds, discrete impact sounds, and continuous sounds. Furthermore, sounds from similar sources tended to be in close proximity to each other in the MDS space. The orderings of the sounds on the individual dimensions of the solution were well predicted by linear combinations of acoustic variables, such as harmonicity, amount of silence, and modulation depth. The orderings of sounds also correlated significantly with MDS solutions for similarity ratings of imagined sounds and for imagined sources of sounds, obtained in Experiments 2 and 3--as was the case for free categorization of the 50 sounds (Experiment 4)--although the categorization data were less well predicted by acoustic features than were the similarity data.  相似文献   
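The abstract's claim that the orderings of sounds on individual MDS dimensions were "well predicted by linear combinations of acoustic variables" corresponds to an ordinary least-squares regression of dimension coordinates onto acoustic measurements. The sketch below uses entirely made-up numbers (the 50 sounds, their acoustic features, and the dimension coordinates are simulated) purely to show the form of that analysis, not the study's data.

```python
# A minimal sketch, with simulated data, of regressing one MDS dimension onto
# acoustic variables (e.g., harmonicity, amount of silence, modulation depth).
import numpy as np

rng = np.random.default_rng(0)
n_sounds = 50

# Hypothetical acoustic measurements per sound
# (columns: harmonicity, amount of silence, modulation depth).
acoustic = rng.random((n_sounds, 3))
X = np.column_stack([np.ones(n_sounds), acoustic])  # add intercept column

# Hypothetical coordinates of each sound on one MDS dimension.
mds_dim1 = 2.0 * acoustic[:, 0] - 1.5 * acoustic[:, 1] + rng.normal(0, 0.2, n_sounds)

# Ordinary least squares: which linear combination of acoustic variables
# best predicts the ordering of sounds along this dimension?
beta, *_ = np.linalg.lstsq(X, mds_dim1, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, mds_dim1)[0, 1]
print("weights (intercept, harmonicity, silence, modulation):", np.round(beta, 2))
print(f"multiple correlation R = {r:.2f}")
```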

17.
The following three studies of single-probe recognition memory set out to show the effect on the signal-detectability measures of d' and β (Tanner and Swets, 1954) of variations in the acoustic similarity of interfering material, which may either precede or follow the item to be remembered (proactive or retroactive interference: PI or RI). The first experiment studies a situation employed by Wickelgren (1966a), who reported that acoustically similar RI substantially reduced d'. It is shown that this effect could have been due to biases in Wickelgren's original designs, and that when a bias-free design is used, the fall in d' is only of borderline significance. (A worked sketch of these measures follows this abstract.)

To investigate this problem further, a design was evolved in which two items were presented for memorizing, which varied in acoustic similarity to each other, and (after a distracting task) a probe was presented with one of three questions: Was this the first item of the pair? Was it the second? or, Did it occur in either position? In the first case, recognition-memory with RI of varying acoustic similarity was being studied, and as in the first experiment, it was found that similarity slightly reduced d'. With the second question, PI effects were being studied, and here negligible differences were found. With the third type of question, a “location-free” test, no effects of similarity were found. The last result rules out Posner's (1967) “acid-bath” explanation of similarity effects in interference: an explanation in terms of “differentiation” (or “filtering”) was also invalidated by the results of a third experiment, in which the same effects were found even though similarity varied only between stimulus items and interference, and not between these and the probe. Wickelgren's (1966b) associative model appears to have least difficulty in accommodating these results, though even this needs certain ad hoc assumptions to be able to do so.  相似文献   
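The d' and β measures referenced in the abstract above are standard signal-detection statistics (Tanner & Swets, 1954): d' is the separation between the z-transformed hit and false-alarm rates, and β is the likelihood ratio of the two normal densities at the decision criterion. A minimal sketch, with invented hit and false-alarm proportions standing in for the two interference conditions:

```python
# A minimal sketch of the signal-detection measures d' and beta: both are
# derived from a hit rate and a false-alarm rate in a yes/no recognition test.
# The proportions below are invented examples, not data from the study.
from scipy.stats import norm

def dprime_and_beta(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d', beta) from hit and false-alarm proportions (0 < p < 1)."""
    z_hit = norm.ppf(hit_rate)       # z-transform of the hit rate
    z_fa = norm.ppf(fa_rate)         # z-transform of the false-alarm rate
    d_prime = z_hit - z_fa           # sensitivity: separation of signal and noise
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)  # likelihood ratio at the criterion
    return d_prime, beta

# Example: control vs. acoustically similar retroactive interference (hypothetical rates).
for label, hits, fas in [("control RI", 0.85, 0.20), ("similar RI", 0.80, 0.25)]:
    d, b = dprime_and_beta(hits, fas)
    print(f"{label}: d' = {d:.2f}, beta = {b:.2f}")
```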

18.
Previous research has identified acoustic properties modulating the perceived urgency of alarms. The authors conducted 3 experiments using a multidimensional approach in which participants made acoustic dissimilarity judgments and urgency dissimilarity judgments for pairs of sequences. Experiment 1 confirmed the validity of acoustic parameters in urgency perception of experimental alarms. Experiment 2 confirmed the role of these acoustic parameters with real alarms but suggested the importance of additional factors. Experiment 3 compared the relative degrees of urgency of alarms from Experiments 1 and 2, highlighting the role of both sequence structure and associated mental representation. The authors conclude that the design of alarms should not be based exclusively on acoustic factors but should also take into consideration the acquisition of an appropriate mental representation.  相似文献   

19.
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.  相似文献   
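The two quantities defined at the start of this abstract can be made concrete with a toy computation. The lexicon, the transcriptions, and the particular operationalizations below (one-phoneme substitution/addition/deletion neighbors; mean positional segment frequency) are illustrative assumptions, not the materials or exact metrics of the study.

```python
# A minimal sketch of neighborhood density and a simple positional phonotactic
# probability, computed over a tiny invented lexicon of phoneme-transcribed words.
# Real studies use a large phonetic dictionary; everything here is a placeholder.

LEXICON = {           # hypothetical orthography -> phoneme tuple
    "cat": ("k", "ae", "t"), "bat": ("b", "ae", "t"), "cap": ("k", "ae", "p"),
    "cut": ("k", "ah", "t"), "at":  ("ae", "t"),      "scat": ("s", "k", "ae", "t"),
}

def is_neighbor(a: tuple, b: tuple) -> bool:
    """One-phoneme substitution, deletion, or addition (a common operationalization)."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

def neighborhood_density(word: str) -> int:
    """Number of lexicon entries phonologically similar to the target word."""
    target = LEXICON[word]
    return sum(is_neighbor(target, other) for other in LEXICON.values())

def positional_phonotactic_probability(word: str) -> float:
    """Mean relative frequency of each segment in its position across the lexicon."""
    target = LEXICON[word]
    probs = []
    for pos, seg in enumerate(target):
        pool = [w[pos] for w in LEXICON.values() if len(w) > pos]
        probs.append(pool.count(seg) / len(pool))
    return sum(probs) / len(probs)

for w in LEXICON:
    print(f"{w:>4}: density={neighborhood_density(w)}, "
          f"phonotactic p={positional_phonotactic_probability(w):.2f}")
```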

20.
It is well known that the formant transitions of stop consonants in CV and VC syllables are roughly the mirror image of each other in time. These formant motions reflect the acoustic correlates of the articulators as they move rapidly into and out of the period of stop closure. Although acoustically different, these formant transitions are correlated perceptually with similar phonetic segments. Earlier research of Klatt and Shattuck (1975) had suggested that mirror image acoustic patterns resembling formant transitions were not perceived as similar. However, mirror image patterns could still have some underlying similarity which might facilitate learning, recognition, and the establishment of perceptual constancy of phonetic segments across syllable positions. This paper reports the results of four experiments designed to study the perceptual similarity of mirror-image acoustic patterns resembling the formant transitions and steady-state segments of the CV and VC syllables /ba/, /da/, /ab/, and /ad/. Using a perceptual learning paradigm, we found that subjects could learn to assign mirror-image acoustic patterns to arbitrary response categories more consistently than they could do so with similar arrangements of the same patterns based on spectrotemporal commonalities. Subjects respond not only to the individual components or dimensions of these acoustic patterns, but also process entire patterns and make use of the patterns’ internal organization in learning to categorize them consistently according to different classification rules.  相似文献   

