Similar Documents (20 results)
1.
Zeldow and McAdams (1993) recently presented artifactual explanations for our data showing dissimilarity between the content of speech elicited by the Thematic Apperception Test (Murray, 1943) and free speech tasks (Schnurr, Rosenberg, & Oxman, 1992). In particular, they alleged that our findings resulted from a lack of psychological meaning in our content categories and in the free speech task. We cite empirical and theoretical support to refute this allegation and provide additional analyses of our data that are consistent with our earlier suggestion that text samples elicited under different conditions may not be interchangeable.

2.
Schnurr, Rosenberg, and Oxman (1992) recently compared the free speech samples and Thematic Apperception Test (TAT) responses of 95 normal adults. They concluded that the two techniques are not interchangeable, and that the TAT, which proved superior in the prediction of individual differences, may be preferable to free speech instructions for eliciting data in content analytic studies. We disagree with both conclusions. Various forms of narrative speech samples may be highly correlated, so long as psychologically meaningful, well-validated, and higher order content categories are used. The use of first-order content categories is less likely to contribute to the study of personality.

4.
We compared the free speech samples and Thematic Apperception Test (TAT) responses of 95 community-residing volunteers by using the General Inquirer content analysis computer program and the Harvard III Psychosociological Dictionary (Stone, Dunphy, Smith, & Ogilvie, 1966). Comparability was assessed by computing mean differences and correlations among techniques. The techniques were evaluated by assessing the ability of data derived from each to predict individual differences in developmental level, gender, depressive symptomatology, and personality. Results show highly discriminable profiles and low reliability among techniques. TAT data were superior in predicting individual differences. We suggest that structured techniques like the TAT are preferable to standard free speech instructions for eliciting data in content analytic studies and discuss the possibility of computerized content analysis as a method of scoring the TAT.
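
To make the dictionary-based approach concrete, here is a minimal sketch of the general workflow such studies follow: score each text against a category dictionary, then correlate the per-speaker category profiles obtained from the two elicitation techniques. The category names, word lists, and texts below are invented for illustration; they are not the Harvard III dictionary, the General Inquirer, or the authors' actual analysis.

```python
# Illustrative sketch only: dictionary-based content analysis and a
# cross-technique comparison. The categories, word lists, and texts are
# toy examples, not the Harvard III Psychosociological Dictionary.
from collections import Counter
import statistics

CATEGORIES = {                       # hypothetical first-order categories
    "affect":    {"happy", "sad", "angry", "love"},
    "cognition": {"think", "know", "believe", "wonder"},
    "social":    {"friend", "family", "people", "talk"},
}

def category_profile(text: str) -> dict:
    """Proportion of words in the text falling into each category."""
    words = text.lower().split()
    counts = Counter()
    for w in words:
        for cat, vocab in CATEGORIES.items():
            if w in vocab:
                counts[cat] += 1
    total = max(len(words), 1)
    return {cat: counts[cat] / total for cat in CATEGORIES}

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else float("nan")

# Toy "speakers": each contributes a free-speech sample and a TAT story.
free_speech = [
    "I think my family makes me happy",
    "I wonder about people I do not know",
    "My friend and I talk when I am sad",
]
tat_stories = [
    "She is sad and angry at her friend",
    "He believes the people will talk and be happy",
    "They know their family and they love them",
]

# Correlate the per-speaker scores for each category across techniques.
for cat in CATEGORIES:
    fs = [category_profile(t)[cat] for t in free_speech]
    tat = [category_profile(t)[cat] for t in tat_stories]
    print(cat, round(pearson(fs, tat), 2))
```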

6.
7.
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as “mora” is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

8.
The notion of automatic syntactic analysis received support from some event-related potential (ERP) studies. However, none of these studies tested syntax processing in the presence of a concurrent speech stream. Here we present two concurrent continuous speech streams, manipulating two variables potentially affecting speech processing in a fully crossed design: attention (focused vs. divided) and task (lexical: detecting numerals vs. syntactic: detecting syntactic violations). ERPs elicited by syntactic violations and numerals as targets were compared with those for distractors (task-relevant events in the unattended speech stream) and attended and unattended task-irrelevant events. As was expected, only target numerals elicited the N2b and P3 components. The amplitudes of these components did not significantly differ between focused and divided attention. Both task-relevant and task-irrelevant syntactic violations elicited the N400 ERP component within the attended but not in the unattended speech stream. P600 was only elicited by target syntactic violations. These results provide no support for the notion of automatic syntactic analysis. Rather, it appears that task-relevance is a prerequisite of P600 elicitation, implying that in-depth syntactic analysis occurs only for attended speech in everyday listening situations.

9.
Two experiments examined the effects of processing fluency—that is, the ease with which speech is processed—on language attitudes toward native‐ and foreign‐accented speech. Participants listened to an audio recording of a story read in either a Standard American English (SAE) or Punjabi English (PE) accent. They heard the recording either free of noise or mixed with background white noise of various intensity levels. Listeners attributed more solidarity (but equal status) to the SAE than the PE accent. Compared to quieter listening conditions, noisier conditions reduced processing fluency, elicited a more negative affective reaction, and resulted in more negative language attitudes. Processing fluency and affect mediated the effects of noise on language attitudes. Theoretical, methodological, and practical implications are discussed.
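
For readers unfamiliar with the mediation logic invoked above (noise reduces fluency, which in turn lowers attitudes), the sketch below estimates a simple single-mediator model with the product-of-coefficients method on simulated data. The variable names, effect sizes, and data are assumptions made for illustration only; they are not the authors' measures or analysis.

```python
# Illustrative single-mediator analysis (product-of-coefficients method)
# on simulated data; not the authors' measures or analysis code.
import numpy as np

rng = np.random.default_rng(0)
n = 200
noise_level = rng.normal(size=n)                     # predictor X (hypothetical)
fluency = -0.6 * noise_level + rng.normal(size=n)    # mediator M: noise lowers fluency
attitude = 0.5 * fluency + 0.1 * noise_level + rng.normal(size=n)  # outcome Y

def ols_slope(y, *predictors):
    """Slope of the first predictor in an OLS regression with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(fluency, noise_level)                  # X -> M path
b = ols_slope(attitude, fluency, noise_level)        # M -> Y path, controlling for X
c = ols_slope(attitude, noise_level)                 # total effect of X on Y
c_prime = ols_slope(attitude, noise_level, fluency)  # direct effect, controlling for M

print(f"total effect c      = {c:+.2f}")
print(f"direct effect c'    = {c_prime:+.2f}")
print(f"indirect effect a*b = {a * b:+.2f}")         # mediation via fluency
```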

10.
We used H₂¹⁵O PET to characterize the common features of two successful but markedly different fluency-evoking conditions -- paced speech and singing -- in order to identify brain mechanisms that enable fluent speech in people who stutter (PWS). To do so, we compared responses under fluency-evoking conditions with responses elicited by tasks that typically elicit dysfluent speech (quantifying the degree of stuttering and using this measure as a confounding covariate in our analyses). We evaluated task-related activations in both stuttering subjects and age- and gender-matched controls. Areas that were either uniquely activated during fluency-evoking conditions, or in which the magnitude of activation was significantly greater during fluency-evoking than dysfluency-evoking tasks, included auditory association areas that process speech and voice and motor regions related to control of the larynx and oral articulators. This suggests that a common fluency-evoking mechanism might relate to more effective coupling of auditory and motor systems -- that is, more efficient self-monitoring, allowing motor areas to more effectively modify speech. These effects were seen in both PWS and controls, suggesting that they are due to the sensorimotor or cognitive demands of the fluency-evoking tasks themselves. While responses seen in both groups were bilateral, however, the fluency-evoking tasks elicited more robust activation of auditory and motor regions within the left hemisphere of stuttering subjects, suggesting a role for the left hemisphere in compensatory processes that enable fluency. Educational objectives: The reader will learn about and be able to: (1) compare brain activation patterns under fluency- and dysfluency-evoking conditions in stuttering and control subjects; (2) appraise the common features, both central and peripheral, of fluency-evoking conditions; and (3) discuss ways in which neuroimaging methods can be used to understand the pathophysiology of stuttering.

11.
The stereotypic portrayal of women as more emotional than men was evaluated in the present study. Equal numbers of female and male subjects were administered an interview consisting of questions designed to elicit two different levels of affect. Measures of subjects' facial expressions, speech, and visual behavior were analyzed for indications of emotionality, the affective state elicited in the interview, and emotional expression, a trait which is independent of question content. As hypothesized, it was found that women were more expressive of emotion in their higher level of facial activity. However, measures of speech and visual behavior, which reflected the expected difference in the affective states elicited by the different types of interview questions, did not differentiate between men and women. Finally, ratings of the quality of the subjects' facial expressions provided some evidence of sex differences in reactions to the questions. It was concluded that a reconsideration of the question of sex differences in emotionality is needed. Previous generalizations based on indirect experimental data and on potentially unreliable subjective reports must be challenged by more direct and dependable investigations. The author is grateful to Galen Baril for his assistance in the collection of the data and to Max Zachau for coding the data. This research was supported by a Faculty Research Grant from the University of Maine at Orono.

12.
We used H₂¹⁵O PET to characterize the common features of two successful but markedly different fluency-evoking conditions — paced speech and singing — in order to identify brain mechanisms that enable fluent speech in people who stutter. To do so, we compared responses under fluency-evoking conditions with responses elicited by tasks that typically elicit dysfluent speech (quantifying the degree of stuttering and using this measure as a confounding covariate in our analyses). We evaluated task-related activations in both stuttering subjects and age- and gender-matched controls.

Areas that were either uniquely activated during fluency-evoking conditions, or in which the magnitude of activation was significantly greater during fluency-evoking than dysfluency-evoking tasks included auditory association areas that process speech and voice and motor regions related to control of the larynx and oral articulators. This suggests that a common fluency-evoking mechanism might relate to more effective coupling of auditory and motor systems — that is, more efficient self-monitoring, allowing motor areas to more effectively modify speech.

These effects were seen in both PWS and controls, suggesting that they are due to the sensorimotor or cognitive demands of the fluency-evoking tasks themselves. While responses seen in both groups were bilateral, however, the fluency-evoking tasks elicited more robust activation of auditory and motor regions within the left hemisphere of stuttering subjects, suggesting a role for the left hemisphere in compensatory processes that enable fluency.

Educational objectives: The reader will learn about and be able to: (1) compare brain activation patterns under fluency- and dysfluency-evoking conditions in stuttering and control subjects; (2) appraise the common features, both central and peripheral, of fluency-evoking conditions; and (3) discuss ways in which neuroimaging methods can be used to understand the pathophysiology of stuttering.


13.
This study was designed to investigate whether persons who stutter (PWS) differ from persons who do not stutter (PWNS) in the coproduction of different types of consonant clusters, as measured in the number of dysfluencies and incorrect speech productions, in speech reaction times, and in word durations. Based on the Gestural Phonology Model of Browman and Goldstein, two types of consonant clusters were formed: homorganic and heterorganic clusters, both intra-syllabic (CVCC) and inter-syllabic (CVC#CVC). Overall, the results indicated that homorganic clusters elicited more incorrect speech productions and longer reaction times than the heterorganic clusters, but there was no difference between the homorganic and the heterorganic clusters in the word duration data. Persons who stutter showed a higher percentage of dysfluencies and a higher percentage of incorrect speech productions than PWNS, but there were no main group effects in reaction times and word durations. However, there was a significant three-way interaction effect between group, cluster type, and cluster place: homorganic clusters elicited longer reaction times than heterorganic clusters, but only in the inter-syllabic condition and only for persons who stutter. These results suggest that the production of two consonants with the same place of articulation across a syllable boundary puts higher demands on motor planning and/or initiation than producing the same cluster at the end of a syllable, in particular for PWS. The findings are discussed in light of current theories on speech motor control in stuttering. Educational objectives: The reader will be able to: (1) describe the effect of gestural overlap between consonant clusters on speech reaction time and word duration of people who do and do not stutter; and (2) identify the literature in the field of gestural overlap between consonant clusters.

14.
15.
16.
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors--"slips of the tongue". The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units--gestures--in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.

17.
Anonymity promotes free speech by protecting the identity of people who might otherwise face negative consequences for expressing their ideas. Wrongdoers, however, often abuse this invisibility cloak. Defenders of anonymity online emphasise its value in advancing public debate and safeguarding political dissension. Critics emphasise the need for identifiability in order to achieve accountability for wrongdoers such as trolls. The problematic tension between anonymity and identifiability online lies in the desirability of having low costs (no repercussions) for desirable speech and high costs (appropriate repercussions) for undesirable speech. If we adopt either full anonymity or full identifiability, we end up with uniformly low or uniformly high costs across all online contexts and for all kinds of speech. I argue that free speech is compatible with instituting costs in the form of repercussions and penalties for controversial and unacceptable speech. Costs can minimise the risks of anonymity by providing a reasonable degree of accountability. Pseudonymity is a tool that can help us regulate those costs while furthering free speech. This article argues that, in order to redesign the Internet to better serve free speech, we should shape much of it to resemble an online masquerade.

18.
Disfluencies can affect language comprehension, but to date, most studies have focused on disfluent pauses such as er. We investigated whether disfluent repetitions in speech have discernible effects on listeners during language comprehension, and whether repetitions affect the linguistic processing of subsequent words in speech in ways which have been previously observed with ers. We used event-related potentials (ERPs) to measure participants’ neural responses to disfluent repetitions of words relative to acoustically identical words in fluent contexts, as well as to unpredictable and predictable words that occurred immediately post-disfluency and in fluent utterances. We additionally measured participants’ recognition memories for the predictable and unpredictable words. Repetitions elicited an early onsetting relative positivity (100–400 ms post-stimulus), clearly demonstrating listeners’ sensitivity to the presence of disfluent repetitions. Unpredictable words elicited an N400 effect. Importantly, there was no evidence that this effect, thought to reflect the difficulty of semantically integrating unpredictable compared to predictable words, differed quantitatively between fluent and disfluent utterances. Furthermore there was no evidence that the memorability of words was affected by the presence of a preceding repetition. These findings contrast with previous research which demonstrated an N400 attenuation of, and an increase in memorability for, words that were preceded by an er. However, in a later (600–900 ms) time window, unpredictable words following a repetition elicited a relative positivity. Reanalysis of previous data confirmed the presence of a similar effect following an er. The effect may reflect difficulties in resuming linguistic processing following any disruption to speech.
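
As a rough illustration of the ERP comparisons described above, the following sketch averages simulated single-trial epochs per condition and compares mean amplitudes within the 600–900 ms window mentioned in the abstract. The sampling rate, noise level, channel, and effect size are assumptions for demonstration only; this is not the authors' recording or analysis pipeline.

```python
# Minimal ERP sketch on simulated data: average epochs per condition and
# compare mean amplitude in a latency window. Not the authors' pipeline.
import numpy as np

fs = 250                                    # sampling rate in Hz (assumed)
times = np.arange(-0.2, 1.2, 1 / fs)        # epoch from -200 ms to 1200 ms
rng = np.random.default_rng(1)

def simulate_epochs(n_trials, late_positivity_uv):
    """Simulate single-trial epochs with an optional 600-900 ms positivity."""
    epochs = rng.normal(0.0, 5.0, size=(n_trials, times.size))  # background noise (uV)
    window = (times >= 0.6) & (times <= 0.9)
    epochs[:, window] += late_positivity_uv
    return epochs

predictable = simulate_epochs(40, late_positivity_uv=0.0)
unpredictable = simulate_epochs(40, late_positivity_uv=2.0)     # assumed effect size

def window_mean(epochs, t_min, t_max):
    """Mean amplitude of the trial-averaged ERP within [t_min, t_max] seconds."""
    erp = epochs.mean(axis=0)                                   # average across trials
    mask = (times >= t_min) & (times <= t_max)
    return erp[mask].mean()

late_pred = window_mean(predictable, 0.6, 0.9)
late_unpred = window_mean(unpredictable, 0.6, 0.9)
print(f"600-900 ms mean amplitude, predictable:   {late_pred:+.2f} uV")
print(f"600-900 ms mean amplitude, unpredictable: {late_unpred:+.2f} uV")
print(f"late positivity effect (unpred - pred):   {late_unpred - late_pred:+.2f} uV")
```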

19.
The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or relatively slow acoustic variations, is unclear. We recorded the cardiac activity of 82 near-term fetuses (38 weeks gestational age) in quiet sleep during a silent control condition and four 15 s streams presented at 90 dB SPL Leq: two piano melodies with opposite contours, a natural Icelandic sentence, and a chimera of the sentence in which all spectral information was replaced with broadband noise, leaving its specific temporal variations in amplitude intact but removing any phonological information. All stimuli elicited a heart rate deceleration. The response patterns to the melodies were the same and differed significantly from those observed with the Icelandic sentence and its chimera, which did not differ from each other. The melodies elicited a monophasic heart rate deceleration, indicating a stimulus orienting reflex, while the Icelandic sentence and its chimera evoked a sustained lower-magnitude response, indicating a sustained attentional response or more focused information processing. A conservative interpretation of the data is that near-term fetuses can perceive sound streams and the rapid temporal variations in amplitude that are specific to speech sounds, even with no spectral variations at all.

20.
A tip-of-the-tongue (TOT) elicitation task and a picture-naming task were used to examine the role of neighborhood frequency as well as word frequency and neighborhood density in speech production. As predicted for the younger adults in Experiment 1, more TOT states were elicited for words with low word frequency and with sparse neighborhoods. Contrary to predictions, neighborhood frequency did not significantly influence retrieval of the target word. For the older adults in Experiment 2, however, more TOT states were elicited for words with low neighborhood frequency. Furthermore, in Experiment 3, pictures with high neighborhood frequency were named more quickly and accurately than pictures with low neighborhood frequency. These results show that the number of neighbors and the frequency of those neighbors influence lexical retrieval in speech production. The facilitative nature of these factors is more parsimoniously accounted for by an interactive model rather than by a strictly feedforward model of speech production.
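
To make the lexical variables concrete, the sketch below computes the two neighborhood measures discussed above for a toy lexicon: neighborhood density (the number of words differing from a target by a single substitution, addition, or deletion) and mean neighborhood frequency. The lexicon and frequency counts are invented for illustration; real studies use large phonologically transcribed corpora.

```python
# Toy illustration of neighborhood density and neighborhood frequency.
# The lexicon and frequencies are invented; spellings stand in for
# phonological transcriptions to keep the example simple.
LEXICON = {
    "cat": 41, "hat": 28, "bat": 12, "cot": 7, "cut": 55, "can": 90,
    "cast": 15, "at": 300, "dog": 60, "fog": 9, "dot": 14, "dig": 22,
}

def is_neighbor(a: str, b: str) -> bool:
    """True if b differs from a by one substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):                                   # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:                          # addition / deletion
        short, long_ = sorted((a, b), key=len)
        for i in range(len(long_)):
            if long_[:i] + long_[i + 1:] == short:
                return True
    return False

def neighborhood(word: str):
    """Return (density, mean neighbor frequency) for a target word."""
    neighbors = [w for w in LEXICON if is_neighbor(word, w)]
    density = len(neighbors)
    mean_freq = (sum(LEXICON[w] for w in neighbors) / density) if density else 0.0
    return density, mean_freq

for target in ("cat", "dog"):
    d, f = neighborhood(target)
    print(f"{target}: density = {d}, mean neighborhood frequency = {f:.1f}")
```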
