Detecting lies is crucial in numerous contexts, including situations in which individuals do not interact in their native language. Previous research suggests that individuals are perceived as less credible when they communicate in a nonnative compared with native language. The current study was the first to test this effect in truthful and fabricated messages written by native and nonnative English speakers. One hundred native English speakers judged the veracity of these messages, and overall, they proved less likely to believe and to correctly classify nonnative speakers' messages; differences in verbal cues between native and nonnative speakers' messages partly explained the differences in the judgments. Given the increased use of nonnative languages in a globalized world, the discrimination against nonnative speakers in veracity judgments is problematic. Further research should more thoroughly investigate the role of verbal cues in written and spoken nonnative language to enable the development of effective interventions.
This article consists of four brief responses to Lucretia B. Yaghjian's “Pedagogical challenges in teaching ESOL/Multilingual writers in theological education,” published in this issue of the journal.
The existence of the Language Familiarity Effect (LFE), where talkers of a familiar language are easier to identify than talkers of an unfamiliar language, is well-documented and uncontroversial. However, a closely related phenomenon known as the Other Accent Effect (OAE), where accented talkers are more difficult to recognize, is less well understood. There are several possible explanations for why the OAE exists, but to date, little data exist to adjudicate differences between them. Here, we begin to address this issue by directly comparing listeners’ recognition of talkers who speak in different types of accents, and by examining both the LFE and OAE in the same set of listeners. Specifically, Canadian English listeners were tested on their ability to recognize talkers within four types of voice line-ups: Canadian English talkers, Australian English talkers, Mandarin-accented English talkers, and Mandarin talkers. We predicted that the OAE would be present for talkers of Mandarin-accented English but not for talkers of Australian English, which is precisely what we observed. We also observed a disconnect between listeners’ confidence and performance across different types of accents; that is, listeners performed equally poorly with Mandarin and Mandarin-accented talkers, but they were more confident with their performance with the latter group of talkers. The present findings set the stage for further investigation into the nature of the OAE by exploring a range of potential explanations for the effect, and introducing important implications for forensic scientists’ evaluation of ear witness testimony.
We examined whether speech-related differences between truth tellers and liars are more profound when answering unexpected questions than when answering expected questions. We also examined whether the presence of an interpreter affected these results. In the experiment, 204 participants from the United States (Hispanic participants only), Russia, and the Republic of Korea were interviewed in their native language by a native-speaking interviewer or by a British interviewer through an interpreter. Truth tellers discussed a trip that they had made during the last 12 months; liars fabricated a story about such a trip. The key dependent variables were the amount of information provided and the proportion of all statements that were complications. The proportion of complications distinguished truth tellers from liars better when answering unexpected than expected questions, but only in interpreter-absent interviews. The number of details provided did not differ between truth tellers and liars or between interpreter-absent and interpreter-present interviews.
We used H₂¹⁵O PET to characterize the common features of two successful but markedly different fluency-evoking conditions, paced speech and singing, in order to identify brain mechanisms that enable fluent speech in people who stutter. To do so, we compared responses under fluency-evoking conditions with responses elicited by tasks that typically elicit dysfluent speech (quantifying the degree of stuttering and using this measure as a confounding covariate in our analyses). We evaluated task-related activations in both stuttering subjects and age- and gender-matched controls.
Areas that were either uniquely activated during fluency-evoking conditions, or in which the magnitude of activation was significantly greater during fluency-evoking than during dysfluency-evoking tasks, included auditory association areas that process speech and voice, as well as motor regions related to control of the larynx and oral articulators. This suggests that a common fluency-evoking mechanism might involve more effective coupling of the auditory and motor systems, that is, more efficient self-monitoring that allows motor areas to modify speech more effectively.
These effects were seen in both people who stutter (PWS) and controls, suggesting that they are due to the sensorimotor or cognitive demands of the fluency-evoking tasks themselves. Although responses in both groups were bilateral, the fluency-evoking tasks elicited more robust activation of auditory and motor regions within the left hemisphere of stuttering subjects, suggesting a role for the left hemisphere in compensatory processes that enable fluency.
Educational objectives: The reader will learn about and be able to: (1) compare brain activation patterns under fluency- and dysfluency-evoking conditions in stuttering and control subjects; (2) appraise the common features, both central and peripheral, of fluency-evoking conditions; and (3) discuss ways in which neuroimaging methods can be used to understand the pathophysiology of stuttering.