Similar Documents
20 similar documents found (search time: 15 ms)
1.
Is language linked to mental representations of space? There are several reasons to think that language and space might be separated in our cognitive systems, but they nevertheless interact in important ways. These interactions are evident in language viewed as a means of communication and in language considered a form of representation. In communication, spatial factors may be explicit in language itself, such as the spatial-gestural system of American Sign Language. Even the act of conversing with others is a spatial behavior because we orient to the locations of other participants. Language and spatial representations probably converge at an abstract level of concepts and simple spatial schemas.

2.
Numerical processing and language processing are both grounded in space. In the present study we investigated whether these are fully independent phenomena or whether they share a common basis. If number processing activates spatial dimensions that are also relevant for understanding words, then processing numbers may influence subsequent lexical access. Specifically, if high numbers relate to upper space, they should facilitate the understanding of words such as bird, whose referents are typically found in upper vertical space. The opposite should hold for low numbers, which should facilitate the understanding of words such as ground, whose referents lie in lower vertical space. Indeed, in two experiments we found evidence for such an interaction between number and word processing. Additional investigations of large text corpora ruled out a contribution of linguistic factors, which strongly suggests that understanding numbers and language draws on similar modal representations in the brain. The implications of these findings for a broader perspective on grounded cognition are discussed.

3.
The acceptability of sentences in natural language is constrained not only by grammaticality, but also by the relationship between what is being conveyed and such factors as context and the beliefs of interlocutors. In many languages the critical element in a sentence (its focus) must be given grammatical prominence. There are different accounts of the nature of focus marking. Some researchers treat it as the grammatical realization of a potentially arbitrary feature of universal grammar and do not provide an explicit account of its origins; others have argued, however, that focus marking is a (grammaticalized) functional solution to the problem of efficiently transmitting information via a noisy channel. By adding redundancy to highlight critical elements in particular, focus protects key parts of the message from noise. If this information-theoretic account is true, then we should expect focus-like behavior to emerge even in non-linguistic communication systems given sufficient noise and pressures for efficiency. We tested this in an experiment in which participants played a simple communication game: they had to click cells on a grid to communicate one of two line figures drawn across the grid. We manipulated the noise, available time, and required effort, and measured patterns of redundancy. Because the lines in many cases overlapped, meaning that only some parts of each line could be used to distinguish one from the other, we were able to compare the extent to which effort was expended on adding redundancy to critical (non-overlapping) and non-critical (overlapping) parts of the message. The results supported the information-theoretic account of focus and shed light on the emergence of information structure in language.

4.
People who know the outcome of an event tend to overestimate their own prior knowledge or others' naïve knowledge of it. This hindsight bias pervades cognition, lending the world an unwarranted air of inevitability. In four experiments, we showed how knowing the identities of words causes people to overestimate others' naïve ability to identify moderately to highly degraded spoken versions of those words. We also showed that this auditory hindsight bias occurs despite people's efforts to avoid it. We discuss our findings in the context of communication, in which speakers overestimate the clarity of their message and listeners overestimate their understanding of the message.

5.
This paper begins with a discussion of Stanley Cavell's philosophy of language learning. Young people learn more than the meaning of words when acquiring language: they learn about (the quality of) our form of life. If we, as early childhood educators, see language teaching as something like handing some inert thing to a child, then we unduly limit the possibilities of education for that child. Cavell argues that we must become poets if we are to be the type of representatives of language that education calls for. In the final section of the paper I discuss the work of Lucy Sprague Mitchell, someone who developed an approach to language teaching that overlaps in interesting ways with Cavell's approach in The Claim of Reason.

6.
7.
The “I Said, You Said” technique leads the couple through a series of communication exercises that emphasize the power of verbal and non-verbal cues. Initially, the non-verbal cues that a couple uses to interpret each other's spoken words are decreased to reduce the outside influences of attributed meaning. This allows the couple to focus on the clarity of the spoken message. Additional steps in this technique include restructuring speaker and listener roles, education about communication patterns, and learning how to communicate with more clarity and effectiveness, even when topics are emotionally laden. A brief vignette using the intervention follows this discussion.

8.
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system, and in the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes produced by grapheme-phoneme conversion when read. In two experiments, each involving 40 fluent readers, we compared visual lexical decision on Korean orthographic forms that would require such a change (C stimuli) or not (NC stimuli). We found that NC words were accepted faster than C words, and C nonwords were rejected faster than NC nonwords. The results suggest that the phoneme-to-phoneme transformations involved in uttering a word may also be involved in visually identifying it.

9.
We cannot understand why Treisman and Geffen (1967) think their experiment argues against our theory (Deutsch and Deutsch, 1963). Briefly, Treisman and Geffen ask subjects to repeat and tap to certain words in one message, played to one ear, and only tap to such words when they occur in another message played to the other ear. They find that subjects neglect the words to which they only have to tap. According to our theory, stimuli with a greater weighting of importance inhibit certain outputs (such as storage, motor response) of the structures processing stimuli with a lesser weighting of importance. Now it seems to be clear that Treisman and Geffen have by their instructions (to tap and repeat one set of words and only to tap to another set of words) produced a situation in which one set of stimuli is given a larger weighting of importance than the other. It is therefore not surprising on our theory that the less important set is almost disregarded. It is instructive here to consider Lawson's (1966) very similar experiment. In this experiment the signals to which the subject has to tap do not also have to be repeated if they occur in the message which is being shadowed. (These signals are non-verbal.) Lawson's results are almost the opposite of Treisman and Geffen's, as would be expected from our theory. Treisman and Geffen have some difficulty in explaining the discrepancy. “It seems that analysis of simple physical signals precedes both the selective filter and the analysis of verbal content in the perceptual sequence, that the bottle-neck in attention arises chiefly in speech recognition where of course the information load is usually much higher. 
To confirm the belief that the verbal content of the secondary message was not being analysed, we find no evidence whatever of interference from secondary target words when these received no tapping response.” (We quote the last sentence as just one example of the fact that Treisman and Geffen have failed to understand our theory. It is one of the major points of this theory to explain why “secondary” messages do not cause interference with the “primary” message while they are being analysed.) To return now to the subject of Lawson's experiments, we would suggest that the outcome of such experiments would be the same if, instead of signals, words were used in Lawson's paradigm. These words should occur on both channels and should be distinguishable by another speaking voice. The subject should be asked to respond to, but not to repeat, such words. To make sure the subject is not simply responding to differences in timbre, pitch, etc., the target words should be interspersed with other words. Treisman and Geffen could not then postulate differences in information load to explain an unfavourable result.

10.
Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply “recognized” versus “not recognized.” More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.

11.
If dyslexic individuals have the ability to express themselves in different ways, particularly in the field of modern graphic design, would they be a favoured group in creating the extraordinary and outstanding ideas required in communication design? The study group consisted of 20 primary school dyslexics between the ages of 7 and 12, with 20 non-dyslexics serving as a control group. A jury of four specialists evaluated the drawings gathered from the 40 participants. Although we cannot say with certainty that dyslexics are the best possible candidates for communication design education, the statistical results led us to conclude that they should be among the potential candidates both for general communication design education and for more specific minor study areas such as icon design.

12.
13.
When listening to speech in one's native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there were only one bead on the string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available even to adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word boundaries are given, in languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers with no prior exposure to the test languages readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish, and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics, and that these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language.

14.
We present evidence that English- and Mandarin-speakers agree about how to map dimensions (e.g., size and clarity) to vertical space and that they do so in a directional way. We first developed visual stimuli for four dimensions—size, clarity, complexity, and darkness—and in each case we varied the stimuli to express a range of the dimension (e.g., there were five total items expressing the range covering big, medium, and small). In our study, English- and Mandarin-speakers mapped these stimuli to an unlabelled vertical scale. Most people mapped dimensional endpoints in similar ways; using size as a standard, we found that the majority of participants mapped the clearest, most complex, and darkest items to the same end of the vertical scale as they mapped the biggest items. This indicates that all four dimensions have a weighted or unmarked end (i.e., all are directional or polar). The strong similarities in polarity across language groups contrasted with group differences on a lexical task, for which there was little cross-linguistic agreement about which comparative words to use to describe stimulus pairs (e.g., “bigger” vs. “smaller”). Thus, we found no evidence in this study that the perception of these dimensions is influenced by language.

15.
Expectancy theory suggests that people develop normative expectations about the appropriateness of communication behavior that differ for males and females. Support was found for an interaction hypothesis: males were expected to use more verbally aggressive persuasive message strategies, and when they deviated from such strategies they negatively violated expectations and were less persuasive. Females, in contrast, were expected to be less verbally aggressive and to use more prosocial message strategies, and they were penalized for deviations from the expected strategy. Manipulation checks indicated that people have clear differences in expected strategy use by males and females and that neither the psychological sex role nor the biological sex of receivers alters those expectations. Results are discussed in terms of their similarity to prior language research, as an extension of expectancy theory, and as added knowledge about the effects of specific compliance-gaining message strategies.

16.
Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta-analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task- and stimulus-related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.

17.
The ability to identify the grammatical category of a word (e.g., noun, verb, adjective) is a fundamental aspect of competence in a natural language. Children show evidence of categorization by as early as 18 months, and in some cases younger. However, the mechanisms that underlie this ability are not well understood. The lexical co-occurrence patterns of words in sentences could provide information about word categories; for example, words that follow "the" in English often belong to the same category. As a step in understanding the role distributional mechanisms might play in language learning, the present study investigated the ability of adults to categorize words on the basis of distributional information. Forty participants listened for approximately 6 min to sentences in an artificial language and were told that they would later be tested on their memory for what they had heard. Participants were next tested on an additional set of sentences and asked to report which sentences they recognized from the first 6 min. The results suggested that learners performed a distributional analysis on the initial set of sentences and recognized sentences on the basis of their memory for sequences of word categories. Thus, mechanisms that would be useful in natural language learning were shown to be active in adults in an artificial language learning task.
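The distributional cue this abstract describes, that words sharing a preceding frame word such as "the" tend to share a grammatical category, can be illustrated with a minimal sketch. The toy corpus and the cluster label `noun_like` below are invented for illustration; they are not the stimuli or analysis from the study itself.

```python
from collections import defaultdict

# Toy corpus (invented sentences) illustrating the distributional cue:
# words that appear after the same frame word tend to share a category.
corpus = [
    "the dog sees the cat",
    "the cat sees the bird",
    "a dog chases a bird",
]

# Map each word to the set of words that immediately precede it.
contexts = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for prev, word in zip(words, words[1:]):
        contexts[word].add(prev)

# Group words that share a determiner context ("the" or "a"):
# a rough, purely distributional stand-in for the category "noun".
noun_like = sorted(w for w, ctx in contexts.items() if ctx & {"the", "a"})
print(noun_like)  # → ['bird', 'cat', 'dog']
```

Even this crude analysis separates the noun-like words from verbs without any labeled input, which is the kind of bootstrap the distributional account posits.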

18.
贾玲, 雷江华, 宫慧娜, 张奋, 陈影. 《心理科学》 (Psychological Science), 2018, (5): 1077-1083
This study examined the influence of different encoding strategies and fingerspelling features on deaf students' processing of sign-language words. Experiment 1 used fingerspelled and gestural sign words to test whether the presence of fingerspelling affects how sign words are encoded. Experiment 2 used fingerspelled sign words with different features to further examine the effects of fingerspelling position and fingerspelling form on encoding. The results showed that deaf students were more adept at fingerspelling-based encoding than phonological encoding when processing sign words; that fingerspelling and gesture processing differed significantly, with fingerspelling facilitating deaf students' processing of sign words; and that fingerspelling position and form jointly shaped the processing of sign words.

19.
Conclusion In 2004, Prof. Christopher Henshilwood of the University of Bergen discovered in South Africa what appears to be the oldest known jewelry: 75,000-year-old pierced and ochre-tinted tick shells. His discovery suggested the importance of jewelry and other forms of interpersonal communication and representation. Henshilwood asserts that “once symbolically mediated behavior was adopted by our ancestors it meant communication strategies rapidly shifted, leading to the transmission of individual and widely shared cultural values” (Graham 2004). If we agree with Prof. Henshilwood's assessment of the import of the initial use of symbolic display technologies (in this case, tick-shell decorative jewelry), the implications for evolving practices of mobile communication technology may be even more significant than we generally assume. Specifically, novel forms of widespread mediated communication could alter the cultural values we embrace and transmit. They could also transform social structure, interpersonal processes, and land use in ways we might neither anticipate nor desire. The lines of investigation sketched above are important because they illuminate current and emerging social practices and their implications. Mobile technology allows unprecedented permutations and concatenations of innovations in communication at the levels of place and space; of individual, group, and mass; and of creative new services offered by a range of entities, from amateur creators to gigantic corporations. We therefore have an opportunity to structure services and social practices in a self-aware way that should be conducive to better outcomes than would otherwise be the case. I would, for the purposes of argument, go further and suggest that mobile communication is also likely to be a transformative technology.
He is the author of Connections: Social and Cultural Studies of the Telephone in American Life (1999) and editor of Machines that Become Us: The Social Context of Personal Communication Technology (2003), both available from Transaction Publishers.

20.
Iconicity, the correspondence between form and meaning, may help young children learn to use new words. Early-learned words are higher in iconicity than later-learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect children to say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and that adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but also produce these words more frequently in conversation, a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that iconicity scaffolds the production and comprehension of spoken language during early development.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号