Full text (subscription): 52 | Free: 7 | Free (domestic): 2 | Total: 61

By year: 2023 (6), 2022 (1), 2021 (3), 2020 (3), 2019 (12), 2018 (4), 2017 (5), 2016 (3), 2015 (1), 2014 (1), 2013 (6), 2012 (1), 2011 (3), 2009 (2), 2007 (2), 2006 (2), 2005 (3), 2003 (1), 2000 (1), 1996 (1)

Sorted results: 61 matches found (search time: 0 ms)
1.
The linguistic input children receive has a massive and immediate effect on their language acquisition. This fact makes it difficult to discover the biases that children bring to language learning, simply because their input is likely to obscure those biases. In this article, I turn to children who lack linguistic input to aid in this discovery: deaf children whose hearing losses prevent their acquisition of spoken language and whose hearing parents have not yet exposed them to sign language. These children lack input from a conventional language model, yet create gestures, called homesigns, to communicate with hearing individuals. Homesigns have many, although not all, of the properties of human language. These properties offer the clearest window onto the linguistic structures that children seek as they either learn or, in the case of homesigners, construct language.
2.
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often conveyed through visual cues, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to the other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses that attempt to account for how social eye gaze may modulate multi-modal language comprehension.
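To make the design concrete, here is a minimal sketch of how the reported 2 × 2 pattern (addressed vs. unaddressed × speech-only vs. speech + gesture) could be tabulated. The data frame, column names, and response times are invented for illustration; they are not from the study.

```python
# Hypothetical sketch of the study's 2 x 2 design:
# addressee status (addressed / unaddressed) x modality (speech-only / speech+gesture).
# All values and column names are illustrative.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "addressed":   [True, True, False, False, True, True, False, False],
    "gesture":     [False, True, False, True, False, True, False, True],
    "rt_ms":       [712, 705, 798, 709, 690, 688, 777, 701],  # made-up response times
})

# Mean response time per condition cell.
cell_means = trials.groupby(["addressed", "gesture"])["rt_ms"].mean().unstack()
print(cell_means)

# Effect of interest: how much being unaddressed slows responses,
# separately for speech-only and speech+gesture messages.
slowdown = cell_means.loc[False] - cell_means.loc[True]
print(slowdown)  # reported pattern: large for speech-only, near zero with gesture
```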
3.
This qualitative study explored Christian pastors’ perceptions of, and advice for, family therapists who refuse to work with lesbian, gay, and bisexual (LGB) clients by consistently referring them elsewhere. Twenty-one pastors from diverse Christian traditions were interviewed. Thematic analysis identified the following themes in the pastors’ perceptions of this referral practice: (1) Best Interest of the Client and Therapist, (2) Discriminatory Practice, and (3) No Referrals Will Be Provided. The following themes captured the pastors’ advice for family therapists: (1) Develop a Well-Thought-Out Referral Plan, (2) Be Accountable for Your Own Beliefs, and (3) Engage in Conversations.
4.
Extensive research shows that caregivers’ speech and gestures can scaffold children’s learning. This study examines whether caregivers increase the amount of spoken and gestural instruction when a task becomes difficult for children, and whether more instruction combining speech and gesture enhances children’s problem-solving. Ninety-three 3- to 4-year-old Chinese children and their caregivers participated. Each child attempted two jigsaw puzzles (one with 12 pieces, one with 20), each in three phases, with puzzle order randomized. In Phases 1 and 3, the children tried to solve the puzzles alone; in Phase 2, they received instruction from their caregivers. The children assembled a smaller proportion of the 20-piece puzzle than of the 12-piece one, suggesting that it was the more difficult of the two. The caregivers produced more spoken and gestural instruction for the 20-piece puzzle, and the proportion of instruction employing both speech and gesture (+InstS+InstG) was significantly greater for the 20-piece puzzle than for the 12-piece one. More importantly, children who received more +InstS+InstG instruction solved the 20-piece puzzle more successfully than those who received less of it, and those who received no +InstS+InstG instruction performed less successfully in Phase 3. This facilitating effect was not found with the 12-piece puzzle. Our findings suggest that adults should combine speech and gesture in their instruction as frequently as possible when teaching children to perform a difficult task.
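As an illustration of the outcome measure, the sketch below computes the proportion of pieces assembled per puzzle and relates it to the amount of combined speech-and-gesture instruction. The column names and numbers are hypothetical, not the study's data.

```python
# Illustrative sketch of the outcome measure: proportion of pieces correctly
# placed per puzzle, and its relation to the amount of combined
# speech-and-gesture (+InstS+InstG) instruction. Names and numbers are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "child": [1, 1, 2, 2, 3, 3],
    "puzzle_pieces": [12, 20, 12, 20, 12, 20],
    "pieces_placed": [10, 9, 12, 11, 8, 7],
    "inst_speech_gesture": [3, 7, 4, 9, 2, 5],  # +InstS+InstG count in Phase 2
})

df["prop_assembled"] = df["pieces_placed"] / df["puzzle_pieces"]

# Difficulty check: mean proportion assembled by puzzle size.
print(df.groupby("puzzle_pieces")["prop_assembled"].mean())

# Does more speech+gesture instruction accompany better performance
# on the harder (20-piece) puzzle?
hard = df[df["puzzle_pieces"] == 20]
print(hard["inst_speech_gesture"].corr(hard["prop_assembled"]))
```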
5.
Studies of great apes have revealed that they use manual gestures and other signals to communicate about distal objects. There is also evidence that chimpanzees modify the types of communicative signals they use depending on the attentional state of a human communicative partner. The majority of previous studies have involved chimpanzees requesting food items from a human experimenter. Here, these same communicative behaviors are reported in chimpanzees requesting a tool from a human observer. In this study, captive chimpanzees were found to gesture, vocalize, and display more often when the experimenter had a tool than when she did not. It was also found that chimpanzees responded differentially based on the attentional state of a human experimenter, and when given the wrong tool persisted in their communicative efforts. Implications for the referential and intentional nature of chimpanzee communicative signaling are discussed.
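A minimal sketch of the kind of frequency comparison this implies, assuming per-individual signal counts in the two conditions; the counts, and the choice of a paired Wilcoxon test, are illustrative assumptions rather than the study's actual analysis.

```python
# Hypothetical sketch of the core comparison: per-chimpanzee signal counts
# when the experimenter holds the needed tool versus when she does not.
# Counts are invented; the paired Wilcoxon test is an assumed choice.
import pandas as pd
from scipy.stats import wilcoxon

signals = pd.DataFrame({
    "chimp": ["A", "B", "C", "D", "E", "F"],
    "tool_present": [14, 9, 17, 11, 8, 13],  # signals per session, invented
    "tool_absent":  [5, 4, 9, 6, 3, 7],
})

print(signals[["tool_present", "tool_absent"]].mean())

# Paired nonparametric test across individuals (small n, no normality assumed).
stat, p = wilcoxon(signals["tool_present"], signals["tool_absent"])
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
```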
6.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was examined. EMG was positively related to a word’s judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed.

(Author note: This experiment was done by the first author under the supervision of the second author in partial completion of the Ph.D. degree at Columbia University. We gratefully acknowledge the advice and comments of Lois Putnam, Robert Remez, James Magnuson, Michele Miozzo, and Robert B. Tallarico, and the assistance of Stephen Krieger, Lauren Walsh, Jennifer Kim, and Jillian White.)
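The attribute analysis amounts to correlating per-word EMG amplitude with rated word properties; below is a hedged sketch with invented words, ratings, and amplitudes.

```python
# Sketch of the attribute analysis: correlate mean forearm EMG amplitude
# during retrieval with rated properties of each target word.
# Words, ratings, and amplitudes below are invented.
import pandas as pd

words = pd.DataFrame({
    "word": ["hammer", "justice", "ladder", "theory", "scissors"],
    "emg_uv": [4.1, 1.9, 3.6, 1.7, 4.4],          # mean EMG amplitude (microvolts)
    "concreteness":   [6.8, 1.5, 6.5, 1.8, 6.9],  # hypothetical 1-7 ratings
    "manipulability": [6.9, 1.2, 5.0, 1.1, 7.0],
    "spatiality":     [5.5, 1.9, 6.2, 2.0, 5.8],
})

# Positive correlations would mirror the reported pattern: stronger
# gesture-related muscle activity for concrete, manipulable, spatial words.
for attr in ["concreteness", "manipulability", "spatiality"]:
    print(attr, round(words["emg_uv"].corr(words[attr]), 2))
```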
7.
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a “frame” (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a “last item” belonging to one of four categories: a high-close-probability sign (a “semantically reasonable” completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a “semantically odd” completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.
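For readers unfamiliar with the measure, here is a bare-bones sketch of an N400-style analysis: average single-trial EEG epochs per condition and compare mean amplitude in the 300-500 ms window. The sampling rate, epoch layout, and (random) data are assumptions for illustration, not the study's recording parameters.

```python
# Bare-bones sketch of an N400-style analysis on one channel: average the
# epochs for each sentence-final condition, then compare mean amplitude in
# the 300-500 ms window. Sampling rate, epoch span, and (random) data are
# assumptions; with pure noise the printed means will sit near zero.
import numpy as np

fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 to 800 ms around sign onset

rng = np.random.default_rng(0)
# condition -> (n_trials, n_samples) for a single centro-parietal channel
epochs = {
    "congruent_sign":   rng.normal(0.0, 1.0, (30, t.size)),
    "incongruent_sign": rng.normal(0.0, 1.0, (30, t.size)),
    "pseudo_sign":      rng.normal(0.0, 1.0, (30, t.size)),
    "grooming_gesture": rng.normal(0.0, 1.0, (30, t.size)),
}

n400_window = (t >= 0.3) & (t <= 0.5)  # classic N400 latency range
for cond, data in epochs.items():
    erp = data.mean(axis=0)            # trial-averaged waveform (the ERP)
    print(cond, round(erp[n400_window].mean(), 3))
# In real data of this kind, incongruent and pseudo-sign endings would show a
# more negative mean here than congruent signs, and gestures a positive shift.
```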
8.
9.
Our preferences are sensitive to social influences. For instance, we like objects that others look at more than objects that no one looks at. Here, we explored this liking effect using a modified gaze-cueing paradigm. First, we investigated whether the liking effect induced by gaze relied on motoric representations of the target object, by testing whether it could be observed for non-manipulable items (alphanumeric characters) as well as for manipulable items (common tools). We found a significant liking effect for the alphanumeric items. Second, we tested whether another type of powerful social cue could also induce a liking effect, using an equivalent paradigm with pointing hands instead of gaze cues. Pointing hands elicited a robust attention-orienting effect, but they did not induce any significant liking effect. This study extends previous findings and reinforces the view of eye gaze as a special cue in human interactions.
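A small sketch of the two dependent measures in such a paradigm, attention-orienting (a response-time benefit for cued items) and liking (a rating advantage for cued items), computed per cue type; all values and field names are invented.

```python
# Sketch of the two dependent measures per cue type: the attention effect
# (faster responses to cued items) and the liking effect (higher ratings
# for cued items). All values and field names are invented.
import pandas as pd

df = pd.DataFrame({
    "cue":        ["gaze", "gaze", "hand", "hand"] * 2,
    "cued":       [True, False, True, False] * 2,
    "rt_ms":      [410, 452, 405, 449, 415, 458, 402, 446],
    "liking_1_9": [6.2, 5.1, 5.5, 5.6, 6.4, 5.0, 5.4, 5.5],
})

for cue_type, g in df.groupby("cue"):
    cueing = g.loc[~g["cued"], "rt_ms"].mean() - g.loc[g["cued"], "rt_ms"].mean()
    liking = g.loc[g["cued"], "liking_1_9"].mean() - g.loc[~g["cued"], "liking_1_9"].mean()
    print(cue_type, "cueing effect (ms):", round(cueing, 1),
          "| liking effect:", round(liking, 2))
# Pattern in the study: both cue types orient attention (positive cueing
# effect), but only gaze raises liking for the cued items.
```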
10.
Infant signs are intentionally taught and learned symbolic gestures that can represent objects, actions, requests, and mental states. Through infant signs, parents and infants begin to communicate specific concepts before children produce their first spoken words. This study examines whether cultural differences in language are reflected in children’s and parents’ use of infant signs. Parents speaking East Asian languages with their children use verbs more often than English-speaking mothers do, and compared with their English-learning peers, Chinese children are more likely to learn verbs as they acquire their first spoken words. By comparing parents’ and infants’ use of infant signs in the U.S. and Taiwan, we investigate cultural differences in noun/object versus verb/action bias before children’s first language. Parents retrospectively reported their own and their children’s use of first infant signs. Results show that cultural differences in parents’ and children’s infant-sign use were consistent with research on early words, reflecting cultural differences in communication functions (referential versus regulatory) and child-rearing goals (independence versus interdependence). The current study provides evidence that the intergenerational transmission of culture through symbols begins prior to oral language.
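The cross-cultural comparison reduces to the proportion of first infant signs that are action/verb-like in each group; the sketch below shows that computation on invented records.

```python
# Sketch of the verb/action-bias computation: the proportion of first infant
# signs that are action-like, by culture. Records below are invented.
import pandas as pd

signs = pd.DataFrame({
    "culture": ["US"] * 5 + ["Taiwan"] * 5,
    "category": ["object", "object", "object", "action", "request",
                 "action", "action", "object", "action", "request"],
})

verb_bias = (signs.assign(is_action=signs["category"] == "action")
                  .groupby("culture")["is_action"].mean())
print(verb_bias)  # reported direction: higher action/verb proportion in Taiwan
```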