251.
Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial–numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number–space interactions. Journal of Experimental Psychology: General, 139, 180–190] suggests that this effect also arises for vocal “left” and “right” responding, indicating that verbal–spatial coding has a role to play in determining it. Another presumably verbal-based, spatial–numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416–442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of − and + polarities, with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even–odd parity judgements were made using either “left” and “right” or “bad” and “good” vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal–spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important constraint on the presumed generality of the polarity correspondence account. On the other hand, the presence of robust MARC effects for “bad” and “good” but not “left” and “right” vocal responses is consistent with the view that such effects are due to conceptual associations between semantic codes for odd–even and bad–good (but not necessarily left–right).
252.
Introduction: Many authors agree on the importance of training parents in early literacy strategies. Objective: This study analyses the effects of an intervention to improve parent–child interactions during reading sessions, using interactive reading techniques. Method: The design is exploratory and includes a treatment group (n = 22), which benefited from four interactive reading workshops, and a control group (n = 18), which did not benefit from specific training. Both groups read the same books, three times a week, for 10 weeks. The children come from middle socioeconomic backgrounds and attend preschool or kindergarten (grades 1–3). Results: The analyses were conducted on the basis of pre- and post-intervention video observations, coded using the Adult–Child Interactive Reading Inventory (ACIRI). Results from an ANCOVA show that parental behavior, and in turn child behavior, improves post-intervention: parents improve their children's attention to the text and implement literacy strategies, while the children become more involved in the interactions. Conclusion: Interactive reading workshops for parents improve the quantity and quality of parent–child interactions when reading books in a natural and playful context.
253.
The present study was designed to examine the influence of explanation-based knowledge regarding system functions and the driver’s role in conditionally automated driving (Level 3, as defined in SAE J3016). In particular, we studied how safely and successfully drivers assume control of the vehicle when encountering situations that exceed the automation parameters. This examination was conducted through a test-track experiment. Thirty-two younger drivers (mean age = 37.3 years) and 24 older drivers (mean age = 71.1 years) participated in Experiments 1 and 2, respectively. Adopting a between-participants design, in each experiment the participants were divided into two age- and sex-matched groups that were given differing levels of explanation-based knowledge concerning the system limitations of automated driving. The only information given to the less-informed groups was that, during automated driving, drivers may be required to occasionally assume control of the vehicle. The well-informed groups were given the same information, as well as details regarding the auditory-visual alerts produced by the human–machine interface (HMI) during requests to intervene (RtIs), and examples of situations where RtIs would be issued. Ten and nine RtI events were staged for each participant in Experiments 1 and 2, respectively; the participants performed a non-driving-related task while the automated driving system was functioning. For both experiments it was found that, for all RtI events, more participants in the well-informed groups than in the less-informed groups successfully assumed control of the vehicle. These results suggest that, in addition to providing information regarding the possible occurrence of RtIs, explanations of the HMI and of RtI-related situations are effective for helping both younger and older drivers safely and successfully negotiate such events.
254.
In partially automated vehicles, the driver and the automated system share control of the vehicle. Consequently, the driver may have to switch between driving and monitoring activities. This can critically impact the driver’s situational awareness. The human–machine interface (HMI) is responsible for efficient collaboration between driver and system. It must keep the driver informed about the status and capabilities of the automated system, so that he or she knows who or what is in charge of the driving. The present study was designed to compare the ability of two HMIs with different information displays to inform the driver about the system’s status and capabilities: a driving-centered HMI that displayed information in a multimodal way, with an exocentric representation of the road scene, and a vehicle-centered HMI that displayed information in a more traditional visual way. The impact of these HMIs on drivers was compared in an on-road study. Drivers’ eye movements and response times for questions asked while driving were measured. Their verbalizations during the test were also transcribed and coded. Results revealed shorter response times for questions on speed with the exocentric and multimodal HMI. The duration and number of fixations on the speedometer were also greater with the driving-centered HMI. The exocentric and multimodal HMI helped drivers understand the functioning of the system, but was more visually distracting than the traditional HMI. Both HMIs caused mode confusions. The use of a multimodal HMI can be beneficial and should be prioritized by designers. The use of auditory feedback to provide information about the level of automation needs to be explored in longitudinal studies.
255.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants’ speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
256.
Perceiving one’s self as accepted by important others, such as parents, is fundamental and crucial for the well-being of each individual. One major aspect of interpersonal acceptance-rejection theory (IPARTheory) is examining how parental acceptance-rejection affects people’s psychological adjustment. This theory has been validated in many countries and cultural groups around the world, but has not been utilized in the Vietnamese context. This research aims to assess the reliability of IPARTheory measures in Vietnam and the applicability of the theory itself among a Vietnamese sample. Participants included 162 students from a high school in Hanoi (mean age = 15.58 years; 69.8% female). Materials consisted of Vietnamese versions of various IPARTheory measures: Parental Acceptance-Rejection Questionnaire, Personality Assessment Questionnaire, Interpersonal Relationship Anxiety Questionnaire, and a demographics form designed specifically for this research. Analyses show that psychological maladjustment significantly correlated with perceived paternal rejection, maternal rejection, and their subscales. Cronbach’s alphas were strong, ranging from .73 to .97, except for the dependency and hostility subscales of the Personality Assessment Questionnaire. Thus, the results provide evidence for the reliability of various IPARTheory measures in Vietnam. The relationships found in this study have implications for parents, teachers, and psychologists to employ in order to provide adolescents with appropriate guidance and intervention based on the importance of perceived parental acceptance-rejection.
257.
This study examined whether the induction of different states of arousal via positive emotions broadens thought–action repertoires. Sixty-two Japanese undergraduate and graduate students were randomly assigned to (a) high-arousal positive emotion, (b) low-arousal positive emotion, and (c) neutral groups, after which they watched a 3-min film clip. Participants completed the Affect Grid to confirm their mood state before and after watching the film. Following this, they completed the Twenty Statements Test, which measures thought–action repertoires. A one-way analysis of variance was conducted on the Twenty Statements Test score. The results showed that high-arousal positive emotion broadened thought–action repertoires to a greater extent than did low-arousal positive emotion and the neutral state, while low-arousal positive emotion broadened such repertoires to a greater extent than did the neutral state. We discuss the different effects of high- and low-arousal positive emotions on thought–action repertoires.
258.
Using an intertemporal choice task paradigm, this study examined self–other differences in decision making under gain and loss contexts. The results showed that: (1) participants preferred the immediate option more when deciding for themselves than when deciding for others; (2) the immediate option was preferred more in the loss context than in the gain context; and (3) in the gain context there was no significant difference between deciding for oneself and deciding for others in choosing the immediate option, whereas in the loss context participants preferred the immediate option more when deciding for themselves than when deciding for others. These findings indicate that self–other differences in decision making are asymmetric across gain and loss contexts.
259.
Deficits in facial emotion recognition occur frequently after stroke, with adverse social and behavioural consequences. The aim of this study was to investigate the neural underpinnings of the recognition of emotional expressions, in particular of the distinct basic emotions (anger, disgust, fear, happiness, sadness and surprise). A group of 110 ischaemic stroke patients with lesions in (sub)cortical areas of the cerebrum was included. Emotion recognition was assessed with the Ekman 60 Faces Test of the FEEST. Patient data were compared to data from 162 matched healthy controls (HCs). For the patients, whole-brain voxel-based lesion–symptom mapping (VLSM) on 3-Tesla MRI images was performed. Results showed that patients performed significantly worse than HCs on both overall recognition of emotions, and specifically of disgust, fear, sadness and surprise. VLSM showed significant lesion–symptom associations for the FEEST total in the right fronto-temporal region. Additionally, VLSM for the distinct emotions showed, apart from overlapping brain regions (insula, putamen and Rolandic operculum), also regions related to specific emotions. These were: middle and superior temporal gyrus (anger); caudate nucleus (disgust); superior corona radiata white matter tract, superior longitudinal fasciculus and middle frontal gyrus (happiness); and inferior frontal gyrus (sadness). Our findings help in understanding how lesions in specific brain regions can selectively affect the recognition of the basic emotions.
260.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited more when processing spatial information in peripersonal (within arm reach) than in extrapersonal (beyond arm reach) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging which object was: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.