47 search results in total (search time: 15 ms).
11.
Background and objectives: This study evaluates the process and consequence of inducing self-compassion during recovery from social performance stressors. Though interest in self-compassion as an intervention target is growing, extant findings suggest that initially cultivating self-compassion can be challenging for those with high self-criticism and anxiety, common features of social anxiety disorder (SAD).

Design: Quasi-experimental design.

Methods: The current study evaluates the feasibility, content, and outcomes of a brief written self-compassion induction administered after consecutive laboratory social stressors, among adults with SAD (n = 21) relative to healthy controls (HC; n = 35).

Results: Findings demonstrate the feasibility of employing a written self-compassion induction among adults with (and without) SAD, reveal group differences in written responses to the induction, and suggest that the SAD group benefited more from the induction than the HC group, based on greater reductions in state anxiety and greater increases in self-compassion during stressor recovery. Greater use of negative affect words within written responses to the self-compassion induction, but not during general writing, predicted lower subsequent state anxiety across groups, with a medium effect size.

Conclusions: Collectively, the findings support the feasibility and utility of cultivating self-compassion among adults with SAD.
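As an illustration of the final result above, the sketch below simulates the kind of analysis it implies: regressing post-stressor state anxiety on the proportion of negative-affect words in the induction writing and recovering a negative slope of roughly medium size. The data, variable names, and parameter values are hypothetical, not the study's dataset.

```python
# A hedged illustration (simulated, hypothetical data, not the study's dataset) of the kind
# of analysis behind the final result: regressing post-induction state anxiety on the
# proportion of negative-affect words in the self-compassion writing, expecting a negative
# slope of roughly medium effect size.
import numpy as np

rng = np.random.default_rng(5)
n = 56                                            # 21 SAD + 35 HC participants, as above
neg_affect = rng.normal(0.05, 0.02, n)            # assumed proportion of negative-affect words
true_beta = -0.3                                  # assumed standardized effect (medium)
anxiety = true_beta * (neg_affect - neg_affect.mean()) / neg_affect.std() + rng.normal(0, 1, n)

# With a single predictor, the standardized regression slope equals the Pearson correlation.
r = np.corrcoef(neg_affect, anxiety)[0, 1]
print(f"standardized slope (r) = {r:.2f}")        # around -0.3 in this simulated example
```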
12.
Formal notations are diagrams: Evidence from a production task
Although a general sense of the magnitude, quantity, or numerosity of objects is common in both untrained people and animals, the abilities to deal exactly with large quantities and to reason precisely in complex but well-specified situations--to behave formally, that is--are skills unique to people trained in symbolic notations. These symbolic notations typically employ complex, hierarchically embedded structures, which all extant analyses assume are constructed by concatenative, rule-based processes. The primary goal of this article is to establish, using behavioral measures on naturalistic tasks, that some of the same cognitive resources involved in representing spatial relations and proximities are also involved in representing symbolic notations--in short, that formal notations are a kind of diagram. We examined self-generated productions in the domains of handwritten arithmetic expressions and typewritten statements in a formal logic. In both tasks, we found substantial evidence for spatial representational schemes even in these highly symbolic domains.
13.
14.
The image of a material's surface varies not only with viewing and illumination conditions, but also with the material's surface properties, including its 3-D texture and specularity. Previous studies on the visual perception of surface material have typically focused on single material properties, ignoring possible interactions. In this study, we used a conjoint-measurement design to determine how observers represent perceived 3-D texture ("bumpiness") and specularity ("glossiness") and modeled how each of these two surface-material properties affects perception of the other. Observers made judgments of bumpiness and glossiness of surfaces that varied in both surface texture and specularity. We quantified how changes in each surface-material property affected judgments of the other and found that a simple additive model captured visual perception of texture and specularity and their interaction. Conjoint measurement is potentially a powerful tool for analyzing perception of surface material in realistic environments.
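The sketch below is a minimal illustration, not the authors' analysis code, of what an additive conjoint model looks like in practice: simulated bumpiness ratings are modeled as the sum of a per-texture-level term and a per-specularity-level term, estimated by least squares. The numbers of stimulus levels, the simulated ratings, and the effect magnitudes are assumptions for illustration only.

```python
# A minimal sketch (not the authors' code) of fitting an additive conjoint model:
# judged bumpiness ~ texture_effect[i] + specularity_effect[j].
import numpy as np

n_tex, n_spec = 5, 5                      # hypothetical numbers of stimulus levels
rng = np.random.default_rng(0)

# Hypothetical bumpiness ratings for every (texture, specularity) pair.
tex_idx, spec_idx = np.meshgrid(np.arange(n_tex), np.arange(n_spec), indexing="ij")
tex_idx, spec_idx = tex_idx.ravel(), spec_idx.ravel()
true_tex = np.linspace(0.0, 2.0, n_tex)          # assumed texture ("bumpiness") contributions
true_spec = np.linspace(0.0, 0.5, n_spec)        # assumed small specularity intrusion
ratings = true_tex[tex_idx] + true_spec[spec_idx] + rng.normal(0, 0.1, tex_idx.size)

# Design matrix: one indicator column per texture level and per specularity level.
X = np.zeros((ratings.size, n_tex + n_spec))
X[np.arange(ratings.size), tex_idx] = 1.0
X[np.arange(ratings.size), n_tex + spec_idx] = 1.0

# Least-squares estimate of the additive contributions (defined up to an additive constant).
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print("estimated texture effects:    ", np.round(coef[:n_tex] - coef[0], 2))
print("estimated specularity effects:", np.round(coef[n_tex:] - coef[n_tex], 2))
```

In this framing, estimated specularity terms near zero would mean glossiness barely intrudes on bumpiness judgments, while reliably non-zero terms quantify the cross-property influence that the study measured.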
15.
16.
HIPS (Human Information Processing Laboratory's Image Processing System) is a software system for image processing that runs under the UNIX operating system. HIPS is modular and flexible: it provides automatic documentation of its actions, and is relatively independent of special equipment. It has proved its usefulness in the study of the perception of American Sign Language (ASL). Here, we demonstrate some of its applications in the study of vision, and as a tool in general signal processing. Ten examples of HIPS-generated stimuli and—in some cases—analyses are provided, including the spatial filtering analysis of two types of visual illusions; the study of frequency channels with sine-wave gratings and band-limited noise; 3-dimensional perceptual reconstruction from 2-dimensional images in the kinetic depth effect; the perception of depth in random dot stereograms and cinematograms; and the perceptual segregation of objects induced by differential dot motion. Finally, examples of noise-masked, cartoon coded, and hierarchically encoded ASL images are provided.
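HIPS itself is a UNIX-based toolkit and none of its code is reproduced here; the sketch below merely illustrates, in NumPy, two of the stimulus classes listed above (a sine-wave grating and band-limited noise). The image size, spatial frequencies, and orientation are assumed values.

```python
# Illustrative sketch only (HIPS itself is a UNIX toolkit): generating two of the stimulus
# classes mentioned above with NumPy: a sine-wave grating and band-limited noise.
import numpy as np

size = 256                                   # image width/height in pixels (assumed)
y, x = np.mgrid[0:size, 0:size]

# Sine-wave grating: 8 cycles per image, oriented 30 degrees from vertical.
cycles, theta = 8, np.deg2rad(30)
grating = 0.5 + 0.5 * np.sin(2 * np.pi * cycles / size *
                             (x * np.cos(theta) + y * np.sin(theta)))

# Band-limited noise: keep only spatial frequencies between 4 and 12 cycles per image.
freqs = np.fft.fftfreq(size) * size
radius = np.hypot(*np.meshgrid(freqs, freqs, indexing="ij"))
band = (radius >= 4) & (radius <= 12)
white = np.random.default_rng(1).normal(size=(size, size))
noise = np.fft.ifft2(np.fft.fft2(white) * band).real
noise = (noise - noise.min()) / (noise.max() - noise.min())   # rescale to [0, 1]

print(grating.shape, noise.shape)            # two 256 x 256 grayscale stimulus images
```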
17.
EVE, the Early Vision Emulation software, is a set of computer programs designed to compute models of early visual processing. EVE may be used with a wide variety of models concerning spatial detection and discrimination, motion analysis, and issues of spatial sampling. EVE is modular and flexible. It runs under the UNIX operating system, and is device-independent. We describe the implementation of the EVE software and discuss how it may be applied to several visual models.
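EVE's models are described here only at a high level, so the following is a hedged sketch of one building block that early-vision software of this kind typically computes: a single oriented Gabor filter applied to an image. The filter parameters and the SciPy-based convolution are my assumptions, not EVE's implementation.

```python
# A hedged illustration of one ingredient of an early-vision model of the sort EVE computes
# (a single oriented Gabor filter applied to an image); EVE's actual filters and code are
# not reproduced here.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Cosine-phase Gabor patch; parameter values are assumptions for illustration."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

image = np.random.default_rng(2).random((128, 128))    # placeholder input image
response = fftconvolve(image, gabor(theta=np.pi / 4), mode="same")
print(response.shape)                                  # filter output, same size as the input
```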
18.
We introduce an objective shape-identification task for measuring the kinetic depth effect (KDE). A rigidly rotating surface consisting of hills and valleys on an otherwise flat ground was defined by 300 randomly positioned dots. On each trial, 1 of 53 shapes was presented; the observer's task was to identify the shape and its overall direction of rotation. Identification accuracy was an objective measure, with a low guessing base rate, of the observer's perceptual ability to extract 3D structure from 2D motion via KDE. (1) Objective accuracy data were consistent with previously obtained subjective rating judgments of depth and coherence. (2) Along with motion cues, rotating real 3D dot-defined shapes inevitably produced a cue of changing dot density. By shortening dot lifetimes to control dot density, we showed that changing density was neither necessary nor sufficient to account for accuracy; motion alone sufficed. (3) Our shape task was solvable with motion cues from the 6 most relevant locations. We extracted the dots from these locations and used them in a simplified 2D direction-labeling motion task with 6 perceptually flat flow fields. Subjects' performance in the 2D and 3D tasks was equivalent, indicating that the information processing capacity of KDE is not unique. (4) Our proposed structure-from-motion algorithm for the shape task first finds relative minima and maxima of local velocity and then assigns 3D depths proportional to velocity.
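The closing sentence states the proposed algorithm compactly; the sketch below works through the underlying geometry under my own simplifying assumptions (orthographic projection, rotation about a vertical image-plane axis): horizontal image speed is proportional to a dot's depth, so velocity extrema mark hills and valleys, and depth can be assigned in proportion to velocity. The surface, dot count, and rotation rate are illustrative, not the authors' stimuli or code.

```python
# A minimal sketch of the depth-from-velocity idea described above, not the authors'
# implementation: under orthographic projection and rotation about a vertical axis,
# horizontal image velocity is proportional to depth.
import numpy as np

rng = np.random.default_rng(3)
n_dots, omega = 300, 0.05                  # number of dots; rotation per frame (radians)

# Hypothetical surface: one hill and one valley on an otherwise flat ground.
x, y = rng.uniform(-1, 1, n_dots), rng.uniform(-1, 1, n_dots)
z = 0.3 * np.exp(-((x - 0.4)**2 + y**2) / 0.05) - 0.3 * np.exp(-((x + 0.4)**2 + y**2) / 0.05)

# Image positions before and after a small rotation about the vertical (y) axis.
x_rot = x * np.cos(omega) + z * np.sin(omega)
u = x_rot - x                              # horizontal image velocity (per frame)

z_hat = u / omega                          # depth recovered up to scale: z_hat proportional to u
print(f"correlation of recovered with true depth: {np.corrcoef(z, z_hat)[0, 1]:.3f}")

hill, valley = np.argmax(u), np.argmin(u)  # velocity extrema locate the hill and the valley
print(f"hill near   ({x[hill]:.2f}, {y[hill]:.2f})")
print(f"valley near ({x[valley]:.2f}, {y[valley]:.2f})")
```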
19.
20.
Many studies of explanation have focused on higher level tasks and on how explanations draw upon relevant prior knowledge, which then helps in understanding some event or observation. However, explanations may also affect performance in simple tasks even when they include no task-relevant information. In three experiments, we show that explanations adding no task-relevant information alter performance in a sequential binary decision task. Whereas people with no explanation for why two events occurred at different rates tended to predict each outcome in proportion to its probability of occurrence (to "probability match"), people with an explanation tended to predict the more likely event more often (to "overmatch," a better strategy). These results suggest a broader view of explanation, which includes a role in shaping simple tasks outside of higher level reasoning.
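To make the matching-versus-overmatching contrast concrete, the simulation below (with an assumed outcome probability and assumed prediction rates; not the authors' task code) estimates accuracy for a predictor that picks the more likely event at various rates, showing why overmatching outperforms matching.

```python
# Illustrative simulation with assumed parameters: in a sequential binary prediction task
# where one outcome occurs with probability p = 0.75, a probability-matching predictor is
# correct less often than an "overmatching" predictor that picks the likely outcome more
# often than p (maximizing is the limiting case).
import numpy as np

rng = np.random.default_rng(4)
p, n_trials = 0.75, 100_000
outcomes = rng.random(n_trials) < p                  # True = the more likely event

def accuracy(predict_rate):
    """Accuracy when predicting the likely event on a fixed proportion of trials."""
    predictions = rng.random(n_trials) < predict_rate
    return np.mean(predictions == outcomes)

print("matching (predict at rate 0.75):", round(accuracy(0.75), 3))   # about 0.625
print("overmatching (rate 0.90):       ", round(accuracy(0.90), 3))   # about 0.70
print("maximizing (rate 1.00):         ", round(accuracy(1.00), 3))   # about 0.75
```

With p = 0.75, matching yields about 62.5% correct in expectation, while always predicting the likely event yields 75%, which is why overmatching is the better strategy.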