Existing social stressor concepts disregard the variety of task-related situations at work that require skillful social behavior to maintain good social relationships while achieving certain task goals. In this article, we challenge the view that social stressors at work are solely dysfunctional aspects of work that evoke employee ill health. Drawing on the challenge-hindrance stressor framework, we introduce the concept of social challenge stressors as a job characteristic and examine their relationships with individual well- and ill-being. In Study 1, we developed a new scale to measure social challenge stressors and tested its validity. Results from two independent samples supported a single-factor structure and showed that social challenge stressors are distinct from related stressor concepts. Using two samples, one of which had also been used to test the factor structure, we analyzed the unique contribution of social challenge stressors to predicting employee well- and ill-being. As expected, social challenge stressors were simultaneously related to psychological strain and well-being. Using time-lagged data, Study 2 investigated mechanisms that may explain how social challenge stressors are linked to well-being and strain. In line with the stress-as-offense-to-self approach, we expected indirect relationships via self-esteem. Additionally, social support was expected to moderate the relationships between social stressors and self-esteem. Whereas the indirect relationships were largely confirmed, we found no support for a buffering role of social support in the link between social hindrance stressors and self-esteem. Although we found a moderation effect for social challenge stressors, the results indicated a compensation pattern that ran counter to expectations.
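A minimal sketch of how such a moderated first-stage indirect effect could be tested follows. This is not the authors' analysis code; the variable names (scs, support, self_esteem, well_being) and the percentile-bootstrap procedure are illustrative assumptions.

```python
# Illustrative sketch only; not the authors' analysis. Assumes a DataFrame
# with columns: scs (social challenge stressors), support (social support),
# self_esteem, and well_being; all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    """Estimate the scs -> self_esteem -> well_being indirect effect (a*b)."""
    # First stage: stressor predicting self-esteem, moderated by social support
    stage1 = smf.ols("self_esteem ~ scs * support", data=df).fit()
    # Second stage: self-esteem predicting well-being, controlling for the stressor
    stage2 = smf.ols("well_being ~ self_esteem + scs", data=df).fit()
    # Conditional first-stage effect, evaluated at the sample mean of support
    a = stage1.params["scs"] + stage1.params["scs:support"] * df["support"].mean()
    return a * stage2.params["self_esteem"]

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 42):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    boot = [indirect_effect(df.sample(len(df), replace=True, random_state=rng))
            for _ in range(n_boot)]
    return np.percentile(boot, [2.5, 97.5])
```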
In the present experiment, subjects made a decision between two alternatives that was either reversible or irreversible. After the choice, subjects rated the attractiveness of both alternatives again, at varying time intervals. Re-evaluation of the alternatives increased with the length of the interval under the irreversible condition and decreased under the reversible condition. The results are discussed within the framework of dissonance theory.
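Since the abstract does not specify how re-evaluation was scored, the sketch below uses the classic spreading-of-alternatives index as a hypothetical operationalization:

```python
# Hypothetical operationalization: "re-evaluation" scored as the classic
# spreading-of-alternatives index (the abstract does not define the index).
def spreading_of_alternatives(pre_chosen: float, pre_rejected: float,
                              post_chosen: float, post_rejected: float) -> float:
    """Positive values mean the chosen alternative gained attractiveness and/or
    the rejected one lost it after the choice, as dissonance theory predicts."""
    return (post_chosen - pre_chosen) - (post_rejected - pre_rejected)

# Example: chosen rises 5 -> 6, rejected falls 4 -> 3  =>  index = 2.0
print(spreading_of_alternatives(5, 4, 6, 3))
```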
The present study was conducted to demonstrate classical conditioning of electrodermal (ED) and heart rate (HR) responses using a nonaversive reaction time (RT) task as the unconditional stimulus (US). Three groups of 12 subjects each were studied to test the efficacy of this US procedure by varying the essential components of the RT-task US between groups. Eight-second differential delay conditioning was applied in each group. Simple geometric figures (square, cross) displayed on a TV screen served as CS+ and CS−. The RT task consisted of a nonaversive tone (72 dBA, 1000 or 1200 Hz) and a motor response (pressing a button with the left index finger); subjects were asked to respond as soon as the tone was presented. The three groups received different stimulus sequences during the 16-trial acquisition phase only. In one group (Group C1), CS+ was followed by a tone to which subjects were to respond, whereas CS− was not followed by a tone. Similarly, in a second group (Group H), CS+ was followed by a tone whereas CS− was not; however, subjects in Group H (the habituation group) were not required to respond to the tone. In a third group (Group C2), CS+ was followed by a tone to which subjects were to respond, while CS− was followed by a different tone requiring no response. Analysis of the Group C1 data showed differential conditioning in each response measure. Group H displayed habituation in each response measure. In Group C2, differential conditioning was obtained only in the second latency window of ED responses. Across all trials, first-interval anticipatory ED responses and HR responses occurred during acquisition but did not differentiate between the CS conditions. Although the results of Group C2 require further exploration, differential conditioning of HR and of ED responses in all latency windows was demonstrated using a nonaversive RT task as the US.
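The three acquisition contingencies can be summarized compactly as data. In the sketch below, the assignment of the two tone frequencies to particular CSs is an assumption; the abstract gives both values (1000 and 1200 Hz) but not the mapping.

```python
# Compact encoding of the three acquisition contingencies described above.
# The 1000 vs. 1200 Hz assignment per CS is assumed, not stated in the abstract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contingency:
    tone_hz: Optional[int]  # US tone frequency, or None if no tone follows the CS
    respond: bool           # whether a button press is required to the tone

GROUPS = {
    # C1: CS+ followed by a tone requiring a response; CS- followed by no tone
    "C1": {"CS+": Contingency(1000, True),  "CS-": Contingency(None, False)},
    # H: same stimulus sequence as C1, but no response required (habituation)
    "H":  {"CS+": Contingency(1000, False), "CS-": Contingency(None, False)},
    # C2: CS+ followed by a tone requiring a response; CS- by a different tone
    "C2": {"CS+": Contingency(1000, True),  "CS-": Contingency(1200, False)},
}

CS_US_DELAY_S = 8        # differential delay conditioning interval
ACQUISITION_TRIALS = 16  # acquisition phase length per the abstract
```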
It has repeatedly been shown that the time and accuracy of recognizing a word depend strongly on where in the word the eye is fixating. Word-recognition performance is maximal when the eye fixates a region near the word’s center and decreases to both sides of this “optimal viewing position.” The presumed reason for this phenomenon is the strong drop-off of visual acuity: the visibility of letters decreases with increasing eccentricity from the fixation location. Consequently, fewer letters can be identified when the beginning or end of a word is fixated than when its center is fixated. The present study is a test of this visual-acuity hypothesis. If the phenomenon is caused by letter visibility, it should be sensitive to variations in the visual conditions under which the letters are presented. Letter visibility was decreased by increasing the interletter distances within the word (e.g., a_t_t_e_m_p_t). As predicted by the hypothesis, the viewing-position effect became more pronounced. An additional experiment showed that destroying word-shape information (e.g., aTtEmPt) decreased overall word-recognition performance but had no influence on the viewing-position effect. Varying the viewing position in words might thus be used as a paradigm allowing one to separate the contributions of letter information and supraletter information to word recognition.
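The two stimulus manipulations are simple string transformations; a minimal sketch follows, with function names chosen for illustration.

```python
# Illustrative generation of the two stimulus manipulations described above:
# increased interletter spacing (a_t_t_e_m_p_t) and alternating case (aTtEmPt).
def space_letters(word: str, sep: str = "_") -> str:
    """Insert a separator between letters to reduce letter visibility."""
    return sep.join(word)

def alternate_case(word: str) -> str:
    """Destroy word-shape information by alternating lower/upper case."""
    return "".join(c.upper() if i % 2 else c.lower()
                   for i, c in enumerate(word))

assert space_letters("attempt") == "a_t_t_e_m_p_t"
assert alternate_case("attempt") == "aTtEmPt"
```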