Similar Articles
20 similar records found (search time: 15 ms)
1.
Algorithm aversion refers to the phenomenon that people still prefer human decisions even though algorithms can usually make more accurate decisions than humans. The three-dimensional motivation theory of algorithm aversion identifies three main causes: doubt about the algorithm as an agent, its lack of moral standing, and the effacement of human uniqueness. These correspond to three psychological motives (trust, responsibility, and control) and to three feasible ways of reducing algorithm aversion: increasing human trust in algorithms, strengthening the algorithm's responsibility as an agent, and exploring personalized algorithm design that highlights people's control over algorithmic decisions. Future research could take a more social perspective to examine the boundary conditions of algorithm aversion and other possible motives.

2.
An algebraic approach to programs called recursive coroutines, due to Janicki [3], is based on the idea of treating certain complex algorithms as algebraic models of those programs. Complex algorithms are generalizations of pushdown algorithms, which are algebraic models of recursive procedures (see Mazurkiewicz [4]). LCA, the logic of complex algorithms, was formulated in [11]. It formalizes algorithmic properties of a class of deterministic programs, called here complex recursive programs or interacting stacks-programs, for which complex algorithms constitute mathematical models. LCA is in a sense an extension of algorithmic logic as initiated by Salwicki [14] and of extended algorithmic logic EAL as formulated and examined by the present author in [8], [9], [10]. In LCA, as in EAL, ω⁺-valued logic is applied as a tool to construct the control systems (stacks) occurring in the corresponding algorithms. The aim of this paper is to give a complete axiomatization of LCA and to prove a completeness theorem. Logic of complex algorithms was presented at FCT'79 (International Symposium on Fundamentals of Computation Theory, Berlin 1979).

3.
Algorithms are widely used for decision making, yet compared with decisions made by humans, algorithmic decisions, even when their content is identical, more readily split individuals' reactions into approach and avoidance. Approach means that individuals see algorithmic decisions as fairer, as containing less bias and discrimination, and as more trustworthy and acceptable than human ones; avoidance is the opposite. The process motivation theory of approach and avoidance toward algorithmic decisions explains this phenomenon by distinguishing three stages of human-algorithm interaction: initial behavioral interaction, building a parasocial relationship, and forming an identity. At each stage, cognitive, relational, and existential motives respectively trigger individuals' approach or avoidance responses. Future research could examine how perceived humanness and intergroup perception affect approach and avoidance toward algorithmic decisions, and adopt a more social perspective to explore how these reactions reverse and what other psychological motives may be involved.

4.
Semi-sparse PCA     
Eldén  Lars  Trendafilov  Nickolay 《Psychometrika》2019,84(1):164-185

It is well known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.
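As a rough illustration of the kind of alternating scheme described above (a sketch under assumptions, not the published algorithm), each iteration can update an orthonormal score matrix by solving an orthogonal Procrustes problem via an SVD and then update the loadings with a sparsity-inducing soft threshold; the function names, the random initialization, and the thresholding rule are illustrative choices.

```python
import numpy as np

def procrustes_step(X, B):
    """Solve min ||X - Q @ B.T||_F over Q with orthonormal columns.

    The optimum is Q = U @ Vt, where U, S, Vt is the SVD of X @ B."""
    U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    return U @ Vt

def semi_sparse_pca_sketch(X, r, lam=0.1, n_iter=100):
    """Alternate a Procrustes update of the scores Q (n x r, orthonormal)
    with a soft-thresholded least-squares update of the loadings B (p x r),
    which drives small loadings to exactly zero."""
    rng = np.random.default_rng(0)
    B = rng.standard_normal((X.shape[1], r))
    for _ in range(n_iter):
        Q = procrustes_step(X, B)
        B = X.T @ Q                                        # least-squares loadings
        B = np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)  # sparsify
    return Q, B
```

With `lam = 0` this sketch reduces to an ordinary PCA fit of rank r (up to rotation); increasing `lam` trades variance explained for sparser, more interpretable loadings.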


5.
Algorithmic risk assessment tools are informed by scientific research concerning which factors are predictive of recidivism and thus support the evidence‐based practice movement in criminal justice. Automated assessments of individualized risk (low, medium, high) permit officials to make more effective management decisions. Computer‐generated algorithms appear to be objective and neutral. But are these algorithms actually fair? The focus herein is on gender equity. Studies confirm that women typically have far lower recidivism rates than men. This differential raises the question of how well algorithmic outcomes fare in terms of predictive parity by gender. This essay reports original research using a large dataset of offenders who were scored on the popular risk assessment tool COMPAS. Findings indicate that COMPAS performs reasonably well at discriminating between recidivists and non‐recidivists for men and women. Nonetheless, COMPAS algorithmic outcomes systemically overclassify women in higher risk groupings. Multiple measures of algorithmic equity and predictive accuracy are provided to support the conclusion that this algorithm is sexist.
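The equity question in this abstract reduces to comparing group-conditional calibration and error rates. Below is a minimal sketch of such a comparison; the variable names and the binary high-risk coding are illustrative assumptions, not the author's analysis code or the COMPAS data schema.

```python
import numpy as np

def parity_report(recidivated, flagged_high_risk, gender):
    """Per-group calibration (PPV) and error rates for a binary risk flag.

    recidivated, flagged_high_risk: 0/1 arrays; gender: array of group labels.
    Assumes each group contains both flagged and non-flagged cases."""
    report = {}
    for g in np.unique(gender):
        m = gender == g
        tp = np.sum(m & (flagged_high_risk == 1) & (recidivated == 1))
        fp = np.sum(m & (flagged_high_risk == 1) & (recidivated == 0))
        fn = np.sum(m & (flagged_high_risk == 0) & (recidivated == 1))
        tn = np.sum(m & (flagged_high_risk == 0) & (recidivated == 0))
        report[g] = {
            "ppv": tp / (tp + fp),        # predictive parity: P(recidivism | high risk)
            "fpr": fp / (fp + tn),        # non-recidivists rated high risk
            "fnr": fn / (fn + tp),        # recidivists rated low risk
            "base_rate": (tp + fn) / m.sum(),
        }
    return report
```

Roughly equal PPV across genders is the predictive-parity criterion at issue here; a systematically higher FPR for women would correspond to the overclassification the study reports.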

6.
Moon H  Phillips PJ 《Perception》2001,30(3):303-321
Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA-algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) that changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of ±10% is needed to distinguish between algorithms.
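Two of the design decisions varied above, the number of eigenvectors retained and the similarity measure used for matching, can be made concrete with a small eigenface-style sketch. This is an illustrative reconstruction, not the authors' FERET evaluation code; all function names are invented.

```python
import numpy as np

def train_pca(images, k):
    """Fit an eigenface basis. images: (n_images, n_pixels); k <= n_images."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]                     # design decision: number of eigenvectors k

def project(x, mean, basis):
    return basis @ (x - mean)

def similarity(a, b, measure="cosine"):
    """Design decision: the similarity measure used for nearest-neighbour matching."""
    if measure == "cosine":
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    if measure == "neg_l2":
        return -np.linalg.norm(a - b)
    raise ValueError(measure)

def identify(probe, gallery, mean, basis, measure="cosine"):
    """Return the index of the gallery image most similar to the probe."""
    p = project(probe, mean, basis)
    scores = [similarity(p, project(g, mean, basis), measure) for g in gallery]
    return int(np.argmax(scores))
```

Each configuration (choice of k and of the similarity measure) could then be scored on gallery/probe image sets, in the spirit of the module-by-module comparison described above.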

7.
In this journal, Spitz (New Ideas in Psychology, 13, 167–182, 1995) proposed that calendar-calculating savants achieved their astonishing performances by concentrative efforts that developed “smart” unconscious brain algorithms. In this paper I offer argument and examples that support Spitz's contentions. Within the context of a brain-algorithm-based position I call Neurological Positivism (NP), I argue that all problem solving performances, whether those of savants or anyone else (including Albert Einstein), are the result of common self-organizing, self-referential algorithmic dynamics that connect the powerful algorithms of the phylogenetic brain (for example, those of perception) with algorithmic retoolments developed during ontogeny. Savant performances are described as abnormal outcomes of the evolutionary-driven transformation of phylogenetic algorithms into cultural-level problems. Albert Einstein's experiential account of personal discovery, which he termed intuition, is offered as corroborative support for NP's position that cultural-level performance arises from perceptual algorithms. It is concluded that culture is driven into existence by evolutionary dynamics that are immanent in the phylogenetic brain.

8.
In two experiments designed to assess the effect of varying amounts of exposure to noncontingency training, it was discovered that performance decrements could be produced after relatively brief training and again after extended training. Between these conditions was a period of recovery during which no performance deficits were evident. There was also a tendency for individual differences in motivation to moderate deficits following brief but not extended training. A four-stage model is proposed to account for these results. In response to uncontrollable outcomes, individuals are said to pass through a phase of no effect, followed by temporary helplessness, recovery, and final helplessness. The model also proposes that motivational differences and perceptions of noncontingency exert independent and opposing influences on learned helplessness deficits.

9.
Hidden Markov models (HMMs) have been successful for modelling the dynamics of carefully dictated speech, but their performance degrades severely when used to model conversational speech. Since speech is produced by a system of loosely coupled articulators, stochastic models explicitly representing this parallelism may have advantages for automatic speech recognition (ASR), particularly when trying to model the phonological effects inherent in casual spontaneous speech. This paper presents a preliminary feasibility study of one such model class: loosely coupled HMMs. Exact model estimation and decoding is potentially expensive, so possible approximate algorithms are also discussed. Comparison of one particular loosely coupled model on an isolated word task suggests loosely coupled HMMs merit further investigation. An approximate algorithm giving performance which is almost always statistically indistinguishable from the exact algorithm is also identified, making more extensive research computationally feasible.
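To make the idea of loose coupling concrete, here is one possible construction (a sketch under assumptions, not the model class studied in the paper): two chains keep their own transition matrices and observation streams, and a coupling matrix C reweights joint transitions, with C of all ones recovering two independent HMMs. The exact forward pass runs over the product state space, which is what makes approximate algorithms attractive.

```python
import numpy as np

def coupled_forward_loglik(A1, A2, C, pi1, pi2, B1, B2):
    """Scaled forward pass over the product state space of two coupled chains.

    A1 (m x m), A2 (n x n): per-chain transition matrices.
    C (m x n): coupling weights on the destination joint state (all ones = independent).
    pi1, pi2: initial state distributions.
    B1 (T x m), B2 (T x n): per-frame observation likelihoods for the two streams."""
    m, n = len(pi1), len(pi2)
    # joint transition P((i,j) -> (k,l)) proportional to A1[i,k] * A2[j,l] * C[k,l]
    A = np.einsum('ik,jl,kl->ijkl', A1, A2, C).reshape(m * n, m * n)
    A /= A.sum(axis=1, keepdims=True)
    alpha = np.outer(pi1, pi2).ravel() * np.outer(B1[0], B2[0]).ravel()
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, B1.shape[0]):
        alpha = (alpha @ A) * np.outer(B1[t], B2[t]).ravel()
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik
```

Each frame of this exact recursion costs on the order of m²n² operations, which is the kind of expense that motivates the approximate algorithms the paper discusses.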

10.
Two experiments demonstrate how individual differences in working memory (WM) impact the strategies used to solve complex math problems and how consequential testing situations alter strategy use. In Experiment 1, individuals performed multistep math problems under low- or high-pressure conditions and reported their problem-solving strategies. Under low-pressure conditions, the higher individuals' WM, the more likely they were to use computationally demanding algorithms (vs. simpler shortcuts) to solve the problems, and the more accurate their math performance. Under high-pressure conditions, higher WM individuals used simpler (and less efficacious) problem-solving strategies, and their performance accuracy suffered. Experiment 2 turned the tables by using a math task for which a simpler strategy was optimal (produced accurate performance in few problem steps). Now, under low-pressure conditions, the lower individuals' WM, the better their performance (the more likely they relied on a simple, but accurate, problem strategy). And, under pressure, higher WM individuals performed optimally by using the simpler strategies lower WM individuals employed. WM availability influences how individuals approach math problems, with the nature of the task performed and the performance environment dictating skill success or failure.

11.
司继伟  徐艳丽  封洪敏  许晓华  周超 《心理学报》2014,46(12):1835-1849
Using event-related potentials (ERP) and the choice/no-choice paradigm, this study examined arithmetic strategy use and its underlying mechanisms in individuals with high versus low math anxiety during two-digit addition, in both exact mental calculation and estimation. Behavioral results showed no significant math-anxiety effect on either the response-time or the accuracy measures of strategy use. The ERP results, however, showed that the N400 amplitude of high-math-anxiety individuals was significantly larger than that of low-math-anxiety individuals; in the choice condition, the math-anxiety effect on N100 amplitude differed between estimation and exact calculation; in the no-choice condition, high- and low-math-anxiety individuals differed significantly in the amplitude and latency of the N1-P2 complex. The math-anxiety effect thus differed between the strategy-encoding stage (0-250 ms) and the strategy selection/execution stage (after 250 ms).

12.
The authors examined quantity-based judgments for up to 10 items for simultaneous and sequential whole sets as well as for sequentially dropped items in chimpanzees (Pan troglodytes), gorillas (Gorilla gorilla), bonobos (Pan paniscus), and orangutans (Pongo pygmaeus). In Experiment 1, subjects had to choose the larger of 2 quantities presented in 2 separate dishes either simultaneously or 1 dish after the other. Representatives of all species were capable of selecting the larger of 2 quantities in both conditions, even when the quantities were large and the numerical distance between them was small. In Experiment 2, subjects had to select between the same food quantities sequentially dropped into 2 opaque cups so that none of the quantities were ever viewed as a whole. The authors found some evidence (albeit weaker) that subjects were able to select the larger quantity of items. Furthermore, the authors found no performance breakdown with the inclusion of certain quantities. Instead, the ratio between quantities was the best performance predictor. The authors conclude that quantity-based judgments rely on an analogical system, not a discrete object file model or perceptual estimation mechanism, such as subitizing.
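The finding that the ratio between quantities (rather than their absolute size or difference) predicts performance is the signature of an analog magnitude representation. The standard noisy-magnitude model makes this explicit; the sketch below uses an illustrative Weber fraction rather than one fitted to these data.

```python
from math import erf, sqrt

def predicted_accuracy(small, large, w=0.2):
    """Analog-magnitude (Weber) model: each quantity is represented as a Gaussian
    whose spread grows with its size, so discrimination depends only on the ratio.
    w is an illustrative Weber fraction, not a value estimated from the article."""
    z = (large - small) / (w * sqrt(small ** 2 + large ** 2))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Equal ratios give equal predicted accuracy, regardless of absolute size:
print(predicted_accuracy(2, 3), predicted_accuracy(6, 9))   # both about 0.92
print(predicted_accuracy(8, 9))                             # ratio near 1: about 0.66
```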

13.
This study extended prior research on consideration of future consequences (CFC) by exploring its influence on quality and quantity aspects of job performance. CFC is an individual‐differences variable reflecting the importance a person assigns to the immediate vs. future consequences of his or her actions. We hypothesized that individuals with a high future orientation would produce higher quality work, while low‐CFC participants would produce greater quantities. Participants took part in a data‐entry task where they were asked to enter as many words as they could (quantity) while maintaining the highest accuracy (quality) possible. Results supported the primary hypothesis. Workplace implications of the findings are discussed, particularly with respect to selection and the design of performance incentive systems.

14.
We investigated the mechanisms responsible for the automatic processing of the numerosities represented by digits in the size congruity effect (Henik & Tzelgov, 1982). The algorithmic model assumes that relational comparisons of digit magnitudes (e.g., larger than {8,2}) create this effect. If so, congruity effects ought to require two digits. Memory-based models assume that associations between individual digits and the attributes "small" and "large" create this effect. If so, congruity effects ought only to require one digit. Contrary to the algorithmic model and consistent with memory-based models, congruity effects were just as large when subjects judged the relative physical sizes of small digits paired with letters as when they judged the relative physical sizes of two digits. This finding suggests that size congruity effects can be produced without comparison algorithms.

15.
The superiority of group performance over performance of the average individual is relatively greater on world knowledge tasks than on quantity estimation tasks. Previous research on quantity estimations has involved judgments without an explicit frame of reference. We propose that a frame of reference converts a quantity estimation into a world knowledge inference by embedding the estimation in a larger cognitive structure. Individuals first estimated 30 pairs of quantities, such as the length of the Ohio River and the length of the Arkansas River, given either 2 statements as a frame of reference (the Mississippi River is 2340 miles long; the Colorado River is 1450 miles long), 1 of these statements as a frame of reference, or no frame of reference. Then they made the same 30 pairs of estimations again as 3-person groups or as individuals under the same frame-of-reference conditions. As predicted, group estimations were more accurate than individual estimations, both group and individual estimations were more accurate with either a 2-statement or a 1-statement frame of reference than without a frame of reference, and the frame of reference improved group estimations relatively more than individual estimations.

16.
Two algorithms are described for marginal maximum likelihood estimation for the one-parameter logistic model. The more efficient of the two algorithms is extended to estimation for the linear logistic model. Numerical examples of both procedures are presented. Portions of this research were presented at the meeting of the Psychometric Society in Chapel Hill, N.C. in May, 1981. Thanks to R. Darrell Bock, Gerhard Fischer, and Paul Holland for helpful comments in the course of this research.
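For readers unfamiliar with marginal maximum likelihood, the general Bock-Aitkin style EM scheme for the one-parameter logistic model can be sketched as follows. This is a generic didactic sketch with invented function names, not either of the two algorithms described in the article: ability is integrated out over a standard normal prior using Gauss-Hermite quadrature, and item difficulties are updated from expected counts.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def mml_1pl(X, n_quad=21, n_iter=50):
    """EM sketch for Rasch/1PL marginal maximum likelihood.

    X: (n_persons, n_items) 0/1 response matrix.  Ability theta ~ N(0, 1) is
    integrated out with Gauss-Hermite quadrature.  Returns item difficulties."""
    nodes, weights = hermegauss(n_quad)
    weights = weights / weights.sum()                             # discrete N(0,1) approximation
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))  # (Q, J) correct-response probs
        # E-step: posterior weight of each quadrature node for each person
        logL = X @ np.log(P).T + (1 - X) @ np.log(1.0 - P).T      # (N, Q)
        post = np.exp(logL - logL.max(axis=1, keepdims=True)) * weights
        post /= post.sum(axis=1, keepdims=True)
        # M-step: expected counts, then one Newton step per item difficulty
        n_q = post.sum(axis=0)                                    # expected persons at each node
        r_qj = post.T @ X                                         # expected correct responses (Q, J)
        grad = (n_q[:, None] * P - r_qj).sum(axis=0)              # d logL / d b_j
        info = (n_q[:, None] * P * (1.0 - P)).sum(axis=0)
        b = b + grad / info
    return b
```

Usage would be `b_hat = mml_1pl(X)` for a 0/1 response matrix `X`; the difficulties are identified here because the ability distribution is fixed at N(0, 1).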

17.
Gerald A. Cory Jr. 《Zygon》2000,35(2):385-414
This paper builds upon a critically clarified statement of the triune brain concept to set out the conflict systems neurobehavioral model. The model defines the reciprocal algorithms (rules of procedure) of behavior from evolved brain structure. The algorithms are driven by subjectively experienced behavioral tension as the self-preservational programming, common to our ancestral vertebrates, frequently tugs and pulls against the affectional programming of our mammalian legacy. The yoking (zygon) of the dual algorithmic dynamic accounts for the emergence of moral and spiritual consciousness as manifested in the universal norm of reciprocity and in the work of such thinkers as Martin Buber and Paul Tillich.

18.
Although many studies have shown that nonhuman animals can choose the larger of two discrete quantities of items, less emphasis has been given to discrimination of continuous quantity. These studies are necessary to discern the similarities and differences in discrimination performance as a function of the type of quantities that are compared. Chimpanzees made judgments between continuous quantities (liquids) in a series of three experiments. In the first experiment, chimpanzees first chose between two clear containers holding differing amounts of juice. Next, they watched as two liquid quantities were dispensed from opaque syringes held above opaque containers. In the second experiment, one liquid amount was presented by pouring it into an opaque container from an opaque syringe, whereas the other quantity was visible the entire time in a clear container. In the third experiment, the heights at which the opaque syringes were held above opaque containers differed for each set, so that sometimes sets with smaller amounts of juice were dropped from a greater height, providing a possible visual illusion as to the total amount. Chimpanzees succeeded in all tasks and showed many similarities in their continuous quantity estimation to how they performed previously in similar tasks with discrete quantities (for example, performance was constrained by the ratio between sets). Chimpanzees could compare visible sets to nonvisible sets, and they were not distracted by perceptual illusions created through various presentation styles that were not relevant to the actual amount of juice dispensed. This performance demonstrated a similarity in the quantitative discrimination skills of chimpanzees for continuous quantities that matches that previously shown for discrete quantities.

20.
A view of individuals as constituted of quantities of matter, both understood as continuants enduring over time, is elaborated in some detail. Constitution is a three-place relation which can't be collapsed to identity because of the place-holder for a time and because individuals and quantities of matter have such a radically different character. Individuals are transient entities with limited lifetimes, whereas quantities are permanent existents undergoing change in physical and chemical properties from time to time. Coincidence, considered as a matter of occupying the same place, is developed, alongside sameness of constitutive matter, as a criterion of identity for individuals. Quantities satisfy the mereological criterion of identity, applicable to entities subject to mereological relations and operations such as regions of space and intervals of time. A time-dependent analogue of mereological parthood is defined for individuals, in terms of which analogues of the other mereological relations can be defined. But it is argued that there is no analogue of the mereological operation of summation for individuals.
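As a purely illustrative formalization (the notation is an assumption, not the author's own), the time-dependent analogue of parthood mentioned above can be taken as a primitive three-place predicate, from which the other mereological relations relativized to a time follow in the usual way; the abstract's point is that this relativization stops short of a time-indexed summation operation.

```latex
% P(x, y, t): x is a part of y at time t (primitive, defined for individuals).
\begin{align*}
  \mathrm{PP}(x,y,t) &\equiv P(x,y,t) \land \lnot P(y,x,t)
      && \text{proper parthood at } t \\
  \mathrm{O}(x,y,t)  &\equiv \exists z\,\bigl(P(z,x,t) \land P(z,y,t)\bigr)
      && \text{overlap at } t \\
  \mathrm{D}(x,y,t)  &\equiv \lnot \mathrm{O}(x,y,t)
      && \text{disjointness at } t
\end{align*}
```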
