Similar Documents
20 similar documents found (search time: 187 ms)
1.
The minimum principle states that a perceiver will see the simplest possible interpretation of a pattern. Some theorists of human perception take this principle as a core explanatory concept. Others, especially Rock and Hochberg, hold the view that a perceptual minimum principle is untenable. Rock presents a great number of demonstrations which, in his opinion, rule out the minimum principle. Hochberg states that 'impossible' figures especially present a difficulty for this principle. It is argued here that, in order to test the minimum principle, a method is needed to describe interpretations of patterns in such a way that they can be ordered according to simplicity. To achieve this, Leeuwenberg's coding system was used. The analyses reported here of the patterns which Rock produces as evidence against the principle show that, contrary to Rock's claim, the way these patterns are preferentially perceived provides strong support for the minimum principle. Next, it is demonstrated that interpreting certain patterns as 'impossible' figures is not incompatible with the principle. Finally, it is argued that a test of the minimum principle is necessarily conflated with two other hypotheses, one concerning the metric of simplicity and one concerning the task conception of the experimental subjects.
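To make the notion of ordering interpretations by simplicity concrete, here is a minimal sketch in the spirit of Leeuwenberg's coding system. It is a toy: real structural information theory codes contour patterns with iteration, symmetry, and alternation operators, whereas this sketch applies only a single iteration operator to a symbol string and counts the remaining parameters as the information load.

```python
# Toy sketch of simplicity ordering in the spirit of Leeuwenberg's coding system
# (structural information theory).  Real SIT codes line drawings with iteration,
# symmetry, and alternation operators; here only iteration is used.

def information_load(symbols: str) -> int:
    """Return a crude information load: length of the shortest description
    obtainable by writing the sequence as n * (unit) for some repeating unit."""
    best = len(symbols)  # literal description: one parameter per symbol
    for size in range(1, len(symbols) // 2 + 1):
        unit = symbols[:size]
        if len(symbols) % size == 0 and unit * (len(symbols) // size) == symbols:
            best = min(best, size + 1)  # unit symbols plus one repetition count
    return best

# Two hypothetical descriptions of the same contour: a repeated motif versus an
# irregular list of elements.
print(information_load("abababab"))   # 3 -> simpler, preferred interpretation
print(information_load("abacabad"))   # 8 -> more complex interpretation
```

Under such a measure, the interpretation with the lowest load is the one the minimum principle predicts will be perceived.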

2.
The minimum principle states that a perceiver will see the simplest possible interpretation of a pattern. Some theorists of human perception take this principle as a core explanatory concept. Others hold the view that a perceptual minimum principle is untenable. In two recent extensive surveys of the relevant literature a more differentiated position is taken: the minimum principle is not definitively renounced. In the research reported here, an intuitively appealing specification of a minimum principle is tested. An experiment on visual pattern completion was performed in which patterns were presented to subjects who traced the contours of the shapes they saw. It was predicted that there would be a preference for interpretations that describe a pattern as a set of separate shapes with minimal information load as computed by Leeuwenberg's coding language. However, only half of the responses given by the subjects were predicted by this specification of a minimum principle. It was further demonstrated that locally complex interpretations of junctions of contour elements are easily made, but not in order to attain globally minimal interpretations.

3.
The complexity of categorical syllogisms was assessed using the relational complexity metric, which is based on the number of entities that are related in a single cognitive representation. This was compared with the number of mental models in an experiment in which adult participants solved all 64 syllogisms. Both metrics accounted for similarly large proportions of the variance, showing that complexity depends on the number of categories that are related in a representation of the combined premises, whether represented in multiple mental models or by a single model. This obviates the difficulty with mental models theory due to equivocal evidence for construction of more than one mental model. The “no valid conclusion” response tended to be given for complex syllogisms that in fact had valid conclusions. The results are interpreted as showing that the relational complexity metric can be applied to syllogistic reasoning and can be integrated with mental models theory, which together account for a wide range of cognitive performances.
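As a rough illustration of the relational-complexity idea (an illustrative simplification, not the published metric), a solution strategy can be scored by the largest number of categories it must bind in a single representational step:

```python
# Rough sketch of the relational-complexity idea (an illustrative simplification,
# not the published metric): a strategy is a sequence of processing steps, each
# binding some set of categories at once, and task complexity is taken as the
# arity of the largest step.

def relational_complexity(steps: list[set[str]]) -> int:
    return max(len(step) for step in steps)

# A syllogism that can be processed premise by premise peaks at a binary relation...
print(relational_complexity([{"A", "B"}, {"B", "C"}]))                   # 2
# ...whereas one whose premises must be integrated in a single representation to
# reach an A-C conclusion requires a ternary relation.
print(relational_complexity([{"A", "B"}, {"B", "C"}, {"A", "B", "C"}]))  # 3
```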

4.
High-point coding refers to the popular practice of classifying Minnesota Multiphasic Personality Inventory (Hathaway & McKinley, 1983) profiles based on which clinical scales are the most elevated. A previous review of high-point code studies (McGrath & Ingersoll, 1999a) noted marked discrepancies across studies in the rules used to define high-point codes. This study was conducted to evaluate the costs and benefits of different strategies for high-point coding. The impact of 4 rules for high-point coding on effect sizes and group sizes was evaluated. The 4 rules included requiring a minimum elevation, excluding potentially invalid protocols, restricting coding to well-defined codes, and replacing the lower scale in infrequently occurring codes with the next most elevated scale. The evidence supported the clinical utility of requiring a minimum elevation for code scales. The results were more equivocal concerning the value of well-defined coding and for not replacing the lower scale in infrequent codes. Results were surprisingly negative concerning the utility of excluding potentially invalid protocols, suggesting that guidelines developed in situations in which there is a clear motivation to distort results may not generalize to other settings.
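A hedged sketch of two of the four rules discussed here (minimum elevation and well-defined codes). The T-score cutoffs used below (65 for elevation, a 5-point gap for a well-defined code) are illustrative assumptions, not necessarily the values used in the study.

```python
# Hedged sketch of two high-point coding rules: a minimum-elevation requirement
# and a "well-defined" code requirement.  Cutoffs are illustrative assumptions.

def two_point_code(scales: dict[str, float],
                   min_elevation: float = 65.0,
                   definition_gap: float = 5.0) -> str | None:
    ranked = sorted(scales.items(), key=lambda kv: kv[1], reverse=True)
    (s1, t1), (s2, t2) = ranked[0], ranked[1]
    t3 = ranked[2][1] if len(ranked) > 2 else float("-inf")
    if t1 < min_elevation or t2 < min_elevation:
        return None                      # rule 1: both code scales must be elevated
    if t2 - t3 < definition_gap:
        return None                      # rule 2: code must be well defined
    return f"{s1}-{s2}"

profile = {"2": 78.0, "4": 72.0, "7": 60.0, "8": 58.0}
print(two_point_code(profile))           # "2-4"
```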

5.
The beauty of visual patterns is quantified, starting from the assumption that beauty is maximal when a maximum of effect is attained with a minimum of means. Birkhoff based his theory of beauty on this assumption, but his specification of means and effect differs from the one presented here. In our conception, effect is specified as the number of independent regularities (R) of a pattern not accounted for by the simplest interpretation of the pattern. This variable is closely allied to the concept of conjunctive ambiguity. The irregularities of a pattern that are not constrained by additional regularities (P) make up the means term of the means-effect analogy. An index of beauty (M) is defined as M = R - P. It is shown that there is a substantial correlation between M and preference judgments for a set of Birkhoff's polygons. It is further demonstrated that M is relevant to describing properties of the relation of beauty to complexity, data from experiments on the aesthetic attractiveness of combinations of patterns, and preferences for certain proportions of simple figures over others.
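Since the index is simply the difference of two counts, a worked example (with hypothetical counts) makes the definition concrete:

```latex
% M = R - P, where R counts independent regularities not captured by the
% simplest interpretation and P counts unconstrained irregularities.
% Hypothetical example: a pattern with R = 4 and P = 1 gives
M = R - P = 4 - 1 = 3,
% so it should be judged more beautiful than a pattern with R = 2, P = 2 (M = 0).
```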

6.
Creativity Research Journal, 2013, 25(3): 199-229
Accumulated theories and research findings about the various attributes of groups and organizations that do creative problem solving (CPS) well, systems models of CPS activity that occurs in social settings, and efforts to measure and enhance joint CPS efforts are reviewed. Conclusions that can be drawn from this review about the nature of CPS and about the persons, groups, and organizations that do CPS well are discussed. A 'tri-level matching theory' is proposed as a way of integrating and explaining these findings. Creatively solvable problems vary widely in their complexity, knowledge needs, and the amounts of divergent and convergent thought that are needed, so the theory predicts that persons, groups, and organizations with different preferences, abilities, knowledge, and work arrangements will best match the character of particular problems. CPS research has usually found individuals superior to groups, but this pattern of findings may have resulted from the tasks, concepts, and research methods used. Limitations in conceptualization, research methods, and resulting knowledge about collective CPS efforts are identified and discussed, and extensions of existing research as well as new directions for future study are proposed.

7.
In an earlier note, a new metric for bounded response scales (MBR) was introduced which resembles the city-block metric but is bounded above. It was suggested that the MBR may be more appropriate than Minkowski metrics for data obtained with bounded response scales. In this article, some formal properties of the MBR are investigated and it is shown that it is indeed a metric. Empirical predictions are then derived from the MBR and contrasted with those of a “monotonicity hypothesis,” which holds that dissimilarity judgements tend to be biased towards overestimation of larger distances, and with the predictions of the Minkowski metrics, which imply additivity of collinear segments. Some empirical results are presented which contradict the monotonicity hypothesis and the Minkowski metrics, and favor the MBR. Finally, the logic used to motivate the MBR is invoked to define a subadditive concatenation for bounded norms in the one-dimensional case, which may be useful in psychophysical work where the upper bounds are often real, rather than due to the response scale. This concatenation predicts underestimation for doubling and overestimation for halving and middling tasks.
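For reference, the standard Minkowski family (of which the city-block metric is the p = 1 case) is shown below, together with one hypothetical way of bounding the city-block distance. The bounded form is only an analogue chosen to illustrate boundedness and subadditivity; the abstract does not reproduce the actual MBR formula.

```latex
% Minkowski metrics (additive along collinear segments); p = 1 is city-block:
d_p(\mathbf{x},\mathbf{y}) = \Bigl(\sum_i |x_i - y_i|^{p}\Bigr)^{1/p}.
% A hypothetical bounded, subadditive transform of the city-block distance
% (illustrative only -- not the MBR as defined in the original note):
d_B(\mathbf{x},\mathbf{y}) = \frac{B\,\sum_i |x_i - y_i|}{B + \sum_i |x_i - y_i|},
\qquad d_B < B .
```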

8.
Several contemporary theories of cognitive development appear to agree that children progress through a hierarchy of increasingly powerful cognitive organizations, and that higher-level organizations impose greater information-processing loads. Although theories differ in their method of defining both the levels and the information-processing loads they impose, it is possible to find considerable common ground. In this paper it is argued that the different levels of cognitive organization can be attributed to a hierarchy of structure-mapping rules. Structure mapping is part of the process by which children represent and understand concepts. Four levels of structure mapping are defined. The levels, from lowest to highest, are based on element mappings, relational mappings, system mappings and multiple-system mappings. The higher-level rules are more ‘abstract’ in the sense of being less dependent on specific properties of each task, and are more transferable. However, they also impose higher information-processing demands. Children who lack the information-processing capacity for a particular level of structure mapping will not be able to attain concepts that belong to that level. The theory is used to predict the characteristic age of attainment of cognitive tasks including transitivity, classification, interpretation of algebraic expressions, analogies, logical reasoning, and hypothesis testing. It is argued that the four structure-mapping levels can subsume the four main stages of cognitive development.

9.
Some researchers state that whereas neural networks are fine for pattern recognition and categorization, complex rule formation requires a separate “symbolic” level. However, the human brain is a connectionist system and, however imperfectly, does complex reasoning and inference. Familiar modeling principles (e.g., Hebbian or associative learning, lateral inhibition, opponent processing, neuromodulation) could recur, in different combinations, in architectures that can learn diverse rules. These rules include, for example, “go to the most novel object,” “alternate between two given objects,” and “touch three given objects, without repeats, in any order.” Frontal lobe damage interferes with learning all three of those rules. Hence, network models of rule learning and encoding should include a module analogous to the prefrontal cortex. They should also include modules analogous to the hippocampus for episode setting and analogous to the amygdala for emotional evaluation.

10.
The experiments reported in this study were conducted to explore the issue of race models versus holistic models of word processing. In both types of model, it is assumed that an available word-level encoding for a display will conceal letter information, and thereby inhibit component-letter detection. However, whereas in holistic models it is assumed that encoding should always occur at the word or pattern level first, in the race models it is assumed that encoding occurs at all levels (e.g., feature, letter, and word) simultaneously, with the final level of encoding being whichever level is completed first. If the rate of word-level encoding is facilitated by increasing word frequency, the holistic models predict a generally declining latency for letter detection, because the initial step in letter detection (i.e., word-level encoding) will occur more rapidly. The race models, on the other hand, predict that with increasing word frequency there will be an increasing chance that the word-level encoding will win the encoding race, resulting in an increase in the latency for letter detection (i.e., the word code will conceal the letter codes). Two experiments are reported, and the obtained pattern of latency data appears to be most consistent with the race models.
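The contrasting predictions can be made concrete with a hedged Monte-Carlo sketch of the race-model logic. All rates and the recovery cost below are arbitrary illustrative assumptions; in particular, the demonstration assumes that recovering a letter from a completed word code costs more than encoding the letter directly.

```python
# Hedged Monte-Carlo sketch of the race-model logic: word-level and letter-level
# encodings race, the word-level rate grows with word frequency, and a finished
# word code conceals the letters, so detecting a letter then costs extra recovery
# time.  All parameters are arbitrary illustrative assumptions.
import random

def letter_detection_latency(word_rate: float, letter_rate: float = 1.0,
                             recovery_cost: float = 1.5) -> float:
    t_word = random.expovariate(word_rate)      # word-level encoding time
    t_letter = random.expovariate(letter_rate)  # letter-level encoding time
    if t_word < t_letter:                       # word code wins the race
        return t_word + recovery_cost           # letter must be recovered from it
    return t_letter                             # letter code available directly

random.seed(1)
for rate in (0.5, 1.0, 2.0):                    # higher rate ~ higher word frequency
    mean = sum(letter_detection_latency(rate) for _ in range(20000)) / 20000
    print(f"word-encoding rate {rate}: mean letter-detection latency {mean:.2f}")
```

With these (arbitrary) parameters the mean latency rises with the word-encoding rate, reproducing the qualitative race-model prediction; the holistic account would instead predict latency that tracks the word-encoding time alone.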

11.
12.
Great interest in nonmetric multidimensional scaling has resulted in a number of computer programs to derive solutions. This study examined the effect upon stress of data generated under five metrics and recovered under all five metrics. MDSCAL-5M, TORSCA-9, and POLYCON-II were used to analyse these data. POLYCON-II was the most accurate, although none of the programs was highly successful. In most cases recovery with the Euclidean metric provided, if not the best, very close to the best recovery regardless of the true metric. This study also raised the question of the advisability of using different metric models in nonmetric multidimensional scaling and found that even very different Minkowski metrics are quite similar in the way they rank order dissimilarities.
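The stress referred to here is presumably Kruskal's badness-of-fit index; its standard form is stated below for reference (not taken from the abstract):

```latex
% Kruskal's Stress-1: d_{ij} are configuration distances under the chosen metric,
% \hat{d}_{ij} the disparities obtained by monotone regression on the dissimilarities.
\text{Stress-1} \;=\; \sqrt{\frac{\sum_{i<j}\bigl(d_{ij}-\hat{d}_{ij}\bigr)^{2}}
                                 {\sum_{i<j} d_{ij}^{2}}}
```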

13.
It is shown that assimilation and brightness contrast effects are evoked by structural aspects of patterns. In a pilot experiment, variously shaped gray patterns were used as stimuli. The backgrounds used with each of these shapes were identical: half black and half white. If the gray area against the black part was judged to be blacker than the gray area against the white part, an assimilation effect was said to have occurred; when the reverse was the case, this was called a contrast effect. The task was to rank-order the stimuli on the assimilation-contrast scale. It is argued that the two effects are due to two different interpretations, each derivable from a different code of a pattern. The simpler the contrast code is with respect to the assimilation code, the more it will be perceptually preferred. In the specification of pattern complexity, structural information theory was used. A significant correlation was found between the theoretical preference for the contrast interpretation and the contrast preference of subjects.

14.
Saccade curvature is becoming a popular measure for detecting the presence of competing saccadic motor programs. Several different methods of quantifying saccade curvature have been employed. In the present study, we compared these metrics with each other and with novel measures based on curve fitting. Initial deviation metrics were only moderately associated with the more widely used metric of maximum curvature. The latter was strongly related to a recently developed area-based measure and to the novel methods based on second- and third-order polynomial fits. The curve-fitting methods showed that although most saccades curved in only one direction, there was a population of trajectories with both a maximum and a minimum (i.e., double-curved saccades). We argue that a curvature metric based on a quadratic polynomial fit deals effectively with both types of trajectories and, because it is based on all the samples of a saccade, is less susceptible to sampling noise.
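A minimal sketch of a quadratic-fit curvature measure of the kind described, assuming a convenient normalization (peak deviation of the fitted parabola divided by saccade amplitude) that is not necessarily the authors' exact choice:

```python
# Hedged sketch of a quadratic-fit curvature measure: rotate the samples so the
# straight start-to-end path lies on the x-axis, fit a second-order polynomial to
# the orthogonal deviation, and take the signed peak of the fitted parabola
# (normalized by amplitude) as the curvature metric.
import numpy as np

def quadratic_curvature(x: np.ndarray, y: np.ndarray) -> float:
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    amplitude = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx)
    # rotate so the saccade endpoint lies on the positive x-axis
    xr = (x - x[0]) * np.cos(-angle) - (y - y[0]) * np.sin(-angle)
    yr = (x - x[0]) * np.sin(-angle) + (y - y[0]) * np.cos(-angle)
    a, b, c = np.polyfit(xr, yr, 2)           # fitted deviation: a*x^2 + b*x + c
    x_peak = np.clip(-b / (2 * a), 0.0, amplitude) if a != 0 else 0.0
    peak_dev = a * x_peak**2 + b * x_peak + c
    return peak_dev / amplitude               # signed, size-normalized curvature

# Synthetic curved saccade: straight 10-deg path plus a single-humped deviation.
t = np.linspace(0.0, 1.0, 50)
x = 10.0 * t
y = 1.2 * np.sin(np.pi * t)                   # curves upward, max ~1.2 deg
print(round(quadratic_curvature(x, y), 3))    # positive value, roughly 0.1
```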

15.
16.
Using records from the 1,138 males and 1,462 females in the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) restandardization sample (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), two-point high-point code patterns generated from the original norms were compared to the patterns that these subjects obtained from the new norms. Although some code patterns proved to be quite stable across both norms, code comparability was generally lower in this community-based sample than was true for the records from samples of psychiatric patients also reported in Butcher et al. (1989). The sources of differences between the original and the new norms were reviewed, and the implications for profile interpretation based on code patterns were pointed out. The differences arising from the use of the MMPI-2 norms are appreciable; they highlight the need for new empirical data on the correlates of coding patterns based on these norms.

17.
This article presents a study on Van Tuijl’s (1975) neon effect. The neon effect can be described as an illusory spreading of color around the colored elements of an otherwise black line pattern. The observer has a strong impression of colored light projected onto a lattice of black lines. The hypothesis is advanced that the neon effect will only result if the structural relationships between black and colored line elements in the pattern are such that a neon interpretation is the most efficient interpretation that can be given of the pattern. The necessity of this approach to the neon phenomenon emanates from the inadequacy of alternative, simpler explanations, such as aberrations of peripheral perceptual mechanisms or the presence in the pattern of easily definable stimulus features. To subject the hypothesis proposed above to experimental test, a precise quantification of its central concept, the efficiency of pattern interpretations, is needed. To that end, Leeuwenberg’s (1971) coding language for sequential patterns is introduced. By means of the coding language, pattern interpretations can be represented in a pattern code, the length of which is inversely proportional to the efficiency of the interpretation coded. Several possible interpretations of color differences between the elements of line patterns are discussed, and it is shown how the efficiency of each of them can be determined. Next, in two experiments, the efficiency of the neon interpretation relative to that of alternative interpretations of color differences in line patterns is varied, by manipulating the structural relations between black and colored line elements, and the dependency of the neon effect on the relative efficiency of the neon interpretation is demonstrated. Implications of the findings are discussed.

19.
Karmel's check-pattern preference data for 13-week-old infants were reanalyzed using linear systems analysis. The two-dimensional Fourier amplitude spectrum was calculated for each of his eight checkerboard and random check patterns. The mean contrast sensitivity data of Banks and Salapatek for 3-month-old infants and the spatial frequency amplitudes of the patterns were used to derive three metrics to predict the looking times observed by Karmel. One was based on the sensitivity of the visual system to the single pattern component highest above threshold (maximum amplitude), the second was based on the total amount of pattern energy above threshold (total summation), and the third was based on the maximum amplitude with summation over nearby spatial frequency components (limited summation). The predictive power of the maximum amplitude and the total summation metrics depended on whether the pattern type was checkerboard or random check. The limited summation metric predicted looking times well for both pattern types. A linear function of the logarithm of the limited summation metric accounted for 91% of the total variance in looking time.
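A hedged sketch of how the three metrics can be computed from a pattern's two-dimensional amplitude spectrum. The band-pass contrast sensitivity function (CSF) and the frequency neighborhood used for limited summation are illustrative assumptions, not the Banks and Salapatek values.

```python
# Hedged sketch of the three spectral metrics: amplitudes from a 2-D FFT are
# weighted by an assumed infant CSF (treated as threshold units), then summarized
# as maximum amplitude, total summation, and limited summation.
import numpy as np

def csf(freq_cpd: np.ndarray) -> np.ndarray:
    """Toy band-pass CSF peaking near 1 cycle/deg (illustrative only)."""
    return freq_cpd * np.exp(1.0 - freq_cpd)

def spectral_metrics(image: np.ndarray, deg_per_pixel: float, band: float = 0.5):
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    n = image.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=deg_per_pixel))        # cycles/deg
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)
    visible = amp * csf(radial)                  # amplitude in threshold units
    visible[radial == 0] = 0.0                   # drop the DC term
    max_amp = visible.max()                                        # metric 1
    total = visible[visible > 1.0].sum()                           # metric 2
    peak_f = radial.flat[np.argmax(visible)]
    near = np.abs(radial - peak_f) <= band
    limited = visible[near & (visible > 1.0)].sum()                # metric 3
    return max_amp, total, limited

# 64x64 checkerboard, 0.1 deg per pixel (8-pixel checks -> 0.8-deg checks).
n, check = 64, 8
board = (np.indices((n, n)) // check).sum(axis=0) % 2
print(spectral_metrics(board.astype(float), deg_per_pixel=0.1))
```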

20.
It has been suggested that neglect patients misrepresent the metric spatial relations along the horizontal axis (anisometry). The "fabric" of their internal spatial medium would be distorted in such a way that physically equal distances appear relatively shorter on the contralesional side (canonical anisometry). The case of GL, a 76-year-old lady with left neglect on visual search tasks, is presented. GL showed severe relative overestimation on the left (contralesional) side on two independent tasks evaluating the metrics of her internal representation. A qualitatively similar pattern was found in two out of 10 other neglect patients who performed the second task. This behavior cannot be accounted for by the canonical anisometry hypothesis. Nevertheless, GL produced a relative left overextension (underestimation) when trying to set the endpoints of a virtual line given its midpoint (Endpoints Task). An interpretation of these results is offered in terms of a misprojection of relevant landmarks onto the internal representation without assuming distortion of its "fabric."
