Similar Literature (20 results)
1.
Neil Cohn 《Cognitive Science》2014,38(7):1317-1359
Cohn's (2013) theory of "Visual Narrative Grammar" argues that sequential images take on categorical roles in a narrative structure, which organizes them into hierarchic constituents analogous to the organization of syntactic categories in sentences. This theory proposes that narrative categories, like syntactic categories, can be identified through diagnostic tests that reveal tendencies for their distribution throughout a sequence. This paper describes four experiments testing these diagnostics to provide support for the validity of these narrative categories. In Experiment 1, participants reconstructed unordered panels of a comic strip into an order that makes sense. Experiment 2 measured viewing times to panels in sequences where the order of panels was reversed. In Experiment 3, participants again reconstructed strips but also deleted a panel from the sequence. Finally, in Experiment 4 participants identified where a panel had been deleted from a comic strip and rated that strip's coherence. Overall, categories had consistent distributional tendencies within experiments and complementary tendencies across experiments. These results point toward an interaction between categorical roles and a global narrative structure.

2.
Abstract: To examine whether word-segmentation cues facilitate reading and lexical recognition in the absence of reading experience, this study manipulated reading direction, word-segmentation cues, and the frequency of target words. Readers' eye movements were recorded with an eye tracker. The results showed a significant interaction between reading direction and word segmentation: segmentation cues significantly facilitated the reading of text presented from right to left but did not affect the reading of text presented from left to right; fixation times on target words were significantly shorter in the segmented than in the unsegmented condition. These findings indicate that word-segmentation cues facilitate reading in the absence of reading experience, supporting the hypothesis of a trade-off between the facilitation provided by segmentation cues and the interference caused by unfamiliarity with the text.

3.
The study of attention in pictures is mostly limited to individual images. When we 'read' a visual narrative (e.g., a comic strip), the pictures have a coherent sequence, but it is not known how this affects attention. In two experiments, we eyetracked participants in order to investigate how disrupting the visual sequence of a comic strip would affect attention. Both when panels were presented one at a time (Experiment 1) and when a sequence was presented all together (Experiment 2), pictures were understood more quickly and with fewer fixations when in their original order. When order was randomised, the same pictures required more attention and additional 'regressions'. Fixation distributions also differed when the narrative was intact, showing that context affects where we look. This reveals the role of top-down structures when we attend to pictorial information, as well as providing a springboard for applied research into attention within image sequences.
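For readers unfamiliar with the eye-tracking measures above, the 'regressions' are fixations that return to a panel viewed earlier in the sequence. Below is a minimal sketch of how fixation counts and regressions could be tallied from a fixation log already mapped to panel indices; the data structure and example values are assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch: counting fixations and regressions from an eye-tracking log.
# Assumes each fixation has already been mapped to the index of the
# comic panel it landed on (panel indices here are hypothetical).

from typing import List, Dict

def summarize_scanpath(panel_sequence: List[int]) -> Dict[str, int]:
    """Count total fixations and regressions (returns to an earlier panel)."""
    fixations = len(panel_sequence)
    regressions = 0
    max_panel_reached = -1
    for panel in panel_sequence:
        if panel < max_panel_reached:   # looked back at an earlier panel
            regressions += 1
        else:
            max_panel_reached = panel
    return {"fixations": fixations, "regressions": regressions}

# Example: reader moves forward through panels 0-3, then looks back at panel 1.
print(summarize_scanpath([0, 0, 1, 2, 1, 2, 3]))  # {'fixations': 7, 'regressions': 1}
```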

4.
The activation of spatial representations during verb comprehension
伍丽梅  莫雷  王瑞明 《心理学报》2006,38(5):663-671
This study examined the activation of spatial representations during verb comprehension. Participants listened to sentences whose predicate verbs implied different spatial axes and then identified the shape of a visual stimulus. Experiment 1 used affirmative sentences to examine the influence of semantic comprehension on the visual perception task. Experiment 2 used sentences negated for objective reasons to rule out the influence of sentence-level representations. Experiment 3 used sentences negated by subjective intention to examine the mechanism by which the verbs' spatial elements are activated. Overall, the results indicate that comprehending a verb activates the spatial elements of its representation, and that this activation is automatic and non-strategic, unaffected by negation based on objective reasons or subjective intention in the context. The spatial effect of verb comprehension reflects the perceptual-motor character of linguistic representations.

5.
K. Wiegand and E. Wascher (2005) used the lateralized readiness potential (LRP) to investigate the mechanisms underlying spatial stimulus-response (S-R) correspondence. The authors compared spatial S-R correspondence effects obtained with horizontal and vertical S-R arrangements. In some relevant previous investigations on spatial S-R correspondence with the LRP, researchers preferred to use the vertical S-R layout to circumvent methodological issues related to the LRP and horizontal S-R layouts. K. Wiegand and E. Wascher (2005) do not address these complications, and they make comparisons between electroencephalographic (EEG) data collected with horizontal and vertical S-R arrangements that do not take into account the limitations inherent to the LRP derivation. This methodological weakness renders unsound the neurophysiological support for their views on the nature of spatial S-R compatibility effects. In this article, the author discusses the limitations and possibilities of lateralized event-related potentials (ERPs) in the investigation of spatial S-R correspondence effects.
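For context, the LRP is conventionally derived by a double subtraction of activity at electrodes over left and right motor cortex (typically C3/C4): contralateral minus ipsilateral, averaged across response hands. A minimal sketch of that conventional derivation is given below, assuming trial-averaged ERP arrays; the electrode labels and array shapes are illustrative assumptions, not Wiegand and Wascher's actual pipeline.

```python
# Sketch of the standard double-subtraction LRP derivation (contralateral
# minus ipsilateral activity over motor cortex, averaged across hands).
# The ERP arrays and electrode names are illustrative assumptions.

import numpy as np

def lateralized_readiness_potential(c3_left, c4_left, c3_right, c4_right):
    """
    c3_left / c4_left  : trial-averaged ERPs at C3 and C4 for LEFT-hand responses
    c3_right / c4_right: trial-averaged ERPs at C3 and C4 for RIGHT-hand responses
    Each argument is a 1-D array of voltage samples over time.
    """
    # Contralateral-minus-ipsilateral difference for each hand, then average.
    left_hand_diff = c4_left - c3_left     # C4 is contralateral to the left hand
    right_hand_diff = c3_right - c4_right  # C3 is contralateral to the right hand
    return 0.5 * (left_hand_diff + right_hand_diff)

# Toy usage with random data standing in for averaged EEG epochs (500 samples).
rng = np.random.default_rng(0)
lrp = lateralized_readiness_potential(*(rng.standard_normal(500) for _ in range(4)))
print(lrp.shape)  # (500,)
```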

6.
Previous research has shown that naïve participants display a high level of agreement when asked to choose or draw schematic representations, or image schemas, of concrete and abstract verbs [Proceedings of the 23rd Annual Meeting of the Cognitive Science Society, 2001, Erlbaum, Mahwah, NJ, p. 873]. For example, participants tended to ascribe a horizontal image schema to push, and a vertical image schema to respect. This consistency in offline data is preliminary evidence that language invokes spatial forms of representation. It also provided norms that were used in the present research to investigate the activation of spatial image schemas during online language comprehension. We predicted that if comprehending a verb activates a spatial representation that is extended along a particular horizontal or vertical axis, it will affect other forms of spatial processing along that axis. Participants listened to short sentences while engaged in a visual discrimination task (Experiment 1) and a picture memory task (Experiment 2). In both cases, reaction times showed an interaction between the horizontal/vertical nature of the verb's image schema, and the horizontal/vertical position of the visual stimuli. We argue that such spatial effects of verb comprehension provide evidence for the perceptual-motor character of linguistic representations.

7.
Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations in a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only.

8.
Across cultures people construct spatial representations of time. However, the particular spatial layouts created to represent time may differ across cultures. This paper examines whether people automatically access and use culturally specific spatial representations when reasoning about time. In Experiment 1, we asked Hebrew and English speakers to arrange pictures depicting temporal sequences of natural events, and to point to the hypothesized location of events relative to a reference point. In both tasks, English speakers (who read left to right) arranged temporal sequences to progress from left to right, whereas Hebrew speakers (who read right to left) arranged them from right to left, replicating previous work. In Experiments 2 and 3, we asked the participants to make rapid temporal order judgments about pairs of pictures presented one after the other (i.e., to decide whether the second picture showed a conceptually earlier or later time-point of an event than the first picture). Participants made responses using two adjacent keyboard keys. English speakers were faster to make "earlier" judgments when the "earlier" response needed to be made with the left response key than with the right response key. Hebrew speakers showed exactly the reverse pattern. Asking participants to use a space-time mapping inconsistent with the one suggested by writing direction in their language created interference, suggesting that participants were automatically creating writing-direction consistent spatial representations in the course of their normal temporal reasoning. It appears that people automatically access culturally specific spatial representations when making temporal judgments even in nonlinguistic tasks.
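The effect reported here is a response-time congruency effect: responses are faster when the required key lies on the side predicted by the participant's writing direction. The following is a minimal sketch of how such an effect could be summarized from trial-level data; the field names and example values are hypothetical, not data from this study.

```python
# Sketch: summarizing a space-time congruency effect from trial-level RTs.
# "Congruent" = the response key is on the side predicted by the reader's
# writing direction (e.g., left key for "earlier" in English readers).
# Field names and example trials are hypothetical.

from statistics import mean

trials = [
    {"group": "English", "congruent": True,  "rt_ms": 612},
    {"group": "English", "congruent": False, "rt_ms": 684},
    {"group": "Hebrew",  "congruent": True,  "rt_ms": 598},
    {"group": "Hebrew",  "congruent": False, "rt_ms": 671},
]

def congruency_effect(trials, group):
    """Mean RT on incongruent minus congruent trials for one group (ms)."""
    cong = [t["rt_ms"] for t in trials if t["group"] == group and t["congruent"]]
    incong = [t["rt_ms"] for t in trials if t["group"] == group and not t["congruent"]]
    return mean(incong) - mean(cong)

for g in ("English", "Hebrew"):
    print(g, congruency_effect(trials, g))  # positive value = congruency advantage
```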

9.
The magnitude effect, where larger outcomes are discounted proportionally less than smaller outcomes, is a well-established phenomenon in delay discounting by human participants. To this point in the literature, magnitude effects have not been reliably evidenced in nonhuman animals. Grace et al., however, used a concurrent-chains arrangement with pigeons and found evidence for a magnitude effect. Grace et al. suggested that in many delay discounting experimental arrangements with nonhuman animals (e.g., adjusting amount, adjusting delay) the organism is not given the opportunity to directly compare outcomes of different sizes. They suggest that because of the lack of direct comparison it is difficult for the organism to determine the relative size of each outcome, which in turn mutes the effect of the amount differences between outcomes. As a test of this "comparison hypothesis," the present experiment was conducted to assess whether the magnitude effect would be evidenced in pigeons when using an adjusting amount procedure where outcomes of different amounts were presented proximally. In the present arrangement, pigeons were presented with two choice panels in an operant chamber where each panel was associated with an independent adjusting amount delay discounting task, but with differing outcome amounts (i.e., a 32-food pellet panel and an 8-food pellet panel). In this arrangement the choice panels alternated in their availability within a session from trial block to trial block. The present findings indicate no reliable effect of amount, even when the outcomes were proximal and thus readily comparable. This result suggests that the lack of magnitude effect is not driven by the organism's inability to compare the difference in amount between choice alternatives.
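For context, adjusting-amount data of this kind are commonly summarized by fitting Mazur's hyperbolic model, V = A / (1 + kD), to the indifference points obtained for each amount; a magnitude effect would then appear as a smaller fitted discounting parameter k for the larger amount. A minimal sketch under that assumption follows; the indifference points below are invented for illustration, not data from this experiment.

```python
# Sketch: Mazur's hyperbolic discounting model, V = A / (1 + k*D), and what a
# "magnitude effect" would look like as a difference in the fitted k between
# a large-amount and a small-amount condition. The indifference points are
# made-up illustration values, not data from this experiment.

import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay, k, amount):
    return amount / (1.0 + k * delay)

delays = np.array([1, 5, 10, 20, 40], dtype=float)        # e.g., seconds
indiff_small = np.array([7.0, 5.1, 3.8, 2.4, 1.5])        # 8-pellet panel (hypothetical)
indiff_large = np.array([29.5, 24.0, 19.8, 14.6, 10.1])   # 32-pellet panel (hypothetical)

k_small, _ = curve_fit(lambda d, k: hyperbolic(d, k, 8.0),  delays, indiff_small, p0=[0.1])
k_large, _ = curve_fit(lambda d, k: hyperbolic(d, k, 32.0), delays, indiff_large, p0=[0.1])

# A magnitude effect would appear as k_large < k_small (shallower discounting
# of the larger amount); the abstract reports no such difference in pigeons.
print(f"k (8 pellets)  = {k_small[0]:.3f}")
print(f"k (32 pellets) = {k_large[0]:.3f}")
```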

10.
The potential of airborne sonar to provide effective information about three-dimensional (3D) spatial layouts was assessed in four companion experiments. Blindfolded participants, never given visual access to the layout of a large room, were asked to use a sonar device whose output they had never previously encountered to judge the passability (by normal walking) of apertures between two aligned wall panels. Estimates were made from fixed and variable locations, approaches to the apertures were made from orthogonal and oblique angles, and the panels were at different distances and orientations. In each experiment, participants gave evidence of an ability to immediately use the information in structured echoes to make these judgments, though aperture location, approach angles, wall alignment and orientation each had significant effects on performance. The data are compared with performance under visual and nonechoic auditory conditions and are discussed with respect to the notions of potential information and effective information during these perceptually guided tasks.

11.
Previous studies have indicated that temporal concepts such as the past and future are associated with horizontal (left-right) space. This association has been interpreted as reflecting left-to-right writing systems. The Japanese language, however, is written both horizontally and vertically and, when texts are presented vertically, the sequence of columns runs from right to left. This study examines whether the associations between time and space are changed by the direction of the character strings using a word categorization task. Consistent with previous studies, response times and error rates indicated left-past and right-future associations when participants read words presented horizontally. On the other hand, response times indicated the opposite (i.e., left-future and right-past associations) when participants read words presented vertically. These results suggest that temporal concepts are not associated with one's body or physical space in an inflexible manner, but rather the associations can flexibly change through experience.

12.
Using an EyeLink II eye tracker, Chinese-Russian bilinguals (n = 15) and native Russian speakers (n = 15) read Russian sentences presented in three word-segmentation formats: with normal interword spaces, with spaces removed, and with spaces removed but alternating words set in boldface. The aim was to examine how these segmentation formats affect Russian text reading in the two groups. The results showed that, compared with the normal-space condition, the unspaced alternating-bold condition produced longer average fixation durations and reading times, more fixations, and slower reading speeds, while reading performance was poorest in the space-removed condition. Under all presentation conditions, native Russian speakers performed significantly better than Chinese-Russian bilinguals; after spaces were removed, the bilinguals showed longer reading times, more fixations, and slower reading speeds than the native speakers. For a language such as Russian, which has interword spaces, removing those spaces disrupts reading not only for native readers but even more so for second-language readers. The bilinguals' Russian proficiency and the presentation format of the Russian text both influenced their reading.

13.
Most people born deaf and exposed to oral language show scant evidence of sensitivity to the phonology of speech when processing written language. In this respect they differ from hearing people. However, occasionally, a prelingually deaf person can achieve good processing of written language in terms of phonological sensitivity and awareness, and in this respect appears exceptional. We report the pattern of event-related fMRI activation in such a deaf reader (SR) while performing a rhyme-judgment task on written words with similar spelling endings that do not provide rhyme clues. The left inferior frontal gyrus pars opercularis and the left inferior parietal lobe showed greater activation for this task than for a letter-string identity matching task. This participant was special in this regard, showing significantly greater activation in these regions than a group of hearing participants with a similar level of phonological and reading skill. In addition, SR showed activation in the left mid-fusiform gyrus, a region which did not show task-specific activation in the other respondents. The pattern of activation in this exceptional deaf reader was also unique compared with three deaf readers who showed limited phonological processing. We discuss the possibility that this pattern of activation may be critical in relation to phonological decoding of the written word in good deaf readers whose phonological reading skills are indistinguishable from those of hearing readers.

14.
15.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.
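An allocentric direction judgment of the kind used here asks, in effect, "imagine standing at object A facing object B; point to object C", independent of the observer's actual position. Below is a minimal geometric sketch of that computation; the object names and coordinates are hypothetical.

```python
# Sketch: an "allocentric direction judgment" computed from layout coordinates.
# Coordinates below are hypothetical layout locations, not stimuli from the study.

import math

def allocentric_bearing(a, b, c):
    """Signed angle (degrees) from the A->B facing direction to the A->C direction.
    Positive = counterclockwise (to the left of the facing direction)."""
    facing = math.atan2(b[1] - a[1], b[0] - a[0])
    to_target = math.atan2(c[1] - a[1], c[0] - a[0])
    angle = math.degrees(to_target - facing)
    return (angle + 180) % 360 - 180   # wrap into (-180, 180]

# "Imagine standing at the lamp facing the door; point to the chair."
lamp, door, chair = (0.0, 0.0), (0.0, 2.0), (1.5, 1.5)
print(round(allocentric_bearing(lamp, door, chair), 1))  # -45.0: 45 degrees to the right
```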

16.
Human visual exploration is not homogeneous but displays spatial biases. Specifically, early after the onset of a visual stimulus, the majority of eye movements target the left visual space. This horizontal asymmetry of image exploration is rather robust with respect to multiple image manipulations, yet can be dynamically modulated by preceding text primes. This characteristic points to an involvement of reading habits in the deployment of visual attention. Here, we report data from native right-to-left (RTL) readers showing larger variation and stronger modulation of the horizontal spatial bias after preceding text primes than native left-to-right (LTR) readers. To investigate the influences of biological and cultural factors, we measure the correlation of the modulation of the horizontal spatial bias for native RTL readers and native LTR readers with multiple factors: age, gender, second-language proficiency, and the age at which the second language was acquired. The results demonstrate only weak or no correlations between the magnitude of the horizontal bias and these factors. We conclude that the spatial bias of viewing behaviour for native RTL readers is more variable than for native LTR readers, and that this variance could not be shown to be associated with the interindividual differences we measured. We speculate that strength of reading habit and/or interindividual differences in structural and functional brain regions underlie the spatial bias among native RTL readers.

17.
Three experiments are reported which address the problem of defining a role for inner speech. Experiments 1 and 2 establish that inner speech is acquired by normally developing readers between the ages of 8 and 11, and that both slow and fast readers show a similar pattern of acquisition, but do so at a different rate from normal readers. We suggest that the development of inner speech accompanies a strategy of reading aloud "with expression," and that it is a manifestation of the need to prestructure oral utterances. These will thus contain the lexical items visible on the page within an appropriate prosodic envelope. Both segmental and suprasegmental phonemes contribute to the meaning of spoken and, by analogy, written language. Experiment 3 showed that children at this critical point in learning to read comprehended text better when certain prosodic features were made visible on the text. Prosodic restructuring may thus be an important skill acquired by young readers as they progress toward fluent, silent adult reading.

18.
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
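The cognitive-load manipulation is an N-back task, in which a trial is a target when the current stimulus matches the one presented N trials earlier. A minimal sketch of how such a task is typically defined and scored follows; the stimulus codes and example responses are hypothetical, not the authors' vibrotactile patterns.

```python
# Sketch of how an N-back secondary task is typically scored: a trial is a
# "target" when the current stimulus matches the one presented N trials earlier.
# Stimulus codes and responses below are hypothetical.

from typing import List

def nback_targets(stimuli: List[str], n: int = 2) -> List[bool]:
    """Return, for each trial, whether it is an N-back target."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

def nback_accuracy(stimuli: List[str], responses: List[bool], n: int = 2) -> float:
    """Proportion of trials on which the participant's yes/no response was correct."""
    targets = nback_targets(stimuli, n)
    correct = sum(r == t for r, t in zip(responses, targets))
    return correct / len(stimuli)

# Example: vibration patterns coded 'A'-'C'; trials 3 and 5 are 2-back targets.
stims = ["A", "B", "A", "C", "A"]
print(nback_targets(stims, n=2))                               # [False, False, True, False, True]
print(nback_accuracy(stims, [False, False, True, False, False]))  # 0.8 (missed the final target)
```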

19.
Conceptual metaphor is ubiquitous in language and thought, as we usually reason and talk about abstract concepts in terms of more concrete ones via metaphorical mappings that are hypothesized to arise from our embodied experience. One pervasive example is the conceptual projection of valence onto space, which flexibly recruits the vertical and lateral spatial frames to gain structure (e.g., good is up-bad is down and good is right-bad is left). In the current study, we used a valence judgment task to explore the role that exogenous bodily cues (namely response hand positions) play in the allocation of spatial attention and the modulation of conceptual congruency effects. Experiment 1 showed that congruency effects along the vertical axis are weakened when task conditions (i.e., the use of vertical visual cues, on the one hand, and the horizontal alignment of responses, on the other) draw attention to both the vertical and lateral axes, making them simultaneously salient. Experiment 2 evidenced that the vertical alignment of participants' hands while responding to the task, regardless of the location of their dominant hand, facilitates the judgment of positive and negative-valence words, as long as participants respond in a metaphor-congruent manner (i.e., up responses are good and down responses are bad). Overall, these results support the claim that source domain representations are dynamically activated in response to the context and that bodily states are an integral part of that context.

20.
Item order can bias learners' study decisions and undermine the use of more effective allocation strategies, such as allocating study time to items in one's region of proximal learning. In two experiments, we evaluated whether the influence of item order on study decisions reflects habitual responding based on a reading bias. We manipulated the order in which relatively easy, moderately difficult, and difficult items were presented from left to right on a computer screen and examined selection preference as a function of item order and item difficulty. Experiment 1a was conducted with native Arabic readers and in Arabic, and Experiment 1b was conducted with native English readers and in English. Students from both cultures prioritized items for study in the reading order of their native language: Arabic readers selected items for study in a right-to-left fashion, whereas English readers largely selected items from left to right. In Experiment 2, native English readers completed the same task as participants in Experiment 1b, but for some participants, lines of text were rotated upside down to encourage them to read from right to left. Participants who read upside-down text were more likely to first select items on the right side of an array than were participants who studied right-side-up text. These results indicate that reading habits can bias learners' study decisions and can undermine agenda-based regulation.
