Similar Articles
1.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

2.
Guo Jingjing, Lü Jincheng. Psychological Science (心理科学), 2014, 37(6): 1296-1301
This study used two experiments to examine how verbal labelling regulates individuals' emotional experience. Experiment 1 examined the effect of neutral labels on different types of emotional experience: compared with labels made of Korean characters, neutral two-character Chinese word labels significantly reduced the intensity of participants' negative emotional experience of negative pictures. Experiment 2 further examined the labelling effect of words with different emotional valence: compared with neutral labels, negative or positive labels led to weaker negative emotional experience of negative pictures. The results indicate that verbal labelling significantly regulates negative emotional experience, and that the labelling effect depends on access to the words' semantic information.

3.
People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming, but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used to manipulate semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor's label.

4.
People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore effects of self-directed speech on visual processing by using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing—for example, actually hearing "chair" compared to simply thinking about a chair can temporarily make the visual system a better "chair detector". Participants searched for common objects and were sometimes asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.

5.
Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using objects with both high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content–context bindings in WM.

6.
A major part of learning a language is learning to map spoken words onto objects in the environment. An open question is what the consequences of this learning are for cognition and perception. Here, we present a series of experiments that examine effects of verbal labels on the activation of conceptual information as measured through picture verification tasks. We find that verbal cues, such as the word "cat," lead to faster and more accurate verification of congruent objects and rejection of incongruent objects than do either nonverbal cues, such as the sound of a cat meowing, or words that do not directly refer to the object, such as the word "meowing." This label advantage does not arise from verbal labels being more familiar or easier to process than other cues, and it extends to newly learned labels and sounds. Despite participants' equivalent facility in learning associations between novel objects and labels or sounds, conceptual information is activated more effectively through verbal means than through nonverbal means. Thus, rather than simply accessing nonverbal concepts, language activates aspects of a conceptual representation in a particularly effective way. We offer preliminary evidence that representations activated via verbal means are more categorical and show greater consistency between subjects. These results inform the understanding of how human cognition is shaped by language and hint at effects that different patterns of naming can have on conceptual structure.

7.
Bi Cuihua, Feng Xinrui. Psychological Science (心理科学), 2018, (5): 1069-1076
There is a spatial-temporal association of response codes (STEARC) effect between time and space, but whether this effect is coded visuospatially or verbally remains disputed. Following the approach of Georges (2015), the present study used durations within 2 s as stimuli. Experiment 1 used verbal responses and spatial responses, with the relation of the words, or of the spatial positions, to duration being either congruent or incongruent. With verbal responses, short durations were responded to faster with "left" and long durations faster with "right"; with spatial responses, the congruency effect between duration and space disappeared, indicating that verbal coding is involved in the STEARC effect for both response formats. Experiment 2 replaced the words with arrow directions (a visual coding condition) and found that visual coding and spatial coding were present in their corresponding response formats. The study shows that how temporal-spatial relations are coded depends on the specific task requirements.

8.
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

9.
The aims of the study were to assess the availability of verbal coding and its effect on performance in a standard visual matrix task, the Visual Patterns Test (VPT). In a pilot study, participants were presented with the patterns from the VPT and were asked to name the shapes within them. Availability of verbal codes was low overall; however, some patterns resulted in a higher mean number of labels than others. A modified version of the test was created from those patterns that had produced the lowest mean number of labels. A total of 60 participants then took part in an experimental study, which was carried out to assess whether or not the availability of verbal coding affects task performance. It was found that the modified version resulted in a lower visual working-memory span than that of another version in which the availability of verbal coding was higher. The study confirmed that verbal coding does influence visual matrix task performance; however, the modified version now offers a selection of patterns from the VPT where verbal coding has been limited.

10.
Cognitive Development, 2000, 15(2): 185-214
The question addressed in this study is whether the claim that children understand the symbolic status of pictures by the middle of their third year is an overestimate of their ability. Specifically, we asked whether children use language, when possible, to facilitate their performance in graphic symbolic tasks. Language (availability of verbal labels) was manipulated along with iconicity (degree of resemblance between symbol and referent) and perceptual similarity (between choice items) in a series of four experiments. Children 2.5 and 3 years old were presented with a graphic symbol for 4 s and immediately asked to choose the object depicted (referent) from two choice objects. In Study 1, degree of iconicity between picture and referent was varied and both choice objects had the same verbal label. The 2.5-year-olds failed to use any pictures or replicas as symbols. The 3-year-olds performed well with all types of symbols and better with highly iconic symbols. In Study 2, verbal label availability was manipulated by presenting choice objects having the same or different labels and by varying familiarity of labels. The 2.5-year-olds performed at chance when verbal labels were unavailable but above chance when they were available. The 3-year-olds were above chance in all conditions but performed less well when verbal labels were unavailable. Study 3 confirmed that young children use language to mediate picture symbol use. When 2.5-year-olds were provided with subordinate verbal labels in the matching task, subsequent performance was good even when choice objects had the same basic-level verbal label. In Study 4, verbal label availability was contrasted with perceptual similarity between choice objects. When verbal labels could be used and choice objects were dissimilar, performance was best, and when verbal labels could not be used and choice objects were similar, it was worst. The results suggest that children's developing understanding of the symbolic function of pictures is tenuous in the third year and is supported by their use of verbal labels.

11.
Bilingual infants from 6 to 24 months of age are more likely to generalize, flexibly reproducing actions on novel objects significantly more often than age-matched monolingual infants are. In the current study, we examine whether the addition of novel verbal labels enhances memory generalization in a perceptually complex imitation task. We hypothesized that labels would provide an additional retrieval cue and aid memory generalization for bilingual infants. Specifically, we hypothesized that bilinguals might be more likely than monolinguals to map multiple perceptual features onto a novel label and therefore show enhanced generalization. Eighty-seven 18-month-old monolingual and bilingual infants were randomly assigned to one of two experimental conditions or a baseline control condition. In the experimental conditions, either no label or a novel label was added during demonstration and again at the beginning of the test session. After a 24-hr delay, infants were tested with the same stimulus set to test cued recall and with a perceptually different but functionally equivalent stimulus set to test memory generalization. Bilinguals performed significantly above baseline on both cued recall and memory generalization in both experimental conditions, whereas monolinguals performed significantly above baseline only on cued recall in both experimental conditions. These findings show a difference between monolinguals and bilinguals in memory generalization and suggest that generalization differences between groups may arise from visual perceptual processing rather than linguistic processing. A video abstract of this article can be viewed at https://youtu.be/yXB4pM3fF2k

12.
Four experiments examined the effect of visual similarity on immediate memory for order. Experiments 1 and 2 used easily nameable line drawings. Following a sequential presentation in either silent or suppression conditions, participants were presented with the drawings in a new, random order and were required to remember their original serial position. In Experiment 3, participants first learned to associate a verbal label with an abstract matrix pattern. Then they completed an immediate memory task in which they had to name the matrices aloud during presentation. At recall, the task required remembering either the order of the matrices or the order of their names. In Experiment 4, participants learned to associate nonword labels with schematic line drawings of faces; the phonemic similarity of the verbal labels was also manipulated. All four experiments indicate that the representations supporting performance comprise both verbal and visual features. The results are consistent with a multiattribute encoding view.

13.
Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo, Cognitive Psychology, 5(2), 176-206, 1973). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g., sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe, Journal of Experimental Psychology, 91(2), 341-343, 1971; Paivio, Philipchalk, & Rowe, Memory & Cognition, 3(6), 586-590, 1975), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.

14.
Recent research has shown that holding telephone conversations disrupts one’s driving ability. We asked whether this effect could be attributed to a visual attention impairment. In Experiment 1, participants conversed on a telephone or listened to a narrative while engaged in multiple object tracking (MOT), a task requiring sustained visual attention. We found that MOT was disrupted in the telephone conversation condition, relative to single-task MOT performance, but that listening to a narrative had no effect. In Experiment 2, we asked which component of conversation might be interfering with MOT performance. We replicated the conversation and single-task conditions of Experiment 1 and added two conditions in which participants heard a sequence of words over a telephone. In the shadowing condition, participants simply repeated each word in the sequence. In the generation condition, participants were asked to generate a new word based on each word in the sequence. Word generation interfered with MOT performance, but shadowing did not. The data indicate that telephone conversation disrupts attention at a central stage, the act of generating verbal stimuli, rather than at a peripheral stage, such as listening or speaking.

15.
Symbols enable people to organize and communicate about the world. However, the ways in which symbolic knowledge is learned and then represented in the mind are poorly understood. We present a formal analysis of symbolic learning, in particular word learning, in terms of prediction and cue competition, and we consider two possible ways in which symbols might be learned: by learning to predict a label from the features of objects and events in the world, and by learning to predict features from a label. This analysis predicts significant differences in symbolic learning depending on the sequencing of objects and labels. We report a computational simulation and two human experiments that confirm these differences, revealing the existence of Feature-Label-Ordering effects in learning. Discrimination learning is facilitated when objects predict labels, but not when labels predict objects. Our results and analysis suggest that the semantic categories people use to understand and communicate about the world can only be learned if labels are predicted from objects. We discuss the implications of this for our understanding of the nature of language and symbolic thought, and in particular, for theories of reference.
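The prediction and cue-competition account summarized above can be made concrete with a small error-driven learning simulation. The sketch below is not the authors' code: it assumes a Rescorla-Wagner-style update rule and an invented toy category structure (diagnostic features "round"/"flat", non-diagnostic features "red"/"blue") purely to illustrate why cue competition concentrates associative weight on diagnostic features when features predict labels (FL), but not when a label predicts features (LF).

```python
import random

ALPHA = 0.1  # learning rate (illustrative assumption, not a value from the study)

def rw_update(weights, cues, outcomes, present_outcomes):
    """One Rescorla-Wagner trial: all cues share the prediction error for each outcome."""
    for out in outcomes:
        target = 1.0 if out in present_outcomes else 0.0
        prediction = sum(weights[(c, out)] for c in cues)
        error = target - prediction
        for c in cues:
            weights[(c, out)] += ALPHA * error  # shared error term -> cue competition

def simulate(order, n_trials=2000, seed=0):
    """Train with features predicting labels ('FL') or a label predicting features ('LF')."""
    random.seed(seed)
    # Toy exemplars: 'round'/'flat' are diagnostic of the label, 'red'/'blue' are not.
    exemplars = [({"round", "red"}, "wug"), ({"round", "blue"}, "wug"),
                 ({"flat", "red"}, "dax"), ({"flat", "blue"}, "dax")]
    features = {"round", "flat", "red", "blue"}
    labels = {"wug", "dax"}
    if order == "FL":
        weights = {(f, l): 0.0 for f in features for l in labels}  # cues = features
    else:
        weights = {(l, f): 0.0 for l in labels for f in features}  # cue = label
    for _ in range(n_trials):
        feats, label = random.choice(exemplars)
        if order == "FL":
            rw_update(weights, feats, labels, {label})
        else:
            rw_update(weights, {label}, features, feats)
    return weights

if __name__ == "__main__":
    for order in ("FL", "LF"):
        w = simulate(order)
        print(order, {k: round(v, 2) for k, v in sorted(w.items())})
```

Under these assumptions, the FL regime drives weight onto the diagnostic features while the non-diagnostic ones are competed away, whereas in the LF regime each feature is predicted independently and the non-diagnostic features retain substantial weight; this asymmetry corresponds to the Feature-Label-Ordering effect the abstract describes.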

16.
When an object is identified as a specific exemplar, is it analyzed differently than when it is identified at the basic level? On the basis of a previous theory, we predicted that the left hemisphere (LH) is specialized for classifying objects at the basic level and the right hemisphere (RH) is specialized for classifying objects as specific exemplars. To test this prediction, participants were asked to view lateralized pictures of animals, artifacts, and faces of famous people; immediately after each picture was presented, a label was read aloud by the computer, and the participants decided whether the label was correct for that picture. A label could name the object at either the basic level (e.g., bird) or as an exemplar (e.g., robin). As predicted, we found that basic-level labels were matched faster when pictures were presented in the right visual field (and hence encoded initially in the LH), whereas exemplar labels were matched faster when pictures were presented in the left visual field (and hence encoded initially in the RH).

17.
Many authors have argued that word-learning constraints help guide a word-learner's hypotheses as to the meaning of a newly heard word. One such class of constraints derives from the observation that word-learners of all ages prefer to map novel labels to novel objects in situations of referential ambiguity. In this paper I use eye-tracking to document the mental computations that support this word-learning strategy. Adults and preschoolers saw images of known and novel objects, and were asked to find the referent of known and novel labels. Experiment 1 shows that adults systematically reject a known distractor (e.g. brush) before mapping a novel label (e.g. "dax") to a novel object. This is consistent with the proposal that participants worked through a Disjunctive Syllogism (i.e. Process-of-Elimination) to motivate the mapping of the novel label to the novel object. Experiment 2 shows that processing is similar for adults performing an implicit Disjunctive Syllogism (e.g. "the winner is the dax") and an explicit Disjunctive Syllogism (e.g. "the winner is not the iron"). Experiment 3 reveals that similar processes govern preschoolers' mapping of novel labels. Taken together, these results suggest that word-learners use Disjunctive Syllogism to motivate the mapping of novel labels to novel objects.
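As a concrete illustration of the Process-of-Elimination step described above, the following is a minimal hypothetical sketch (the lexicon and display contents are invented, not stimuli from the paper): a known label is resolved by direct lookup, while a novel label is mapped to whichever displayed object has no known name.

```python
def map_label(label, display, lexicon):
    """Infer the referent of `label` among the objects in `display`.

    `lexicon` maps known labels to known objects. A novel label is resolved by
    Disjunctive Syllogism: reject every object that already has a name
    (it is A or B; it is not A; therefore it is B).
    """
    if label in lexicon:                                   # known label: direct lookup
        return lexicon[label] if lexicon[label] in display else None
    known_objects = set(lexicon.values())
    unnamed = [obj for obj in display if obj not in known_objects]
    return unnamed[0] if len(unnamed) == 1 else None       # ambiguous -> no unique mapping

# Invented example: a familiar brush and an unfamiliar object are on screen.
lexicon = {"brush": "brush", "iron": "iron"}
display = ["brush", "novel object"]
print(map_label("dax", display, lexicon))    # -> 'novel object' (by elimination)
print(map_label("brush", display, lexicon))  # -> 'brush' (direct lookup)
```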
