Similar Literature
20 similar records retrieved (search time: 31 ms)
1.
雷怡  夏琦  莫志凤  李红 《心理学报》2020,52(7):811-822
In recent years, research has found that adults show a stronger attentional bias toward infant faces than toward adult faces and other social stimuli. Using a dot-probe paradigm combined with eye tracking, this study examined how cuteness and familiarity influence the attentional bias toward infant faces. Behavioral results showed a stronger reaction-time attentional bias toward highly cute infant faces. Eye-movement results showed stronger first-fixation-duration and total-fixation-duration biases toward highly cute infant faces, reflecting an attention-maintenance pattern, and this effect emerged only under low familiarity. In cuteness ratings, highly familiar infant faces were rated as significantly cuter than unfamiliar infant faces. The results indicate that cuteness affects adults' attentional bias toward infant faces only under low familiarity, and that subjective ratings of infant faces and viewing behavior may dissociate.

2.
In two experiments, it was investigated how preverbal infants perceive the relationship between a person and an object she is looking at. More specifically, it was examined whether infants interpret an adult's object-directed gaze as a marker of an intention to act or whether they relate the person and the object via a mechanism of associative learning. Fourteen-month-old infants observed an adult gazing repeatedly at one of two objects. When the adult reached out to grasp this object in the test trials, infants showed no systematic visual anticipations of it (i.e. first visual anticipatory gaze shifts) but only displayed longer looking times for this object than for the other before her hand reached it. However, they showed visual anticipatory gaze shifts to the correct action target when only the grasping action was presented. The second experiment showed that infants also look longer at the object a person has been gazing at when the person is still present but is not performing any action during the test trials. Looking preferences for the objects were reversed, however, when the person was absent during the test trials. This study provides evidence for the claim that infants around 1 year of age do not employ other people's object-directed gaze to anticipate future actions, but to establish person-object associations. The implications of this finding for theoretical conceptions of infants' social-cognitive development are discussed.

3.
Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

4.
Infants engage in social interactions that include multiple partners from very early in development. A growing body of research shows that infants visually predict the outcomes of an individual’s intentional actions, such as a person reaching towards an object (e.g., Krogh-Jespersen & Woodward, 2014), and even show sophistication in their predictions regarding failed actions (e.g., Brandone, Horwitz, Aslin, & Wellman, 2014). Less is known about infants’ understanding of actions involving more than one individual (e.g., collaborative actions), which require representing each partner’s actions in light of the shared goal. Using eye-tracking, Study 1 examined whether 14-month-old infants visually predict the actions of an individual based on her previously shared goal. Infants viewed videos of two women engaged in either a collaborative or noncollaborative interaction. At test, only one woman was present and infants’ visual predictions regarding her future actions were measured. Fourteen-month-olds anticipated an individual’s future actions based on her past collaborative behavior. Study 2 revealed that 11-month-old infants only visually predict higher-order shared goals after engaging in a collaborative intervention. Together, our results indicate that by the second year after birth, infants perceive others’ collaborative actions as structured by shared goals and that active engagement in collaboration strengthens this understanding in young infants.

5.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking; looking and pointing; and looking, pointing, and verbalizing) on 9-month-olds’ engagement in shared visual attention. We also varied target object locations (i.e., in front of, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parent’s head turns to targets within as well as outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social-experience accounts of shared visual attention.

6.
Infants’ social environment is rich in complex sequences of events and actions. This study investigated whether 12-month-old infants are able to learn statistical regularities from a sequence of human gestures and whether this ability is affected by a social versus non-social context. Using a visual familiarization task, infants were familiarized with a continuous sequence of eight videos in which two women imitated each other performing arm gestures. The sequence of videos in which the two women performed imitative gestures was organized into four different gesture units. Videos within a gesture unit had a highly predictable transitional probability, while such transitions were less predictable between gesture units. The social context was manipulated by varying the mutual gaze of the actors and their body orientation. At test, infants were able to discriminate between the high- and low-predictability gesture units in the social, but not in the non-social, condition. Results demonstrate that infants are capable of detecting statistical regularities from a sequence of human gestures performed by two different individuals. Moreover, our findings indicate that salient social cues can modulate infants’ ability to extract statistical information from a sequence of gestures.

7.
In this study the ability of newborn infants to learn arbitrary auditory–visual associations in the absence versus presence of amodal (redundant) and contingent information was investigated. In the auditory-noncontingent condition 2-day-old infants were familiarized to two alternating visual stimuli (differing in colour and orientation), each accompanied by its ‘own’ sound: when the visual stimulus was presented the sound was continuously presented, independently of whether the infant looked at the visual stimulus. In the auditory-contingent condition the auditory stimulus was presented only when the infant looked at the visual stimulus: thus, presentation of the sound was contingent upon infant looking. On the post-familiarization test trials attention recovered strongly to a novel auditory–visual combination in the auditory-contingent condition, but remained low, and indistinguishable from attention to the familiar combination, in the auditory-noncontingent condition. These findings are a clear demonstration that newborn infants’ learning of arbitrary auditory–visual associations is constrained and guided by the presence of redundant (amodal) contingent information. The findings give strong support to Bahrick’s theory of early intermodal perception.

8.
The present study examined whether infants’ visual preferences for real objects and pictures are related to their manual object exploration skills. Fifty-nine 7-month-old infants were tested in a preferential looking task with a real object and its pictorial counterpart. All of the infants also participated in a manual object exploration task, in which they freely explored five toy blocks. Results revealed a significant positive relationship between infants’ haptic scan levels in the manual object exploration task and their gaze behavior in the preferential looking task: The higher infants’ haptic scan levels, the longer they looked at real objects compared to pictures. Our findings suggest that the specific exploratory action of haptically scanning an object is associated with infants’ visual preference for real objects over pictures.

9.
Six- and 12-month-old infants’ eye movements were recorded as they observed feeding actions being performed in a rational or non-rational manner. Twelve-month-olds fixated the goal of these actions before the food arrived (anticipation); the latency of these gaze shifts depended (r = .69) on infants’ life experience of being fed. In addition, 6- and 12-month-olds dilated their pupils during observation of non-rational feeding actions. This effect could not be attributed to light differences or differences in familiarity, but was interpreted to reflect sympathetic-like activity and arousal caused by a violation of infants’ expectations about rationality. We argue that evaluating rationality requires less experience than anticipating action goals, suggesting a dual-process account of preverbal infants’ everyday action understanding.

10.
Reaching and looking preferences and movement kinematics were recorded in 5- to 15-month-old infants, who were divided into 3 age groups. Infants were presented with pairs of cylinders of 3 different diameters: small (1-cm diameter), medium (2.5-cm diameter), and large (6-cm diameter). Whereas infants between 5 and 12 months of age showed a preference for looking first at the large object, a significant preference for reaching to smaller (graspable) objects was observed in 8.5- to 12-month-old infants. Kinematic measures suggest that the onset of object-oriented action requires a slowing down of the reach and an extended "homing-in" phase. The divergent looking and reaching preferences in infants at different ages may reflect a dissociation during development of visual processing streams subserving object-related action from those related to visual orienting.

11.
The emergence of joint attention is still a matter of vigorous debate. It involves diverse hypotheses ranging from innate modules dedicated to intention reading to more neuro-constructivist approaches. The aim of this study was to assess whether 12-month-old infants are able to recognize a “joint attention” situation when observing such a social interaction. Using a violation-of-expectation paradigm, we habituated infants to a “joint attention” video and then compared their looking time durations between “divergent attention” videos and “joint attention” ones using a 2 (familiar or novel perceptual component) × 2 (familiar or novel conceptual component) factorial design. These results were enriched with measures of pupil dilation, which are considered to be reliable measures of cognitive load. Infants looked longer at test events that involved a novel speaker and divergent attention, but no changes in infants’ pupil dilation were observed in any condition. Although looking time data suggest that infants may appreciate discrepancies from expectations related to joint attention behavior, in the absence of clear evidence from pupillometry, the results show no demonstration of understanding of joint attention, even at a tacit level. Our results suggest that infants may be sensitive to relevant perceptual variables in joint attention situations, which would help scaffold social cognitive development. This study supports a gradual, learning interpretation of how infants come to recognize, understand, and participate in joint attention.

12.
Despite substantial evidence indicating a close link between action production and perception in early child development, less is known about how action experience shapes the processes of perceiving and anticipating others’ actions. Here, we developed a novel approach to capture functional connectivity specific to certain brain areas to investigate how action experience changes the networks involved in action perception and anticipation. Nine- and 12-month-old infants observed familiar (grasping) and novel (tool-use) actions while their brain activity was measured using EEG. Infants’ motor competence of both actions was assessed. A link between action experience and connectivity patterns was found, particularly during the anticipation period. During action anticipation, greater motor competence in grasping predicted greater functional connectivity between visual (occipital alpha) and motor (central alpha) regions relative to global levels of whole-brain EEG connectivity. Furthermore, visual and motor regions tended to be more coordinated in response to familiar versus novel actions and for older than younger participants. Critically, these effects were not found in the control networks (frontal-central; frontal-occipital; parietal-central; parietal-occipital), suggesting a unique role of visual-motor networks on the link between motor skills and action encoding.

Highlights

  • Infants’ motor development predicted functional connectivity patterns during action anticipation.
  • Faster graspers, and older infants, showed a stronger ratio of visual-motor neural coherence.
  • Overall whole-brain connectivity was modulated by age and familiarity with the actions.
  • Measuring inter-site relative to whole-brain connectivity can capture specific brain-behavior links.
  • Measures of phase-based connectivity over time are sensitive to anticipatory action.

13.
Face-to-face interaction between infants and their caregivers is a mainstay of developmental research. However, common laboratory paradigms for studying dyadic interaction oversimplify the act of looking at the partner's face by seating infants and caregivers face to face in stationary positions. In less constrained conditions when both partners are freely mobile, infants and caregivers must move their heads and bodies to look at each other. We hypothesized that face looking and mutual gaze for each member of the dyad would decrease with increased motor costs of looking. To test this hypothesis, 12-month-old crawling and walking infants and their parents wore head-mounted eye trackers to record eye movements of each member of the dyad during locomotor free play in a large toy-filled playroom. Findings revealed that increased motor costs decreased face looking and mutual gaze: Each partner looked less at the other's face when their own posture or the other's posture required more motor effort to gain visual access to the other's face. Caregivers mirrored infants' posture by spending more time down on the ground when infants were prone, perhaps to facilitate face looking. Infants looked more at toys than at their caregiver's face, but caregivers looked at their infant's face and at toys in equal amounts. Furthermore, infants looked less at toys and faces compared to studies that used stationary tasks, suggesting that the attentional demands differ in an unconstrained locomotor task. Taken together, findings indicate that ever-changing motor constraints affect real-life social looking.

14.
When teaching infants new actions, parents tend to modify their movements. Infants prefer these infant-directed actions (IDAs) over adult-directed actions and learn well from them. Yet, it remains unclear how parents’ action modulations capture infants’ attention. Typically, making movements larger than usual is thought to draw attention. Recent findings, however, suggest that parents might exploit movement variability to highlight actions. We hypothesized that variability in movement amplitude rather than higher amplitude is capturing infants’ attention during IDAs. Using EEG, we measured 15-month-olds’ brain activity while they were observing action demonstrations with normal, high, or variable amplitude movements. Infants’ theta power (4–5 Hz) in fronto-central channels was compared between conditions. Frontal theta was significantly higher, indicating stronger attentional engagement, in the variable compared to the other conditions. Computational modelling showed that infants’ frontal theta power was predicted best by how surprising each movement was. Thus, surprise induced by variability in movements rather than large movements alone engages infants’ attention during IDAs. Infants with higher theta power for variable movements were more likely to perform actions successfully and to explore objects novel in the context of the given goal. This highlights the brain mechanisms by which IDAs enhance infants’ attention, learning, and exploration.

15.
Behavioral research has shown that infants use both behavioral cues and verbal cues when processing the goals of others’ actions. For instance, 18-month-olds selectively imitate an observed goal-directed action depending on its (in)congruence with a model’s previous verbal announcement of a desired action goal. This EEG-study analyzed the electrophysiological underpinnings of these behavioral findings on the two functional levels of conceptual action processing and motor activation. Mid-latency mean negative ERP amplitude and mu-frequency band power were analyzed while 18-month-olds (N = 38) watched videos of an adult who performed one out of two potential actions on a novel object. In a within-subjects design, the action demonstration was preceded by either a congruent or an incongruent verbally announced action goal (e.g., “up” or “down” and upward movement). Overall, ERP negativity did not differ between conditions, but a closer inspection revealed that in two subgroups, about half of the infants showed a broadly distributed increased mid-latency ERP negativity (indicating enhanced conceptual action processing) for either the congruent or the incongruent stimuli, respectively. As expected, mu power at sensorimotor sites was reduced (indicating enhanced motor activation) for congruent relative to incongruent stimuli in the entire sample. Both EEG correlates were related to infants’ language skills. Hence, 18-month-olds integrate action-goal-related verbal cues into their processing of others’ actions, at the functional levels of both conceptual processing and motor activation. Further, cue integration when inferring others’ action goals is related to infants’ language proficiency.

16.
The eye movements of infants, aged 4–5, 7–8, and 10–11 weeks, were recorded while they viewed either a representation of a face or a nonface stimulus. Presentation of the visual stimulus was paired with the presentation of an auditory stimulus (either voice or tone) or silence. Attention to the visual stimulus was greater for the older two groups than for the youngest group. The effect of the addition of sound was to increase attention to the visual stimulus. In general, the face was looked at more than the nonface stimulus. The difference in visual attention between the face and the nonface stimulus did not appear to be based solely on the physical characteristics of the stimuli. A sharp increase in the amount of looking at the eyes of the face stimulus at 7–8 weeks of age seemed to be related to a developing appreciation of the meaning of the face as a pattern.

17.
Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants’ visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning, but instead, it was infants’ attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention, when infants’ hands and eyes were attending to the same object, predicted word learning. Our results implicate a causal pathway through which infants’ bodily actions play a critical role in early word learning.

18.
The present study investigated whether facial expressions modulate visual attention in 7-month-old infants. First, infants' looking duration to individually presented fearful, happy, and novel facial expressions was compared to looking duration to a control stimulus (scrambled face). The face with a novel expression was included to examine the hypothesis that the earlier findings of greater allocation of attention to fearful as compared to happy faces could be due to the novelty of fearful faces in infants' rearing environment. The infants looked longer at the fearful face than at the control stimulus, whereas no such difference was found between the other expressions and the control stimulus. Second, a gap/overlap paradigm was used to determine whether facial expressions affect the infants' ability to disengage their fixation from a centrally presented face and shift attention to a peripheral target. It was found that infants disengaged their fixation significantly less frequently from fearful faces than from control stimuli and happy faces. Novel facial expressions did not have a similar effect on attention disengagement. Thus, it seems that adult-like modulation of the disengagement of attention by threat-related stimuli can be observed early in life, and that the influence of emotionally salient (fearful) faces on visual attention is not simply attributable to the novelty of these expressions in infants' rearing environment.

19.
Maternal touch is considered crucial in regulating infants’ internal states when facing unknown or distressing situations. Here, we explored the effects of maternal touch on 7-month-old infants’ preferences towards emotions. Infants’ looking times were measured through a two-trial preferential looking paradigm, while infants observed dynamic videos of happy and angry facial expressions. During the observation, half of the infants received an affective touch (i.e., a stroke), while the other half received a non-affective stimulation (i.e., a fingertip squeeze) from their mother. Further, we assessed the frequency of maternal touch in the mother-infant dyad through the Parent-Infant Caregiving Touch Scale (PICTS). Our results showed that infants’ attention to angry and happy facial expressions varied as a function of both present and past experiences with maternal touch. Specifically, in the affective touch condition, as the frequency of previous maternal affective tactile care (PICTS) increased, the avoidance of angry faces decreased. Conversely, in the non-affective touch condition, as the frequency of previous maternal affective tactile care (PICTS) increased, the avoidance of angry faces increased as well. Thus, past experience with maternal affective touch is a crucial predictor of the regulatory effects that actual maternal touch exerts on infants’ visual exploration of emotional stimuli.

20.
Infants' learning of verb and noun categories shows cross-cultural differences, but few studies have explained these differences from the perspective of attentional preferences. Using a habituation paradigm, this study examined whether Mandarin-learning infants at 6-8 months and 17-19 months discriminate changes in the agent, the action, and the object within an event. Results showed that 6-8-month-olds discriminated only action changes and could not discriminate agent or object changes, whereas 17-19-month-olds discriminated all three types of change. This study provides experimental evidence on the early development of infants' attentional preferences and offers a new theoretical basis for early word acquisition in children.

