Bottlenosed dolphin and human recognition of veridical and degraded video displays of an artificial gestural language
Authors: L. M. Herman, P. Morrel-Samuels, A. A. Pack
Institution: Kewalo Basin Marine Mammal Laboratory, University of Hawaii, Honolulu 96814.
Abstract: Two bottlenosed dolphins proficient in interpreting gestural language signs viewed veridical and degraded gestures via television without explicit training. In Experiment 1, the dolphins immediately understood most gestures: performance remained high across degradations that successively obscured the head, torso, arms, and fingers, though deficits occurred for gestures degraded to a point-light display (PLD) of the signer's hands. In Experiment 2, humans of varying gestural fluency viewed the PLD and veridical gestures from Experiment 1. Again, performance declined in the PLD condition. Although the dolphin recognized gestures as accurately as fluent humans, the effects of a gesture's formational properties were not identical for humans and dolphin. The results suggest that the dolphin uses a network of semantic and gestural representations, that bottom-up processing predominates when the dolphin's short-term memory is taxed, and that recognition is affected by variables germane to grammatical category, short-term memory, and visual perception.