Similar documents
20 similar documents retrieved.
1.
Because a telepresence robot is intended for use in telecommunication, conveying the presence of the remote sender is an important issue. Although certain characteristics of a robot, such as its identity, can be effective at generating a sense of presence, they also risk distorting the perceived presence of the remote sender. To find effective ways to increase presence, we conducted an experiment comparing a telepresence robot with high identity to one with low identity. The 60 participants in this study engaged in a video call with a remote sender using either a telepresence robot with high identity or one with low identity. The results showed that participants felt more remote-sender presence when interacting with the low-identity telepresence robot than with the high-identity one. Conversely, participants felt more presence toward the robot itself when it had high identity than when it had low identity. In a second study (N = 72), participants experienced two types of telepresence robots (identity level: high vs. low) with two types of remote senders (number of remote senders: single vs. multiple). Both the identity level of the robot and the number of remote senders affected the presence of the remote sender, telepresence, and the presence of the robot. We discuss in detail the implications for designing telepresence robots that increase presence.

2.
Arita, A., Hiraki, K., Kanda, T., & Ishiguro, H. (2005). Cognition, 95(3), B49-B57.
As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings, which raises the question of how infants classify them. Based on the looking-time paradigm used by Legerstee, Barna, and DiAdamo (2000, Precursors to the development of intention at 6 months: understanding people and their actions, Developmental Psychology, 36(5), 627-634), we investigated whether 10-month-old infants expected people to talk to a humanoid robot. In a familiarization period, each infant observed an actor together with one of three robots: an interactive robot behaving like a human, a non-interactive robot remaining stationary, or a non-interactive robot behaving like a human. In subsequent test trials, the infants were shown another actor talking to the robot and talking to the first actor. Infants who had previously observed the interactive robot showed no difference in looking time between the two types of test events. Infants in the other conditions, however, looked longer at the event in which the second experimenter talked to the robot than at the one in which the second experimenter talked to the person. These results suggest that infants interpret the interactive robot as a communicative agent and the non-interactive robots as objects. Our findings imply that infants categorize interactive humanoid robots as a kind of human being.

3.
Compliant robots can be more versatile than traditional robots, but their control is more complex. The dynamics of compliant bodies can, however, be turned into an advantage using the physical reservoir computing framework: by feeding sensor signals into the reservoir and extracting motor signals from it, closed-loop robot control becomes possible. Here, we present a novel framework for implementing central pattern generators with spiking neural networks to obtain closed-loop robot control. Using the FORCE learning paradigm, we train a reservoir of spiking neuron populations to act as a central pattern generator. We demonstrate the learning of predefined gait patterns, speed control and gait transitions on a simulated model of a compliant quadrupedal robot.
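To make the reservoir-plus-FORCE idea concrete, here is a minimal, hedged Python sketch. It uses a rate-based echo state reservoir rather than the spiking neuron populations of the paper, and trains the linear readout online with recursive least squares (the core of FORCE learning) to reproduce a periodic, gait-like target; the network size, time constants and target signal are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's implementation): a rate-based echo state
# reservoir whose linear readout is trained online with FORCE (recursive
# least squares) to reproduce a periodic, gait-like target signal.
import numpy as np

rng = np.random.default_rng(0)
N = 300            # reservoir units
dt = 0.01          # integration step (s)
T = 2000           # training steps
g = 1.5            # recurrent gain (rich dynamics before training)

J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                    # feedback from readout
w_out = np.zeros(N)                                  # readout weights (learned)
P = np.eye(N)                                        # RLS inverse correlation

x = 0.5 * rng.standard_normal(N)                     # reservoir state
r = np.tanh(x)
z = 0.0                                              # readout (e.g. a joint drive)

def target(t):
    # Toy "gait" pattern: sum of two sinusoids at the stride frequency.
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)

for step in range(T):
    t = step * dt
    # Reservoir dynamics with readout feedback (closing the loop).
    x += dt * (-x + J @ r + w_fb * z)
    r = np.tanh(x)
    z = w_out @ r

    # FORCE / RLS update of the readout weights toward the target pattern.
    err = z - target(t)
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w_out -= err * k

print("final training error:", abs(err))
```

In a closed-loop robot, the target would be replaced by sensor feedback from the compliant body and the readout would drive the actuators; this sketch only shows the readout-training mechanism.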

4.
This paper conceptualizes human-to-robot empathy as empathetic arrangements configured in caring spaces. Analyzing empathy towards care robots as arrangements comprising robots, spaces, discourses, bodies and institutions enables recognition of the way empathy is about self-other relationships, while eschewing an understanding of empathy in terms of a reciprocal relationship between human and robot. Situating the therapeutic zoomorphic robot Paro and the health-care support robot Care-O-Bot as parts of empathetic arrangements draws attention to how the cultivation of empathy towards robots governs and regulates patient sociality. In particular, it shows that these robots do not function as substitutes for human carers but instead are dependent on human labor if they are to deliver therapy ethically and effectively. They rely on the affective labor of the patient and the labor of carers and others in the arrangement.

5.
Autonomous mobile robots have emerged as an important kind of transportation system in warehouses and factories. In this work, we present the use of the MECA cognitive architecture in the development of an artificial mind for an autonomous robot responsible for multiple tasks, including transportation of packages along a factory floor, environment exploration, warehouse inventory, internal energy management, self-monitoring, and dealing with human operators and other robots. The present text provides a detailed specification for the architecture and its software implementation. Future work will present the simulation results under different configurations, together with a detailed analysis of the architecture's performance and its generalization to autonomous robot control.

6.
7.
8.
Introduction: Recent research on human–robot interaction (HRI) emphasizes the role of users' attitudes in the perception of robots with different embodiments and varying levels of human likeness. However, other human factors, such as educational background, may also help explain which conditions enhance the social perception of robots' features. Objectives: This study aimed to determine how people's attitudes towards, and familiarity with, robots influence the social perception of particular robot features. Method: First, we measured attitudes towards robots among undergraduate students with diverse educational backgrounds (engineering vs. psychology). Then, participants were shown short movies of the behaviour of three robots with different levels of sociability. Finally, participants rated the characteristics of these robots on a scale. Results: People who were more familiar with social robots and held more positive attitudes towards them rated robots with human traits more highly. Conclusion: Human perception of social robots resembles the social phenomena involved in the perception of other people.

9.
Introduction: Given their novelty, social robots (i.e., robots that use natural language and display and recognize emotions) will generate uncertainty among users. Social representations allow people to make sense of the new by drawing on existing knowledge. Objective: A free-association questionnaire was administered to 212 Portuguese adults to identify the social representation of the robot. Method: Data were analysed with the EVOC 2000 and SIMI 2000 software. Results: The social representation of the robot is organized around the ideas of technology, help and future. Differences in the representation according to age, gender and level of education were also identified. Conclusion: The social representation of the robot is marked by its conception as a tool, which contrasts with the concept of social robots as social agents. Implications for the acceptance of social robots are discussed.

10.
We present a computational model of grasping non-fixated (extrafoveal) target objects, implemented on a robot setup consisting of a robot arm with cameras and a gripper. The model is based on the premotor theory of attention (Rizzolatti et al., 1994), which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by a prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift, information that is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller that generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second settings result in good grasping performance, while the third causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study, we argue that the use of robots is a valuable research methodology within psychology.
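As an illustration of the visual-prediction idea only, the following toy Python sketch (not the authors' implementation) treats the forward model as a simple image shift along the planned saccade vector and estimates the object's orientation in the predicted foveal patch from image moments; the synthetic scene, the shift-based forward model and the moment-based orientation estimate are all simplifying assumptions.

```python
# Toy sketch (not the authors' system): predict the post-saccade retinal
# image by shifting the current image along the planned saccade vector,
# then estimate the object's orientation in the predicted foveal patch
# via second-order image moments, which could set the gripper orientation.
import numpy as np

def predicted_post_saccade_image(retina, saccade):
    """Crude visual forward model: translate the image by the saccade vector."""
    dy, dx = saccade
    return np.roll(np.roll(retina, -dy, axis=0), -dx, axis=1)

def foveal_patch(image, half_size=8):
    cy, cx = np.array(image.shape) // 2
    return image[cy - half_size:cy + half_size, cx - half_size:cx + half_size]

def orientation_from_moments(patch):
    """Principal-axis angle (radians) of the bright pixels in the patch."""
    ys, xs = np.nonzero(patch > patch.mean())
    ys = ys - ys.mean()
    xs = xs - xs.mean()
    mu20, mu02, mu11 = (xs**2).mean(), (ys**2).mean(), (xs * ys).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Synthetic retina: an elongated "object" placed away from the fovea.
retina = np.zeros((64, 64))
retina[40:44, 10:30] = 1.0            # horizontal bar centred near (42, 20)
saccade = (42 - 32, 20 - 32)          # planned saccade toward the object

predicted = predicted_post_saccade_image(retina, saccade)
angle = orientation_from_moments(foveal_patch(predicted))
print(f"gripper orientation ~ {np.degrees(angle):.1f} deg")
```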

11.
The expanding ability of robots to take unsupervised decisions renders it imperative that mechanisms are in place to guarantee the safety of their behaviour. Moreover, intelligent autonomous robots should be more than safe; arguably, they should also be explicitly ethical. In this paper, we put forward a method for implementing ethical behaviour in robots inspired by the simulation theory of cognition. In contrast to existing frameworks for robot ethics, our approach does not rely on the verification of logic statements. Rather, it utilises internal simulations which allow the robot to simulate actions and predict their consequences; our method is therefore a form of robotic imagery. To demonstrate the proposed architecture, we implement a version of it on a humanoid NAO robot so that it behaves according to Asimov's laws of robotics. In a series of four experiments, using a second NAO robot as a proxy for the human, we demonstrate that the Ethical Layer enables the robot to prevent the human from coming to harm in simple test scenarios.
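A schematic, hedged Python sketch of the simulation-based idea follows: each candidate action is run through a toy internal simulation, and the predicted outcomes are ranked by prioritized rules resembling Asimov's laws before the robot acts. The scenario, action names and scoring are illustrative placeholders, not the paper's Ethical Layer.

```python
# Schematic sketch (placeholder world model, not the paper's Ethical Layer):
# internally simulate each candidate action, predict its consequences, and
# rank actions by prioritized rules resembling Asimov's laws
# (avoid human harm > obey orders > preserve self) before acting.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harmed: bool
    order_followed: bool
    robot_damaged: bool

def simulate(action: str, human_heading_to_hazard: bool) -> Outcome:
    """Toy internal simulation of a single action in a simple hazard scenario."""
    if action == "intercept_human":
        return Outcome(human_harmed=False, order_followed=False, robot_damaged=False)
    if action == "carry_out_order":
        return Outcome(human_harmed=human_heading_to_hazard, order_followed=True,
                       robot_damaged=False)
    return Outcome(human_harmed=human_heading_to_hazard, order_followed=False,
                   robot_damaged=False)   # "do_nothing"

def ethical_score(o: Outcome) -> tuple:
    # Lexicographic priority: first avoid human harm, then obey, then self-preserve.
    return (not o.human_harmed, o.order_followed, not o.robot_damaged)

def choose_action(candidates, human_heading_to_hazard: bool) -> str:
    return max(candidates,
               key=lambda a: ethical_score(simulate(a, human_heading_to_hazard)))

actions = ["do_nothing", "carry_out_order", "intercept_human"]
print(choose_action(actions, human_heading_to_hazard=True))    # -> intercept_human
print(choose_action(actions, human_heading_to_hazard=False))   # -> carry_out_order
```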

12.
Nowadays, robots and humans coexist in real settings where robots need to interact autonomously, making their own decisions. Many applications require robots to adapt their behavior to different users and remember each user's preferences in order to engage them in the interaction. To this end, we propose a decision-making system for social robots that drives their actions while taking into account both the user's and the robot's state. The system is based on bio-inspired concepts, such as motivations, drives and wellbeing, that give rise to natural behaviors and ease the acceptance of the robot by users. It has been designed to promote human-robot interaction by using drives and motivations related to social aspects, such as the users' satisfaction or the need for social interaction. Furthermore, the changes of state produced by the users' exogenous actions are modeled as transitional states that are considered when the robot's next action has to be selected. The system has been evaluated with two different user profiles: users' preferences are taken into account and alter the homeostatic process that controls the decision-making system. As a result, using reinforcement learning algorithms with the robot's wellbeing as the reward function, the social robot Mini learned from scratch two different action policies, one for each user, that fit the users' preferences. The robot learned behaviors that maximize its wellbeing while keeping the users engaged in the interactions.
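The following minimal Python sketch illustrates the general mechanism described here under clearly labeled assumptions: drive levels grow over time and are reduced by actions, wellbeing is the negative sum of drive deficits, and a tabular Q-learner uses the change in wellbeing as its reward. The drives, actions and their effects are invented for illustration and are not the Mini robot's actual software.

```python
# Illustrative sketch (not the Mini robot's software): drives grow over time,
# wellbeing is the negative sum of drive deficits, and a tabular Q-learner
# uses the change in wellbeing as its reward to pick actions that satisfy
# whichever drive currently dominates.
import random
from collections import defaultdict

DRIVES = ["social_interaction", "user_satisfaction", "rest"]
ACTIONS = ["play_game", "tell_joke", "stay_quiet"]

# How much each action reduces each drive (illustrative, user-dependent values).
EFFECT = {
    "play_game":  {"social_interaction": 0.6, "user_satisfaction": 0.4, "rest": -0.2},
    "tell_joke":  {"social_interaction": 0.3, "user_satisfaction": 0.5, "rest": 0.0},
    "stay_quiet": {"social_interaction": 0.0, "user_satisfaction": 0.0, "rest": 0.5},
}

def wellbeing(drives):
    return -sum(drives.values())          # lower deficits -> higher wellbeing

Q = defaultdict(float)                    # Q[(dominant_drive, action)]
alpha, gamma, epsilon = 0.2, 0.9, 0.1
drives = {d: random.random() for d in DRIVES}

for step in range(5000):
    state = max(drives, key=drives.get)   # dominant (most urgent) drive
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])

    before = wellbeing(drives)
    for d in DRIVES:                      # drives rise with time, fall with actions
        drives[d] = min(1.0, max(0.0, drives[d] + 0.05 - EFFECT[action][d]))
    reward = wellbeing(drives) - before   # reward = change in wellbeing

    next_state = max(drives, key=drives.get)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in DRIVES})
```

Changing the entries of EFFECT would model a different user profile, and the learner would then converge to a different policy, mirroring the per-user policies reported in the abstract.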

13.
The use of robots in therapy for children with autism spectrum disorder (ASD) raises issues concerning the ethical and social acceptability of this technology and, more generally, concerning human–robot interaction. However, philosophical papers on the ethics of human–robot interaction usually do not take stakeholders' views into account; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism-therapy community. To support responsible research and innovation in this field, this paper identifies a range of ethical, social and therapeutic concerns, and presents and discusses the results of an exploratory survey that investigated these issues and explored stakeholders' expectations about this kind of therapy. We conclude that although stakeholders generally approve of using robots in therapy for children with ASD, it is wise to avoid replacing therapists with robots and to develop and use robots that have what we call supervised autonomy. This is likely to create more trust among stakeholders and improve the quality of the therapy. Moreover, our research suggests that issues concerning the appearance of the robot need to be adequately dealt with by researchers and therapists. For instance, our survey suggests that zoomorphic robots may be less problematic than robots that look too much like humans.

14.
In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.

15.
Considering the growing acceptance of humanoid robots in the service industry, this study aimed to examine their negative impact on service evaluation, as well as the underlying mechanism of perceived effort and the moderating role of consumer mindset. Three experiments that used different service scenarios revealed that humanoid service robots negatively affected service evaluation compared to human employees, and this effect was mediated by decreased perceived effort. Furthermore, this negative impact was attenuated when consumers had a concrete rather than an abstract mindset. This work contributes to both the consumer service and robot literatures by elaborating on the possible adverse influence of replacing human employees with humanoid service robots. It also offers managerial implications for how and when to adopt robot service in this machine age.

16.
The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees' increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely future psychological contract partners for human employees, given that these entities transform notions of workplace technology from being a tool to being an active partner. We first overview the increasing role of robots in the workplace, particularly through the advent of sociable AI, and synthesize the literature on human–robot interaction. We then develop an account of a human-social robot psychological contract and zoom in on the implications of this exchange for the enactment of reciprocity. Given the future-focused nature of our work, we utilize a thought experiment, a commonly used form of conceptual and mental-model reasoning, to expand on our theorizing. We then outline potential implications of human-social robot psychological contracts and offer a range of pathways for future research.

17.
Impedance control has been suggested as the strategy employed by the central nervous system to control human postures and movements. A realization of this strategy is presented that uses a model predictive control algorithm as a higher motor controller; external disturbances are explicitly included in the model. The combination of three key factors (joint impedance control, a model predictive controller, and an external disturbance input) forms the basis for the generality of this model. The model was applied to three different types of joint movements: a tracking movement with an unpredicted disturbance, a rhythmic movement, and an unstable biped model of human walking. Computer simulation results showed excellent performance of the model in all three cases for optimal values of the active joint impedances and an exact match between the musculoskeletal system and the model internal to the model predictive controller. The controller also maintained acceptable performance in the presence of a 25% mismatch between the musculoskeletal system and its internal model.
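To show what joint impedance control means in isolation, here is a hedged single-joint Python sketch: a virtual spring-damper (stiffness K, damping B) pulls the joint toward a reference trajectory while an unpredicted disturbance torque is applied. The plant, gains and disturbance are illustrative, and the model predictive layer described in the abstract is deliberately omitted.

```python
# Minimal sketch of joint impedance control (the paper's model predictive
# layer is omitted): a single joint with inertia I tracks a reference angle
# while an unexpected disturbance torque is applied; stiffness K and
# damping B set the joint impedance. All numbers are illustrative.
import numpy as np

I = 0.05          # joint inertia (kg m^2)
K, B = 8.0, 0.6   # impedance parameters: stiffness (Nm/rad), damping (Nms/rad)
dt, T = 0.001, 2.0

theta, dtheta = 0.0, 0.0
errors = []
for step in range(int(T / dt)):
    t = step * dt
    theta_ref = 0.5 * np.sin(2 * np.pi * 0.5 * t)          # tracking task
    tau_dist = 1.0 if 0.8 < t < 0.9 else 0.0               # unpredicted push

    # Impedance control law: torque from a virtual spring-damper at the joint
    # (desired velocity set to zero for simplicity).
    tau = K * (theta_ref - theta) + B * (0.0 - dtheta)

    ddtheta = (tau + tau_dist) / I                           # joint dynamics
    dtheta += ddtheta * dt
    theta += dtheta * dt
    errors.append(abs(theta_ref - theta))

print(f"mean tracking error: {np.mean(errors):.3f} rad")
```

Raising K stiffens the joint and reduces the deflection caused by the disturbance, at the cost of a harder interaction; that trade-off is exactly what the higher-level controller in the abstract is meant to optimize.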

18.
The interactivist-constructivist (IC) approach offers an attractive framework for the development of intelligent robots. However, we still lack genuinely intelligent robots capable of representing the world in the IC sense. Here we argue that the reason for this situation is the lack of learning mechanisms that would allow the components of the robotic controller to learn constructively while they direct the robot's action in accordance with its value system. We also suggest that spike-timing-dependent plasticity (STDP), a mechanism that operates in the brain, may be such a learning mechanism.
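For readers unfamiliar with STDP, the following Python sketch implements the standard pairwise exponential rule: the synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise. The amplitudes and time constants are common textbook-style values, not parameters taken from this paper.

```python
# Pairwise exponential STDP sketch: potentiate when the presynaptic spike
# precedes the postsynaptic one, depress otherwise. Amplitudes and time
# constants are illustrative textbook-style values.
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair; delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:      # pre before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    else:                # post before (or with) pre -> depression
        return -A_minus * np.exp(delta_t / tau_minus)

def apply_stdp(weight, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate pairwise updates over all pre/post spike pairs and clip."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            weight += stdp_dw(t_post - t_pre)
    return float(np.clip(weight, w_min, w_max))

# Causal pairing (pre leads post by 5 ms) strengthens the synapse,
# anti-causal pairing weakens it.
print(apply_stdp(0.5, pre_spikes=[10.0, 50.0], post_spikes=[15.0, 55.0]))
print(apply_stdp(0.5, pre_spikes=[15.0, 55.0], post_spikes=[10.0, 50.0]))
```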

19.
The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

20.
Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions, because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We compare the research across the following two dimensions: (a) initial conditions: what is innate in infants, and what functionality is initially given to robots; and (b) developmental mechanisms: how the performance of infants improves over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.
