The WYSIWYD project will advance a number of strategic elements to achieve transparency and communication in Human-Robot Interaction (HRI), building on the strong track record of the project partners in robotics, cognitive science, psychology and computational neuroscience. These elements include:
a well defined experimental paradigm
an integrated architecture for perception, cognition and action that, among others, provides the backbone for the acquisition of an autonomous communication structure
a mechanism of robot self that, together with mirroring mechanisms, allows for mind reading (the inference of the mental states of others)
an autobiographical memory that compresses data streams and develops a personal narrative of the interaction history
a conceptual space that provides an interface from memory to linguistic structures and their expression in speech and communicative actions.
Human-Robot Interaction with the DAC-h3 Cognitive Architecture
We present below four demonstrations of the DAC-h3 cognitive architecture. They show how the system adapts to various environments (different robots, labs and human partners). In Demonstrations 2 and 3, the internal states of the robot are displayed in an inset for a better understanding of the robot's internal dynamics. Demonstrations 1 and 4 correspond to live demonstrations performed at the review meetings of the WYSIWYD European Project. Demonstration 1 is the most complete, showing all the abilities of the current HRI system.
Demonstration 1. The robot self-regulates two drives, one for knowledge acquisition and one for knowledge expression. The acquired information consists of labels for the perceived objects, agents and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: passing objects, showing the learned kinematic structure, recognizing actions and pointing to the human's body parts. A complex narrative dialog about the robot's past experiences is also demonstrated at the end of the video.
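The drive self-regulation mentioned above can be pictured as a simple homeostatic loop: each drive decays over time and, once it drops below its comfort range, the robot selects a behavior that replenishes it. The Python sketch below is only an illustration of this idea under assumed names and parameters; it is not the project's implementation, which runs on the iCub middleware.

```python
# Illustrative sketch only (not the WYSIWYD/DAC-h3 code): two homeostatic drives
# decay over time and trigger behaviors when they leave their comfort range.
# All class names, thresholds and behaviors are assumptions made for the example.
import random


class Drive:
    def __init__(self, name, decay, comfort_min=0.4):
        self.name = name
        self.level = 0.6          # current drive level in [0, 1]
        self.decay = decay        # how fast the drive falls when unattended
        self.comfort_min = comfort_min

    def step(self):
        self.level = max(0.0, self.level - self.decay)

    def satisfy(self, amount):
        self.level = min(1.0, self.level + amount)

    def under_threshold(self):
        return self.level < self.comfort_min


def select_behavior(drives):
    """Pick the most urgent drive below its comfort range, if any."""
    urgent = [d for d in drives if d.under_threshold()]
    return min(urgent, key=lambda d: d.level) if urgent else None


acquisition = Drive("knowledge_acquisition", decay=0.05)
expression = Drive("knowledge_expression", decay=0.03)

for t in range(20):
    for drive in (acquisition, expression):
        drive.step()
    winner = select_behavior((acquisition, expression))
    if winner is acquisition:
        winner.satisfy(random.uniform(0.2, 0.4))   # e.g. ask the human to label an object
        print(t, "asking the human for a label")
    elif winner is expression:
        winner.satisfy(random.uniform(0.2, 0.4))   # e.g. point at a known object and name it
        print(t, "expressing acquired knowledge")
```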
Demonstration 2. The robot self-regulates two drives, one for knowledge acquisition and one for knowledge expression. The acquired information consists of labels for the perceived objects and associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: taking an object, showing the learned kinematic structure and expressing a narrative. The goal of taking an object is executed twice, showing how the action-plan execution adapts to the initial state of the object (a schematic sketch of this adaptation follows the inset description below).
The inset in the top right part of the screen displays information about the current state of the robot. From left to right:
First row: Third person view of the iCub with detected objects and human body parts; view from the iCub's left eye camera indicating the detected objects and their current associated linguistic label; perception of the human skeleton tracked by the Kinect; learned kinematic structure (only appears when requested by the human).
Second row: Drive dynamics, recognition score of the most salient object, tactile sensations of the iCub's right hand.
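In Demonstration 2 the goal of taking an object is executed twice, with the action plan adapting to the initial state of the object. A minimal way to picture such adaptation is a plan that branches on whether the object is initially reachable; the sketch below is a hedged illustration with assumed step names and thresholds, not the actual DAC-h3 planner.

```python
# Illustrative sketch: an action plan for "take the object" that adapts to the
# object's initial state. Step names and the reach limit are assumptions.
REACH_LIMIT = 0.45  # assumed maximum reachable distance in metres


def plan_take(object_distance):
    plan = []
    if object_distance > REACH_LIMIT:
        # object starts out of reach: first ask the human partner for help
        plan.append("ask_human_to_push_object_closer")
    plan += ["reach_object", "grasp_object", "lift_object"]
    return plan


print(plan_take(0.30))  # object already reachable: direct grasp
print(plan_take(0.70))  # object too far: request help first, then grasp
```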
Demonstration 3. The robot self-regulates two drives, one for knowledge acquisition and one for knowledge expression. The acquired information consists of labels for the perceived objects, agents and body parts, as well as associations between body-part touch and motor information. In addition, goal-oriented behaviors are executed on human request for taking and giving an object.
The inset in the top right part of the screen displays information about the current state of the robot. From left to right:
First row: Third person view of the iCub with detected objects and human body parts (not moving in this video due to a minor technical issue); view from the iCub's left eye camera indicating the detected objects and their current associated linguistic label; perception of the human skeleton tracked by the Kinect.
Second row: Drive dynamics, recognition score of the most salient object.
Demonstration 4. The robot self-regulates two drives, one for knowledge acquisition and one for knowledge expression. The acquired information consists of labels for the perceived objects, agents and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: showing the learned kinematic structure, expressing a narrative, recognizing an action and playing with a ball.
Compared to the two previous demonstrations, this video was recorded with another iCub robot, in another lab and with another interacting human, demonstrating the robustness of the system to varying conditions.
iCub discovers its environment: WYSIWYD full integration at year 2!
The project has achieved all of its objectives as expressed in our deliverables. In particular, we have: (i) shown progress toward anchoring self-learned representations to those of other agents in the robot's environment, leading to the emergence of a mirror neuron system (WP1 Sensorimotor Self and Mirroring); (ii) improved the Synthetic Autobiographical Memory (SAM) model with new features such as proactive tagging capabilities, the generation of fantasy memories and the quantification of memory quality or similarity (WP2 The development of autobiographical memory and emergence of narrative self); (iii) developed the notion of narrative construction through an analogy with the notion of grammatical construction (WP3 Verbal self and intentional communication); (iv) extended the implementation of the WR-DAC architecture at the adaptive and contextual levels, integrating the contributions from the whole consortium into a coherent cognitive architecture (WP5 WR-DAC Intentional Architecture and Development); and (v) developed efficient algorithms for reaching that exploit whole-body multisensory, and in particular tactile, information (WP6 Motor control with whole-body awareness).
iCub Learns Social Interaction
Intelligent artifacts and robots are expected to operate in complex physical and social environments. Whereas robots are slowly but surely being readied for the physical world, the social world is still on the horizon. The deployment of service and companion robots, however, requires that humans and robots can understand each other and communicate.
Related publication:
Moulin-Frier C, Sánchez-Fibla M, Verschure PFMJ. 2015. Autonomous development of turn-taking behaviors in agent populations: a computational study. 5th International Conference on Development and Learning and on Epigenetic Robotics.
Humanoid Robot Understands and Describes Actions
Here we see the iCub humanoid robot responding to complex instructions, performing actions, and then responding to questions in a pertinent manner, by making the proper "construal" of its mental model.
Related Publications:
Jouen AL, Ellmore TM, Madden CJ, Pallier C, Dominey PF, Ventre-Dominey J. 2015. Beyond the word and image: characteristics of a common meaning system for language and vision revealed by functional and structural imaging. NeuroImage 106.
Sorce M, Pointeau G, Petit M, Mealier A-L, Gibert G, Dominey PF. 2015. Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations. In Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, pp. 776-783. IEEE.
Knowledge Transmission by a Humanoid Robot
Allowing humanoid robots to share knowledge with humans, with applications for space flight operations on the ISS. This is a practical application of autobiographical memory and the narrative self.
Work by Marwin Sorce with the INSERM team.
Related publication:
Sorce M, Pointeau G, Petit M, Mealier A-L, Gibert G, Dominey PF. 2015. Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations. In Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, pp. 776-783. IEEE.
A companion emerges from integrating a layered architecture.
By implementing the DAC cognitive architecture on the iCub robot, we can generate meaningful human-robot interactions. We show how the robot reacts and adapts to its environment in the context of continuous interactive scenarios, such as interactive social games. As an artificial agent, the robot needs to maintain a self-model in terms of emotions and drives, which must be expressed in order to shape the social interaction the robot pursues.
A social robot may need to fulfill the following requirements: (i) intrinsic needs to engage socially, since successful interaction requires an agent that is socially motivated; (ii) an action repertoire that supports communication and interaction, such that the agent is able to manipulate objects, produce linguistic responses, recognize and identify a social agent, and establish and maintain interaction; and (iii) the core ingredients of social competence: actions, goals and drives. We define drives as the intrinsic needs of the robot. Goals define the functional ontology of the robot and depend on the drives, whereas actions are generated to satisfy goals. A socially competent android requires a combination of drives and goals coupled with an emotional system. Drives and goals motivate the robot's behavior and evaluate action outcomes, while emotions appraise situations (epistemic emotions) and define communicative signals (utilitarian emotions). The robot's behavior is guided by its internal drives and goals in order to satisfy its needs: drives set the robot's goals and contribute to the process of action selection. The overall system is based on the Distributed Adaptive Control (DAC) architecture, which consists of four coupled layers: soma, reactive, adaptive and contextual.
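To make the coupling between drives, goals, actions and emotions concrete, the sketch below maps drives to goals, goals to action sequences, and adds a crude appraisal signal standing in for an epistemic emotion. It is a minimal illustration under assumed names and values, not the DAC implementation.

```python
# Illustrative sketch only: drives motivate goals, goals are satisfied by action
# sequences, and a simple appraisal plays the role of an epistemic emotion.
# All names, goals and numbers are assumptions made for the example.
from dataclasses import dataclass


@dataclass
class Goal:
    name: str
    actions: list          # action sequence that satisfies the goal
    satisfies_drive: str   # which drive this goal reduces


# Functional ontology: each drive can be reduced by one or more goals.
GOALS = [
    Goal("learn_object_name", ["point_at_object", "ask_for_label"], "knowledge_acquisition"),
    Goal("tell_narrative", ["retrieve_episode", "verbalize_episode"], "knowledge_expression"),
    Goal("greet_partner", ["look_at_face", "say_greeting"], "social_engagement"),
]


def select_goal(drive_levels):
    """Serve the neediest (lowest) drive first, as a stand-in for action selection."""
    neediest = min(drive_levels, key=drive_levels.get)
    candidates = [g for g in GOALS if g.satisfies_drive == neediest]
    return candidates[0] if candidates else None


def appraise(expected, obtained):
    """Epistemic emotion as a crude appraisal of the outcome prediction error."""
    return "surprise" if abs(expected - obtained) > 0.5 else "contentment"


drives = {"knowledge_acquisition": 0.2, "knowledge_expression": 0.7, "social_engagement": 0.9}
goal = select_goal(drives)
print("selected goal:", goal.name)
print("action plan:", goal.actions)
print("emotion:", appraise(expected=0.9, obtained=0.3))
```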