<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Vouloutsi, Vasiliki</style></author><author><style face="normal" font="default" size="100%">Grechuta, Klaudia</style></author><author><style face="normal" font="default" size="100%">Lallée, Stéphane</style></author><author><style face="normal" font="default" size="100%">Verschure, Paul F.M.J.</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Duff, Armin</style></author><author><style face="normal" font="default" size="100%">Lepora, Nathan F.</style></author><author><style face="normal" font="default" size="100%">Mura, Anna</style></author><author><style face="normal" font="default" size="100%">Prescott, Tony J.</style></author><author><style face="normal" font="default" size="100%">Verschure, Paul F.M.J.</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">The Influence of Behavioral Complexity on Robot Perception</style></title><secondary-title><style face="normal" font="default" size="100%">Biomimetic and Biohybrid Systems, Third International Conference, Living Machines 2014, Milan, Italy</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">allostatic control</style></keyword><keyword><style  face="normal" font="default" size="100%">behavioral modulation</style></keyword><keyword><style  face="normal" font="default" size="100%">human-robot interaction</style></keyword><keyword><style  face="normal" font="default" size="100%">social robots</style></keyword></keywords><dates><year><style  face="normal" font="default" 
size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">09/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1007/978-3-319-09435-9_29</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><volume><style face="normal" font="default" size="100%">8608</style></volume><pages><style face="normal" font="default" size="100%">332–343</style></pages><isbn><style face="normal" font="default" size="100%">978-3-319-09434-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;As robots’ capabilities increase, they will soon be present in our daily lives and will be required to interact with humans in a natural way. Furthermore, robots will need to be removed from controlled environments and tested in public places where untrained people will be able to interact with them freely. Such needs raise a number of questions: which behaviors are important in promoting interaction, and how do these behaviors affect people’s perception of the robot in terms of anthropomorphism, likeability, animacy and perceived intelligence? In this paper, we propose a motivational and emotional system that drives the robot’s behavior and test it against six interaction scenarios of varying complexity. In addition, we evaluate our system in two different environments: a controlled (laboratory) environment and a public space. Results suggest that the perception of the robot changes significantly depending on the complexity of the interaction but does not change depending on the environment.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Verschure, Paul F.M.J.</style></author><author><style face="normal" font="default" size="100%">Pennartz, Cyriel</style></author><author><style face="normal" font="default" size="100%">Pezzulo, Giovanni</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The Why, What, Where, When and How of Goal-Directed Choice: neuronal and computational principles</style></title><secondary-title><style face="normal" font="default" size="100%">Philosophical Transactions of the Royal Society B: Biological Sciences</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">cognition</style></keyword><keyword><style  face="normal" font="default" size="100%">computational modelling</style></keyword><keyword><style  face="normal" font="default" size="100%">control</style></keyword><keyword><style  face="normal" font="default" size="100%">decision-making</style></keyword><keyword><style  face="normal" font="default" size="100%">distributed adaptive</style></keyword><keyword><style  face="normal" font="default" size="100%">embodied</style></keyword><keyword><style  face="normal" font="default" size="100%">goal</style></keyword><keyword><style  face="normal" font="default" size="100%">goal-directed behaviour</style></keyword><keyword><style  face="normal" font="default" size="100%">reward</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">11/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1098/rstb.2013.0483</style></url></web-urls></urls><number><style face="normal" font="default" 
size="100%">20130483</style></number><volume><style face="normal" font="default" size="100%">369</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;To define the How of action, goal-directed animals must solve four specific questions: ‘What do I need, Why, Where and When can this be obtained?’, or the H4W problem. Here, we elucidate the principles underlying the neuronal solutions to H4W using a combination of neurobiological and neurorobotic approaches. First, we analyse H4W from a system-level perspective by mapping its objectives onto the Distributed Adaptive Control embodied cognitive architecture, which sees the generation of adaptive action in the real world as the primary task of the brain rather than optimally solving abstract problems. We next map this functional decomposition to the architecture of the rodent brain to test its consistency. Following this approach, we propose that the mammalian brain solves the H4W problem on the basis of multiple kinds of outcome predictions, integrating central representations of needs and drives (e.g. hypothalamus), valence (e.g. amygdala), world, self and task state spaces (e.g. neocortex, hippocampus and prefrontal cortex, respectively) combined with multi-modal selection (e.g. basal ganglia). In our analysis, goal-directed behaviour results from a well-structured architecture in which goals are bootstrapped on the basis of predefined needs, valence and multiple learning, memory and planning mechanisms rather than being generated by a singular computation.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1655</style></issue></record></records></xml>