
Publications

  • Conference paper
    Cully AHR, Mouret J-B, 2013,

    Behavioral repertoire learning in robotics

    , Proceedings of the 15th annual conference on Genetic and evolutionary computation, Publisher: ACM, Pages: 175-182

    Learning in robotics typically involves choosing a simple goal (e.g. walking) and assessing the performance of each controller with regard to this task (e.g. walking speed). However, learning advanced, input-driven controllers (e.g. walking in each direction) requires testing each controller on a large sample of the possible input signals. This costly process makes it difficult to learn useful low-level controllers in robotics. Here we introduce BR-Evolution, a new evolutionary learning technique that generates a behavioral repertoire by taking advantage of the candidate solutions that are usually discarded. Instead of evolving a single, general controller, BR-Evolution thus evolves a collection of simple controllers, one for each variant of the target behavior; to distinguish similar controllers, it uses a performance objective that allows it to produce a collection of diverse but high-performing behaviors. We evaluated this new technique by evolving gait controllers for a simulated hexapod robot. Results show that a single run of the EA quickly finds a collection of controllers that allows the robot to reach each point of the reachable space. Overall, BR-Evolution opens a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
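
    The repertoire-building idea described in this abstract can be sketched in a few lines: keep an archive of controllers indexed by the behavior they produce, and accept a mutated candidate whenever it fills an empty behavior cell or outperforms the incumbent. The Python sketch below illustrates only that general idea, not the authors' BR-Evolution implementation; the simulate() function, the genome encoding, and the grid resolution are hypothetical placeholders.

    import random

    GRID = 0.05  # cell size used to discretize the behavior descriptor

    def simulate(genome):
        # Hypothetical stand-in for the robot simulation: returns the point
        # (x, y) the robot reaches (behavior descriptor) and a quality score.
        x = sum(genome) / len(genome)
        y = sum(g * g for g in genome) / len(genome)
        performance = -sum(abs(g) for g in genome)  # e.g. negative effort
        return (x, y), performance

    def cell(descriptor):
        # Discretize a behavior descriptor into a repertoire cell index.
        return tuple(int(round(d / GRID)) for d in descriptor)

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0.0, sigma) for g in genome]

    def evolve_repertoire(genome_len=12, generations=1000):
        repertoire = {}  # cell -> (genome, performance)
        for _ in range(20):  # seed with random controllers
            g = [random.uniform(-1.0, 1.0) for _ in range(genome_len)]
            desc, perf = simulate(g)
            repertoire[cell(desc)] = (g, perf)
        for _ in range(generations):
            parent, _ = random.choice(list(repertoire.values()))
            child = mutate(parent)
            desc, perf = simulate(child)
            key = cell(desc)
            # keep the child if its cell is empty or it beats the incumbent
            if key not in repertoire or perf > repertoire[key][1]:
                repertoire[key] = (child, perf)
        return repertoire

    if __name__ == "__main__":
        rep = evolve_repertoire()
        print("repertoire covers", len(rep), "behavior cells")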

  • Conference paper
    Kryczka P, Hashimoto K, Takanishi A, Kormushev P, Tsagarakis N, Caldwell DG et al., 2013,

    Walking Despite the Passive Compliance: Techniques for Using Conventional Pattern Generators to Control Intrinsically Compliant Humanoid Robots

  • Conference paper
    Carrera A, Carreras M, Kormushev P, Palomeras N, Nagappa S et al., 2013,

    Towards valve turning with an AUV using Learning by Demonstration

  • Conference paper
    Sykes D, Corapi D, Magee J, Kramer J, Russo A, Inoue K et al., 2013,

    Learning Revised Models For Planning In Adaptive Systems

    , 35th IEEE/ACM International Conference on Software Engineering, Publisher: IEEE/ACM, Pages: 63-71
  • Conference paper
    Maimari N, Krams R, Turliuc C-R, Broda K, Russo A, Kakas A et al., 2013,

    ARNI: Abductive inference of complex regulatory network structures

    , 11th International Conference, CMSB 2013, Pages: 235-237, ISSN: 0302-9743

    Physical network inference methods use a template of molecular interaction to infer biological networks from high throughput datasets. Current inference methods have limited applicability, relying on cause-effect pairs or systematically perturbed datasets and fail to capture complex network structures. Here we present a novel framework, ARNI, based on abductive inference, that addresses these limitations. © Springer-Verlag 2013.

  • Conference paper
    Kryczka P, Kormushev P, Hashimoto K, Lim H-O, Tsagarakis NG, Caldwell DG, Takanishi A et al., 2013,

    Hybrid gait pattern generator capable of rapid and dynamically consistent pattern regeneration

    , Publisher: IEEE, Pages: 475-480
  • Journal article
    Kormushev P, Calinon S, Caldwell DG, 2013,

    Reinforcement Learning in Robotics: Applications and Real-World Challenges

    , Robotics, Vol: 2, Pages: 122-148, ISSN: 2218-6581
  • Conference paper
    Dallali H, Mosadeghzad M, Medrano-Cerda GA, Docquier N, Kormushev P, Tsagarakis N, Li Z, Caldwell D et al., 2013,

    Development of a dynamic simulator for a compliant humanoid robot based on a symbolic multibody approach

    , Pages: 598-603
  • Conference paper
    Kryczka P, Shiguematsu YM, Kormushev P, Hashimoto K, Lim H-O, Takanishi A et al., 2013,

    Towards dynamically consistent real-time gait pattern generation for full-size humanoid robots

  • Journal article
    Deisenroth MP, Turner RD, Huber MF, Hanebeck UD, Rasmussen CE et al., 2012,

    Robust Filtering and Smoothing with Gaussian Processes

    , IEEE Transactions on Automatic Control, Vol: 57, Pages: 1865-1871, ISSN: 0018-9286
  • Conference paper
    Kormushev P, Caldwell DG, 2012,

    Direct policy search reinforcement learning based on particle filtering

  • Journal article
    Colasanto L, Kormushev P, Tsagarakis N, Caldwell DG et al., 2012,

    Optimization of a compact model for the compliant humanoid robot COMAN using reinforcement learning

    , International Journal of Cybernetics and Information Technologies, Vol: 12, Pages: 76-85, ISSN: 1311-9702

    COMAN is a compliant humanoid robot. The introduction of passive compliance in some of its joints affects the dynamics of the whole system. Unlike traditional stiff robots, there is a deflection of the joint angle with respect to the desired one whenever an external torque is applied. Following a bottom-up approach, the dynamic equations of the joints are defined first. Then, a new model that combines the inverted pendulum approach with a three-dimensional (Cartesian) compliant model at the level of the center of mass is proposed. This compact model is based on some assumptions that reduce the complexity but at the same time affect the precision. To address this problem, additional parameters are inserted in the model equation and an optimization procedure is performed using reinforcement learning. The optimized model is experimentally validated on the COMAN robot using several ZMP-based walking gaits.
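
    As a rough illustration of the compact model described above (not COMAN's identified dynamics), the sketch below combines linear-inverted-pendulum motion of the center of mass with a Cartesian spring-damper deflection. The stiffness, damping, and the extra correction gain alpha are hypothetical values standing in for the parameters that the reinforcement-learning step would tune.

    G = 9.81  # gravity (m/s^2)

    def simulate_com(z0=0.5, m=30.0, k=4000.0, c=60.0, alpha=1.0,
                     zmp=0.0, x0=0.05, dt=0.001, steps=500):
        # Combine rigid linear-inverted-pendulum (LIP) CoM motion with a
        # Cartesian spring-damper deflection. alpha is an example of an extra
        # correction parameter that an RL / policy-search step could tune to
        # better match the real robot; all numbers here are hypothetical.
        omega2 = G / z0          # LIP constant: x_ddot = omega2 * (x - zmp)
        x, xd = x0, 0.0          # rigid CoM position and velocity
        d, dd = 0.0, 0.0         # compliant deflection and its rate
        for _ in range(steps):
            xdd = omega2 * (x - zmp)
            # deflection excited by the inertial force m * xdd, scaled by alpha
            ddd = (alpha * m * xdd - k * d - c * dd) / m
            x, xd = x + xd * dt, xd + xdd * dt   # explicit Euler integration
            d, dd = d + dd * dt, dd + ddd * dt
        return x + d             # measured CoM = rigid motion + deflection

    if __name__ == "__main__":
        print("CoM position after 0.5 s:", round(simulate_com(), 4), "m")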

  • Conference paper
    Kormushev P, Caldwell DG, 2012,

    Simultaneous discovery of multiple alternative optimal policies by reinforcement learning

    , Pages: 202-207
  • Journal article
    Shen H, Yosinski J, Kormushev P, Caldwell DG, Lipson H et al., 2012,

    Learning Fast Quadruped Robot Gaits with the RL PoWER Spline Parameterization

    , International Journal of Cybernetics and Information Technologies, Vol: 12
  • Conference paper
    Lane DM, Maurelli F, Kormushev P, Carreras M, Fox M, Kyriakopoulos K et al., 2012,

    Persistent Autonomy: the Challenges of the PANDORA Project

  • Journal article
    Leonetti M, Kormushev P, Sagratella S, 2012,

    Combining Local and Global Direct Derivative-free Optimization for Reinforcement Learning

    , International Journal of Cybernetics and Information Technologies, Vol: 12
  • Journal article
    Carrera A, Ahmadzadeh SR, Ajoudani A, Kormushev P, Carreras M, Caldwell DG et al., 2012,

    Towards Autonomous Robotic Valve Turning

    , Cybernetics and Information Technologies, Vol: 12, Pages: 17-26
  • Conference paper
    Kormushev P, Calinon S, Ugurlu B, Caldwell DG et al., 2012,

    Challenges for the policy representation when applying reinforcement learning in robotics

    , Pages: 1-8
  • Conference paper
    Kormushev P, Ugurlu B, Colasanto L, Tsagarakis NG, Caldwell DG et al., 2012,

    The anatomy of a fall: Automated real-time analysis of raw force sensor data from bipedal walking robots and humans

    , Pages: 3706-3713
  • Journal article
    Calinon S, Kormushev P, Caldwell DG, 2012,

    Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning

    , Robotics and Autonomous Systems

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact us

Artificial Intelligence Network
South Kensington Campus
Imperial College London
SW7 2AZ

To reach the elected speaker of the network, Dr Rossella Arcucci, please contact:

ai-speaker@imperial.ac.uk

To reach the network manager, Diana O'Malley (including to join the network), please contact:

ai-net-manager@imperial.ac.uk