Search results
- Conference paper: Carrera A, Palomeras N, Ribas D, et al., 2014, "An Intervention-AUV learns how to perform an underwater valve turning"
- Journal article: Deisenroth MP, Fox D, Rasmussen CE, 2014, "Gaussian Processes for Data-Efficient Learning in Robotics and Control", IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828.
  Abstract: Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this article, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
- Conference paper: Law M, Russo A, Broda K, 2014, "Inductive Learning of Answer Set Programs", 14th European Conference on Logics in Artificial Intelligence (JELIA), Publisher: Springer, Pages: 311-325, ISSN: 0302-9743
- Book chapter: Maimari N, Broda K, Kakas A, et al., 2014, "Symbolic Representation and Inference of Regulatory Network Structures", in Logical Modeling of Biological Systems, Publisher: John Wiley & Sons, Inc., Pages: 1-48, ISBN: 9781119005223
- Conference paper: Carrera A, Palomeras N, Hurtos N, et al., 2014, "Learning by demonstration applied to underwater intervention"
- Conference paper: Turliuc C-R, Maimari N, Russo A, et al., 2013, "On Minimality and Integrity Constraints in Probabilistic Abduction", LPAR: Logic for Programming, Artificial Intelligence and Reasoning, Publisher: Springer Verlag
- Journal article: Goodman DF, Benichoux V, Brette R, 2013, "Decoding neural responses to temporal cues for sound localization", eLife, Vol: 2, ISSN: 2050-084X.
  Abstract: The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001.
- Conference paper: Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013, "Autonomous robotic valve turning: A hierarchical learning approach", 2013 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4629-4634, ISSN: 1050-4729.
  Abstract: Autonomous valve turning is an extremely challenging task for an Autonomous Underwater Vehicle (AUV). To resolve this challenge, this paper proposes a set of different computational techniques integrated in a three-layer hierarchical scheme. Each layer realizes specific subtasks to improve the persistent autonomy of the system. In the first layer, the robot acquires the motor skills of approaching and grasping the valve by kinesthetic teaching. A Reactive Fuzzy Decision Maker (RFDM) is devised in the second layer, which reacts to the relative movement between the valve and the AUV and alters the robot's movement accordingly. An apprenticeship learning method, implemented in the third layer, tunes the RFDM based on expert knowledge. Although the long-term goal is to perform the valve turning task on a real AUV, as a first step the proposed approach is tested in a laboratory environment.
- Conference paper: Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013, "Visuospatial Skill Learning for Object Reconfiguration Tasks"
- Conference paper: Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013, "Interactive Robot Learning of Visuospatial Skills"
- Conference paper: Kormushev P, Caldwell DG, 2013, "Improving the Energy Efficiency of Autonomous Underwater Vehicles by Learning to Model Disturbances"
- Conference paper: Karras GC, Bechlioulis CP, Leonetti M, et al., 2013, "On-Line Identification of Autonomous Underwater Vehicles through Global Derivative-Free Optimization"
- Journal article: Koos S, Cully A, Mouret J-B, 2013, "Fast damage recovery in robotics with the T-resilience algorithm", The International Journal of Robotics Research, Vol: 32, Pages: 1700-1723, ISSN: 0278-3649.
  Abstract: Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating potential damage in order to have a contingency plan ready. As an alternative, we introduce the T-resilience algorithm, a new algorithm that allows robots to quickly and autonomously discover compensatory behaviors in unanticipated situations. This algorithm equips the robot with a self-model and discovers new behaviors by learning to avoid those that perform differently in the self-model and in reality. Our algorithm thus does not identify the damaged parts but implicitly searches for efficient behaviors that do not use them. We evaluate the T-resilience algorithm on a hexapod robot that needs to adapt to leg removal, broken legs and motor failures; we compare it to stochastic local search, policy gradient and the self-modeling algorithm proposed by Bongard et al. The behavior of the robot is assessed on-board thanks to an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 min, T-resilience consistently leads to substantially better results than the other approaches.
- Conference paper: Kormushev P, Caldwell DG, 2013, "Towards Improved AUV Control Through Learning of Periodic Signals"
- Conference paper: Ahmadzadeh SR, Leonetti M, Kormushev P, 2013, "Online Direct Policy Search for Thruster Failure Recovery in Autonomous Underwater Vehicles"
- Conference paper: Jamali N, Kormushev P, Caldwell DG, 2013, "Contact State Estimation using Machine Learning"
- Conference paper: Kormushev P, Caldwell DG, 2013, "Comparative Evaluation of Reinforcement Learning with Scalar Rewards and Linear Regression with Multidimensional Feedback"
- Conference paper: Leonetti M, Ahmadzadeh SR, Kormushev P, 2013, "On-line Learning to Recover from Thruster Failures on Autonomous Underwater Vehicles"
- Book: Deisenroth MP, Neumann G, Peters J, 2013, "A Survey on Policy Search for Robotics", Publisher: now Publishers.
  Abstract: Policy search is a subfield of reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning. Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy, and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot's dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.
- Conference paper: Kormushev P, Caldwell DG, 2013, "Reinforcement Learning with Heterogeneous Policy Representations"

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.