Learning of feature points without additional supervision improves reinforcement learning from images
Rinu Boney,
Alexander Ilin,
Juho Kannala
arXiv
End-to-end learning of 2D keypoint representations that extract the geometric features most relevant for continuous control from images.
Code
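For readers curious how differentiable 2D keypoints can be extracted from image features, here is a minimal sketch using a spatial softmax (assuming PyTorch; the module name, shapes, and channel counts are illustrative, not the paper's implementation):

```python
# Minimal sketch (assumed PyTorch): turn K feature maps into K (x, y) keypoints
# via a spatial softmax, so the coordinates stay differentiable end-to-end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSoftmaxKeypoints(nn.Module):
    def __init__(self, in_channels, num_keypoints):
        super().__init__()
        # One heatmap per keypoint, predicted from image features.
        self.heatmaps = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)

    def forward(self, features):                     # features: (B, C, H, W)
        h = self.heatmaps(features)                  # (B, K, H, W)
        B, K, H, W = h.shape
        probs = F.softmax(h.view(B, K, -1), dim=-1).view(B, K, H, W)
        # Pixel-coordinate grids in [-1, 1].
        ys = torch.linspace(-1, 1, H, device=h.device).view(1, 1, H, 1)
        xs = torch.linspace(-1, 1, W, device=h.device).view(1, 1, 1, W)
        # Expected coordinates under each heatmap = differentiable keypoints.
        x = (probs * xs).sum(dim=(2, 3))             # (B, K)
        y = (probs * ys).sum(dim=(2, 3))             # (B, K)
        return torch.stack([x, y], dim=-1)           # (B, K, 2)

# Example: 16 keypoints from a 32-channel feature map.
feats = torch.randn(1, 32, 21, 21)
print(SpatialSoftmaxKeypoints(32, 16)(feats).shape)  # torch.Size([1, 16, 2])
```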
RealAnt: An Open-Source Low-Cost Quadruped for Research in Real-World Reinforcement Learning
Rinu Boney*,
Jussi Sainio*,
Mikko Kaivola,
Arno Solin,
Juho Kannala
arXiv
We develop RealAnt, a minimal low-cost physical version of the popular 'Ant' benchmark used in reinforcement learning. RealAnt costs only $410 in materials and can learn to walk in less than 10 minutes.
Summary Video / Code
Learning to Drive (L2D) as a Low-Cost Benchmark for Real-World Reinforcement Learning
Ari Viitala*,
Rinu Boney*,
Yi Zhao,
Alexander Ilin,
Juho Kannala
ICAR 2021
L2D involves a simple reproducible setup where an RL agent has to learn to drive a Donkey car from disengagements, using monocular image observations and the speed of the car. We open-source our training pipeline and state-of-the-art RL baselines.
Project Page / Code
Learning to Play Imperfect-Information Games by Imitating an Oracle Planner
Rinu Boney,
Alexander Ilin,
Juho Kannala,
Jarno Seppänen
IEEE Transactions on Games
Learning to play Clash Royale (a popular mobile game from Supercell) and Pommerman by first building an (oracle) planner that has access to the full state of the game and then distilling the knowledge of the oracle into a (follower) agent.
Project Page / Code
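A minimal sketch of the distillation step, assuming PyTorch; the follower network, observation size, and action space below are illustrative placeholders rather than the paper's models:

```python
# Minimal sketch (assumed PyTorch): a follower policy that only sees partial
# observations is trained to imitate actions chosen by an oracle planner that
# has access to the full game state. All names and sizes are illustrative.
import torch
import torch.nn as nn

follower = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 6))
optimizer = torch.optim.Adam(follower.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def distill_step(partial_obs, oracle_actions):
    """partial_obs: (B, 64) follower observations,
    oracle_actions: (B,) discrete actions picked by the full-state planner."""
    logits = follower(partial_obs)
    loss = loss_fn(logits, oracle_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for (observation, oracle action) pairs.
print(distill_step(torch.randn(32, 64), torch.randint(0, 6, (32,))))
```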
Regularizing Model-Based Planning with Energy-Based Models
Rinu Boney,
Juho Kannala,
Alexander Ilin
CoRL 2019
Regularize model-based planning using energy estimates of state transitions in the environment, leading to
improved planning with pre-trained dynamics models and sample-efficient learning from scratch in popular motor control tasks.
Project Page
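A rough sketch of the idea under assumed interfaces: candidate action sequences are scored by predicted return minus an energy penalty on the imagined transitions, so the planner avoids transitions the dynamics model was not trained on. The `dynamics`, `reward`, and `energy` models here are hypothetical stand-ins, not the paper's implementation:

```python
# Minimal sketch: random-shooting planner regularized by an energy-based model.
import torch

def plan(state, dynamics, reward, energy, horizon=10, candidates=256,
         action_dim=2, penalty=1.0):
    actions = torch.randn(candidates, horizon, action_dim)   # random shooting
    s = state.expand(candidates, -1)
    score = torch.zeros(candidates)
    for t in range(horizon):
        s_next = dynamics(s, actions[:, t])
        score = score + reward(s, actions[:, t])
        # Penalize transitions the EBM assigns high energy (low familiarity).
        score = score - penalty * energy(s, actions[:, t], s_next)
        s = s_next
    best = score.argmax()
    return actions[best, 0]                                   # first action

# Toy stand-ins just to show the call signature.
dyn = lambda s, a: s + 0.1 * a @ torch.randn(2, 4)
rew = lambda s, a: -(s ** 2).sum(dim=-1)
ene = lambda s, a, sn: ((sn - s) ** 2).sum(dim=-1)
print(plan(torch.zeros(1, 4), dyn, rew, ene))
```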
Regularizing Trajectory Optimization with Denoising Autoencoders
Rinu Boney*,
Norman Di Palo*,
Mathias Berglund,
Alexander Ilin,
Juho Kannala,
Antti Rasmus,
Harri Valpola
NeurIPS 2019
Regularize trajectory optimization using denoising autoencoders to improve planning,
leading to rapid initial learning in a set of popular motor control tasks.
Project Page
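A rough sketch of the regularizer, assuming PyTorch: a trained denoising autoencoder g approximates the gradient of the log data density via (g(x) - x) / sigma^2, and that term is added to the planning objective so optimized trajectories stay close to familiar data. The `dae`, `objective`, and `sigma` names are illustrative placeholders:

```python
# Minimal sketch: one gradient ascent step on a trajectory, regularized by the
# density gradient estimated with a denoising autoencoder.
import torch

def regularized_update(trajectory, dae, objective, sigma=0.1, reg_weight=1.0,
                       lr=0.01):
    trajectory = trajectory.clone().requires_grad_(True)
    task_grad = torch.autograd.grad(objective(trajectory).sum(), trajectory)[0]
    with torch.no_grad():
        # grad log p(x) ~ (g(x) - x) / sigma^2 for a DAE trained with noise sigma.
        density_grad = (dae(trajectory) - trajectory) / sigma ** 2
        return trajectory + lr * (task_grad + reg_weight * density_grad)

# Toy stand-ins: a linear "DAE" and a quadratic objective.
dae = torch.nn.Linear(8, 8)
obj = lambda x: -(x ** 2).sum(dim=-1)
print(regularized_update(torch.randn(4, 8), dae, obj).shape)  # torch.Size([4, 8])
```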
Active One-Shot Learning with Prototypical Networks
Rinu Boney and
Alexander Ilin
ESANN 2019
Extended Prototypical Networks to show that adaptation performance can be significantly improved by actively requesting a few labels through user feedback.
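A minimal sketch of the query-or-predict decision, assuming PyTorch; the fixed confidence threshold here is an illustrative simplification, not the paper's learned decision policy:

```python
# Minimal sketch: classify an embedded query by distance to class prototypes,
# and request its label from the user only when the prediction is uncertain.
import torch
import torch.nn.functional as F

def classify_or_ask(query_emb, prototypes, threshold=0.7):
    """query_emb: (D,), prototypes: (C, D) one prototype per class."""
    dists = ((prototypes - query_emb) ** 2).sum(dim=-1)   # squared Euclidean
    probs = F.softmax(-dists, dim=0)
    conf, pred = probs.max(dim=0)
    if conf < threshold:
        return "request label"          # ask the user for this example
    return int(pred)                    # confident enough to predict

print(classify_or_ask(torch.randn(16), torch.randn(5, 16)))
```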
Fast Adaptation of Neural Networks
Rinu Boney
Master's Thesis
Prototypical Networks and Model-Agnostic Meta-Learning (MAML)
enable machines to learn to recognize new objects with very little supervision from the user.
Extended these methods to make use of unlabeled data and user feedback.
Semi-Supervised Few-Shot Learning with MAML
Rinu Boney and
Alexander Ilin
ICLR Workshop 2018
Preliminary results on extending Model-Agnostic Meta-Learning (MAML) for fast adaptation to new classification tasks in the presence of unlabeled data.
Semi-Supervised Few-Shot Learning with Prototypical Networks
Rinu Boney and
Alexander Ilin
NIPS Workshop on Meta-Learning 2018
Extended Prototypical Networks to the problem of semi-supervised few-shot classification where a classifier needs
to adapt to new tasks using a few labeled examples and (potentially many) unlabeled examples.
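A minimal sketch of the prototype refinement, assuming PyTorch; the soft-assignment update shown is illustrative rather than the paper's exact rule:

```python
# Minimal sketch: class prototypes from the few labeled examples are updated
# with unlabeled embeddings weighted by their soft assignment to each class.
import torch
import torch.nn.functional as F

def refine_prototypes(labeled_emb, labels, unlabeled_emb, num_classes):
    """labeled_emb: (N, D), labels: (N,), unlabeled_emb: (M, D)."""
    counts = torch.bincount(labels, minlength=num_classes).float()        # (C,)
    protos = torch.stack([labeled_emb[labels == c].mean(dim=0)
                          for c in range(num_classes)])                   # (C, D)
    # Soft-assign unlabeled points to classes by distance to the prototypes.
    weights = F.softmax(-torch.cdist(unlabeled_emb, protos) ** 2, dim=1)  # (M, C)
    # Refined prototype = weighted mean over labeled + soft-assigned points.
    numer = protos * counts.unsqueeze(1) + weights.t() @ unlabeled_emb    # (C, D)
    denom = counts.unsqueeze(1) + weights.sum(dim=0).unsqueeze(1)         # (C, 1)
    return numer / denom

# Toy episode: 5 classes, 2 labeled and 4 unlabeled examples per class.
labels = torch.arange(5).repeat_interleave(2)
print(refine_prototypes(torch.randn(10, 16), labels, torch.randn(20, 16), 5).shape)
```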
Recurrent Ladder Networks
Isabeau Prémont-Schwarz,
Alexander Ilin,
Tele Hao,
Antti Rasmus,
Rinu Boney,
Harri Valpola
NIPS 2017
A recurrent extension of the Ladder network that shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher-order abstractions, such as stochastic textures and motion cues.