Noisy Agents: Self-supervised Exploration
by Predicting Auditory Events


Chuang Gan      Xiaoyu Chen       Phillip Isola      Antonio Torralba      Joshua B. Tenenbaum     



Abstract:


Humans integrate multiple sensory modalities (e.g., visual and audio) to build a causal understanding of the physical world. In this work, we propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions through auditory event prediction. First, we allow the agent to collect a small amount of acoustic data and use K-means to discover the underlying auditory event clusters. We then train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration. Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines. We further visualize our noisy agents' behavior in a physics environment and demonstrate that our newly designed intrinsic reward leads to the emergence of physical interaction behaviors (e.g., contact with objects).
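The two-phase recipe in the abstract (cluster collected sounds with K-means, then reward the agent by its auditory-event prediction error) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the audio features, the number of clusters, and the predictor logits are all random stand-ins for, e.g., spectrogram embeddings and a trained network's output.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Lloyd's algorithm: initialize centroids from the data, then
    # alternate nearest-centroid assignment and mean updates.
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def auditory_event_label(centroids, feat):
    # Discrete "auditory event" = index of the nearest discovered cluster.
    return int(np.argmin(((centroids - feat) ** 2).sum(-1)))

def intrinsic_reward(pred_logits, event_label):
    # Cross-entropy between the predictor's distribution over auditory
    # events and the observed event: large when the agent is surprised.
    log_probs = pred_logits - np.log(np.exp(pred_logits).sum())
    return float(-log_probs[event_label])

# Phase 1: cluster a small batch of collected audio features.
audio_feats = rng.normal(size=(200, 8))   # stand-in acoustic features
centroids = kmeans(audio_feats, k=4)

# Phase 2: during exploration, reward = prediction error on the event.
new_feat = rng.normal(size=8)             # sound heard after an action
label = auditory_event_label(centroids, new_feat)
logits = rng.normal(size=4)               # stand-in predictor output
print(intrinsic_reward(logits, label) > 0.0)
```

In practice the prediction error shrinks for sounds the agent has learned to anticipate, so the reward naturally pushes exploration toward actions with novel or hard-to-predict acoustic consequences.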


Video:




Paper:


Noisy Agents: Self-supervised Exploration by Predicting Auditory Events
Chuang Gan*, Xiaoyu Chen*, Phillip Isola, Antonio Torralba, Joshua B. Tenenbaum
(* indicates equal contributions)
[PDF]