In most real-world scenarios, a policy trained by reinforcement learning in
one environment needs to be deployed in another, potentially quite different
environment. However, generalization across different environments is known to
be hard. A natural solution would be to keep training after deployment in the
new environment, but this cannot be done if the new environment offers no
reward signal. Our work explores the use of self-supervision to allow the
policy to continue training after deployment without using any rewards. While
previous methods explicitly anticipate changes in the new environment, we
assume no prior knowledge of those changes yet still obtain significant
improvements. Empirical evaluations are performed on diverse simulation
environments from the DeepMind Control suite and ViZDoom, as well as real robotic
manipulation tasks in continuously changing environments, with observations taken
from an uncalibrated camera. Our method improves generalization in 31 out of 36
environments across various tasks and outperforms domain randomization in a
majority of environments.
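
To make the setting concrete, the following is a minimal sketch of reward-free adaptation during deployment. It assumes a PyTorch policy that shares an image encoder with a self-supervised head, and it uses inverse-dynamics prediction as the auxiliary objective; the module names (encoder, policy_head, inv_dynamics), the choice of auxiliary task, and the gym-style environment interface are illustrative assumptions, not a specification of the paper's exact method.

import torch
import torch.nn.functional as F

def adapt_step(encoder, inv_dynamics, optimizer, obs, next_obs, action):
    # One reward-free update: predict the executed action from consecutive
    # observation embeddings (inverse dynamics) and update the shared encoder.
    h, h_next = encoder(obs), encoder(next_obs)
    pred_action = inv_dynamics(torch.cat([h, h_next], dim=-1))
    loss = F.mse_loss(pred_action, action)  # no environment reward is used
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def deploy_with_adaptation(env, encoder, policy_head, inv_dynamics,
                           optimizer, steps=1000):
    # Act with the trained policy head while continually adapting the encoder
    # to the new environment using only self-supervision.
    # Assumes a gym-style env whose observations and actions are torch tensors.
    obs = env.reset()
    for _ in range(steps):
        with torch.no_grad():
            action = policy_head(encoder(obs))
        next_obs, _, done, _ = env.step(action)  # reward (if any) is ignored
        adapt_step(encoder, inv_dynamics, optimizer, obs, next_obs, action)
        obs = env.reset() if done else next_obs

In this sketch only the shared encoder (and the auxiliary head) receive gradients at deployment time, while the policy head is left fixed; other splits of which parameters to adapt are equally possible under the same reward-free constraint.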