Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
 

Yen-Chen Lin     Ming-Yu Liu     Min Sun     Jia-Bin Huang    



Abstract

Deep reinforcement learning has shown promising results in learning control policies for complex sequential decision-making tasks. However, these neural network-based policies are known to be vulnerable to adversarial examples. This vulnerability poses a potentially serious threat to safety-critical systems such as autonomous vehicles. In this paper, we propose a defense mechanism that protects reinforcement learning agents from adversarial attacks by leveraging an action-conditioned frame prediction module. Our core idea is that adversarial examples targeting a neural network-based policy are not effective against the frame prediction model. By comparing the action distribution the policy produces from the current observed frame with the action distribution it produces from the frame predicted by the action-conditioned frame prediction module, we can detect the presence of adversarial examples. Beyond detecting adversarial examples, our method allows the agent to continue performing the task using the predicted frame while it is under attack. We evaluate our algorithm on five Atari 2600 games. The results demonstrate that the proposed defense mechanism achieves favorable performance against baseline algorithms, both in detecting adversarial examples and in earning rewards when the agents are under attack.
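The detection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`policy`, `predictor`), the use of KL divergence as the distance between action distributions, and the single `threshold` parameter are all assumptions made for the sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete action distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def defended_action(policy, predictor, frame_t, prev_frames, prev_action, threshold):
    """Visual-foresight defense sketch (hypothetical interfaces).

    policy(frame) -> action distribution over discrete actions.
    predictor(prev_frames, prev_action) -> predicted current frame.
    Returns (action, attacked_flag).
    """
    predicted_frame = predictor(prev_frames, prev_action)
    p_obs = policy(frame_t)           # from the (possibly attacked) observation
    p_pred = policy(predicted_frame)  # from the clean predicted frame
    # Flag an attack when the two action distributions diverge too much.
    attacked = kl_divergence(p_pred, p_obs) > threshold
    # If an attack is detected, act on the predicted frame instead.
    action = int(np.argmax(p_pred if attacked else p_obs))
    return action, attacked
```

In words: a clean observation and the predicted frame should yield nearly identical action distributions, so the divergence stays small; an adversarial perturbation changes the policy's output on the observed frame but not on the predicted one, pushing the divergence past the threshold.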

  Download Paper

@article{Lin2017RLAttackDetection,
        title={Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight},
        author={Lin, Yen-Chen and Liu, Ming-Yu and Sun, Min and Huang, Jia-Bin},
        journal={arXiv preprint arXiv:1710.00814},
        year={2017}
}



Code

    https://github.com/yenchenlin/rl-attack-detection


Results



References

  • Attacking Machine Learning with Adversarial Examples
    by Ian Goodfellow et al.

  • Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
    by Yen-Chen Lin et al.

  • awesome-adversarial-machine-learning
    by Yen-Chen Lin