サブロウ丸


Detecting Adversarial Examples

An adversarial example is an input to which a tiny perturbation (noise) has been added so that the model produces a wrong output. Adversarial examples are especially easy to construct against neural network models. This post summarizes research from the perspective of *detecting* adversarial examples.
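As a concrete flavor of what a detector does, here is a minimal sketch of the feature-squeezing idea from [1]: reduce the input's bit depth and compare the model's prediction on the raw and squeezed inputs; a large L1 distance suggests the input is adversarial. The `toy_model`, the threshold value, and the `model(x) -> probability vector` interface are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def reduce_bit_depth(x, bits):
    """Squeeze an image with values in [0, 1] down to `bits` bits per
    channel (one of the squeezers used in feature squeezing [1])."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detection_score(model, x, bits=4):
    """L1 distance between predictions on the raw and squeezed input.
    `model` maps an input array to a probability vector
    (hypothetical interface for this sketch)."""
    return np.abs(model(x) - model(reduce_bit_depth(x, bits))).sum()

# Toy stand-in for a classifier: softmax over a fixed random projection.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))

def toy_model(x):
    z = W @ x.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.uniform(size=(28, 28))
score = detection_score(toy_model, x)
# In [1] the threshold is chosen from scores on clean validation data;
# 1.0 here is an arbitrary placeholder.
is_adversarial = score > 1.0
```

Legitimate inputs are largely insensitive to squeezing, while adversarial perturbations tend to be destroyed by it, which is what makes the prediction gap a usable signal.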

References

  • [1] Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." arXiv preprint arXiv:1704.01155 (2017).
  • [2] Carlini, Nicholas, and David Wagner. "Adversarial examples are not easily detected: Bypassing ten detection methods." Proceedings of the 10th ACM workshop on artificial intelligence and security. 2017.
  • [3] He, Warren, et al. "Adversarial example defense: Ensembles of weak defenses are not strong." 11th USENIX Workshop on Offensive Technologies (WOOT 17). 2017.
  • [4] Grosse, Kathrin, et al. "On the (statistical) detection of adversarial examples." arXiv preprint arXiv:1702.06280 (2017).