Slides for the ICML 2018 paper reading session. Overview of NLP / Adversarial Attacks:
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- Synthesizing Robust Adversarial Examples
- Black-box Adversarial Attacks with Limited Queries and Information