Towards Evaluating the Robustness of Neural Networks – Nicholas Carlini, David Wagner (University of California, Berkeley) – Link (a minimal sketch of the L2 attack appears after this list)
Defense against Universal Adversarial Perturbations – Naveed Akhtar, Jian Liu, Ajmal Mian – Link
Local Gradients Smoothing: Defense against localized adversarial attacks – Muzammal Naseer, Salman H. Khan – Link
Sparse and Imperceivable Adversarial Attacks – Francesco Croce, Matthias Hein – Link
Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks – Jorg Wagner, Jan Mathias Kohler, Tobias Gindele, Leon Hetzel – Link
Interpreting Black Box Models via Hypothesis Testing – Collin Burns, Jesse Thomason, Wesley Tansey – Link
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks – Jan Svoboda, Jonathan Masci, Federico Monti, Michael M. Bronstein, Leonidas Guibas – Link
Adversarial Defense via Learning to Generate Diverse Attacks – Yunseok Jang, Tianchen Zhao, Seunghoon Hong, Honglak Lee – Link
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition – Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu – Link
There and Back Again: Revisiting Backpropagation Saliency Methods – Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi – Link
Adversarial camera stickers: A physical camera-based attack on deep learning systems – Juncheng B. Li, Frank R. Schmidt, J. Zico Kolter – Link
Are Labels Required for Improving Adversarial Robustness? – Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli – Link
Simple Black-box Adversarial Attacks – Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, Kilian Q. Weinberger – Link (a minimal sketch of the method appears after this list)
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks – Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami – Link (a minimal sketch of the training recipe appears after this list)
Practical No-box Adversarial Attacks against DNNs – Qizhang Li, Yiwen Guo, Hao Chen – Link
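
The Carlini-Wagner paper (first entry above) formulates its L2 attack as minimizing ||delta||_2^2 + c * f(x + delta), with a tanh change of variables keeping the adversarial image in [0, 1]. Below is a minimal PyTorch sketch of the untargeted formulation; `model` (assumed to return logits), the constant `c`, the margin `kappa`, and the optimizer settings are illustrative assumptions, and the paper additionally binary-searches over `c` rather than fixing it.

```python
# Minimal sketch of the Carlini-Wagner L2 attack (untargeted), assuming
# `model` returns logits and inputs live in [0, 1]. Hyperparameters are
# illustrative, not the paper's tuned values.
import torch

def cw_l2(model, x, label, c=1.0, kappa=0.0, steps=200, lr=0.01):
    # w parameterizes the adversarial image: x_adv = 0.5 * (tanh(w) + 1)
    w = torch.atanh((2 * x - 1).clamp(-0.999999, 0.999999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        true_logit = logits.gather(1, label.unsqueeze(1)).squeeze(1)
        # Largest logit among the wrong classes.
        others = logits.clone()
        others.scatter_(1, label.unsqueeze(1), float('-inf'))
        best_other = others.max(dim=1).values
        # Untargeted margin loss: push the true logit below the best other one.
        f = torch.clamp(true_logit - best_other, min=-kappa)
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f
        optimizer.zero_grad()
        loss.sum().backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```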
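The Simple Black-box Adversarial Attacks entry (SimBA, Guo et al.) greedily perturbs one random basis direction at a time, keeping a +/- eps step only if it lowers the model's probability for the true class. The sketch below uses the pixel basis and NumPy; `predict_prob` (returning the true-class probability) and the step size `eps` are illustrative placeholders, and the paper also evaluates a DCT basis not shown here.

```python
# Minimal sketch of SimBA in the pixel basis, assuming a query function
# `predict_prob(x)` that returns the model's probability for x's true class.
import numpy as np

def simba(x, predict_prob, eps=0.2, max_iters=1000, seed=None):
    rng = np.random.default_rng(seed)
    x_adv = x.astype(np.float64).copy()
    p_best = predict_prob(x_adv)
    # Visit coordinates in a random order without replacement, as in the paper.
    coords = rng.permutation(x_adv.size)
    for i in coords[:max_iters]:
        for sign in (+1.0, -1.0):
            candidate = x_adv.copy()
            candidate.flat[i] = np.clip(candidate.flat[i] + sign * eps, 0.0, 1.0)
            p = predict_prob(candidate)
            if p < p_best:  # the step helped: keep it and skip the other sign
                x_adv, p_best = candidate, p
                break
    return x_adv
```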
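The defensive distillation entry (Papernot et al.) trains a teacher with a temperature-T softmax, then trains a student of the same architecture on the teacher's softened probabilities at the same T; at deployment the student runs at temperature 1. A minimal PyTorch sketch of that two-stage recipe follows; `make_model`, the data loader, T, and the optimizer settings are assumptions for illustration.

```python
# Minimal sketch of defensive distillation, assuming `make_model()` builds a
# fresh classifier returning logits and `train_loader` yields (x, y) batches.
import torch
import torch.nn.functional as F

def distill(make_model, train_loader, T=20.0, epochs=10, lr=1e-3, device="cpu"):
    teacher, student = make_model().to(device), make_model().to(device)
    # Stage 1: train the teacher on hard labels with a temperature-T softmax.
    opt = torch.optim.Adam(teacher.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(teacher(x) / T, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Stage 2: train the student to match the teacher's soft labels at T.
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in train_loader:
            x = x.to(device)
            with torch.no_grad():
                soft = F.softmax(teacher(x) / T, dim=1)
            loss = -(soft * F.log_softmax(student(x) / T, dim=1)).sum(1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student  # evaluated at temperature 1 at deployment
```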