
Deeply Supervised Discriminative Learning for Adversarial Defense.

Accepted version





Khan, Salman H 
Hayat, Munawar 
Goecke, Roland 
Shen, Jianbing 


Abstract

Deep neural networks can easily be fooled by an adversary with minuscule perturbations added to an input image. Existing defense techniques suffer greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such vulnerabilities is the close proximity of different class samples in the learned feature space of deep models. This allows the model's decisions to be completely changed by adding an imperceptible perturbation to the inputs. To counter this, we propose to disentangle the intermediate feature representations of deep networks class-wise, specifically forcing the features of each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the-art defenses.
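The abstract's core idea is a separation constraint on intermediate features: pull each sample's features toward its own class region while pushing them away from other classes' regions. The following is a minimal, stdlib-only sketch of that idea as a hinge-style margin loss over class prototypes; it is an illustration of the general principle, not the paper's exact formulation, and the function names, prototype representation, and margin value are all hypothetical.

```python
import math


def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def separation_loss(feature, label, prototypes, margin=1.0):
    """Illustrative margin loss in the spirit of the abstract.

    `pull` draws the feature toward its own class prototype;
    `push` penalizes the feature for being closer than `margin`
    to any other class prototype, encouraging distant, distinct
    class regions in feature space. (Hypothetical sketch, not
    the paper's actual objective.)
    """
    pull = euclidean(feature, prototypes[label])
    push = sum(
        max(0.0, margin - euclidean(feature, p))
        for c, p in enumerate(prototypes)
        if c != label
    )
    return pull + push
```

For example, a feature at `[0.5, 0.0]` with class-0 prototype `[0.0, 0.0]` and class-1 prototype `[10.0, 0.0]` incurs only the pull term (0.5), since it is already farther than the margin from the other class. In practice such a term would be applied at multiple intermediate layers (deep supervision) alongside the usual classification loss.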



Keywords

Robustness, Perturbation methods, Training, Linear programming, Optimization, Marine vehicles, Prototypes, Adversarial defense, adversarial robustness, white-box attack, distance metric learning, deep supervision

Journal Title

IEEE Trans Pattern Anal Mach Intell


Publisher

Institute of Electrical and Electronics Engineers (IEEE)


All rights reserved