Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability, and Security (QRS '20),
IEEE Computer Society Press, Los Alamitos, CA, pp. 406-413 (2020)

PEACEPACT: Prioritizing Examples to Accelerate
Perturbation-Based Adversary Generation for DNN Classification Testing

Zijie Li 1,2, Long Zhang 1,2, Jun Yan 3, Jian Zhang 1,2, Zhenyu Zhang 4, and T.H. Tse 5


 ABSTRACT

Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and try many of them iteratively, but this process is very tedious and time-consuming. In this paper, we propose a technique to select examples for adversary generation more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can estimate the probability of generating adversaries from each example and prioritize all the examples accordingly. We have conducted an empirical study using two DNN models on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with their distinguishability. The effectiveness of our technique is demonstrated through 79.67% to 99.68% improvements in the F-measure.
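The sketch below illustrates the general idea described in the abstract, not the authors' PEACEPACT implementation: rank existing examples by a class-distinguishability proxy (here, the gap between the top-two predicted class probabilities, an assumed heuristic) and spend the perturbation budget on the least distinguishable examples first. The names predict_proba, margin_score, and prioritize_and_perturb, as well as the noise model and parameters, are illustrative assumptions.

import numpy as np

def margin_score(probs: np.ndarray) -> np.ndarray:
    """Distinguishability proxy: gap between the top-1 and top-2 class
    probabilities.  A small gap suggests the example lies near a decision
    boundary and is more likely to yield an adversary under perturbation."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def prioritize_and_perturb(predict_proba, examples, noise_scale=0.05,
                           tries_per_example=20, rng=None):
    """Try random-noise perturbations on examples in ascending order of
    margin; return the adversaries found as (original index, perturbed input)."""
    rng = rng or np.random.default_rng(0)
    probs = predict_proba(examples)            # shape: (n_examples, n_classes)
    labels = probs.argmax(axis=1)              # model's original predictions
    order = np.argsort(margin_score(probs))    # least distinguishable first
    adversaries = []
    for idx in order:
        for _ in range(tries_per_example):
            perturbed = examples[idx] + rng.normal(
                0.0, noise_scale, size=examples[idx].shape)
            # An adversary is found when the prediction flips.
            if predict_proba(perturbed[None, ...]).argmax(axis=1)[0] != labels[idx]:
                adversaries.append((idx, perturbed))
                break                          # stop after one adversary per example
    return adversaries

Under this prioritization, examples with small margins are perturbed first, so a fixed testing budget tends to produce adversaries sooner than iterating over the examples in arbitrary order.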

1. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China.
2. University of Chinese Academy of Sciences, Beijing 100039, China.
3. Technology Center of Software Engineering, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China.
4. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China. (Corresponding author.)
5. Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong.
