In this paper, a novel black-box adversarial computer vision attack is proposed. The introduced attack is based on removing from images some components described by their Tchebichef discrete orthogonal moments, rather than perturbing them. The contribution of this work is the addition of one more clue supporting the critical hypothesis that computer vision systems fail because they base their decisions not only on robust features but also on non-robust ones. In this context, non-robust image features described in terms of Tchebichef moments are excluded from the original images, and the approximate reconstructions are used as adversarial examples to attack some popular deep learning models. The experiments justify the effectiveness of the proposed adversarial attack in terms of imperceptibility and the recognition error rate of the deep learning classifiers. It is worth noting that the top-1 accuracy of the attacked models was degraded by between 9.48% and 70.89% for adversarial images with PSNR values ranging from 65 dB down to 57 dB. The corresponding degradation of the models' top-5 accuracy was between 6.9% and 55.14% for images of the same quality. Moreover, the proposed attack appears stronger in most cases than the traditionally applied Fast Gradient Sign Method (FGSM). These results reveal that the proposed attack is able to exploit the vulnerability of deep learning models, degrading their generalization abilities.
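The core operation described above — computing an image's Tchebichef moments, discarding a subset of components, and reconstructing the approximation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the orthonormal basis is obtained numerically via QR factorization of a Vandermonde matrix rather than the three-term recurrence typically used in practice, and the rule for which moments to drop (all moments above a cutoff total order `p + q`) is an assumption, since the paper's criterion for selecting non-robust components is not stated here.

```python
import numpy as np

def tchebichef_basis(N: int) -> np.ndarray:
    # Orthonormal discrete Chebyshev (Tchebichef) polynomials sampled on
    # x = 0..N-1, obtained by QR-orthonormalising the monomial basis.
    # Householder QR keeps the columns orthonormal even though the
    # Vandermonde matrix itself is badly conditioned for large N.
    x = np.arange(N, dtype=float)
    V = np.vander(x, N, increasing=True)   # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                 # orthonormal columns, shape (N, N)
    return Q

def dome_t_reconstruct(img: np.ndarray, keep_order: int) -> np.ndarray:
    # Project a square grayscale image onto the separable 2-D Tchebichef
    # basis, zero every moment T_pq with p + q > keep_order (assumed
    # selection rule), and reconstruct the approximation.
    N = img.shape[0]
    T = tchebichef_basis(N)
    M = T.T @ img @ T                      # full moment matrix T_pq
    p, q = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    M[p + q > keep_order] = 0.0            # exclude high-order components
    return T @ M @ T.T

def psnr(orig: np.ndarray, recon: np.ndarray, peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio in dB, used to quantify imperceptibility.
    mse = np.mean((orig - recon) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the basis is orthogonal, keeping every moment reproduces the image exactly (up to floating-point error), while lowering `keep_order` trades reconstruction fidelity (PSNR) against how much of the image content is removed.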


T. Maliamanis and G.A. Papakostas, “DOME-T: Adversarial computer vision attack on deep learning models based on Tchebichef image moments,” The 13th International Conference on Machine Vision (ICMV 2020), November 02-06, 2020, Rome, Italy.