Abstract: Quality assessment is an essential component of image and video processing tasks such as super-resolution, compression, and content generation. Most modern quality metrics are learning-based and, although they correlate better with subjective scores than traditional methods, they are vulnerable to adversarial attacks. No-reference image-quality assessment (NR-IQA) metrics are gaining popularity because they can assess images and videos without any additional information. This paper presents the results of conducting various adversarial attacks on different NR-IQA metrics and introduces an Image Robustness to Adversarial Attacks model that estimates an image's vulnerability to such attacks. Our analysis of adversarial attacks on NR-IQA metrics revealed an image class that is robust to these attacks and, conversely, an image class that is vulnerable to most of them. We analyzed several image datasets and found that distortions such as denoising and various types of blur reduce an image's robustness to adversarial attacks.