Stable VMAF: investigating VMAF's vulnerabilities to adversarial attacks
Research article
The article is published in a highly ranked journal.
Citation information for this article was obtained from Scopus.
The article is published in a journal indexed in Web of Science and/or Scopus.
Date of the last search for this article in external sources: October 1, 2025.
Abstract: In recent years, Video Multimethod Assessment Fusion (VMAF) has become a prominent metric thanks to its high correlation with subjective video-quality assessments, making it preferable for evaluating video codecs and video-processing algorithms. Like many machine-learning-based metrics, however, it is susceptible to adversarial attacks, which can manipulate scores while preserving or even degrading visual quality. This paper investigates VMAF's vulnerabilities to such attacks and proposes a novel, stable modification to enhance its robustness. We propose two adversarial attacks: an evolutionary-based attack, which achieves an average VMAF gain of 9.27 at a processing speed of 21.116 FPS, and a distillation-based neural attack, yielding a 6.86 average VMAF gain at 7.016 FPS. Using these methods, we created a dataset for pseudo-adversarial training of our stable-VMAF modification, which incorporates additional features and a multilayer perceptron for better score prediction. Extensive experiments demonstrate that our approach improves correlation with subjective quality scores by up to 5%, while also showing statistically significant robustness gains over both the original VMAF and the VMAF NEG variant. These results highlight the practical effectiveness and resilience of our proposed metric in adversarial settings.
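The distillation-based attack mentioned above can be illustrated in outline: distill the (non-differentiable) metric into a differentiable proxy, then gradient-ascend a bounded perturbation that raises the proxy's predicted score. The sketch below is hypothetical and not the authors' implementation; it uses a trivial linear model as a stand-in for the distilled network, and all names and parameters are illustrative.

```python
# Minimal sketch of a distillation-style score-boosting attack.
# ASSUMPTION: `proxy_score` stands in for a distilled, differentiable
# approximation of VMAF; a real attack would train a small CNN on
# (frame, VMAF score) pairs and backpropagate through it.
import numpy as np

rng = np.random.default_rng(0)

def proxy_score(frame, w):
    # Linear "proxy metric": dot product of pixels with learned weights.
    return float(frame.ravel() @ w)

def attack(frame, w, eps=2.0 / 255, steps=10, lr=1.0 / 255):
    # Iterative FGSM-style ascent under an L-infinity budget `eps`:
    # step in the sign of the gradient, clamp the perturbation.
    delta = np.zeros_like(frame)
    for _ in range(steps):
        # For a linear proxy, the gradient w.r.t. the frame is just w.
        grad = w.reshape(frame.shape)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    # Keep the attacked frame in the valid pixel range.
    return np.clip(frame + delta, 0.0, 1.0)

frame = rng.random((8, 8))          # toy 8x8 grayscale frame in [0, 1]
w = rng.standard_normal(64)         # toy proxy weights
adv = attack(frame, w)
```

An evolutionary variant would replace the gradient step with mutation and selection over candidate perturbations scored by the black-box metric itself, which is why it needs no differentiable proxy.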