Abstract: This paper addresses adversarial augmentation of an image dataset as a means of improving the robustness of neural-network classifiers to adversarial attacks. Neural-network methods have advanced considerably in recent years and show impressive results, yet they remain vulnerable to so-called adversarial attacks: they make incorrect predictions on inputs produced by overlaying carefully crafted noise on an image. For this reason, the reliability of neural-network methods remains an active area of study. The paper compares two ways of enlarging the training data: adversarial augmentation and simply adding new images to the training set. In particular, the results suggest that a neural network trained on 50% of the training images with adversarial augmentation achieves higher accuracy (89.14%) on test data under an adversarial attack than a similar network trained on 100% of the training images without adversarial augmentation (87.53%).
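The "carefully crafted noise" referred to above is typically produced by a gradient-based attack such as the Fast Gradient Sign Method (FGSM); the abstract does not name the specific attack used, so the following is only an illustrative sketch for a hypothetical logistic-regression classifier, in plain NumPy:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, epsilon):
    """One-step FGSM perturbation of input x for a logistic-regression
    classifier p = sigmoid(w @ x + b). This is a generic illustration,
    not the attack used in the paper (which is unspecified here)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))       # predicted probability of class 1
    grad_x = (p - y) * w               # gradient of cross-entropy loss w.r.t. x
    # Step in the sign of the gradient to increase the loss,
    # then clip back into the valid pixel range [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Toy example: a 4-"pixel" image with random weights
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.uniform(size=4)
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, epsilon=0.05)
```

Adversarial augmentation in this sense means adding such perturbed copies of training images (with their original labels) to the training set, so the network learns to classify them correctly.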