Abstract: Neural networks are known to be vulnerable to adversarial attacks: images with carefully crafted adversarial perturbations that are imperceptible to the human eye. In medical imaging tasks this poses a major threat to predictions made by deep neural network solutions. In this paper we propose a pipeline for augmenting a histological image dataset with weak adversarial attacks and demonstrate an increase in accuracy on the clean test set for a neural classifier trained on the augmented dataset. When trained on the clean training set, the neural network achieves an accuracy of 90.65% on the clean test set; when trained with the proposed augmentation method, it achieves 93.56% on the same test set.
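The abstract does not specify which attack is used to generate the weak perturbations; a minimal sketch of the general idea, assuming a single small-epsilon FGSM step and a toy model (the network, epsilon, and data shapes below are illustrative assumptions, not the paper's actual pipeline), might look like:

```python
# Sketch: augmenting a training set with weakly adversarial copies of the
# images via one FGSM step. All hyperparameters here are placeholder values.
import torch
import torch.nn as nn

def fgsm_augment(model, images, labels, epsilon=0.01):
    """Return copies of `images` perturbed by one small FGSM step."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to [0, 1]
    # so the result is still a valid image.
    adv = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

torch.manual_seed(0)
# Toy linear classifier over 3x32x32 patches with 2 classes (assumed sizes).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 2, (4,))
adv = fgsm_augment(model, x, y, epsilon=0.01)

# The augmented training set is the clean images plus their adversarial
# copies, with the original labels kept for both.
x_aug = torch.cat([x, adv])
y_aug = torch.cat([y, y])
print(x_aug.shape)
```

Because each perturbation is bounded by epsilon, the adversarial copies stay visually close to the originals, which is what makes them usable as extra labeled training data rather than as attacks.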