An ASR (Automatic Speech Recognition) adversarial attack repository.
Updated Nov 7, 2023 · Jupyter Notebook
Vanilla training and adversarial training in PyTorch
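The difference between vanilla and adversarial training can be sketched in a few lines. Below is a minimal NumPy illustration (a toy logistic-regression model stands in for a real PyTorch network; all function names and hyperparameters here are invented for the example, not taken from the repository) of adversarial training in the Madry style, where each SGD step is taken on an input perturbed by one FGSM step:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=100):
    # Logistic regression trained on adversarially perturbed inputs:
    # the inner maximization is approximated by one FGSM step against
    # the current parameters, the outer minimization by plain SGD.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Inner step: craft a one-step FGSM example against w, b.
            p = sigmoid(np.dot(w, xi) + b)
            x_adv = xi + eps * np.sign((p - yi) * w)
            # Outer step: SGD update on the adversarial example.
            p_adv = sigmoid(np.dot(w, x_adv) + b)
            w = w - lr * (p_adv - yi) * x_adv
            b = b - lr * (p_adv - yi)
    return w, b
```

Vanilla training is the same loop with `x_adv` replaced by the clean input `xi`.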
This work enhances the robustness of target classifier models against adversarial attacks by employing a convolutional autoencoder that counters adversarial perturbations introduced to the input images.
Evaluating CNN robustness against adversarial attacks, including FGSM and PGD.
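FGSM, one of the attacks named above, perturbs an input by a single step of size eps along the sign of the input-gradient of the loss. A minimal NumPy sketch on a toy logistic-regression model (standing in for a CNN; the names here are illustrative and not from any listed repository):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y, w, b):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # for logistic regression: dL/dx = (p - y) * w.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps=0.1):
    # Fast Gradient Sign Method: one step of size eps along the
    # sign of the input gradient, which increases the loss.
    return x + eps * np.sign(input_gradient(x, y, w, b))
```

Because the perturbation is eps times a sign vector, it is automatically bounded by eps in the L-infinity norm.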
TensorFlow implementation of the PGD attack on a model trained on the CIFAR-10 dataset; the FID between original and generated images is also computed.
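PGD is essentially iterated FGSM: repeated small gradient-sign steps, each followed by projection back onto the L-infinity eps-ball around the original input. A hedged NumPy sketch on a toy logistic-regression model (the TensorFlow/CIFAR-10 specifics of the repository above are not reproduced; names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y, w, b):
    # dL/dx of the cross-entropy loss for logistic regression.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def pgd(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    # Projected Gradient Descent: iterated gradient-sign steps of
    # size alpha, each clipped back into the eps-ball around x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_gradient(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv
```

With step size alpha smaller than eps, the projection (the `np.clip` line) is what keeps the accumulated perturbation within the attack budget.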
A classical or convolutional neural network model with adversarial defense protection
Adversarial network attacks (PGD, pixel, FGSM) with noise on the MNIST image dataset, using Python (PyTorch)
Implementations for several white-box and black-box attacks.
A classical-quantum or hybrid neural network with adversarial defense protection
Adversarial defense by retrieval-based methods
"Neural Computing and Applications" Published Paper (2023)
Developed robust image classification models to mitigate the effects of adversarial attacks