
Tutorial on Adversarial Attacks, created as the final assignment of the course Deep Learning at the Hertie School


dinahrabe/tutorial_adversarial_attacks


Adversarial Machine Learning Tutorial

This tutorial provides users with a fundamental understanding of Adversarial Machine Learning (AML): what it is, why it is dangerous, and how its risks can be mitigated. It combines mathematical and theoretical explanations with hands-on coding examples. Two attacks and their countermeasures are implemented, namely an evasion attack and a membership inference attack. The user is introduced not only to the intricacies of these two attacks but also to the risks of adversarial attacks in general. Machine learning systems in every domain can be subject to adversarial attacks, making the topic paramount for practitioners and policy-makers alike.
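To give a flavour of the evasion attack covered in the tutorial, the sketch below implements the Fast Gradient Sign Method (FGSM), a classic evasion technique, against a toy logistic-regression classifier. This is a minimal illustration, not code from the notebook; the weights, inputs, and epsilon value are assumptions chosen so the effect is visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """FGSM evasion attack on a logistic-regression classifier.

    The gradient of the binary cross-entropy loss with respect to the
    input x is (p - y) * w, where p is the predicted probability of
    class 1. FGSM takes one epsilon-sized step in the sign direction
    of that gradient to push x across the decision boundary.
    """
    p = sigmoid(np.dot(w, x) + b)       # model's current prediction
    grad_x = (p - y) * w                # gradient of the loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy classifier and a point it correctly assigns to class 1
# (all values below are illustrative assumptions).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                # w.x + b = 1.5, so p > 0.5

x_adv = fgsm_perturb(x, w, b, y=1.0, epsilon=0.9)
# The small, structured perturbation flips the predicted class to 0.
```

Neural-network versions of the attack work the same way, except the input gradient is obtained by backpropagation rather than the closed form used here.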

In addition to the tutorial itself, we also provide a presentation that takes a more conceptual angle on introducing Adversarial Machine Learning, a video walking you through the notebook, and a link to open the file directly in Google Colab:

Authors:

Open In Colab

Note: This project was created in the context of the course Deep Learning at the Hertie School, Berlin. The repository was developed in a GitHub Classroom environment and had to be relocated to my personal GitHub account, so the commit history was lost.
