Wednesday, August 21

4:00 pm - 4:40 pm

Defending Deep Learning from Adversarial Attacks

Hard Coded Stage


About the event

Adversarial examples pose an asymmetric challenge in AI: an attacker needs to find only one effective perturbation, while a defender must anticipate them all. AI developers need tools that help them defend deep neural networks against adversarial attacks and that allow rapid crafting and analysis of attack and defense methods for machine learning models. In this talk we will show how the open-source Adversarial Robustness Toolbox (ART) implements many state-of-the-art methods for attacking and defending classifiers. For AI developers, the library provides interfaces that support composing comprehensive defense systems from individual methods used as building blocks. Using a Jupyter notebook, we will also show how to integrate attack methods from ART into a model training pipeline on Fabric for Deep Learning (FfDL). The notebook trains a CNN model on the Fashion-MNIST data set, and the generated adversarial samples are used to evaluate the robustness of the trained model.
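
As a taste of the robustness evaluation the notebook walks through, here is a minimal sketch using ART's Python API: train a small Keras CNN on Fashion-MNIST, craft adversarial test images with the Fast Gradient Method, and compare clean versus adversarial accuracy. The model architecture and eps value are illustrative choices, and the module paths follow recent ART releases (pip install adversarial-robustness-toolbox); older versions organize the packages differently.

```python
import numpy as np
import tensorflow as tf

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import TensorFlowV2Classifier

# Load Fashion-MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0
y_train_oh = tf.keras.utils.to_categorical(y_train, 10)

# Train a small CNN with plain Keras (illustrative architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train_oh, epochs=3, batch_size=128)

# Wrap the trained model so ART attacks can query its predictions and gradients.
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    clip_values=(0.0, 1.0),
)

# Craft adversarial versions of the test set with the Fast Gradient Method
# (eps bounds the per-pixel perturbation on the [0, 1] scale).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

# The accuracy drop from clean to adversarial inputs measures robustness.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"Accuracy on clean test images:       {clean_acc:.3f}")
print(f"Accuracy on adversarial test images: {adv_acc:.3f}")
```

From here, ART's defense components (for example, art.defences.trainer.AdversarialTrainer) can feed samples like x_test_adv back into training, which is the building-block style of composing defenses the talk describes.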

Speakers

Svetlana Levitan

A Developer Advocate with the IBM Center for Open Source Data and AI Technologies (CODAIT), Svetlana has been a software engineer, architect, and technical lead for SPSS Analytic components for many years. She represents IBM at the Data Mining Group and is the release manager for PMML and PFA, open standards for predictive model deployment. She also works with other companies on ONNX, an open model exchange format for deep learning models. Svetlana co-organizes several Chicagoland Meetup groups, including Big Data Developers in Chicago and Chicago Cloud Developers. She has authored several blog posts and presented at many conferences and other events. Svetlana loves to learn new technologies, share her expertise, and encourage girls and women in STEM.