Interactive Methods for Improving Robustness of Neural Networks Against Adversarial Attacks
Presentation posted on 2020-12-07, 14:10, authored by Andrew McCarthy
Neural-network-based machine learning systems are improving the efficiency of real-world tasks including speech recognition, network intrusion detection, and autonomous vehicles. For example, network intrusion detection systems are well suited to machine learning, giving highly accurate classification. However, nefarious actors, ranging from lone hackers to advanced persistent threats, seek to fool classifiers by influencing a model's output. Unfortunately, even well-trained neural network models may be fooled by gradient-descent attacks that algorithmically produce perturbed inputs known as adversarial examples.
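The gradient-descent attack mentioned above can be illustrated with a minimal sketch in the style of the fast gradient sign method: perturb the input in the direction of the sign of the loss gradient. The toy logistic-regression weights, input values, and epsilon below are illustrative assumptions, not values from the presentation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Shift each feature of x by eps in the sign of the loss gradient."""
    p = predict(w, b, x)
    # For logistic regression with cross-entropy loss,
    # d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -3.0], 0.5     # assumed "trained" toy model
x, y = [1.0, 0.4], 1        # input correctly classified as class 1
x_adv = fgsm(w, b, x, y, eps=0.3)

print(predict(w, b, x))      # above 0.5: classified as class 1
print(predict(w, b, x_adv))  # below 0.5: small perturbation flips the label
```

A perturbation of only 0.3 per feature is enough to flip this toy classifier's decision, which is the essence of an adversarial example.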
Bad actors wish to fool classifiers across application domains including image recognition, speech recognition, and network intrusion detection. Humans and computers perceive the same data in different ways. Humans generally overlook minor differences in data, such as small changes in pixel intensity and colour. People easily overlook the visual difference between the colour codes rgb(255,0,255) and rgb(254,0,254), whereas the numeric difference is strongly evident to computer algorithms, even within large quantities of data. Adversarial examples exploit this difference. Humans have difficulty detecting anything improper in a successful attack because the perturbations are so small.
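The colour-code example above can be made concrete with a few lines of arithmetic: the two colours differ by one unit in two channels, which is under 0.4% of the channel range once normalised, yet the difference is exactly representable and trivially detectable by a machine.

```python
# Per-channel difference between the two colours from the text.
c1 = (255, 0, 255)
c2 = (254, 0, 254)

diffs = [abs(a - b) for a, b in zip(c1, c2)]
max_norm_diff = max(diffs) / 255.0  # normalised to the [0, 1] range

print(diffs)           # [1, 0, 1]
print(max_norm_diff)   # ~0.0039: imperceptible to humans, exact to machines
```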
Consequently, successful attacks against neural networks mean systems are vulnerable and therefore dangerously deployed in application domains. For example, incorrect classifications of road signs in autonomous vehicles could have dire consequences. Moreover, the increasing size of data being processed by neural networks enlarges the attack surface available to attackers whilst obfuscating the attack to humans. If unaddressed, maturing attack methods will facilitate more destructive attacks. I therefore address the urgent research need in this area. My research explores the robustness of neural networks, aiming to understand the principles behind successful attacks and to consider mitigations in the key domains of network intrusion detection and image and speech recognition. I am designing tools to aid visualization of weak points in training datasets and neural network models, to unearth attacks, and to discover ways to improve the robustness of neural network models whilst retaining acceptable classification accuracy. Improving the robustness of neural networks enables safe deployment across a wider range of domains.