International Congress on Big Data

Dr. Murat Kantarcıoğlu

Dr. Murat Kantarcioglu is a Professor in the Computer Science Department and Director of the Data Security and Privacy Lab at The University of Texas at Dallas (UTD). He received his PhD in Computer Science from Purdue University in 2005, where he received the Purdue CERIAS Diamond Award for academic excellence. He has also been a visiting scholar at the Harvard Data Privacy Lab since 2013. Dr. Kantarcioglu’s research focuses on the integration of cyber security, data science, and blockchains, creating technologies that can process and share data efficiently and securely.

His research has been supported by grants from agencies including the NSF, AFOSR, ARO, ONR, NSA, and NIH. He has published over 170 peer-reviewed papers in top-tier venues such as ACM KDD, SIGMOD, IEEE ICDM, ICDE, PVLDB, NDSS, and USENIX Security, as well as several IEEE/ACM Transactions, and has served as program chair for conferences such as ACM SACMAT. His research has been covered by media outlets such as the Boston Globe, ABC News, PBS/KERA, and DFW television, and has received multiple best paper awards.

He is the recipient of various awards, including the NSF CAREER Award, the 2014 AMIA (American Medical Informatics Association) Homer R. Warner Award, and the 2017 IEEE ISI (Intelligence and Security Informatics) Technical Achievement Award, presented jointly by the IEEE SMC and IEEE ITS societies, for his research in data security and privacy. He is also an ACM Distinguished Scientist.

Details of his work can be found at http://www.utdallas.edu/~muratk/.

Abstract of the Presentation:

Adversarial Machine Learning: A Game-Theoretic Approach

Many real-world applications, ranging from spam filtering to intrusion detection, face malicious adversaries who actively transform the objects under their control to avoid detection. Machine learning (ML) techniques are highly useful tools for cyber defense, since they play an important role in distinguishing legitimate activity from destructive activity. Unfortunately, traditional ML techniques cannot handle such adversarial problems directly: the adversaries adapt to the ML model’s responses, and models trained on a fixed dataset degrade quickly. Our proposed adversarial ML framework addresses the challenges posed by such malicious adversaries.
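As a concrete illustration of this degradation, the following toy Python sketch (synthetic data, not the speaker's framework) trains a standard classifier on clean data and shows its detection rate collapsing once an adversary shifts malicious feature vectors toward the benign region:

```python
# Toy illustration: a detector fit on clean training data degrades once an
# adversary shifts malicious points at test time. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "benign" (label 0) and "malicious" (label 1) samples.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
malicious = rng.normal(loc=3.0, scale=1.0, size=(500, 2))
X_train = np.vstack([benign, malicious])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)

# Fresh malicious test data: the adversary moves its points toward the
# benign region (a simple evasion strategy) before they reach the detector.
test_malicious = rng.normal(loc=3.0, scale=1.0, size=(500, 2))
evasive_malicious = test_malicious - 2.0  # adversarial feature shift

print("detection rate on unmodified malicious traffic:",
      clf.score(test_malicious, np.ones(500)))
print("detection rate on adversarially shifted traffic:",
      clf.score(evasive_malicious, np.ones(500)))
```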

In this talk, we discuss the theory, techniques, and applications of our proposed adversarial ML framework. We model adversarial ML applications as a Stackelberg game, with an emphasis on the sequential actions of the adversary and the ML model, allowing both parties to maximize their own utilities. We analyze the equilibrium behavior of both parties under this game-theoretic framework, which offers insight into the long-term effectiveness of an ML algorithm. Furthermore, we apply the equilibrium information to cost-sensitive attribute selection. We then derive optimal support vector machine models against an adversary whose attack strategy is defined under general and reasonable assumptions, and we investigate how the performance of the resulting optimal solutions changes under two different attack models. The empirical results suggest that our adversarial support vector machine algorithms are robust against various degrees of attack. Finally, we discuss our recent work on game-theory-inspired deep learning defenses against adversarial attacks.
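The sketch below illustrates the general idea of a learner that anticipates the adversary's best response; it is a simplified example, not the exact adversarial SVM formulation from the talk. The perturbation bound `delta` and the L-infinity attack model are assumptions: against a linear score w·x + b, an adversary who can perturb each malicious feature by at most `delta` lowers the score by at most delta·‖w‖₁, so the learner minimizes a worst-case (robust) hinge loss.

```python
# A minimal sketch of training a linear classifier that anticipates a bounded
# evasion attack (simplified; not the speaker's exact formulation).
# Assumption: the adversary perturbs malicious points (y = +1) by at most
# `delta` per feature, so their worst-case score drops by delta * ||w||_1.
import numpy as np

def train_robust_linear(X, y, delta=0.5, lr=0.01, lam=0.01, epochs=200, seed=0):
    """Subgradient descent on a robust hinge loss.

    y must be +1 (malicious, can be perturbed) or -1 (benign, fixed).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            score = X[i] @ w + b
            if y[i] == 1:
                # Worst-case margin after the adversary's best response.
                margin = score - delta * np.abs(w).sum()
            else:
                margin = -score
            grad_w = lam * w          # L2 regularization
            grad_b = 0.0
            if 1.0 - margin > 0.0:    # robust hinge is active
                if y[i] == 1:
                    grad_w += -X[i] + delta * np.sign(w)
                    grad_b += -1.0
                else:
                    grad_w += X[i]
                    grad_b += 1.0
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# Usage on synthetic data: malicious points (y = +1) sit away from benign
# ones, and the robust model keeps a safety margin against evasion.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(3.0, 1.0, (200, 2)),    # malicious
               rng.normal(0.0, 1.0, (200, 2))])   # benign
y = np.array([1] * 200 + [-1] * 200)
w, b = train_robust_linear(X, y, delta=0.5)
print("learned weights:", w, "bias:", b)
```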
