I am an assistant professor of Computer Science at Loyola University Chicago. I received a Ph.D. in Computer Science from the University of Central Florida (UCF) in 2020, and a Ph.D. in Electrical and Computer Engineering from INHA University (Incheon, Republic of Korea), also in 2020. I received a Master's degree in Information Technology (Artificial Intelligence) from the National University of Malaysia (Bangi, Malaysia) in 2013.

During my doctoral studies, I worked at the Security Analytics Research Lab with my advisor Prof. Mohaisen. At INHA, I worked at the Information Security Research Lab with my advisor Prof. Nyang. I am interested in AI/deep-learning-based information security, especially software and mobile/IoT security. I am also interested in machine learning-based applications and adversarial machine learning. I have published several peer-reviewed research papers in top-tier conferences and journals such as ACM CCS, PoPETS, IEEE ICDCS, and IEEE IoT-J.

Research Lab: Cybersecurity Lab and AI for Secure Computing Research Lab (AISeC).
Teaching (Spring 2022): COMP 487: Deep Learning and COMP 358: Big Data Analytics.
Office Hours: Fridays 09:00 - 11:00 AM (and by appointment).

RESEARCH

To improve our understanding of systems, and to guide security analytics toward secure systems design, I apply advances in deep machine learning to systems security. The emergence of such learning techniques opens various avenues for attribution in data-driven security analytics, and those techniques form the cornerstone of my current and future research interests. My contributions to date have focused on creating efficient and accurate methods for software security by building, customizing, optimizing, and leveraging deep machine learning techniques for security and privacy.

OPENINGS

I am always looking for motivated undergraduate and graduate students to work with me on exciting projects in areas such as machine learning for social good, adversarial machine learning, and information security and privacy.

Contact me for details.

RESEARCH PUBLICATIONS

This paper analyzes the reaction of social network users along several dimensions, including sentiment, topics, emotions, and the geo-temporal characteristics of our dataset. We show that the dominant sentiment reactions on social media are neutral, while the topics most discussed by social network users concern health issues. The paper examines the countries that attracted the most posts and reactions, as well as the distribution of health-related topics discussed in the most-mentioned countries, and sheds light on the temporal shift of topics across countries. Our results show that posts from the top-mentioned countries influence and attract more reactions worldwide than posts from other parts of the world.

This work proposes W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges, such as transfer learning and class imbalance. W-Net achieved an average classification accuracy of 97%. We also synthesized a dataset of new WBC image samples using a DCGAN, which we released to the public for education and research purposes.

This research utilizes various characterizing features, extracted from each sample using static and dynamic analysis, to build seven machine learning models to detect and analyze packed Windows malware. We use a large-scale dataset of over 107,000 samples covering unpacked malware and malware packed with ten different packers. We examined the performance of seven machine learning techniques using 50 dynamic and static features. Our results show that packed malware can circumvent detection when only a single analysis is performed, while applying both static and dynamic methods improves detection accuracy by around 2% to 3%.

Deep neural network (DNN) models are susceptible to malicious manipulation even in black-box settings. Providing explanations for DNN models offers a sense of security through human involvement, which can reveal whether a sample is benign or adversarial. However, interpretable deep learning systems (IDLSes) have been shown to be susceptible to adversarial manipulation in white-box settings, and attacking IDLSes in black-box settings is challenging and remains an open research problem. In this work, we propose a black-box version of the white-box AdvEdge approach against IDLSes that is query-efficient and gradient-free, requiring no knowledge of the target DNN model or its coupled interpreter. Our approach takes advantage of transfer-based and score-based techniques using an effective microbial genetic algorithm (MGA). We achieve a high attack success rate with a small number of queries and high similarity in interpretations between adversarial and benign samples.

The rapid pace of malware development and the widespread use of code obfuscation, polymorphism, and morphing techniques pose a considerable challenge to detecting and analyzing malware. Today, it is difficult for antivirus applications to detect morphing malware using traditional signature-based methods, and structural graph-based detection methods offer a promising way to address this challenge. In this work, we propose a method for detecting malware using graphs' spectral heat and wave signatures, which are efficient and both size- and permutation-invariant. We extracted heat and wave representations of sizes 250 and 1,000, and trained and tested them on eight machine learning classifiers. We used a dataset of 37,537 unpacked Windows malware executables and extracted the control flow graph (CFG) of each sample to obtain the spectral representations. Our experimental results show that, using heat and wave spectral graph signatures, the best malware analysis accuracy reached 95.9%.
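The core idea behind spectral signatures can be illustrated with a minimal sketch (a toy graph and NumPy only, not the paper's actual pipeline): a heat signature computed from the eigenvalues of a graph's normalized Laplacian depends only on the spectrum, so it is invariant to node relabeling.

```python
import numpy as np

def heat_signature(adj, times):
    """Heat trace sum_i exp(-lambda_i * t) of the normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.where(deg > 0, deg, 1.0)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    eigvals = np.linalg.eigvalsh(lap)
    return np.array([np.exp(-eigvals * t).sum() for t in times])

# Toy 4-node path graph standing in for a control flow graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
sig = heat_signature(A, times=[0.1, 1.0, 10.0])
```

Because the signature is a function of eigenvalues alone, permuting the node order of the adjacency matrix leaves it unchanged, which is the permutation-invariance property the abstract refers to.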

There is an increasing demand for comprehensive, in-depth analysis of the behaviors of various attacks, and of the possible defenses, for common deep learning models under several adversarial scenarios. In this study, we conducted four separate investigations. First, we examine the relationship between a model's complexity and its robustness against the studied attacks. Second, we examine the connection between the performance and diversity of models. Third, the first and second experiments are repeated across different datasets to explore the impact of the dataset on model performance. Fourth, model behavior is extensively investigated across the defense strategies.

In this article, we survey more than 140 recent behavioral biometric-based approaches for continuous user authentication, including motion-based methods (28 studies), gait-based methods (19 studies), keystroke dynamics-based methods (20 studies), touch gesture-based methods (29 studies), voice-based methods (16 studies), and multimodal-based methods (34 studies). The survey provides an overview of the current state-of-the-art approaches for continuous user authentication using behavioral biometrics captured by smartphones’ embedded sensors, including insights and open challenges for adoption, usability, and performance.

For high-dose-rate (HDR) prostate brachytherapy at our institution, both CT and MRI are acquired to identify catheters and delineate the prostate, respectively. We propose to build a deep-learning model that generates synthetic magnetic resonance imaging (sMRI) with enhanced soft-tissue contrast from computed tomography (CT) scans. sMRI would help physicians accurately delineate the prostate without needing to acquire an additional planning MRI. 58 paired post-implant CT and T2-weighted MRI sets, acquired on the same day for HDR prostate patients, were retrospectively curated to train and validate the conditional Generative Adversarial Network (cGAN) algorithm Pix2pix to generate sMRI.

In this paper, we investigate children's exposure to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children's videos to be toxic, highlighting the importance of monitoring comments, particularly on children's platforms.
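The ensemble idea above can be sketched in a few lines (the keyword-based base classifiers here are hypothetical stand-ins for the trained models, used only to show the voting mechanism): each base classifier labels a comment independently, and the ensemble returns the majority label.

```python
from collections import Counter

def ensemble_predict(classifiers, comment):
    """Majority vote over the labels returned by each base classifier."""
    votes = [clf(comment) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical keyword-based stand-ins for trained classifiers.
clf_a = lambda c: "toxic" if "hate" in c.lower() else "safe"
clf_b = lambda c: "toxic" if "stupid" in c.lower() else "safe"
clf_c = lambda c: "safe"

label = ensemble_predict([clf_a, clf_b, clf_c], "I hate this stupid video")
```

Majority voting is one of the simplest ways to combine classifiers; in practice, weighted or confidence-based combination schemes are also common.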

In this work, we provide a first look at the tasks performed by shell commands in Linux-based IoT malware, toward their detection. We analyze malicious shell commands found in IoT malware and build ShellCore, a neural network-based model for detecting malicious shell commands. Namely, we collected a large dataset of shell commands, including malicious commands extracted from 2,891 IoT malware samples and benign commands collected from real-world network traffic analysis and data volunteered by Linux users. Using conventional machine and deep learning-based approaches trained with term- and character-level features, ShellCore achieves an accuracy of more than 99% in detecting malicious shell commands and files (i.e., binaries).
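A minimal sketch of the character-level features such a model consumes (standard library only; the example command is hypothetical, and the term-level features and the trained network itself are omitted): a shell command is turned into counts of overlapping character n-grams.

```python
from collections import Counter

def char_ngrams(command, n=3):
    """Count overlapping character n-grams of a shell command string."""
    return Counter(command[i:i + n] for i in range(len(command) - n + 1))

# Hypothetical command of the kind seen in IoT malware droppers.
feats = char_ngrams("wget http://x/bin.sh; chmod 777 bin.sh")
```

These sparse counts would typically be vectorized (e.g., hashed or TF-IDF weighted) before training a classifier on them.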

This study proposes a malware detection system that is robust to adversarial attacks. We examine the performance of state-of-the-art methods against adversarial IoT software crafted using graph embedding and augmentation techniques; namely, we study the robustness of such methods against two black-box adversarial methods, GEA and SGEA, which generate Adversarial Examples (AEs) with reduced overhead while keeping them practical. Our comprehensive experimentation with GEA-based AEs shows a relation between misclassification and the graph size of the injected sample. Upon optimization, and with small perturbations generated by SGEA, all IoT malware samples are misclassified as benign, highlighting the vulnerability of current detection systems in adversarial settings. Given this landscape of possible adversarial attacks, we then propose DL-FHMC, a fine-grained hierarchical learning approach for malware detection and classification that is robust to AEs, with the capability to detect 88.52% of the malicious AEs.

This work proposes a deep learning-based approach for software authorship attribution that facilitates large-scale, format-independent, language-oblivious, and obfuscation-resilient software authorship identification. The proposed approach learns deep authorship attribution using a recurrent neural network and employs an ensemble random forest classifier for scalability to de-anonymize programmers.

Utilizing interpretation models enables a better understanding of how DNN models work, and offers a sense of security. However, interpretations are also vulnerable to malicious manipulation. We present AdvEdge and AdvEdge+, two attacks to mislead the target DNNs and deceive their combined interpretation models. We evaluate the proposed attacks against two DNN model architectures coupled with four representatives of different categories of interpretation models. The experimental results demonstrate our attacks’ effectiveness in deceiving the DNN models and their interpreters.

Machine Learning (ML) based Network Intrusion Detection Systems (NIDSs) operate on flow features obtained from flow-exporting protocols (e.g., NetFlow). Recent ML and Deep Learning (DL) based NIDS solutions assume that such flow information (e.g., average packet size) is obtained from all packets of the flow. In practice, however, the flow exporter is often deployed on commodity devices where packet sampling is inevitable. As a result, the applicability of such ML-based NIDS solutions in the presence of sampling (i.e., when flow information is obtained from a sampled set of packets instead of the full traffic) is an open question. In this study, we explore the impact of packet sampling on the performance and efficiency of ML-based NIDSs. Unlike previous work, our proposed evaluation procedure is immune to different settings of the flow export stage and hence provides a robust evaluation of NIDSs even in the presence of sampling. Through sampling experiments, we established that shorter malicious flows (i.e., with fewer packets) are likely to go unnoticed even at mild sampling rates such as 1/10 and 1/100. Next, using the proposed evaluation procedure, we investigated the impact of various sampling techniques on the NIDS detection rate and false alarm rate. Both rates were computed for three sampling rates (1/10, 1/100, 1/1000), four different sampling techniques, and three classifiers (two tree-based, one deep learning-based). Experimental results show that the systematic linear sampler SketchFlow performs better than non-linear samplers such as Sketch Guided and Fast Filtered sampling. We also found that the random forest classifier combined with SketchFlow sampling was the better combination, showing a higher detection rate and a lower false alarm rate across multiple sampling rates than other sampler-classifier combinations. Our results are consistent across sampling rates, with an exceptional case observed for Sketch Guided sampling.
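Why short flows vanish under sampling can be shown with a minimal sketch (synthetic flow sizes and deterministic 1-in-10 systematic sampling, not the study's datasets or samplers): a flow is "seen" by the exporter only if at least one of its packets is sampled, so flows with few packets are frequently missed entirely.

```python
def seen_flows(flow_sizes, rate=10):
    """Systematically sample every `rate`-th packet of the packet stream and
    report which flows retain at least one sampled packet."""
    seen, pos = set(), 0
    for flow_id, size in enumerate(flow_sizes):
        for _ in range(size):
            if pos % rate == 0:
                seen.add(flow_id)
            pos += 1
    return seen

# One long flow followed by ten short (3-packet) flows.
sizes = [200] + [3] * 10
visible = seen_flows(sizes, rate=10)
```

In this toy stream, the long flow always survives sampling, while most of the 3-packet flows disappear; this mirrors the observation that short malicious flows are likely to go unnoticed even at a 1/10 rate.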

In this article, we propose AUToSen, a deep-learning-based active authentication approach that exploits sensors in consumer-grade smartphones to authenticate a user. Unlike conventional approaches, AUToSen uses deep learning to identify a user's distinct behavior from the embedded sensors, with and without the user's interaction with the smartphone. We investigate different deep learning architectures for modeling and capturing users' behavioral patterns for the purpose of authentication. Moreover, we explore the amount of sensory data required to accurately authenticate users. We evaluate AUToSen on a real-world dataset that includes sensor data from 84 participants' smartphones, collected using our designed data-collection application.

In this paper, we systematically tackle the problem of adversarial example detection in control flow graph (CFG) based classifiers for malware detection using Soteria. Unique to Soteria, we use both density-based and level-based labels for CFG labeling to yield a consistent representation, a random walk-based traversal approach for feature extraction, and an n-gram-based module for feature representation. End-to-end, Soteria's representation ensures a simple yet powerful randomization property of the classification features, making it difficult even for a powerful adversary to launch a successful attack. Soteria also employs a deep learning approach, consisting of an auto-encoder for detecting adversarial examples and a CNN architecture for detecting and classifying malware samples.
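The random walk-based feature extraction can be sketched as follows (a toy CFG with made-up labels and a seeded walk, not Soteria's actual labeling scheme): walk the graph from a start node, record the visited labels, and slice the walk into n-grams.

```python
import random

def walk_ngrams(graph, labels, start, steps, n=2, seed=0):
    """Run a seeded random walk over `graph` and return n-grams of the
    visited node labels."""
    rng = random.Random(seed)
    node, walk = start, [labels[start]]
    for _ in range(steps):
        node = rng.choice(graph[node])   # follow a random outgoing edge
        walk.append(labels[node])
    return [tuple(walk[i:i + n]) for i in range(len(walk) - n + 1)]

cfg = {0: [1, 2], 1: [3], 2: [3], 3: [0]}          # toy control flow graph
labels = {0: "entry", 1: "call", 2: "jmp", 3: "ret"}  # hypothetical node labels
grams = walk_ngrams(cfg, labels, start=0, steps=5, n=2)
```

Because each run can follow different edges, the extracted n-grams vary across walks, which is the randomization property that makes the features hard for an adversary to anticipate.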

Most authorship identification schemes assume that code samples are written by a single author. However, real software projects are typically the result of a team effort, making it essential to consider fine-grained multi-author identification within a single code sample, which we address with Multi-χ. Multi-χ leverages a deep learning-based approach for multi-author identification in source code, is lightweight, uses a compact representation for efficiency, and does not require any code parsing, syntax tree extraction, or feature selection.

In this work, we explore the ecosystem of proxies by comparatively understanding their affinities and distributions. We compare residential and open proxies in various ways, including country-level and city-level analyses to highlight their geospatial distributions, similarities, and differences, and analyses against a large number of blacklists and the categories therein (e.g., spam and maliciousness) to understand their characteristics and attributes.

In this study, we investigate the correlation of different toxic behaviors, such as identity hate and obscenity, with different news topics. To do so, we collected a large-scale dataset of approximately 7.3 million comments and more than 10,000 news video captions. Utilizing deep learning-based techniques, we constructed an ensemble of classifiers, tested on a manually-labeled dataset, that achieves high label-prediction accuracy. Using this ensemble, we uncovered a large number of toxic comments on news videos across 15 topics obtained by applying Latent Dirichlet Allocation (LDA) to the captions of the news videos. Our analysis shows that religion- and crime-related news have the highest rates of toxic comments, while economy-related news has the lowest. We highlight the necessity of effective tools to address topic-driven toxicity impacting interactions and public discourse on the platform.

This work aims to model seven spatio-temporal behavioral characteristics of DDoS attacks, including the attack magnitude, the adversaries' botnet information, and the attack's source locality down to the organization level. We leverage four state-of-the-art deep learning methods to construct an ensemble of models that captures and predicts the behavioral patterns of the attack. The proposed ensemble operates at two frequencies, hourly and daily, to actively model and predict attack behavior and involvement, and to oversee the effect of implementing a defense mechanism.

In this study, we focus on understanding the shift in the behavior of users of Twitter, a major social media platform used by millions daily to share thoughts and discussions. In particular, we collected 26 million tweets over a period of seven months: three months before the pandemic outbreak and four months after. Using topic modeling and state-of-the-art deep learning techniques, we analyzed the trending topics within the tweets on a monthly basis, including their sentiment and users' perception. This study highlights the change in public behavior and concerns during the pandemic. Users expressed concerns about health services, with an increase of 59.24% in engagement, and about the economic effects of the pandemic (a 34.43% increase). Topics such as online shopping saw a remarkable increase in popularity, perhaps due to social distancing, while crime and sports topics witnessed a decrease.

This work proposes a convolutional neural network (CNN) based code authorship identification system. Our proposed system exploits term frequency-inverse document frequency, word embedding modeling, and feature learning techniques for code representation. This representation is then fed into a CNN-based code authorship identification model to identify the code’s author.

The purpose of this research is to introduce a language model adaptation approach that combines both categories of language model adaptation: data selection and weighting criteria. The approach applies data selection for task-specific translation by dividing the corpus into smaller, topic-related corpora through a clustering process. We investigate the effect of different approaches to clustering the bilingual data on the language model adaptation process, in terms of translation quality, using the Europarl WMT07 corpus, which includes bilingual data for English-Spanish, English-German, and English-French.

In this study, we investigate the robustness of such models against adversarial attacks. Our approach crafts adversarial IoT software using the Subgraph Embedding and Augmentation (SGEA) method, which reduces the embedded size required to cause misclassification. Intensive experiments are conducted to evaluate the performance of the proposed method. We observed that the SGEA approach is able to misclassify all IoT malware samples as benign by embedding an average of 6.8 nodes. This highlights that current detection systems are prone to adversarial example attacks, and that there is a need to build more robust systems to detect the manipulated features generated by adversarial examples.

This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset, obtained from The Catholic University of Korea, that includes 6,562 real images of the five WBC types. W-Net achieves an average accuracy of 97%.

This work proposes a Deep Learning-based Code Authorship Identification System (DL-CAIS) for code authorship attribution that facilitates large-scale, language-oblivious, and obfuscation-resilient code authorship identification. The deep learning architecture adopted in this work includes a TF-IDF-based deep representation using multiple Recurrent Neural Network (RNN) layers and fully-connected layers dedicated to authorship attribution learning. The deep representation then feeds into a random forest classifier for scalability to de-anonymize the author. Comprehensive experiments are conducted to evaluate DL-CAIS over the entire Google Code Jam (GCJ) dataset across all years (from 2008 to 2016) and over real-world code samples from 1,987 public repositories on GitHub.
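The first stage of such a pipeline can be sketched with a minimal, standard-library TF-IDF computation (the corpus below is hypothetical, and the RNN layers and random forest stages are omitted): each tokenized code sample becomes a dictionary of term weights that a downstream model can consume.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    n = len(docs)
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Hypothetical tokenized code snippets from different authors.
docs = [["for", "i", "in", "range"],
        ["while", "i", "less", "n"],
        ["for", "x", "in", "xs"]]
weights = tfidf(docs)
```

Terms that appear in many documents (like `i` here) receive lower weights than terms distinctive to one document, which is what makes TF-IDF useful for separating authors' stylistic vocabularies.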

RESEARCH PROJECTS

Robustness and Adversarial Machine Learning

Deep Neural Networks have achieved state-of-the-art performance in various applications. It is crucial to verify that the high-accuracy prediction for a given task is derived from the correct problem representation and not from the misuse of artifacts in the data. Investigating the security properties of machine learning models, recent studies have revealed various categories of adversarial attacks, such as model leakage, data membership inference, model confidence reduction, evasion, and poisoning attacks. We believe it is important to investigate and understand the implications of such attacks for sensitive applications in the field of information security and privacy.
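As a minimal illustration of the evasion category (a NumPy sketch against a toy logistic model, not any specific system studied in this project), the fast gradient sign method perturbs an input in the direction that increases the model's loss:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One FGSM step against a logistic model p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w               # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)   # move each feature by +/- eps

w = np.array([2.0, -1.0]); b = 0.0     # toy trained weights
x = np.array([1.0, 1.0]); y = 1.0      # input currently classified as class 1
x_adv = fgsm(x, w, b, y, eps=0.5)
```

Even this tiny perturbation lowers the model's confidence in the true class; against deep models, the same principle yields misclassified inputs that look unchanged to humans.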

DL-FHMC: Robust Classification
AdvEdge: AML against IDLSes
SGEA: AML against Malware Detectors
Soteria: Robust Malware Detectors
Black-box and Target-specific AML
Exploration of Black-box AML

Malware Analysis

Malware is one of the most serious computer security threats. To protect computers from infection, accurate detection of malware is essential. At the same time, malware detection faces two main practical challenges: the pace of malware development and distribution continues to increase, and malware employs complex methods to evade detection (such as metamorphism and polymorphism). This project utilizes various characterizing features, extracted from each sample using static and dynamic analysis, to build seven machine learning models to detect and analyze malware. We also investigate the robustness of these models against adversarial attacks.

MLxPack
DL-FHMC: Robust Malware Classification
SGEA: AML against Malware Detectors
Soteria: Robust Malware Detectors
Spectral Representations of CFG
ShellCore

Continuous User Authentication

Smartphones have become crucial to our daily activities and are increasingly loaded with our personal information to perform several sensitive tasks, including mobile banking, communication, and storing private photos and files. Therefore, there is a high demand for usable continuous authentication techniques that prevent unauthorized access. We work on a deep learning-based active authentication approach that exploits sensors in consumer-grade smartphones to authenticate a user, addressing various aspects of accuracy, efficiency, and usability.
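Before any model sees the data, the raw sensor stream is typically segmented into fixed-size, overlapping windows; a minimal sketch (synthetic readings and hypothetical window parameters, not our deployed pipeline):

```python
def windows(stream, size, step):
    """Slice a sensor stream into fixed-size windows advanced by `step`."""
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, step)]

accel = list(range(10))                # stand-in for accelerometer samples
segs = windows(accel, size=4, step=2)  # step = size/2 gives 50% overlap
```

Each window would then be fed to a sequence model (e.g., an LSTM) that scores whether the motion pattern matches the enrolled user; overlap keeps short behavioral events from being split across window boundaries.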

AUToSen: Continuous Authentication
Contemporary Survey on Sensor-Based Continuous Authentication

Machine Learning for Medical Imaging and Social Good

Computer-aided methods for analyzing white blood cells (WBC) are popular due to the complexity of the manual alternatives. We work on building a deep learning-based method for WBC classification. We proposed W-Net, which we evaluated on a real-world large-scale dataset that includes 6,562 real images of the five WBC types. The dataset was provided by The Catholic University of Korea (The CUK) and approved by the Institutional Review Board (IRB) of The CUK. For broader benefit, we also generate synthetic WBC images using a Generative Adversarial Network and share them for education and research purposes.
Another project synthesizes magnetic resonance images (MRI) from computed tomography (CT) simulation scans using deep-learning models for high-dose-rate (HDR) prostate brachytherapy, in which both CT and MRI are acquired to identify catheters and delineate the prostate, respectively. We propose to build a deep-learning model that generates synthetic MRI (sMRI) with enhanced soft-tissue contrast from CT scans. sMRI would help physicians accurately delineate the prostate without needing to acquire an additional planning MRI.

W-Net: WBC Classification
Online Toxicity in Users' Interactions with Mainstream Media
Children's Exposure to Inappropriate Comments in YouTube
Synthetic MRI from CT
Sentiment Analysis of Users' Reactions on Social Media

Network Security and Online Privacy

Network monitoring applications such as flow analysis, intrusion detection, and performance monitoring have become increasingly popular owing to the continuous increase in the speed and volume of network traffic. We work on investigating the feasibility of an in-network intrusion detection system that leverages the computation capabilities of commodity switches to facilitate fast, real-time responses. Moreover, we explore traffic sampling techniques that preserve flows' behavior, to apply intelligence in network monitoring and management. We also address the increased privacy concerns regarding website fingerprinting attacks, which persist despite the use of popular anonymity tools such as Tor and VPNs.

Traffic Sampling for NIDSs
Multi-χ
Exploring the Proxy Ecosystem
Studying the DDoS Attacks Progression

CONTACT