Classification of malware families is crucial for a comprehensive
understanding of how they can infect devices, computers, or systems. Thus,
malware identification enables security researchers and incident responders to
take precautions against malware and accelerate mitigation. API call sequences
made by malware are widely used as features in machine and deep learning models
for malware classification, since these sequences represent the malware's
behavior. However, traditional machine and deep learning models struggle to
capture the sequential relationships between API calls. Transformer-based
models, by contrast, process a sequence as a whole and learn the relationships
between API calls through multi-head attention and positional embeddings. Our
experiments demonstrate that a transformer model with a single transformer
block surpasses LSTM, the widely used baseline architecture.
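As a minimal sketch of the mechanism described above (not the authors' implementation), the NumPy snippet below shows single-head scaled dot-product self-attention over an embedded API-call sequence with additive positional embeddings. All dimensions, the vocabulary, and the random initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary of API-call IDs and one example sequence.
vocab_size, d_model, seq_len = 50, 16, 8
api_calls = rng.integers(0, vocab_size, size=seq_len)

# Token embeddings plus positional embeddings, so the model sees
# both which API was called and where in the sequence it occurred.
tok_emb = rng.normal(size=(vocab_size, d_model))
pos_emb = rng.normal(size=(seq_len, d_model))
x = tok_emb[api_calls] + pos_emb  # (seq_len, d_model)

# Single-head scaled dot-product self-attention: every API call
# attends to every other call, capturing pairwise sequence relations.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
out = weights @ v  # (seq_len, d_model) contextualized representations
```

A full transformer block would add multiple heads, a feed-forward sublayer, residual connections, and layer normalization around this core.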
Moreover, the pre-trained transformer models BERT and CANINE outperform it in
classifying highly imbalanced malware families, as measured by F1-score and
AUC. Furthermore, the proposed bagging-based random transformer forest (RTF),
an ensemble of BERT or CANINE models, reaches state-of-the-art evaluation
scores on three of the four datasets, including a state-of-the-art F1-score of
0.6149 on a commonly used benchmark dataset.
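The bagging idea behind an ensemble like RTF reduces to two steps: train each member on a bootstrap sample of the data, then combine per-model predictions by majority vote. The sketch below illustrates only that combination logic; the simple label lists are hypothetical stand-ins for fine-tuned BERT/CANINE classifiers:

```python
from collections import Counter
import random

random.seed(0)

def bootstrap_sample(data):
    # Draw a training subset with replacement (same size as the
    # original), as bagging does for each ensemble member.
    return [random.choice(data) for _ in data]

def majority_vote(per_model_preds):
    # per_model_preds holds one list of predicted labels per model;
    # the ensemble label for each sample is the most common vote.
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_model_preds)]

# Hypothetical family predictions from three ensemble members.
preds = [
    ["trojan", "worm",   "trojan"],
    ["trojan", "trojan", "adware"],
    ["worm",   "trojan", "trojan"],
]
print(majority_vote(preds))  # ['trojan', 'trojan', 'trojan']
```

Because each member sees a different bootstrap sample, the vote averages out individual models' errors, which is especially helpful on imbalanced classes.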

Author Of this post: <a href="http://arxiv.org/find/cs/1/au:+Demirkiran_F/0/1/0/all/0/1">Ferhat Demirkıran</a>, <a href="http://arxiv.org/find/cs/1/au:+Cayir_A/0/1/0/all/0/1">Aykut Çayır</a>, <a href="http://arxiv.org/find/cs/1/au:+Unal_U/0/1/0/all/0/1">Uğur Ünal</a>, <a href="http://arxiv.org/find/cs/1/au:+Dag_H/0/1/0/all/0/1">Hasan Dağ</a>
