Assessing Model-free Anomaly Detection in Industrial Control Systems Against Generic Concealment Attacks
In recent years, a number of model-free process-based anomaly detection schemes for Industrial Control Systems (ICS) have been proposed. Model-free anomaly detectors are trained directly from process data and do not require process knowledge. They are typically validated on public datasets that contain only a limited set of attacks.
As a result, the resilience of those schemes against general concealment attacks is unclear.
In addition, no structured discussion of the properties verified by such detectors exists.
In this work, we provide the first systematic analysis of such anomaly detection schemes, focusing on six model-free process-based anomaly detectors. We hypothesize that the detectors verify a combination of temporal, spatial, and statistical consistencies. To test this, we systematically analyse their resilience against generic concealment attacks. Our generic concealment attacks are designed to violate a specific consistency verified by the detector, and require no knowledge of the attacked physical process or the detector. In addition, we compare against attacks from prior work that were designed to target neural network-based detectors.
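To illustrate the idea of consistency-violating concealment, the sketch below shows two simple spoofing strategies in the spirit described above. All names, data, and parameters here are hypothetical illustrations, not the paper's actual attack implementations: a "stale" attack freezes sensor values (breaking temporal consistency), while a "replay" attack substitutes a previously recorded benign window (each channel evolves plausibly, but consistency with the actual process state is broken).

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the paper): concealment
# attacks that spoof the sensor readings forwarded to a detector.
rng = np.random.default_rng(0)

# Hypothetical benign recording: 200 time steps, 3 sensor channels.
benign = rng.normal(0.0, 1.0, size=(200, 3))

def stale_attack(live):
    """Freeze every sensor at its first in-window value.

    Values stay in a plausible range, but the signal shows no dynamics,
    i.e. temporal consistency is violated.
    """
    return np.tile(live[0], (live.shape[0], 1))

def replay_attack(live, recorded, rng):
    """Replace the live window with a benign recorded window of equal length.

    Each channel looks temporally consistent on its own, but the spoofed
    readings no longer match the true process state.
    """
    t = live.shape[0]
    start = rng.integers(0, recorded.shape[0] - t + 1)
    return recorded[start:start + t].copy()

# Suppose an attack drives the true process far outside its normal range:
under_attack = benign[:50] + 5.0

concealed = replay_attack(under_attack, benign, rng)  # looks benign
frozen = stale_attack(under_attack)                   # constant readings
```

A detector that only checks statistical consistency (e.g. value ranges) would accept both spoofed streams, whereas one checking temporal consistency could flag the frozen signal; probing detectors with such targeted violations is what reveals which consistencies they actually verify.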
Our results demonstrate that the evaluated model-free detectors (neural network-based and others) are generally susceptible to generic concealment attacks.
For each evaluated detector, at least one of our generic concealment attacks performs better than prior work attacks.
In particular, the results allow us to show which specific consistencies are verified by each detector.
We also find that attacks from prior work that target neural network-based detectors transfer surprisingly well to other architectures.