Annual Computer Security Applications Conference (ACSAC) 2023

FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks

We propose FLARE, the first fingerprinting mechanism to verify whether a suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of another (victim) policy. We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, that generate adversarial examples which transfer successfully from a victim policy to its modified versions but not to independently trained policies. FLARE employs these masks as fingerprints, and verifies the true ownership of stolen DRL policies by measuring an action agreement value over states perturbed via fingerprints. Our empirical evaluations show that FLARE is effective (100% action agreement on stolen copies) and does not falsely accuse independent policies (no false positives). FLARE is also robust to model modification attacks and cannot easily be evaded by more informed adversaries without negatively impacting agent performance. We also show that not all universal adversarial masks are suitable candidates for fingerprints due to the inherent characteristics of DRL policies.
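To illustrate the verification step described above, the following is a minimal Python sketch of computing an action-agreement statistic over fingerprinted states. The names (victim_policy, suspect_policy, fingerprint_mask) and the perturbation budget eps are hypothetical placeholders for illustration only; the actual procedure and decision thresholds are specified in the paper.

import numpy as np

def action_agreement(victim_policy, suspect_policy, states, fingerprint_mask, eps=0.05):
    # Fraction of fingerprinted states on which the two policies
    # select the same action (the verification statistic, sketched).
    agree = 0
    for s in states:
        # Perturb the state with the universal adversarial mask,
        # clipping so values stay in the valid observation range [0, 1].
        s_adv = np.clip(s + eps * fingerprint_mask, 0.0, 1.0)
        if victim_policy(s_adv) == suspect_policy(s_adv):
            agree += 1
    return agree / len(states)

# A suspect policy whose agreement is close to 1.0 would be flagged as a
# stolen copy; independently trained policies should score far lower,
# since the mask is chosen to be non-transferable to them.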

Buse Gul Atli Tekgul
Nokia Bell Labs & Aalto University

N. Asokan
University of Waterloo & Aalto University

Paper (ACM DL)

Slides