Differentially Private Resource Allocation
Recent studies have shown that systems managing limited resources, such as metadata-private messengers (MPMs), suffer from side-channel attacks on resource allocation (RA). An MPM is supposed to keep the identities and activities of callers and callees private against a network adversary; yet an attacker can compromise a victim's friends and keep calling the victim to infer whether the victim is busy, breaking the MPM's privacy guarantee. Essentially, any allocator that manages limited resources may leak private information whenever a resource request cannot be fulfilled.
In this work, we systematically study how to protect the privacy of RA against such attacks with differential privacy (DP), which provides a formal privacy guarantee. DP has already been applied to RA by Angel et al. (IEEE S&P 2020): their allocator adds dummy requests drawn from a biased Laplace distribution to hide the victim's presence and then assigns resources randomly. However, we identify that this approach does not leverage the uncertainty in the attacker's view and therefore yields a loose DP bound; as a result, more than 40% of the resources must be wasted to satisfy DP. To make DP solutions more practical, we precisely model the RA process from the attacker's view and present a thorough study of noisy allocation mechanisms under different noise distributions, scales, and biases. We identify four new mechanisms and prove that each satisfies $\epsilon$-DP (the mechanism of Angel et al. satisfies only ($\epsilon$, $\delta$)-DP). Through theoretical and empirical analysis, we find that these mechanisms outperform Angel et al.'s by a large margin in terms of utility, paving the way for practical privacy-preserving RA.
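To make the dummy-request idea concrete, here is a minimal sketch of a noisy allocator in the style described above: pad the real requests with a noisy number of dummies drawn from a biased, zero-truncated Laplace distribution, then assign the limited resources uniformly at random over the padded pool. All names, parameters, and the exact truncation/rounding choices here are illustrative assumptions, not the precise mechanism of Angel et al. or of this paper.

```python
import math
import random

def sample_laplace(mu, b):
    # Inverse-CDF sampling from Laplace(mu, b) using only the stdlib.
    u = random.uniform(-0.5, 0.5)
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_allocate(real_requests, capacity, epsilon, bias=0.0):
    # Illustrative sketch (assumed interface): the number of dummy requests
    # is Laplace noise with scale 1/epsilon, shifted by `bias` and
    # truncated at zero so it is a valid count.
    n_dummies = max(0, round(sample_laplace(bias, 1.0 / epsilon)))
    pool = ([("real", r) for r in real_requests]
            + [("dummy", i) for i in range(n_dummies)])
    # Assign the limited resources uniformly at random over the padded pool;
    # every dummy that wins a slot wastes a resource, which is the utility
    # cost that the abstract's "more than 40%" figure refers to.
    winners = random.sample(pool, min(capacity, len(pool)))
    return [r for kind, r in winners if kind == "real"]
```

A larger `bias` hides the victim's presence better (more dummies on average) but wastes more capacity, which is exactly the privacy-utility trade-off the mechanisms above are designed to improve.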