Secure Softmax/Sigmoid for Machine-learning Computation
Softmax and Sigmoid, which compose the exponential function ($e^x$) with division ($1/x$), are activation functions often required in training. Secure computation of the non-linear, unbounded $1/x$ and $e^x$ through multiparty computation (MPC) is already challenging, let alone their composition. Prior works compute softmax exactly, via iteration, or with ASM approximation; none is satisfactory in efficiency, accuracy, or both. Sigmoid has been replaced with piecewise functions, incurring a logarithmic number of communication rounds.
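For reference, the plaintext targets are the standard definitions below, each composing $e^x$ with a reciprocal:
\[
  \mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}},
  \qquad
  \mathrm{Softmax}(\mathbf{x})_i = \frac{e^{x_i}}{\sum_{j} e^{x_j}}.
\]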
We explore a rarely taken approach: numerical approximation through ordinary differential equations and Fourier series. We extend MPC-friendly functions to composition rings and rational/trigonometric polynomials, providing an abstraction that lays the foundation for future MPC-based learning. Our results include 1) the first constant-round protocol for Softmax, reducing communication by 83∼85%, and 2) the first one-round, error-bounded protocol for approximating Sigmoid, reducing communication by 95% compared with prior piecewise-approximated solutions. These enable private training with much less communication than state-of-the-art frameworks, namely CryptGPU (IEEE S&P'21), Piranha (USENIX Security'22), and Secure Quantized Training (ICML'22), with a shortened training process and competitive accuracy.
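To illustrate the Fourier-series idea in the clear (this is only a plaintext sketch, not the MPC protocol; the number of terms, the half-period, and the coefficient-fitting method below are illustrative assumptions), a truncated sine series can approximate Sigmoid on a bounded interval, and evaluating such a trigonometric polynomial is what the one-round protocol would compute on secret shares:

```python
import numpy as np

def fit_fourier_sigmoid(num_terms=8, half_period=10.0, samples=4096):
    """Fit a truncated Fourier (sine) series to sigmoid(x) - 1/2 on [-L, L].

    sigmoid(x) - 1/2 is odd, so only sine terms appear:
        sigmoid(x) ~= 1/2 + sum_k b_k * sin(k*pi*x / L)
    Coefficients are obtained by numerical integration on a uniform grid.
    """
    L = half_period
    x = np.linspace(-L, L, samples)
    dx = x[1] - x[0]
    f = 1.0 / (1.0 + np.exp(-x)) - 0.5               # odd part of sigmoid
    ks = np.arange(1, num_terms + 1)
    # b_k = (1/L) * integral_{-L}^{L} f(x) * sin(k*pi*x/L) dx
    b = np.array([(f * np.sin(k * np.pi * x / L)).sum() * dx / L for k in ks])
    return b, L

def eval_fourier_sigmoid(x, b, L):
    """Evaluate the truncated series; only meaningful for |x| <= L."""
    x = np.asarray(x, dtype=float)
    ks = np.arange(1, len(b) + 1)
    return 0.5 + np.sin(np.outer(x, ks) * np.pi / L) @ b

if __name__ == "__main__":
    b, L = fit_fourier_sigmoid(num_terms=8, half_period=10.0)
    xs = np.linspace(-6.0, 6.0, 101)
    err = np.abs(eval_fourier_sigmoid(xs, b, L) - 1.0 / (1.0 + np.exp(-xs)))
    print(f"max |error| on [-6, 6]: {err.max():.4f}")
```

Because the approximation is a fixed linear combination of sines of public frequencies, the parties can evaluate it on shared inputs without the iterative division or piecewise comparisons that drive up round complexity in prior solutions.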