DeepContract: Controllable Authorization of Deep Learning Models
Well-trained deep learning (DL) models are widely used across many fields and are recognized as valuable intellectual property. However, most existing approaches to exploiting their value either require users to upload input data to a remote machine-learning service, which raises serious privacy concerns, or deploy the DL models on the user side, which costs the owner control over them. While a few active model authorization methods protect a model from unauthorized users, they cannot prevent it from being redistributed or abused by authorized users. To address the urgent need to protect both model confidentiality and input data privacy while retaining uninterrupted control over deployed models, we propose DeepContract, a contract-based model authorization framework. DeepContract enables model owners to deploy their models on the user side for local inference without revealing the original model weights, and allows them to grant and revoke the right to use their models at any time. Specifically, we propose a generic model encryption method that significantly outperforms the state of the art in both security and efficiency. Leveraging integrity verification in a Trusted Execution Environment (TEE), the contract-based, verifiable enclave code generated by DeepContract performs controlled inference over the distributed encrypted model on the user side. Our extensive evaluations show that DeepContract achieves secure and efficient controllable model authorization under pre-signed contracts.
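To make the grant/revoke lifecycle described above concrete, the following is a minimal, self-contained Python sketch of a contract-based authorization flow, assuming a setup in the spirit of DeepContract. All names (`Owner`, `Enclave`, `issue_contract`, ...) are hypothetical illustrations, not the paper's API; HMAC signing and a SHA-256 keystream stand in for the paper's actual contract signatures, model encryption method, and TEE attestation, none of which are specified in this abstract.

```python
# Toy sketch of contract-based model authorization (hypothetical, not the
# paper's actual scheme). HMAC = stand-in contract signature; SHA-256 XOR
# keystream = stand-in model encryption; an online key-release check by the
# Owner = stand-in for TEE attestation and enclave integrity verification.
import hashlib, hmac, json, secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key (illustrative, not secure crypto)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_weights(weights: bytes, key: bytes) -> bytes:
    """XOR serialized weights with a key-derived stream (symmetric)."""
    return bytes(w ^ k for w, k in zip(weights, keystream(key, len(weights))))

decrypt_weights = encrypt_weights  # XOR is its own inverse

class Owner:
    """Model owner: encrypts the model, signs contracts, can revoke them."""
    def __init__(self, weights: bytes):
        self.sign_key = secrets.token_bytes(32)   # contract-signing key
        self.model_key = secrets.token_bytes(32)  # model-encryption key
        self.encrypted_model = encrypt_weights(weights, self.model_key)
        self.revoked = set()

    def issue_contract(self, user_id: str) -> dict:
        body = json.dumps({"user": user_id, "model": "demo-v1"}).encode()
        tag = hmac.new(self.sign_key, body, "sha256").hexdigest()
        return {"body": body, "tag": tag}

    def revoke(self, user_id: str):
        self.revoked.add(user_id)

    def authorize(self, contract: dict) -> bytes | None:
        """Release the model key only for valid, unrevoked contracts."""
        expected = hmac.new(self.sign_key, contract["body"], "sha256").hexdigest()
        if not hmac.compare_digest(expected, contract["tag"]):
            return None
        if json.loads(contract["body"])["user"] in self.revoked:
            return None
        return self.model_key

class Enclave:
    """User-side enclave stand-in: decrypts weights only after authorization."""
    def __init__(self, encrypted_model: bytes):
        self.encrypted_model = encrypted_model

    def infer(self, owner: Owner, contract: dict, x: int) -> int | None:
        key = owner.authorize(contract)
        if key is None:
            return None  # unauthorized or revoked: plaintext weights never appear
        w = int.from_bytes(decrypt_weights(self.encrypted_model, key), "big")
        return w * x  # trivial "model": a single integer weight

if __name__ == "__main__":
    owner = Owner(weights=(7).to_bytes(4, "big"))
    enclave = Enclave(owner.encrypted_model)
    contract = owner.issue_contract("alice")
    print(enclave.infer(owner, contract, 6))  # 42: authorized local inference
    owner.revoke("alice")
    print(enclave.infer(owner, contract, 6))  # None: access revoked at any time
```

Running the sketch prints an authorized inference result followed by None after revocation, mirroring the grant-then-revoke control the abstract claims; in the actual system, the key-release decision would be enforced by verified enclave code rather than a cooperating Owner object.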