TY - GEN
T1 - Lasagna: Accelerating secure deep learning inference in SGX-enabled edge cloud
AU - Li, Yuepeng
AU - Zeng, Deze
AU - Gu, Lin
AU - Chen, Quan
AU - Guo, Song
AU - Zomaya, Albert
AU - Guo, Minyi
N1 - Funding Information:
The corresponding author is Deze Zeng ([email protected]). This research was supported by funding from the National Natural Science Foundation of China (61772480, 61972171, 62022057, 61872310), the Hong Kong RGC Research Impact Fund (RIF) under Project No. R5060-19, the General Research Fund (GRF) under Project Nos. 152221/19E and 15220320/20E, and the Shenzhen Science and Technology Innovation Commission (R2020A045).
Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - Edge intelligence has already been widely regarded as a key enabling technology in a variety of domains. Along with this prosperity, increasing concerns have been raised about the security and privacy of intelligent applications. As these applications are usually deployed on shared and untrusted edge servers, malicious co-located attackers, or even untrustworthy infrastructure providers, may acquire highly security-sensitive data and code (i.e., the pre-trained model). Software Guard Extensions (SGX) provides an isolated Trusted Execution Environment (TEE) to guarantee task security. However, we notice that DNN inference performance in SGX is severely degraded by the limited enclave memory space, which leads to frequent page swapping operations and high enclave call overhead. To tackle this problem, we propose Lasagna, an SGX-oriented DNN inference acceleration framework that does not compromise task security. Lasagna consists of a local task scheduler and a global task balancer that optimize system performance by exploiting the layered structure of DNN models. Our experimental results show that the layer-aware Lasagna effectively speeds up inference for well-known DNN models in SGX by 1.31x-1.97x.
AB - Edge intelligence has already been widely regarded as a key enabling technology in a variety of domains. Along with this prosperity, increasing concerns have been raised about the security and privacy of intelligent applications. As these applications are usually deployed on shared and untrusted edge servers, malicious co-located attackers, or even untrustworthy infrastructure providers, may acquire highly security-sensitive data and code (i.e., the pre-trained model). Software Guard Extensions (SGX) provides an isolated Trusted Execution Environment (TEE) to guarantee task security. However, we notice that DNN inference performance in SGX is severely degraded by the limited enclave memory space, which leads to frequent page swapping operations and high enclave call overhead. To tackle this problem, we propose Lasagna, an SGX-oriented DNN inference acceleration framework that does not compromise task security. Lasagna consists of a local task scheduler and a global task balancer that optimize system performance by exploiting the layered structure of DNN models. Our experimental results show that the layer-aware Lasagna effectively speeds up inference for well-known DNN models in SGX by 1.31x-1.97x.
KW - DNN Inference
KW - Edge intelligence
KW - SGX
KW - Task scheduling
UR - http://www.scopus.com/inward/record.url?scp=85119250748&partnerID=8YFLogxK
U2 - 10.1145/3472883.3486988
DO - 10.1145/3472883.3486988
M3 - Conference article published in proceedings or book
AN - SCOPUS:85119250748
T3 - SoCC 2021 - Proceedings of the 2021 ACM Symposium on Cloud Computing
SP - 533
EP - 545
BT - SoCC 2021 - Proceedings of the 2021 ACM Symposium on Cloud Computing
PB - Association for Computing Machinery, Inc
T2 - 12th Annual ACM Symposium on Cloud Computing, SoCC 2021
Y2 - 1 November 2021 through 4 November 2021
ER -