Abstract
AI-as-a-Service (AIaaS) is a cloud-based paradigm that lets providers deliver advanced artificial intelligence capabilities, such as facial recognition, financial prediction, and epidemic modeling, through accessible online platforms. Despite its benefits, the model raises serious data-privacy concerns, since users must transmit sensitive personal or corporate information to external servers. Among these threats, inference leakage attacks are particularly concerning because they can compromise both user data and the integrity of the AI models. Traditional defenses often struggle to balance efficiency against data confidentiality, leaving security gaps and increased exposure to unauthorized access or leaks. To mitigate these issues, this project introduces a privacy-preserving solution based on Fully Homomorphic Encryption (FHE), which allows computations to be performed directly on encrypted data. With FHE, user inputs remain encrypted throughout the AI processing workflow: when a user submits encrypted data, the cloud server runs the AI model on it without ever seeing the original content, and the resulting output is itself encrypted and can be decrypted only by the user holding the corresponding private key. The service provider therefore never sees the input or the output in plain form. By employing FHE, the framework secures the inference process, blocks leakage of sensitive information, and preserves the proprietary nature of AI models. The method is especially suitable for applications requiring rapid decision-making, such as live facial recognition, while also reinforcing trust and privacy in cloud-based AI services.
Introduction
Artificial Intelligence-as-a-Service (AIaaS) enables users to access advanced AI capabilities via the cloud without costly local infrastructure. However, it faces critical security challenges, most notably inference leakage attacks, in which an adversary extracts sensitive user data or proprietary model details by analyzing the service's outputs.
To address this, the project integrates Fully Homomorphic Encryption (FHE) into AIaaS. FHE allows computations to run directly on encrypted data, preserving privacy throughout the entire pipeline: inputs are encrypted by the user, the model performs inference on ciphertexts, and the result is returned still encrypted, with no decryption ever taking place on the server. Only the user holding the private key can decrypt the result, preserving both confidentiality and control.
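As a concrete illustration of this round trip, the sketch below encrypts a feature vector, evaluates a linear model on the ciphertext, and decrypts the result client-side. It uses the open-source TenSEAL library (a Python wrapper around Microsoft SEAL's CKKS scheme) purely as a stand-in; the weights, inputs, and parameter choices are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of encrypted inference with TenSEAL (CKKS scheme).
# Assumes: pip install tenseal; all values below are illustrative.
import tenseal as ts

# --- Client side: create keys and encryption context ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # rotations needed for encrypted dot products

# Encrypt the user's feature vector; only the client holds the secret key.
features = [0.8, 0.1, 0.4]
enc_input = ts.ckks_vector(context, features)

# --- Server side: evaluate a linear model on ciphertext only ---
weights = [0.5, -1.2, 0.7]  # plaintext model parameters (illustrative)
bias = 0.3
enc_output = enc_input.dot(weights) + bias  # computed without decryption

# --- Client side: decrypt the result with the private key ---
print(enc_output.decrypt())  # approximately [0.86]
```

Because CKKS is an approximate scheme, decrypted values carry small numerical noise, which is generally acceptable for the floating-point arithmetic typical of model inference.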
The system architecture includes modules for secure model deployment, encrypted data handling, key management, and privacy-preserving AI inference. Model Owners upload encrypted AI models, while Model Users submit encrypted inputs. The model processes encrypted data and returns encrypted outputs, so neither the cloud provider nor an attacker ever sees sensitive information in the clear.
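A key architectural point is that the server must be able to evaluate but never decrypt. One way to realize this, sketched below under the same illustrative TenSEAL assumption, is to serialize the client's context without its secret key, so the cloud side holds only public evaluation keys.

```python
# Sketch of client/server key separation with TenSEAL (illustrative).
import tenseal as ts

# Client: full context, including the secret key.
client_ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
client_ctx.global_scale = 2**40
client_ctx.generate_galois_keys()

# Serialize a public-only copy for the server: evaluation keys included,
# secret key explicitly excluded.
public_ctx_bytes = client_ctx.serialize(save_secret_key=False)
enc_input_bytes = ts.ckks_vector(client_ctx, [1.0, 2.0, 3.0]).serialize()

# Server: reconstruct context and ciphertext; decryption is impossible here.
server_ctx = ts.context_from(public_ctx_bytes)
enc_input = ts.ckks_vector_from(server_ctx, enc_input_bytes)
enc_result_bytes = (enc_input * 2.0).serialize()  # encrypted computation

# Client: relink the returned ciphertext to the full context and decrypt.
enc_result = ts.ckks_vector_from(client_ctx, enc_result_bytes)
print(enc_result.decrypt())  # approximately [2.0, 4.0, 6.0]
```

Any attempt to decrypt on the server side would fail, since the serialized context deliberately omits the secret key.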
Results show that this FHE-based approach effectively protects user data and AI model confidentiality, preventing inference leakage while maintaining prediction accuracy. The system enforces strict access control and logging, ensuring secure, auditable AI service usage without compromising privacy.
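The access-control and logging behavior can be pictured with the hypothetical wrapper below; every name in it (AUTHORIZED_TOKENS, handle_inference, run_encrypted_model) is an illustrative placeholder rather than part of the implemented system. Note that the audit record captures only metadata, a ciphertext digest and size, never the payload itself.

```python
# Hypothetical access-control and audit-logging wrapper (illustrative names).
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("aiaas.audit")

AUTHORIZED_TOKENS = {"model-user-01": "demo-token"}  # placeholder token store

def run_encrypted_model(ciphertext: bytes) -> bytes:
    """Stand-in for the FHE evaluation step sketched earlier."""
    return ciphertext  # placeholder: a real server would evaluate the model

def handle_inference(user_id: str, token: str, ciphertext: bytes) -> bytes:
    # Reject callers without a valid token before touching the ciphertext.
    if AUTHORIZED_TOKENS.get(user_id) != token:
        audit_log.warning("DENIED user=%s at=%.0f", user_id, time.time())
        raise PermissionError("unauthorized")
    # Record only metadata: size and a digest, never the encrypted payload.
    audit_log.info("INFER user=%s bytes=%d sha256=%s", user_id,
                   len(ciphertext), hashlib.sha256(ciphertext).hexdigest()[:16])
    return run_encrypted_model(ciphertext)

# Example call with a dummy ciphertext:
handle_inference("model-user-01", "demo-token", b"\x01\x02\x03")
```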
Conclusion
In summary, this project presents a robust, privacy-focused AI-as-a-Service (AIaaS) framework built on Fully Homomorphic Encryption (FHE), enabling AI models to be deployed and used without exposing sensitive information. The architecture comprises the AIaaS Service Provider Module, End-User Interface, Key Management System, Data Encryption Layer, Secure Model Computation Engine, and Output Decryption Unit, keeping data encrypted end to end, from input submission to result delivery.
Model Providers can upload encrypted models securely, while End Users submit encrypted inputs and receive encrypted results, so neither party exposes unencrypted information. Because FHE lets all inference operations run on ciphertexts, no server-side decryption is ever required, strengthening both user and model confidentiality.
Additionally, the framework incorporates strong key governance, access permissions, and usage logging to promote accountability and ensure secure AI service management. Although the current system addresses major privacy and data protection challenges, future work can aim to improve processing efficiency, support scalable cloud environments, and broaden compatibility with various AI architectures. Ultimately, this solution marks a meaningful step toward establishing secure, privacy-respecting AI services applicable in critical domains such as healthcare, finance, and national security.