Enhancing Differential Privacy for Federated Learning at Scale

Bibliographic details
Main authors: Chunghun Baek, Sungwook Kim, Dongkyun Nam, Jihoon Park
Format: article
Language: English
Published: IEEE, 2021
Online access: https://doaj.org/article/ca4f5cebcb204fb6adf064ce48a9d83f
Description
Abstract: Federated learning (FL) is an emerging technique that trains machine learning models across multiple decentralized systems. It enables local devices to collaboratively learn a model by aggregating locally computed updates via a server. Privacy is a core aspect of FL, and recent work in this area has advanced the privacy guarantees of FL networks. To ensure a rigorous privacy guarantee for FL, prior works have focused on methods that securely aggregate local updates and provide differential privacy (DP). In this paper, we investigate a new privacy risk for FL: because FL is deployed over a large-scale network, it may frequently encounter unexpected user dropouts. We first observe that user dropouts in an FL network can defeat the desired level of privacy protection, i.e., cause over-consumption of the privacy budget. We then develop a DP mechanism that is robust to user dropouts by dynamically calibrating the noise according to the dropout rate. We evaluate the proposed technique by training convolutional neural network models on the MNIST and FEMNIST datasets over a simulated FL network. Our results show that our approach significantly improves the privacy guarantee under user dropouts compared with existing DP algorithms for FL networks.
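The core idea in the abstract, calibrating noise to the dropout rate, can be illustrated with a minimal sketch. It is not the paper's actual algorithm: it assumes a distributed-DP setup in which each selected client adds local Gaussian noise before secure aggregation, so the server only sees the sum, and the helper name `calibrate_local_noise` and all parameter values below are hypothetical. The intuition it captures is that k surviving clients yield aggregate noise with standard deviation sigma_local * sqrt(k); dropouts shrink k below the value the privacy accounting assumed, so the aggregate noise undershoots the target and the round spends more privacy budget than planned. Inflating the per-client noise scale by the expected survival fraction compensates.

```python
import numpy as np

def calibrate_local_noise(sigma_target, n_selected, dropout_rate):
    """Hypothetical helper: per-client Gaussian noise std so that the
    aggregate noise still meets sigma_target when only roughly
    (1 - dropout_rate) of the n_selected clients survive the round."""
    expected_survivors = max(1, round(n_selected * (1.0 - dropout_rate)))
    return sigma_target / np.sqrt(expected_survivors)

rng = np.random.default_rng(0)
sigma_target = 1.0             # aggregate noise std required by the DP analysis
n_selected, dropout_rate = 100, 0.2

sigma_local = calibrate_local_noise(sigma_target, n_selected, dropout_rate)
survivors = int((rng.random(n_selected) > dropout_rate).sum())  # simulated dropouts
print(f"{survivors} of {n_selected} clients survived; "
      f"aggregate noise std = {sigma_local * np.sqrt(survivors):.3f} "
      f"(target {sigma_target})")
```

Note that a fixed estimate of the dropout rate still under-protects whenever the actual rate exceeds it, which is presumably why the paper calibrates the noise dynamically rather than once up front.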