Enhancing Differential Privacy for Federated Learning at Scale

Federated learning (FL) is an emerging technique that trains machine learning models across multiple decentralized systems. It enables local devices to collaboratively learn a model by aggregating locally computed updates via a server. Privacy is a core aspect of FL, and recent work in this area has advanced the privacy guarantees of FL networks. To ensure rigorous privacy guarantees for FL, prior works have focused on methods that securely aggregate local updates and provide differential privacy (DP). In this paper, we investigate a new privacy risk for FL. Specifically, because FL is deployed over a large-scale network, it may frequently encounter unexpected user dropouts. We first observe that user dropouts in an FL network may prevent it from achieving the desired level of privacy protection, i.e., they cause over-consumption of the privacy budget. We then develop a DP mechanism that is robust to user dropouts by dynamically calibrating the noise to account for the dropout rate. We evaluate the proposed technique by training convolutional neural network models on the MNIST and FEMNIST datasets over a simulated FL network. Our results show that our approach significantly improves the privacy guarantee under user dropouts compared with existing DP algorithms for FL networks.

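The risk the abstract describes, and the fix, can be illustrated with a minimal sketch. Assume a distributed-Gaussian setting in which each user adds a share of noise locally so that the sum across surviving users meets the DP target; the calibration rule s = target_std / sqrt(n(1-d)), the function names, and all parameter values below are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def per_user_noise_std(target_std, n_users, expected_dropout_rate):
    # With m = n_users * (1 - expected_dropout_rate) expected survivors,
    # the sum of m independent N(0, s^2) noise terms has std s * sqrt(m).
    # Solving s * sqrt(m) = target_std gives the per-user scale below.
    expected_survivors = max(1.0, n_users * (1.0 - expected_dropout_rate))
    return target_std / np.sqrt(expected_survivors)

def simulate_round(n_users, dropout_rate, target_std, clip_norm=1.0, dim=10, seed=0):
    # One aggregation round with distributed Gaussian noise: every user
    # clips its update, adds locally generated noise, and may drop out.
    rng = np.random.default_rng(seed)
    s = per_user_noise_std(target_std, n_users, dropout_rate)
    total = np.zeros(dim)
    survivors = 0
    for _ in range(n_users):
        if rng.random() < dropout_rate:  # unexpected dropout: no contribution
            continue
        u = rng.normal(size=dim)
        u *= min(1.0, clip_norm / np.linalg.norm(u))  # clip update to L2 norm
        total += u + rng.normal(scale=s, size=dim)
        survivors += 1
    return total, survivors

# Naive calibration (expected_dropout_rate = 0) leaves the aggregate with
# noise std of roughly target_std * sqrt(1 - d) when a fraction d drops out,
# i.e., less noise than the (epsilon, delta) analysis assumed; calibrating
# with the expected dropout rate restores the target noise level on average.
aggregate, m = simulate_round(n_users=1000, dropout_rate=0.2, target_std=4.0)
print(f"{m} of 1000 users survived the round")
```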

Bibliographic Details

Main Authors: Chunghun Baek, Sungwook Kim, Dongkyun Nam, Jihoon Park
Format: Article
Language: English
Published: IEEE, 2021
Published in: IEEE Access, Vol. 9, pp. 148090-148103 (2021)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3124020
Subjects: Differential privacy; federated learning; user dropout; noise calibration; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access:
https://doaj.org/article/ca4f5cebcb204fb6adf064ce48a9d83f
https://ieeexplore.ieee.org/document/9592806/