ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking

Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Several recent works have demonstrated reverse engineering of model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely transfer defenses from cryptography, such as masking, to ML frameworks. Existing works, however, have revealed that a straightforward adaptation of such defenses either provides only partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction: constructing neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layer. We demonstrate our approach on edge-computing-friendly binarized neural networks (BNNs) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the operations unique to ML, such as the activation function and the output-layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, show that they achieve similar latency while reducing the FF and LUT costs of state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
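The abstract only sketches the key idea, so the following is a minimal Python/NumPy illustration of why modular arithmetic makes linear neural-network layers easy to mask: a first-order arithmetic sharing splits each value into two additive shares modulo a power of two, and a linear layer computed modulo that same power of two can be evaluated share-wise. The modulus Q = 2^8, the two-share split, and the function names are illustrative assumptions, not the paper's actual parameters or implementation (the paper describes a hardware DOM design); the nonlinear parts it protects, such as the sign activation and the output-layer classification, need dedicated Boolean-masked gadgets and are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2**8  # hypothetical modulus; a power of two so arithmetic masking composes with the layer arithmetic

def arithmetic_share(x, q=Q):
    """Split each value into two additive shares modulo q (first-order arithmetic masking)."""
    r = rng.integers(0, q, size=np.shape(x), dtype=np.int64)   # uniformly random mask
    return (np.asarray(x, dtype=np.int64) - r) % q, r          # x == (s0 + s1) mod q

def masked_modular_linear(x_shares, W, q=Q):
    """Linear layer modulo q computed share-wise: because the layer is linear mod q,
    each share is processed independently and the shares are only recombined at the end."""
    s0, s1 = x_shares
    return (s0 @ W) % q, (s1 @ W) % q

# Toy check: unmasking the shared output reproduces the unprotected computation.
x = rng.integers(0, Q, size=4)
W = rng.integers(0, Q, size=(4, 3))
y0, y1 = masked_modular_linear(arithmetic_share(x), W)
assert np.array_equal((y0 + y1) % Q, (x @ W) % Q)
```

The point of the sketch is that no share-recombination (and hence no secret-dependent intermediate) is needed inside the linear layer; only the nonlinear operations require conversions or dedicated masked gadgets.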


Bibliographic Details
Main Authors: Anuj Dubey, Afzal Ahmad, Muhammad Adeel Pasha, Rosario Cammarota, Aydin Aysu
Format: article
Language: EN
Published: Ruhr-Universität Bochum, 2021
Subjects: Neural networks; Side-channel attacks; Hardware Masking; Computer engineering. Computer hardware (TK7885-7895); Information technology (T58.5-58.64)
Online Access: https://doaj.org/article/9b4a0f8a04d94628a67a94e0eb278e22
Published in: Transactions on Cryptographic Hardware and Embedded Systems, Vol 2022, Iss 1 (2021)
DOI: 10.46586/tches.v2022.i1.506-556
ISSN: 2569-2925
Article URL: https://tches.iacr.org/index.php/TCHES/article/view/9306