Neural Network Physically Unclonable Function: A Trainable Physically Unclonable Function System with Unassailability against Deep Learning Attacks Using Memristor Array


Bibliographic Details
Main Authors: Junkyu Park, Yoonji Lee, Hakcheon Jeong, Shinhyun Choi
Format: article
Language: EN
Published: Wiley 2021
Online Access: https://doaj.org/article/6bc8ebf71f8b468eab3f7d56c64aa47b
Description
Summary: The dissemination of edge devices drives new requirements for security primitives for privacy protection and chip authentication. Memristors are promising entropy sources for realizing hardware‐based security primitives due to their intrinsic randomness and stochastic properties. Adopting memristors, one of several technologies that meet the essential requirements, the neural network physically unclonable function (NNPUF) is proposed: a novel PUF design that takes advantage of deep learning algorithms. The proposed design integrated with the memristor array can be easily constructed because the system does not depend on write-operation accuracy. To account for a nondifferentiable module during training, an original loss concept called PUF loss is devised. Iterative weight updates with this loss function yield optimal NNPUF performance. It is shown that the design achieves a near‐ideal 50% average value for security metrics, including uniformity, diffuseness, and uniqueness, meaning that the NNPUF trained with PUF loss satisfies practical quality standards for security primitives. It is also demonstrated that the NNPUF response has unassailable resistance against deep learning‐based modeling attacks, verified by the near‐50% prediction accuracy of the attack model.
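For context, the three security metrics quoted above (uniformity, diffuseness, uniqueness) are conventionally computed as average Hamming weights and Hamming distances over PUF responses, each with an ideal value of 50%. Below is a minimal sketch of these conventional definitions using random placeholder data; the variable names, toy dimensions, and data are illustrative assumptions, not measurements or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: responses[i, j, :] is the n-bit response of
# PUF instance i to challenge j (random placeholders, not real CRPs).
n_pufs, n_challenges, n_bits = 8, 16, 32
responses = rng.integers(0, 2, size=(n_pufs, n_challenges, n_bits))

def uniformity(responses):
    # Average fraction of 1s across all response bits; ideal is 50%.
    return 100 * responses.mean()

def diffuseness(responses):
    # Mean Hamming distance between responses of the SAME PUF instance
    # to different challenges; ideal is 50%.
    dists = []
    for r in responses:  # r has shape (n_challenges, n_bits)
        for a in range(len(r)):
            for b in range(a + 1, len(r)):
                dists.append(np.mean(r[a] != r[b]))
    return 100 * np.mean(dists)

def uniqueness(responses):
    # Mean Hamming distance between DIFFERENT PUF instances' responses
    # to the same challenge; ideal is 50%.
    dists = []
    for j in range(responses.shape[1]):
        col = responses[:, j, :]  # all instances' responses to challenge j
        for a in range(len(col)):
            for b in range(a + 1, len(col)):
                dists.append(np.mean(col[a] != col[b]))
    return 100 * np.mean(dists)

print(uniformity(responses), diffuseness(responses), uniqueness(responses))
```

With ideal random responses all three values cluster near 50%, which is the benchmark the abstract reports the NNPUF approaching.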