Training deep quantum neural networks
It is hard to design quantum neural networks able to work with quantum data. Here, the authors propose a noise-robust architecture for a feedforward quantum neural network, with qudits as neurons and arbitrary unitary operations as perceptrons, whose training procedure is efficient in the number of layers.
Saved in:
Main Authors: Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheiermann, Ramona Wolf
Format: article
Language: EN
Published: Nature Portfolio, 2020
Subjects: Science (Q)
Online Access: https://doaj.org/article/d9ba2b872323438c9255d08c30daee30
id: oai:doaj.org-article:d9ba2b872323438c9255d08c30daee30
DOI: 10.1038/s41467-020-14454-2
ISSN: 2041-1723
Published: 2020-02-01
Publisher: Nature Portfolio
Authors: Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheiermann, Ramona Wolf
Source: Nature Communications, Vol 11, Iss 1, Pp 1-6 (2020)
Full text: https://doi.org/10.1038/s41467-020-14454-2
topic: Science (Q)
description: It is hard to design quantum neural networks able to work with quantum data. Here, the authors propose a noise-robust architecture for a feedforward quantum neural network, with qudits as neurons and arbitrary unitary operations as perceptrons, whose training procedure is efficient in the number of layers.