A multichannel learning-based approach for sound source separation in reverberant environments
Abstract: In this paper, a multichannel learning-based network is proposed for sound source separation in reverberant environments. The network can be divided into two parts according to the training strategies. In the first stage, time-dilated convolutional blocks are trained to estimate the array weights...
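The abstract mentions time-dilated convolutional blocks. As a hedged illustration only (not the authors' network), the sketch below shows the generic operation such blocks build on: a causal 1-D convolution whose taps are spaced `dilation` samples apart, so stacked blocks with dilations 1, 2, 4, ... grow the receptive field exponentially. All names here (`dilated_conv1d`, the toy kernel) are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution with dilated (spaced) taps.

    Illustrative sketch only: each output sample mixes input samples
    `dilation` steps apart, the basic operation behind time-dilated
    convolutional blocks. Not the paper's implementation.
    """
    k = len(kernel)
    pad = (k - 1) * dilation                 # left-pad so output is causal
    xp = np.concatenate([np.zeros(pad), x])  # zero history before t = 0
    y = np.zeros(len(x), dtype=float)
    for n in range(len(x)):
        for i in range(k):
            # tap i looks back i * dilation samples
            y[n] += kernel[i] * xp[n + pad - i * dilation]
    return y

# With kernel [1, 1] and dilation 2, each output is x[n] + x[n-2]
x = np.arange(8, dtype=float)
y = dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
```

In a real network these convolutions carry learned kernels over many channels and are followed by nonlinearities; the loop form above is only to make the index arithmetic visible.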
| Main authors: | You-Siang Chen, Zi-Jie Lin, Mingsian R. Bai |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | SpringerOpen, 2021 |
| Online access: | https://doaj.org/article/a947a00711ff4aee90a2d3f7edda03a6 |
Similar documents

- A recursive expectation-maximization algorithm for speaker tracking and separation
  by: Ofer Schwartz, et al.
  Published: (2021)
- U2-VC: one-shot voice conversion using two-level nested U-structure
  by: Fangkun Liu, et al.
  Published: (2021)
- Towards modelling active sound localisation based on Bayesian inference in a static environment
  by: McLachlan Glen, et al.
  Published: (2021)
- Full waveform inversion for bore reconstruction of woodwind-like instruments
  by: Ernoult Augustin, et al.
  Published: (2021)
- Multi-frequency sonoreactor characterisation in the frequency domain using a semi-empirical bubbly liquid model
  by: Jin Kiat Chu, et al.
  Published: (2021)