A multichannel learning-based approach for sound source separation in reverberant environments
Abstract: In this paper, a multichannel learning-based network is proposed for sound source separation in a reverberant field. The network can be divided into two parts according to the training strategy. In the first stage, time-dilated convolutional blocks are trained to estimate the array weights...
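The abstract describes a two-stage network whose first stage relies on time-dilated convolutional blocks. As an illustration only, below is a minimal PyTorch-style sketch of such a block; the channel count, normalization, residual layout, and dilation schedule are assumptions for demonstration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class TimeDilatedBlock(nn.Module):
    """Illustrative dilated 1-D convolutional block (not the paper's exact design)."""
    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        # "Same" padding so the time dimension is preserved.
        pad = (kernel_size - 1) // 2 * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=pad)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.PReLU()

    def forward(self, x):
        # x: (batch, channels, time); residual connection over the dilated conv.
        return x + self.act(self.norm(self.conv(x)))

# Stacking blocks with exponentially growing dilation widens the temporal
# receptive field, as is typical for time-dilated convolutional networks.
blocks = nn.Sequential(*[TimeDilatedBlock(64, 2 ** d) for d in range(4)])
features = torch.randn(1, 64, 200)   # (batch, channels, frames)
out = blocks(features)               # same shape as the input
```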
Saved in:
| Main Authors: | You-Siang Chen, Zi-Jie Lin, Mingsian R. Bai |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | SpringerOpen, 2021 |
| Subjects: | |
| Online access: | https://doaj.org/article/a947a00711ff4aee90a2d3f7edda03a6 |
Similar Items
- A recursive expectation-maximization algorithm for speaker tracking and separation
  by: Ofer Schwartz, et al.
  Published: (2021)
- U2-VC: one-shot voice conversion using two-level nested U-structure
  by: Fangkun Liu, et al.
  Published: (2021)
- Towards modelling active sound localisation based on Bayesian inference in a static environment
  by: McLachlan Glen, et al.
  Published: (2021)
- Full waveform inversion for bore reconstruction of woodwind-like instruments
  by: Ernoult Augustin, et al.
  Published: (2021)
- Multi-frequency sonoreactor characterisation in the frequency domain using a semi-empirical bubbly liquid model
  by: Jin Kiat Chu, et al.
  Published: (2021)