Localizing a target inside an enclosed cylinder with a single chaotic cavity transducer augmented with supervised machine learning

Bibliographic Details
Main Authors: Tom Sillanpää, Krista Longi, Joni Mäkinen, Timo Rauhala, Arto Klami, Ari Salmi, Edward Hæggström
Format: Article
Language: English
Published: AIP Publishing LLC 2021
Online Access: https://doaj.org/article/d7a116cc0e8c4851af0b56ddf3e01310
Description
Summary: Ultrasound is employed in, e.g., non-destructive testing and environmental sensing. Unfortunately, conventional single-element ultrasound probes have a limited acoustic aperture. To overcome this limitation, we employ a modern method to increase the field of view of a commercial transducer, and we test the approach by localizing a target. In practice, we merge the transducer with a chaotic cavity to increase its effective aperture. In conventional pulse-echo ultrasound signal analysis, location estimation is based on determining the time of flight, given a known propagation speed in the medium. In the present case, the dispersed field produced by the cavity complicates this inverse problem, even in 2D. To tackle this issue, we use a convolutional neural network-based machine learning approach to study the feasibility of employing a single chaotic cavity transducer to localize an object in 2D. We show that we can indeed localize an inclusion inside a water-filled cylinder, with a localization accuracy of one inclusion diameter. The area within which we can localize the target increases by 49% compared to using the same transducer without the proposed chaotic cavity.
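
For context (an illustrative note, not part of the record): in conventional pulse-echo ranging, the distance d to a reflector follows directly from the measured round-trip time of flight t and the known sound speed c of the medium,

\[ d = \frac{c\,t}{2}, \]

so in water (c ≈ 1480 m/s) an echo arriving at t = 40 µs places the reflector at d ≈ 3 cm. The reverberant field of the chaotic cavity scrambles this simple time-distance relation, which is why the article turns to a learned inverse model.

Below is a minimal sketch of such a model, assuming PyTorch and a hypothetical architecture: a small 1-D convolutional network that regresses the 2-D coordinates of the inclusion from a single recorded echo waveform. The layer sizes, record length, and training setup are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class EchoLocalizer(nn.Module):
    """Hypothetical CNN: one pulse-echo waveform in, (x, y) position out."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2, padding=7),  # coarse temporal filters
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),  # fixed-size summary, independent of record length
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # regressed (x, y) coordinates of the inclusion
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples) raw echo recordings
        return self.head(self.features(x))

model = EchoLocalizer()
waveforms = torch.randn(8, 1, 4096)  # a batch of simulated echoes
predicted_xy = model(waveforms)      # shape (8, 2)
loss = nn.functional.mse_loss(predicted_xy, torch.zeros(8, 2))  # regression objective

Trained on waveform/position pairs collected across the cylinder cross-section, a regressor of this kind stands in for the analytic time-of-flight inversion that the dispersed cavity field makes intractable.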