A general framework of multiple coordinative data fusion modules for real-time and heterogeneous data sources

Bibliographic Details
Main Authors: Kashinath Shafiza Ariffin, Mostafa Salama A., Lim David, Mustapha Aida, Hafit Hanayanti, Darman Rozanawati
Format: article
Language: EN
Published: De Gruyter 2021
Subjects: Q
Online Access: https://doaj.org/article/01a1a295371249048dda7a997654376b
Description
Summary: Designing a data-responsive system requires accurate input to ensure efficient results. The growth of sensing technology and the need for various kinds of data greatly affect data fusion (DF)-related study. A coordinative DF framework entails the participation of many subsystems or modules to produce coordinative features. These features are used to facilitate and improve the solving of certain domain problems. Consequently, this paper proposes a general Multiple Coordinative Data Fusion Modules (MCDFM) framework for real-time and heterogeneous data sources. We develop the MCDFM framework to adapt to various DF application domains that require macro and micro perspectives of the observed problems. The framework consists of preprocessing, filtering, and decision as its key DF processing phases. These three phases integrate specific-purpose algorithms or methods such as data cleaning and windowing methods for preprocessing, an extended Kalman filter (EKF) for filtering, fuzzy logic for local decisions, and software agents for coordinative decisions. These methods perform tasks that help each node in the network of the framework's application domain reach local and coordinative decisions. We illustrate and discuss the proposed framework in detail through a case study of a stretch of road intersections controlled by a traffic light controller (TLC). The case study gives a clearer view of how the proposed framework addresses traffic congestion as a domain problem. We identify the traffic features, which include the average vehicle count, average vehicle speed (km/h), average density (%), interval (s), and timestamp. The framework uses these features to identify three congestion periods: a nonpeak period with a congestion degree of 0.178 and a variance of 0.061, a medium-peak period with a congestion degree of 0.588 and a variance of 0.0593, and a peak period with a congestion degree of 0.796 and a variance of 0.0296. The results of the TLC case study show that the framework offers flexible capabilities for both micro and macro views of the observed scenarios and clearly presents viable solutions.
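
To illustrate the local and coordinative decision phases described in the summary, the Python sketch below assumes a simple fuzzy formulation: each intersection node maps its windowed traffic features (average vehicle count, average vehicle speed, average density) to a congestion degree in [0, 1], and a coordinating agent aggregates the degrees reported by neighbouring nodes. The thresholds, membership functions, aggregation rule, and all names here are illustrative assumptions rather than the paper's actual design, and the preprocessing and EKF filtering stages are omitted.

from dataclasses import dataclass

@dataclass
class TrafficObservation:
    # One windowed observation per intersection node; feature names follow the
    # abstract, but this structure itself is an assumption for illustration.
    avg_vehicle_count: float
    avg_speed_kmh: float
    avg_density_pct: float   # 0-100
    interval_s: float
    timestamp: float

def ramp(x: float, lo: float, hi: float) -> float:
    # Linear fuzzy membership: 0 at or below lo, 1 at or above hi.
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def local_congestion_degree(obs: TrafficObservation,
                            free_flow_kmh: float = 60.0,
                            saturation_count: float = 50.0) -> float:
    # Illustrative local decision: combine fuzzy memberships for "dense",
    # "slow", and "crowded" into one congestion degree in [0, 1].
    # The thresholds and the averaging rule are assumptions, not from the paper.
    dense = ramp(obs.avg_density_pct, 10.0, 90.0)
    slow = 1.0 - ramp(obs.avg_speed_kmh, 5.0, free_flow_kmh)
    crowded = ramp(obs.avg_vehicle_count, 5.0, saturation_count)
    return (dense + slow + crowded) / 3.0

def coordinative_degree(node_degrees: list[float]) -> float:
    # Illustrative coordinative decision: agents at neighbouring intersections
    # share local degrees and the TLC acts on the aggregate (here, a plain mean).
    return sum(node_degrees) / len(node_degrees) if node_degrees else 0.0

if __name__ == "__main__":
    peak = TrafficObservation(avg_vehicle_count=42, avg_speed_kmh=12,
                              avg_density_pct=78, interval_s=60, timestamp=0.0)
    offpeak = TrafficObservation(avg_vehicle_count=8, avg_speed_kmh=55,
                                 avg_density_pct=15, interval_s=60, timestamp=60.0)
    degrees = [local_congestion_degree(o) for o in (peak, offpeak)]
    print([round(d, 3) for d in degrees], round(coordinative_degree(degrees), 3))

In this sketch the TLC would compare the coordinative degree against period thresholds (for example the nonpeak, medium-peak, and peak degrees reported in the case study) when adjusting signal timing; how the paper actually maps degrees to timing decisions is not specified in the abstract.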