Hadoop Data Reduction Framework: Applying Data Reduction at the DFS Layer

Bibliographic Details
Main Authors: Ryan Nathanael Soenjoto Widodo, Hirotake Abe, Kazuhiko Kato
Format: article
Language: EN
Published: IEEE, 2021
Subjects:
Online Access: https://doaj.org/article/7c5afee34aef437296b7e64a59170164
Description
Summary: Big-data processing systems such as Hadoop, which usually rely on distributed file systems (DFSs), require data reduction schemes to maximize storage efficiency. These schemes have different tradeoffs, and no single scheme suits all data, so users must select one appropriate to their data. To accommodate this requirement, application software or file systems (FSs) ship with a fixed selection of schemes. However, a fixed selection cannot cover every data type, and when novel schemes emerge, extending it can be problematic. If the source code of the application or FS is available, it could in principle be extended, but only with extensive labor, and doing so may be virtually impossible without the maintainers' assistance; if the source code is unavailable, there is no way to tackle the problem at all. This paper proposes a previously unexplored solution: a modular DFS design that eases the use of data reduction schemes through existing programming techniques. The advantages of the presented approach are threefold. First, new schemes are easy to add and are transparent to application code, requiring no extensions to it. Second, the modular structure requires minimal modification to existing DFSs and incurs minimal performance overhead. Third, users can compile schemes separately from the DFS, without access to the FS or DFS source code. To demonstrate the design's effectiveness, we implemented it by minimally extending the Hadoop DFS (HDFS) and named the result the Hadoop Data Reduction Framework (HDRF). We designed HDRF for minimal overhead and tested it extensively. Experimental results indicate that it has negligible overhead compared with existing approaches; in a number of cases, the incorporated data reduction schemes allow it to offer up to 48.96% higher throughput while achieving the best storage reduction within our tested setups.
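
The abstract gives only a high-level view of the design, and HDRF's actual interfaces are not reproduced in this record. As a rough, non-authoritative sketch of the core idea, a data reduction scheme that is compiled separately from the DFS and stays invisible to application code, the following minimal Java example (HDFS itself is written in Java) hides a scheme behind a small stream-wrapping interface and loads it by class name at run time. All names here (DataReductionScheme, GzipScheme, SchemeDemo) are hypothetical illustrations, not HDRF's API.

    // Hypothetical sketch, not HDRF's real API: a data reduction scheme is a
    // module that wraps the byte streams a DFS uses to store and retrieve a
    // block, so applications read and write through the DFS unchanged.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    interface DataReductionScheme {
        OutputStream wrapWrite(OutputStream raw) throws IOException; // reduce on store
        InputStream wrapRead(InputStream raw) throws IOException;    // restore on load
    }

    // One concrete scheme; dedup, delta encoding, or domain-specific
    // compressors would implement the same interface and could be compiled
    // separately from the DFS.
    class GzipScheme implements DataReductionScheme {
        public OutputStream wrapWrite(OutputStream raw) throws IOException {
            return new GZIPOutputStream(raw);
        }
        public InputStream wrapRead(InputStream raw) throws IOException {
            return new GZIPInputStream(raw);
        }
    }

    public class SchemeDemo {
        public static void main(String[] args) throws Exception {
            // The DFS could resolve a scheme by name from configuration, so new
            // schemes plug in without touching DFS or application source code.
            DataReductionScheme scheme = (DataReductionScheme)
                Class.forName("GzipScheme").getDeclaredConstructor().newInstance();

            // Write path: the application sees a plain OutputStream.
            ByteArrayOutputStream stored = new ByteArrayOutputStream();
            try (OutputStream out = scheme.wrapWrite(stored)) {
                out.write("block payload".getBytes(StandardCharsets.UTF_8));
            }

            // Read path: the application sees a plain InputStream.
            try (InputStream in = scheme.wrapRead(
                    new ByteArrayInputStream(stored.toByteArray()))) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }

Because the scheme is resolved reflectively by name, it could ship in a user-supplied jar built without the DFS source tree, which is the separability the abstract emphasizes; the write and read paths above never reference the concrete scheme.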