MEML: A Deep Data Augmentation Method by Mean Extrapolation in Middle Layers
Saved in:
Main Authors:
Format: article
Language: EN
Published: IEEE, 2021
Subjects:
Online Access: https://doaj.org/article/434a0ebb58f24674a9e21dac64037db1
Summary: Data augmentation, which generates new data that are similar to but not the same as the original data by applying a series of transformations, is one of the mainstream methods for alleviating the problem of insufficient data. Instead of augmenting input data, this paper proposes a method for augmenting features in the middle layers of deep models, called MEML (Mean Extrapolation in Middle Layers). MEML takes the features output by any middle layer of a deep model and generates new features by performing mean extrapolation on some randomly selected features. It then replaces the selected features with their new counterparts, keeps the labels unchanged, and lets the recomposed output continue to propagate forward. Experiments on two classic deep neural network models and three image datasets show that MEML significantly improves classification accuracy and, in most experiments, outperforms state-of-the-art feature-space augmentation methods such as feature dropout and K-Nearest-Neighbor extrapolation. Interestingly, when coupled with input-space augmentation methods such as rotation and horizontal flipping, MEML further improves the performance of the two deep models on all three datasets, implying that input-space augmentation methods and MEML can complement each other.
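The procedure described in the summary (select a subset of middle-layer features, extrapolate each away from the batch mean, substitute them in place while keeping labels unchanged) can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the exact extrapolation formula, the selection ratio, and the extrapolation strength `lam` are assumptions made for the sake of the example.

```python
import numpy as np

def meml_augment(features, ratio=0.5, lam=0.5, rng=None):
    """Sketch of mean-extrapolation augmentation on middle-layer features.

    features : (batch, dim) array of features from some middle layer.
    ratio    : assumed fraction of features in the batch to replace.
    lam      : assumed extrapolation strength.
    """
    rng = rng or np.random.default_rng(0)
    out = features.copy()
    batch = features.shape[0]
    n_sel = max(1, int(ratio * batch))
    # randomly select which features to replace (labels stay unchanged)
    idx = rng.choice(batch, size=n_sel, replace=False)
    mu = features.mean(axis=0)  # mean feature of the batch
    # extrapolate the selected features away from the batch mean
    out[idx] = features[idx] + lam * (features[idx] - mu)
    return out
```

In a deep model, the returned batch would simply continue to propagate forward through the remaining layers; since only a subset of features is perturbed, the rest of the batch passes through untouched.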