An Improved Self-Training Method for Positive Unlabeled Time Series Classification Using DTW Barycenter Averaging


Bibliographic Details
Main Authors: Jing Li, Haowen Zhang, Yabo Dong, Tongbin Zuo, Duanqing Xu
Format: article
Language: EN
Published: MDPI AG 2021
Subjects:
Online Access:https://doaj.org/article/8e7f01c330cb45f9aa4fc325ef7b54b0
Summary: Traditional supervised time series classification (TSC) tasks assume that all training data are labeled. In practice, however, manually labeling all unlabeled data can be very time-consuming and often requires the participation of skilled domain experts. In this paper, we address the positive unlabeled time series classification problem (<i>PUTSC</i>), which refers to automatically labeling a large unlabeled set <i>U</i> based on a small positive labeled set <i>PL</i>. Self-training (<i>ST</i>) is the most widely used method for solving the <i>PUTSC</i> problem and has attracted increasing attention due to its simplicity and effectiveness. Existing <i>ST</i> methods simply employ the <i>one-nearest-neighbor</i> (<i>1NN</i>) rule to determine which unlabeled time series should be labeled. Nevertheless, we note that the <i>1NN</i> rule may not be optimal for <i>PUTSC</i> tasks because it can be sensitive to initial labeled data located near the boundary between the positive and negative classes. To overcome this issue, we propose an exploratory methodology called <i>ST-average</i>. Unlike conventional <i>ST</i>-based approaches, <i>ST-average</i> labels the data using the average sequence computed by the DTW barycenter averaging (DBA) technique. Compared with any individual sequence in the <i>PL</i> set, the average sequence is more representative. Our proposal is insensitive to the initial labeled data and is more reliable than existing <i>ST</i>-based methods. Moreover, we demonstrate that <i>ST-average</i> can naturally be combined with many existing techniques used in original <i>ST</i>. Experimental results on public datasets show that <i>ST-average</i> outperforms related popular methods.
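The labeling step described in the abstract can be sketched as follows. This is a minimal, pure-Python illustration (not the authors' code): it implements classic DTW via dynamic programming, a basic DBA loop, and a hypothetical helper `st_average_label` that labels the unlabeled sequences closest (by DTW) to the DBA average of the positive labeled set <i>PL</i>.

```python
import math

def dtw_path(a, b):
    """Classic DTW by dynamic programming; returns (distance, warping path)."""
    n, m = len(a), len(b)
    INF = float('inf')
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrack the optimal warping path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j],     i - 1, j),
                      (cost[i][j - 1],     i,     j - 1))
    path.reverse()
    return math.sqrt(cost[n][m]), path

def dba(sequences, iters=10):
    """DTW Barycenter Averaging: start from one sequence, iteratively refine.

    Each iteration aligns every sequence to the current average via DTW,
    then updates each average point to the mean of its aligned points.
    """
    avg = list(sequences[0])
    for _ in range(iters):
        buckets = [[] for _ in avg]  # points aligned to each average index
        for seq in sequences:
            _, path = dtw_path(avg, seq)
            for i, j in path:
                buckets[i].append(seq[j])
        avg = [sum(b) / len(b) for b in buckets]
    return avg

def st_average_label(PL, U, n_to_label=1):
    """ST-average-style labeling step (sketch): pick the n unlabeled
    sequences with the smallest DTW distance to the DBA average of PL."""
    avg = dba(PL)
    return sorted(U, key=lambda s: dtw_path(avg, s)[0])[:n_to_label]
```

On a toy example, `st_average_label([[0,0,1,0,0], [0,1,1,0,0]], [[0,0,1,1,0], [5,5,5,5,5]])` selects the bump-shaped sequence rather than the flat outlier, since the DBA average of the positive set is itself bump-shaped. A full self-training loop would move the newly labeled sequences from <i>U</i> into <i>PL</i> and repeat.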