• Title/Summary/Keyword: Time Series Data Processing

A novel window strategy for concept drift detection in seasonal time series (계절성 시계열 자료의 concept drift 탐지를 위한 새로운 창 전략)

  • Do Woon Lee;Sumin Bae;Kangsub Kim;Soonhong An
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.377-379
    • /
    • 2023
  • Concept drift detection on data streams is a major issue in maintaining the performance of machine learning models. Because an online stream is a function of time, classical statistical methods are hard to apply directly. In the particular case of seasonal time series, however, a novel window strategy based on Fourier analysis gives a chance to adapt the classical methods to the series. We explore the KS-test as an adaptation to periodic time series and show that this strategy lets a complicated time series be handled like an ordinary tabular dataset. We verify that detection with this strategy takes second place in time delay and shows the best performance in false alarm rate and detection accuracy compared with arbitrary window sizes.
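The abstract leaves the window construction implicit. As a rough sketch of the idea (not the authors' code), the snippet below estimates the dominant season with an FFT and then runs a two-sample KS-test on two period-aligned windows; the function names and the significance threshold are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def estimate_period(x):
    """Estimate the dominant period of a seasonal series from the FFT peak."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x))
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return int(round(1.0 / dominant))

def drift_detected(reference, current, alpha=0.05):
    """Two-sample KS-test between a reference window and the current window."""
    _, p_value = ks_2samp(reference, current)
    return p_value < alpha

# Windows sized to one full season make the two samples comparable.
rng = np.random.default_rng(0)
t = np.arange(600)
stream = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(600)
period = estimate_period(stream)                     # ~50 here
print(drift_detected(stream[:period], stream[-period:]))
```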

Time Series Data Cleaning Method Based on Optimized ELM Prediction Constraints

  • Guohui Ding;Yueyi Zhu;Chenyang Li;Jinwei Wang;Ru Wei;Zhaoyu Liu
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.149-163
    • /
    • 2023
  • Errors in time series data collected by sensors are common because of external factors. Cleaning them with the traditional method of constraining the rate of speed change performs well, but fixed constraint rules limit it to data whose speed changes steadily, whereas data with unevenly changing speed are common in practice. To solve this problem, this paper proposes an online cleaning algorithm for time series data based on dynamic speed-change-rate constraints. Since time series data usually change periodically, we use an extreme learning machine (ELM) to learn the law of speed changes from past data and to predict time-varying speed ranges with which the data are checked. To enable online repair, a dual-window mechanism is proposed that transforms the global optimum into a local one, and the traditional minimum-change principle and the median theorem are applied when selecting the repair strategy. Because a repair method based on the minimum-change principle alone cannot correct consecutive abnormal points, quantitative analysis suggests that the repair value should lie on the boundary of the repair candidate set. Experimental results on the datasets show that the proposed method achieves a better repair effect.
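The ELM predictor and the dual-window mechanism are not spelled out in the abstract, so the sketch below only illustrates the core constraint idea: points whose speed leaves a time-varying band are clamped to the nearest band boundary, matching the abstract's observation that repairs should sit on the boundary of the candidate set. In the paper the band would come from the trained ELM; here it is simply passed in.

```python
import numpy as np

def clean_by_speed_constraint(values, times, speed_min, speed_max):
    """Repair points whose speed (first difference over elapsed time)
    falls outside a per-step [speed_min[i], speed_max[i]] band by clamping
    them to the nearest feasible value (minimum-change repair)."""
    repaired = values.astype(float).copy()
    for i in range(1, len(values)):
        dt = times[i] - times[i - 1]
        speed = (repaired[i] - repaired[i - 1]) / dt
        if speed < speed_min[i]:
            repaired[i] = repaired[i - 1] + speed_min[i] * dt
        elif speed > speed_max[i]:
            repaired[i] = repaired[i - 1] + speed_max[i] * dt
    return repaired

times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
values = np.array([1.0, 1.2, 9.0, 1.6, 1.8])   # one obvious spike
band_lo = np.full(5, -0.5)                     # would come from the ELM
band_hi = np.full(5, 0.5)
print(clean_by_speed_constraint(values, times, band_lo, band_hi))
```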

The Data Processing Method for Small Samples and Multi-variates Series in GPS Deformation Monitoring

  • Liu, Guo-Lin;Zheng, Wen-Hua;Wang, Xin-Zhou;Zhang, Lian-Peng
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.1
    • /
    • pp.185-189
    • /
    • 2006
  • Time series analysis is an often effective method for model construction and prediction in the data processing of deformation monitoring. To construct a time series model accurately and achieve reliable prediction, the monitoring data sample should be as large as possible and the time intervals roughly equal. In the project practice of GPS deformation monitoring, however, not much monitoring data can be obtained, the time intervals are unequal because of all kinds of restricting factors, and the deformation model moreover involves many variates. It is therefore important to study data processing methods for small-sample, multi-variate time series in GPS deformation monitoring. A new method of establishing small-sample, multi-variate deformation and prediction models is put forward to resolve the contradiction between small samples and many variates encountered in constructing deformation models and to improve earlier data processing methods for deformation monitoring. Based on systems theory, the deformation body is regarded as a whole organism; a time-dependent linear system model and a time-dependent bilinear system model are established, and the dynamic parameters are estimated by means of prediction fitting and a least-information-distribution criterion. A final example demonstrates the validity and practicality of the method.
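The abstract does not specify the estimator, so the following is a loosely related illustration only: when observations are few relative to the number of variates, a regularized least-squares fit keeps a time-dependent linear model solvable. Everything below (the data, the ridge penalty, the model form) is a hypothetical stand-in, not the paper's prediction-fit/least-information-distribution estimator.

```python
import numpy as np

def fit_small_sample(X, y, lam=1.0):
    """Ridge solution (X'X + lam*I)^(-1) X'y; the penalty keeps the fit
    stable when samples are scarce relative to the number of variates."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

t = np.array([0.0, 1.0, 2.5, 4.0, 7.0])    # unequal observation epochs
X = np.column_stack([np.ones_like(t), t])  # intercept + linear time term
y = np.array([0.1, 0.4, 0.9, 1.4, 2.6])    # deformation observations
print(fit_small_sample(X, y, lam=0.1))
```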

Reverse Engineering of a Gene Regulatory Network from Time-Series Data Using Mutual Information

  • Barman, Shohag;Kwon, Yung-Keun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.11a
    • /
    • pp.849-852
    • /
    • 2014
  • Reverse engineering of gene regulatory networks is a challenging task in computational biology. Detecting regulatory relationships among genes from time series data is called reverse engineering. It helps discover the architecture of the underlying gene regulatory network and gives insight into disease processes, biological processes, and drug discovery. Many statistical approaches are available for reverse engineering gene regulatory networks. In this paper, we propose pairwise mutual information for reverse engineering a gene regulatory network from time series data. First, we create random Boolean networks with the well-known Erdős-Rényi model. Second, we generate artificial time series data from that network. Then we calculate pairwise mutual information to predict the network. We implemented our system on the Java platform and used the Cytoscape 2.8.0 plugin to visualize the random Boolean network graphically.
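The authors' implementation is in Java; the sketch below redoes the same pipeline shape in Python under stated assumptions. The Boolean update rule (parity of the regulators' previous states) is an arbitrary stand-in, since the abstract does not specify the network's Boolean functions.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)

# 1. Random network topology from the Erdős-Rényi model.
g = nx.gnp_random_graph(8, 0.3, seed=1, directed=True)

# 2. Artificial Boolean time series: each gene takes the parity of its
#    regulators' previous states; genes without regulators flip at random.
n_steps, n_genes = 200, g.number_of_nodes()
states = np.zeros((n_steps, n_genes), dtype=int)
states[0] = rng.integers(0, 2, n_genes)
for t in range(1, n_steps):
    for gene in g.nodes:
        parents = list(g.predecessors(gene))
        states[t, gene] = (states[t - 1, parents].sum() % 2 if parents
                           else rng.integers(0, 2))

# 3. Pairwise mutual information between each gene and every other gene's
#    lagged series; a high score suggests a regulatory link.
mi = np.zeros((n_genes, n_genes))
for i in range(n_genes):
    for j in range(n_genes):
        mi[i, j] = mutual_info_score(states[:-1, i], states[1:, j])
print(np.round(mi, 2))
```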

Time Series Classification of Cryptocurrency Price Trend Based on a Recurrent LSTM Neural Network

  • Kwon, Do-Hyung;Kim, Ju-Bong;Heo, Ju-Sung;Kim, Chan-Myung;Han, Youn-Hee
    • Journal of Information Processing Systems
    • /
    • v.15 no.3
    • /
    • pp.694-706
    • /
    • 2019
  • In this study, we applied the long short-term memory (LSTM) model to classify cryptocurrency price time series. We collected historical cryptocurrency price time series data and preprocessed them so they were clean for use as training and target data. After this preprocessing, the price time series data were systematically encoded into a three-dimensional price tensor representing the past price changes of cryptocurrencies. We also present our LSTM model structure and how to use this price tensor as input to the LSTM model. In particular, a grid-search-based k-fold cross-validation technique was applied to find the most suitable LSTM model parameters. Lastly, through a comparison of f1-score values, our study shows that the LSTM model outperforms the gradient boosting (GB) model, a general machine learning model known to have relatively good prediction performance, for time series classification of the cryptocurrency price trend. With the LSTM model, we obtained a performance improvement of about 7% over the GB model.
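As a minimal, hypothetical illustration of the model shape described above (random placeholder data; the real work encodes prices into the three-dimensional tensor and tunes hyperparameters by grid-searched k-fold cross-validation), a Keras LSTM binary classifier over windows of past features might look like this:

```python
import numpy as np
import tensorflow as tf

# Assumed shapes: 30 past time steps x 4 features per sample, up/down label.
n_samples, n_steps, n_features = 1000, 30, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((n_samples, n_steps, n_features)).astype("float32")
y = rng.integers(0, 2, n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_features)),
    tf.keras.layers.LSTM(64),                       # summarizes the window
    tf.keras.layers.Dense(1, activation="sigmoid")  # P(price goes up)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```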

Performance Comparison of LSTM-Based Groundwater Level Prediction Model Using Savitzky-Golay Filter and Differential Method (Savitzky-Golay 필터와 미분을 활용한 LSTM 기반 지하수 수위 예측 모델의 성능 비교)

  • Keun-San Song;Young-Jin Song
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.84-89
    • /
    • 2023
  • In water resource management, data prediction is performed using artificial intelligence, and companies, governments, and institutions continue to attempt to manage resources efficiently this way. LSTM is a model specialized for processing time series data; it can identify data patterns that change over time and has been applied to predicting groundwater level data. However, groundwater level data can contain sensor errors, missing values, or outliers, and these problems can degrade the performance of an LSTM model, so the data quality needs to be improved in the preprocessing stage. Therefore, in predicting groundwater data, we compare LSTM models by MSE, before and after distribution-based normalization, and discuss the important process of analysis and data preprocessing according to the comparison results and the changes in those results.
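The title names the Savitzky-Golay filter and differentiation as the preprocessing steps. A minimal sketch of that step with SciPy (the window length and polynomial order are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
level = np.sin(t) + 0.2 * rng.standard_normal(500)  # noisy "groundwater level"

# Smooth the series, and also take its first derivative, before feeding
# either version to an LSTM.
smoothed = savgol_filter(level, window_length=21, polyorder=3)
d_level = savgol_filter(level, window_length=21, polyorder=3, deriv=1)
```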

Introduction and Utilization of Time Series Data Integration Framework with Different Characteristics (서로 다른 특성의 시계열 데이터 통합 프레임워크 제안 및 활용)

  • Hwang, Jisoo;Moon, Jaewon
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.872-884
    • /
    • 2022
  • With the development of the IoT industry, different types of time series data are being generated in various industries, and research has evolved toward reproducing and utilizing them through re-integration. In addition, owing to data processing speed and the constraints of the systems that use the data in real industry, there is a growing tendency to compress time series data before integrating them. However, guidelines for integrating time series data are not clear, and because characteristics such as the description time interval and time span differ between series, batch integration is difficult to use directly. In this paper, two integration methods are proposed, based on how the integration criteria are set and on the problems that arise while integrating time series data. On this basis, an integration framework for heterogeneous time series data was constructed that considers the characteristics of time series data, and it was confirmed that compressed heterogeneous time series data can be integrated and used for various machine learning tasks.
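The paper's two integration methods are not detailed in the abstract; as a generic sketch of the underlying problem, the snippet below aligns two series sampled at different intervals onto one common grid with pandas before they are handed to downstream models:

```python
import numpy as np
import pandas as pd

# Two series with different sampling intervals (1 min vs. 15 s).
idx_a = pd.date_range("2022-01-01", periods=60, freq="1min")
idx_b = pd.date_range("2022-01-01", periods=240, freq="15s")
a = pd.Series(np.random.randn(60), index=idx_a, name="sensor_a")
b = pd.Series(np.random.randn(240), index=idx_b, name="sensor_b")

# Integration criterion: resample both onto a shared 1-minute grid
# (downsampling by mean), then join on the common index.
merged = pd.concat(
    [a.resample("1min").mean(), b.resample("1min").mean()], axis=1
).dropna()
print(merged.head())
```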

Queuing Time Computation Algorithm for Sensor Data Processing in Real-time Ubiquitous Environment (실시간 유비쿼터스 환경에서 센서 데이터 처리를 위한 대기시간 산출 알고리즘)

  • Kang, Kyung-Woo;Kwon, Oh-Byung
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.1-16
    • /
    • 2011
  • A real-time ubiquitous environment must be able to process a series of sensor data within a limited time. Sensor data processing as a whole consists of several phases: getting data from the sensors, acquiring context, and responding to users. The ubiquitous computing middleware becomes aware of the context using the input sensor data together with data from a database or knowledge base, makes a decision suitable for the context, and produces a response according to that decision. When a real-time ubiquitous environment receives a set of sensor data as input, it needs to be able to estimate the delay time of the sensor data, considering the available resources and the data's priority, in order to schedule the series of sensor data; sensor data of higher priority can also preempt the processing of preceding sensor data. Research on such decision making is not yet vibrant. In this paper, we propose a queuing time computation algorithm for sensor data processing in a real-time ubiquitous environment.
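The paper's algorithm itself is not reproduced in the abstract. As a simplified, non-preemptive stand-in (the paper additionally allows high-priority data to interrupt in-progress work), the sketch below computes each job's queuing time on a single resource where waiting jobs are served in priority order:

```python
import heapq

def queuing_times(jobs):
    """jobs: (priority, arrival_time, processing_time, job_id) tuples,
    smaller priority number = served first.  Returns queuing time per job."""
    pending = sorted(jobs, key=lambda j: j[1])   # order by arrival
    heap, now, i, waits = [], 0.0, 0, {}
    while heap or i < len(pending):
        if not heap:                             # resource idle: jump ahead
            now = max(now, pending[i][1])
        while i < len(pending) and pending[i][1] <= now:
            heapq.heappush(heap, pending[i])     # arrivals join the queue
            i += 1
        prio, arrival, proc, jid = heapq.heappop(heap)
        waits[jid] = now - arrival               # time spent queued
        now += proc                              # occupy the resource
    return waits

print(queuing_times([(2, 0.0, 3.0, "a"), (1, 1.0, 2.0, "b"), (3, 1.5, 1.0, "c")]))
# {'a': 0.0, 'b': 2.0, 'c': 3.5}
```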

QP-DTW: Upgrading Dynamic Time Warping to Handle Quasi Periodic Time Series Alignment

  • Boulnemour, Imen;Boucheham, Bachir
    • Journal of Information Processing Systems
    • /
    • v.14 no.4
    • /
    • pp.851-876
    • /
    • 2018
  • Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) and its derivatives, no other technique can handle the alignment of this type of very complex time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution is elevating DTW's power of alignment from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method, quasi-periodic dynamic time warping (QP-DTW), was compared to both the SEA and DTW methods on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy at both the qualitative and quantitative levels. QP-DTW would therefore potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, motif discovery, and classification).
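QP-DTW itself is not published in the abstract; for orientation, here is the classic DTW recurrence it builds on, written out in NumPy. This baseline is exactly what degrades on quasi-periodic series and what the paper improves by combining it with SEA-style reordering.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

a = np.sin(np.linspace(0, 2 * np.pi, 80))
b = np.sin(np.linspace(0, 2 * np.pi, 100))  # same shape, different length
print(dtw_distance(a, b))                   # small despite the length gap
```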

A Pre-processing Process Using TadGAN-based Time-series Anomaly Detection (TadGAN 기반 시계열 이상 탐지를 활용한 전처리 프로세스 연구)

  • Lee, Seung Hoon;Kim, Yong Soo
    • Journal of Korean Society for Quality Management
    • /
    • v.50 no.3
    • /
    • pp.459-471
    • /
    • 2022
  • Purpose: The purpose of this study was to increase prediction accuracy over the anomaly intervals identified by an artificial-intelligence-based time series anomaly detection technique by establishing a pre-processing process. Methods: Significant variables were extracted by applying feature selection techniques, and anomalies were derived using the TadGAN time series anomaly detection algorithm. After applying machine learning and deep learning methodologies to the normal sections of the data (excluding the anomaly sections), the explanatory power of the anomaly sections was demonstrated through performance comparison. Results: Among the machine learning methodologies, performance was best when SHAP and TadGAN were applied; among the deep learning methodologies, performance was excellent when the chi-square test and TadGAN were applied. Compared with a paper that applied a conventional methodology to the same data, performance improved by 15% for MLR, 24% for random forest, 30% for XGBoost, 73% for lasso regression, 17% for LSTM, and 19% for GRU. Conclusion: Based on the proposed process, when performing unsupervised anomaly detection on unlabeled data in fields such as cybersecurity, finance, behavior patterns, and SNS, it is expected to prove the accuracy and explainability of the detected anomaly sections and to improve model performance.
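TadGAN and SHAP are real tools, but their APIs are not shown in the abstract, so the sketch below keeps them abstract: `anomaly_mask` stands in for whatever intervals the detector flagged, and the pipeline simply selects features and trains on the remaining normal sections, which is the process the paper describes. All names and data here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.random((500, 10))                  # placeholder sensor features
y = X[:, 0] * 2 + rng.normal(0, 0.1, 500)  # placeholder target
anomaly_mask = rng.random(500) < 0.05      # pretend the detector flagged ~5%

# Feature selection on the normal sections only (chi2 expects non-negative
# features and a discrete target, so the target is binarized here).
X_norm, y_norm = X[~anomaly_mask], y[~anomaly_mask]
selector = SelectKBest(chi2, k=5).fit(X_norm, y_norm > y_norm.mean())
X_sel = selector.transform(X_norm)

# Train on the cleaned, reduced data; the anomalous intervals stay held out.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_sel, y_norm)
```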