• Title/Abstract/Keyword: Time Series Data Processing


A Novel Window Strategy for Concept Drift Detection in Seasonal Time Series

  • 이도운;배수민;김강섭;안순홍
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Korea Information Processing Society 2023 Spring Conference
    • /
    • pp.377-379
    • /
    • 2023
  • Concept drift detection on data streams is a major issue in maintaining the performance of machine learning models. Since an online stream is a function of time, classical statistical methods are hard to apply. In the particular case of seasonal time series, however, a novel window strategy based on Fourier analysis gives a chance to adapt the classical methods to the series. We explore the KS-test for adaptation to periodic time series and show that this strategy lets a complicated time series be handled as an ordinary tabular dataset. We verify that detection with this strategy takes second place in time delay and shows the best performance in false alarm rate and detection accuracy compared with arbitrary window sizes.
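The windowing idea in this abstract can be illustrated with a small sketch: choose the window length equal to the seasonal period (estimated via Fourier analysis) so that consecutive windows are phase-aligned, then compare them with a two-sample KS-test. The signal, injected drift point, and thresholds below are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
period = 50
t = np.arange(600)
series = np.sin(2 * np.pi * t / period) + rng.normal(0, 0.1, t.size)
series[400:] += 1.5  # inject a mean shift (concept drift)

# Estimate the dominant period from an initial reference segment via Fourier analysis
ref_seg = series[:300] - series[:300].mean()
spectrum = np.abs(np.fft.rfft(ref_seg))
freqs = np.fft.rfftfreq(ref_seg.size)
est_period = int(round(1 / freqs[spectrum[1:].argmax() + 1]))

# Slide two phase-aligned windows, one period apart, and apply the KS-test
drift_points = []
for start in range(0, series.size - 2 * est_period, est_period):
    ref = series[start:start + est_period]
    test = series[start + est_period:start + 2 * est_period]
    if ks_2samp(ref, test).pvalue < 0.01:
        drift_points.append(start + est_period)
```

Because each window spans exactly one period, the seasonal component contributes the same distribution to both windows, so the KS-test reacts only to a genuine change in distribution.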

Time Series Data Cleaning Method Based on Optimized ELM Prediction Constraints

  • Guohui Ding;Yueyi Zhu;Chenyang Li;Jinwei Wang;Ru Wei;Zhaoyu Liu
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 2
    • /
    • pp.149-163
    • /
    • 2023
  • Errors are common in time series data collected by sensors affected by external factors. The traditional method of constraining the rate of speed change can clean such errors with good performance, but because its constraint rules are fixed, it is limited to data whose speed changes steadily, whereas data with an uneven rate of change is common in practice. To solve this problem, this paper proposes an online cleaning algorithm for time series data based on dynamic speed-change-rate constraints. Since time series data usually change periodically, we use an extreme learning machine to learn the law of speed changes from past data and predict time-varying speed ranges with which to detect the data. To realize online repair, a dual-window mechanism is proposed that transforms the global optimum into a local one, and the traditional minimum-change principle and the median theorem are applied in selecting the repair strategy. Because a repair method based on the minimum-change principle cannot correct consecutive abnormal points, quantitative analysis suggests that the repair value should lie on the boundary of the repair candidate set. Experimental results on the dataset show that the proposed method achieves a better repair effect.
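The fixed-rate baseline the paper improves on can be sketched in a few lines: each point must stay within a speed band relative to its (already repaired) predecessor, and a violating point is repaired to the boundary of its candidate interval, following the minimum-change principle. The constants here are invented; in the paper the speed range itself is predicted by an ELM and varies over time.

```python
import numpy as np

def clean_speed_constraint(values, times, s_min, s_max):
    """Repair points whose speed (value change / time gap) leaves the
    allowed range [s_min, s_max]; a violating point is clamped to the
    boundary of its candidate interval (minimum-change principle)."""
    repaired = np.asarray(values, dtype=float).copy()
    for i in range(1, len(repaired)):
        dt = times[i] - times[i - 1]
        lo = repaired[i - 1] + s_min * dt  # smallest admissible value
        hi = repaired[i - 1] + s_max * dt  # largest admissible value
        repaired[i] = min(max(repaired[i], lo), hi)
    return repaired
```

Replacing the constants `s_min` and `s_max` with per-point predictions is what turns this into the dynamic constraint the abstract describes.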

The Data Processing Method for Small Samples and Multi-variates Series in GPS Deformation Monitoring

  • Guo-Lin, Liu;Wen-Hua, Zheng;Xin-Zhou, Wang;Lian-Peng, Zhang
    • Korean Institute of Navigation and Port Research: Conference Proceedings
    • /
    • Korean Institute of Navigation and Port Research 2006 International Symposium on GPS/GNSS Vol. 1
    • /
    • pp.185-189
    • /
    • 2006
  • Time series analysis is a frequently effective method for model construction and prediction in the data processing of deformation monitoring. To construct an accurate time series model and achieve reliable prediction, the monitoring sample must be as large as possible and the time intervals roughly equal. In the practice of GPS deformation monitoring, however, not many samples can be obtained and the time intervals are unequal because of various restricting factors, and the deformation model moreover contains many variates. It is therefore important to study data processing methods for small-sample, multi-variate time series in GPS deformation monitoring. A new method of establishing small-sample, multi-variate deformation and prediction models is put forward to resolve the contradiction of small samples and many variates encountered in constructing deformation models, and to improve the former data processing method of deformation monitoring. Based on system theory, a deformation body is regarded as a whole organism; a time-dependent linear system model and a time-dependent bilinear system model are established. The dynamic parameter estimation is derived by means of prediction fit and least-information-distribution criteria. A final example demonstrates the validity and practicality of this method.


Reverse Engineering of a Gene Regulatory Network from Time-Series Data Using Mutual Information

  • Barman, Shohag;Kwon, Yung-Keun
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Korea Information Processing Society 2014 Fall Conference
    • /
    • pp.849-852
    • /
    • 2014
  • Reverse engineering of gene regulatory networks is a challenging task in computational biology. Detecting regulatory relationships among genes from time series data is called reverse engineering; it helps to discover the architecture of the underlying gene regulatory network and gives insight into disease processes, biological processes, and drug discovery. Many statistical approaches are available for reverse engineering gene regulatory networks. In our paper, we propose pairwise mutual information for reverse engineering a gene regulatory network from time series data. First, we create random Boolean networks with the well-known Erdős-Rényi model. Second, we generate artificial time series data from that network. Then we calculate pairwise mutual information to predict the network. We implement our system on the Java platform and use the Cytoscape 2.8.0 plugin to visualize the random Boolean network graphically.
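The core statistic is easy to reproduce for Boolean time series: estimate the joint and marginal state probabilities of two genes and sum the pointwise log-ratios. Everything below, including the two toy state sequences, is an illustrative sketch, not the authors' Java implementation.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information in bits between two binary gene-state sequences."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))  # joint probability
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi
```

Computing this score for every gene pair and keeping the highest-scoring pairs yields the predicted regulatory edges.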

Time Series Classification of Cryptocurrency Price Trend Based on a Recurrent LSTM Neural Network

  • Kwon, Do-Hyung;Kim, Ju-Bong;Heo, Ju-Sung;Kim, Chan-Myung;Han, Youn-Hee
    • Journal of Information Processing Systems
    • /
    • Vol. 15, No. 3
    • /
    • pp.694-706
    • /
    • 2019
  • In this study, we applied the long short-term memory (LSTM) model to classify cryptocurrency price time series. We collected historical cryptocurrency price time series data and preprocessed them for use as training and target data. After preprocessing, the price time series were systematically encoded into a three-dimensional price tensor representing the past price changes of cryptocurrencies. We also present our LSTM model structure and how the price tensor is used as input to the model. In particular, a grid search-based k-fold cross-validation technique was applied to find the most suitable LSTM model parameters. Lastly, through a comparison of F1-scores, our study shows that the LSTM model outperforms the gradient boosting (GB) model, a general machine learning model known to have relatively good prediction performance, for time series classification of the cryptocurrency price trend; with the LSTM model we obtained a performance improvement of about 7% over the GB model.
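The "three-dimensional price tensor" step can be sketched independently of the model: slide a window over the past price changes and stack the windows into a (samples, timesteps, features) array, the standard LSTM input shape. The relative-return encoding and up/down target below are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

def to_price_tensor(prices, window):
    """Encode a 1-D price series as a 3-D tensor of past price changes."""
    returns = np.diff(prices) / prices[:-1]                # relative changes
    windows = [returns[i:i + window]
               for i in range(len(returns) - window)]
    X = np.stack(windows)[..., np.newaxis]                 # add feature axis
    y = (returns[window:] > 0).astype(int)                 # 1 = next move up
    return X, y

prices = np.linspace(1.0, 20.0, 20)   # toy monotone price series
X, y = to_price_tensor(prices, window=5)
```

With real data the feature axis would hold several channels (for example, prices of multiple cryptocurrencies), which is what makes the tensor genuinely three-dimensional.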

Performance Comparison of LSTM-Based Groundwater Level Prediction Models Using Savitzky-Golay Filter and Differential Method

  • 송근산;송영진
    • Journal of the Semiconductor & Display Technology
    • /
    • Vol. 22, No. 3
    • /
    • pp.84-89
    • /
    • 2023
  • In water resource management, data prediction is performed using artificial intelligence, and companies, governments, and institutions continue to attempt to manage resources efficiently through it. LSTM is a model specialized for processing time series data; it can identify data patterns that change over time and has been applied to predicting groundwater level data. However, groundwater level data can contain sensor errors, missing values, or outliers, and these problems can degrade the performance of an LSTM model, so data quality needs to be improved by handling them in the preprocessing stage. Therefore, in predicting groundwater data, we compare LSTM models by MSE before and after distribution-based normalization, and discuss the important role of analysis and data preprocessing according to the comparison results and the changes they produce.
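Both preprocessing tools named in the title are available in SciPy: `savgol_filter` smooths by fitting a local polynomial, and the same call with `deriv=1` returns the differentiated series. The synthetic noisy signal below is an assumption standing in for groundwater level data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
level = np.sin(t) + rng.normal(0, 0.3, t.size)  # noisy stand-in for level data

# Savitzky-Golay smoothing: local cubic fit over a 21-sample window
smoothed = savgol_filter(level, window_length=21, polyorder=3)

# First derivative of the smoothed signal, scaled by the sample spacing
slope = savgol_filter(level, window_length=21, polyorder=3,
                      deriv=1, delta=t[1] - t[0])
```

Feeding `smoothed` (or `slope`) instead of the raw series into the LSTM is the kind of preprocessing variant the paper compares.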


Introduction and Utilization of a Time Series Data Integration Framework for Data with Different Characteristics

  • 황지수;문재원
    • Journal of Broadcast Engineering
    • /
    • Vol. 27, No. 6
    • /
    • pp.872-884
    • /
    • 2022
  • With the growth of the IoT industry, different industry sectors generate time series data in different forms, and research has evolved toward integrating, reproducing, and utilizing these data. In addition, owing to issues of data processing speed and the systems that consume the data in real industry, there is a growing tendency to compress time series data before integrated use. However, there are no clear guidelines for integrating time series data, and because each series differs in characteristics such as recording interval and time span, they are difficult to integrate and use at once. This paper presents two integration methods, based on how integration criteria are set and on the problems that arise when integrating time series data. On this basis we built a framework for integrating heterogeneous time series data that takes their characteristics into account, and confirmed that it can integrate compressed heterogeneous time series data and be used for various kinds of machine learning.
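One concrete integration criterion the abstract alludes to, differing recording intervals, can be handled by resampling every series onto a common time grid before merging. The two toy series and the 2-minute grid below are assumptions for illustration.

```python
import pandas as pd

# Two series recorded at different intervals (1 min vs. 2 min)
a = pd.Series(range(10),
              index=pd.date_range("2022-01-01", periods=10, freq="min"))
b = pd.Series(range(5),
              index=pd.date_range("2022-01-01", periods=5, freq="2min"))

# Integrate on a common 2-minute grid: downsample a by averaging, align b
merged = pd.DataFrame({
    "a": a.resample("2min").mean(),
    "b": b,
})
```

Choosing the coarser interval as the common grid (with an aggregation rule such as the mean) also compresses the finer series, matching the compressed-integration tendency the paper notes.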

Queuing Time Computation Algorithm for Sensor Data Processing in a Real-time Ubiquitous Environment

  • 강경우;권오병
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 17, No. 1
    • /
    • pp.1-16
    • /
    • 2011
  • A real-time ubiquitous environment must recognize context from sensor data and respond appropriately to the user, all within a limited time. Overall sensor data processing follows the steps of acquiring data from sensors, obtaining context information, and responding to the user. That is, ubiquitous computing middleware recognizes the context using incoming sensor data together with data from a database or knowledge base, and reacts suitably to that context. Because of the real-time setting, when sensor data arrive the middleware must search the available resources and decide how long the data must wait before being processed there. When newly arriving sensor data have a higher processing priority, it must also decide when to suspend the data currently being processed and how long to queue them. Research on this kind of decision making is not yet active. This paper therefore proposes an algorithm that computes the optimal queuing time for each task and schedules the tasks when the ubiquitous middleware is already processing sensor data and must simultaneously handle new sensor data.
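A minimal version of the queuing-time decision can be written down directly: a newly arrived task either preempts the current one (if its priority is higher) or waits for the current task's remaining time plus all queued tasks of equal or higher priority. This simplified rule is my illustration; the paper's algorithm computes an optimal queuing time, not just this estimate.

```python
def queuing_time(new_priority, current_remaining, current_priority, queue):
    """Estimate how long a newly arrived sensor task waits before starting.
    queue holds (priority, processing_time) pairs; a lower number means
    a higher priority, and a higher-priority arrival preempts."""
    wait = 0.0
    if new_priority >= current_priority:  # cannot preempt: wait it out
        wait += current_remaining
    # Wait behind queued tasks of equal or higher priority
    wait += sum(p_time for prio, p_time in queue if prio <= new_priority)
    return wait
```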

QP-DTW: Upgrading Dynamic Time Warping to Handle Quasi Periodic Time Series Alignment

  • Boulnemour, Imen;Boucheham, Bachir
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 4
    • /
    • pp.851-876
    • /
    • 2018
  • Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) and its derivatives, no other technique is able to handle the alignment of this very complex type of time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution consists in elevating the alignment power of DTW from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method (quasi-periodic dynamic time warping [QP-DTW]) was compared to both SEA and DTW on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy on both qualitative and quantitative levels. QP-DTW would therefore potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, motif discovery, classification, etc.).
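For readers unfamiliar with the baseline, classic DTW is a short dynamic program: the cumulative cost of aligning two prefixes is the local distance plus the cheapest of the three predecessor cells. This textbook sketch is the plain DTW the paper starts from, not the proposed QP-DTW.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Its weakness on quasi-periodic series, which QP-DTW addresses, is that nothing stops the warping path from collapsing whole periods onto single points.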

A Pre-processing Process Using TadGAN-based Time-series Anomaly Detection

  • 이승훈;김용수
    • Journal of Korean Society for Quality Management
    • /
    • Vol. 50, No. 3
    • /
    • pp.459-471
    • /
    • 2022
  • Purpose: The purpose of this study was to increase prediction accuracy for anomaly intervals identified by an artificial intelligence-based time series anomaly detection technique by establishing a pre-processing process. Methods: Significant variables were extracted by applying feature selection techniques, and anomalies were derived using the TadGAN time series anomaly detection algorithm. After applying machine learning and deep learning methodologies using normal-section data (excluding anomaly sections), the explanatory power of the anomaly sections was demonstrated through performance comparison. Results: Among the machine learning methodologies, performance was best when SHAP and TadGAN were applied; in deep learning, performance was excellent when the chi-square test and TadGAN were applied. Compared with a paper that applied a conventional methodology to the same data, performance improved significantly: by 15% for MLR, 24% for random forest, 30% for XGBoost, 73% for lasso regression, 17% for LSTM, and 19% for GRU. Conclusion: Based on the proposed process, when performing unsupervised anomaly detection on data that are not actually labeled in various fields such as cyber security, finance, behavior patterns, and SNS, the process is expected to prove the accuracy and explainability of the detected anomaly intervals and to improve model performance.
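The "normal-section data (excluding anomaly sections)" step is simple to sketch: mask out the intervals the detector flagged before handing the series to the downstream model. The toy series and interval bounds below are assumptions; in the paper the intervals come from TadGAN.

```python
import numpy as np

values = np.arange(100, dtype=float)       # toy series standing in for real data
anomaly_intervals = [(20, 30), (70, 75)]   # (start, end) indices from a detector

# Keep only the normal sections for model training
mask = np.ones(values.size, dtype=bool)
for start, end in anomaly_intervals:
    mask[start:end] = False
train_values = values[mask]
```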