• Title/Summary/Keyword: time-series graph


A Simulation Study of IT Diffusion by Using System Dynamics (시스템 다이내믹스를 활용한 정보 기술 수용에 대한 동태적 모형 개발 - 휴대 전화 사용을 중심으로 -)

  • Han, Sang-Jun; Lee, Sang-Gun
    • CRM연구 / v.1 no.1 / pp.49-69 / 2006
  • Previous studies based on the Technology Acceptance Model (TAM) and the Post Acceptance Model (PAM) have limitations with respect to time series analysis. To address this, we used system dynamics as the research methodology and designed a simulation model based on TAM and PAM. Moreover, we designed a new simulation model that can analyze time series data on changes in customer demand from initial acceptance to post acceptance. This study targeted the domestic mobile phone market. The simulation results showed that the diffusion graph was similar to the real data, which validates our simulation model. Since the simulation model provides a graph of customer demand change over time, it can also be useful as a learning tool. Therefore, we believe this study helps IT companies use the model for forecasting market demand.

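The abstract above does not give the system-dynamics equations, so the following is only a minimal stock-and-flow sketch of a Bass-style diffusion simulation that produces the kind of cumulative-adoption curve such a model is compared against; the parameters p, q, and the market size M are illustrative assumptions, not values from the paper.

```python
# Minimal stock-and-flow diffusion sketch (Bass-style), NOT the paper's model.
# p (innovation), q (imitation) and M (market size) are illustrative assumptions.

def simulate_diffusion(p=0.03, q=0.38, M=1_000_000, periods=60, dt=1.0):
    """Integrate adopters A(t): the adoption flow p + q*A/M acts on the remaining market."""
    adopters = 0.0
    history = []
    for _ in range(periods):
        adoption_rate = (p + q * adopters / M) * (M - adopters)  # flow per period
        adopters += adoption_rate * dt                           # accumulate the stock
        history.append(adopters)
    return history

if __name__ == "__main__":
    curve = simulate_diffusion()
    for t in (0, 11, 23, 35, 59):
        print(f"period {t + 1:2d}: cumulative adopters ~ {curve[t]:,.0f}")
```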

A Rule-based Urban Image Classification System for Time Series Landsat Data

  • Lee, Jin-A; Lee, Sung-Soon; Chi, Kwang-Hoon
    • Korean Journal of Remote Sensing / v.27 no.6 / pp.637-651 / 2011
  • This study presents a rule-based urban image classification method for time series analysis of changes in the vicinity of Asan-si and Cheonan-si in Chungcheongnam-do, using Landsat satellite images (1991-2006). The area has been heavily developed through the relocation of industrial facilities, land development, construction of a high-speed railroad, and an extension of the subway. To determine the yearly change pattern of the urban area, eleven classes were defined according to the trend of development. The rules were generalized into an algorithm that can be applied as an unsupervised classification, without the need for training areas. The analysis results show that the urban zone of the study area increased by about 1.53 times, and the correlation graphs confirmed the distribution of Built-Up Index (BUI) values for each class. To evaluate the rule-based classification, coverage and accuracy were assessed. When the optimal allowable factor was 0.36, the coverage of the rules was 98.4%, and in the test using ground data from 1991 to 2006, the overall accuracy was 99.49%. The suggested method for determining the maximum allowable factor was confirmed to agree with the accuracy test results based on ground data. Because the available data among the multiple images were used as fully as possible and the classification could be tuned to the objectives, classification accuracy could be improved. The rule-based urban image classification method is expected to be applicable to time series image analyses such as thematic mapping for urban development and monitoring of environmental changes.
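
The abstract does not spell out the rules, but a Built-Up Index can be computed from Landsat bands in the common way (BUI = NDBI − NDVI). The snippet below is only a hedged sketch of thresholding such an index per year to count urbanized years per pixel; the band layout, the threshold, and the class construction are assumptions, not the paper's eleven-class rule set.

```python
# Hedged sketch: compute a Built-Up Index (BUI = NDBI - NDVI) per Landsat scene
# and apply a simple threshold rule per year. The band inputs, the 0.0 threshold,
# and the per-pixel "urban years" count are illustrative assumptions.
import numpy as np

def built_up_index(red, nir, swir):
    """BUI = NDBI - NDVI, with NDBI = (SWIR-NIR)/(SWIR+NIR) and NDVI = (NIR-Red)/(NIR+Red)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    ndbi = (swir - nir) / (swir + nir + 1e-9)
    return ndbi - ndvi

def yearly_urban_pattern(scenes, threshold=0.0):
    """scenes: {year: (red, nir, swir)} arrays; returns a per-pixel count of 'urban' years."""
    urban_years = None
    for year, (red, nir, swir) in sorted(scenes.items()):
        urban = built_up_index(red, nir, swir) > threshold
        urban_years = urban.astype(int) if urban_years is None else urban_years + urban
    return urban_years  # higher counts = urbanized earlier or urban throughout the period

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = {y: tuple(rng.random((4, 4)) for _ in range(3)) for y in (1991, 2000, 2006)}
    print(yearly_urban_pattern(fake))
```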

A Reexamination on the Influence of Fine-particle between Districts in Seoul from the Perspective of Information Theory (정보이론 관점에서 본 서울시 지역구간의 미세먼지 영향력 재조명)

  • Lee, Jaekoo; Lee, Taehoon; Yoon, Sungroh
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.109-114 / 2015
  • This paper presents a computational model of the transfer of airborne fine particles to analyze the similarities and influences among the 25 districts in Seoul by quantifying time series data collected from each district. The properties of each district are derived from a model of the time series of fine particle concentrations, and edge weights are calculated as the transfer entropies between all pairs of districts. We applied a modularity-based graph clustering technique to detect communities among the 25 districts. The result indicates that the discovered clusters correspond to groups with high transfer entropy among communities with geographical adjacency or high inter-district traffic volume. We believe that this approach can be further extended to the discovery of significant flows of other indicators causing environmental pollution.
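
As a rough illustration of the pipeline described above (pairwise transfer entropy as edge weights, then modularity-based community detection), here is a hedged sketch using a simple binned transfer-entropy estimator and networkx's greedy modularity communities; the number of bins, the lag of one step, and the edge threshold are assumptions, not the paper's estimator.

```python
# Hedged sketch of the described pipeline: binned transfer entropy between every
# pair of district series, then modularity-based community detection on the
# resulting weighted graph. Estimator details (3 bins, lag 1, threshold) are assumed.
from collections import Counter

import numpy as np
import networkx as nx

def transfer_entropy(x, y, bins=3):
    """Estimate TE(X -> Y) at lag 1 with a histogram (plug-in) estimator, in bits."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))            # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))           # (y_t, x_t)
    singles_y = Counter(yd[:-1])                        # y_t
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

def district_communities(series_by_district, threshold=0.05):
    """Build a weighted graph of above-threshold transfer entropies and cluster it."""
    g = nx.Graph()
    names = list(series_by_district)
    g.add_nodes_from(names)
    for a in names:
        for b in names:
            if a == b:
                continue
            w = transfer_entropy(series_by_district[a], series_by_district[b])
            if w > threshold and (not g.has_edge(a, b) or g[a][b]["weight"] < w):
                g.add_edge(a, b, weight=w)  # keep the stronger direction per pair
    return list(nx.algorithms.community.greedy_modularity_communities(g, weight="weight"))
```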

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images or voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply a CNN to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, a CNN has strength in interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn in a 40 × 40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; 80% of the total dataset is used for training and the remaining 20% for validation. In the final step, CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators commonly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
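
The abstract is detailed enough to sketch one possible reading of the CNN-FG architecture in Keras: 40×40×3 inputs, convolution layers with 6 and 9 filters of size 5×5, 2×2 max pooling, dense layers of 900 and 32 units, and a 2-unit softmax output. This is a hedged reconstruction from the abstract, not the authors' released code; the exact layer ordering, the use of a second pooling layer, and the optimizer are assumptions.

```python
# Hedged Keras reconstruction of the CNN-FG layout described in the abstract.
# Layer ordering, the second pooling layer, and the optimizer are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_fg(input_shape=(40, 40, 3)):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(6, (5, 5), activation="relu"),   # 5x5x6 convolution filter
        layers.MaxPooling2D((2, 2)),                   # 2x2 max pooling
        layers.Conv2D(9, (5, 5), activation="relu"),   # 5x5x9 convolution filter
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(900, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),         # upward vs. downward movement
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_cnn_fg().summary()
```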

Time Series Data Processing Deep Learning system for Prediction of Hospital Outpatient Number (병원 외래환자수의 예측을 위한 시계열 데이터처리 딥러닝 시스템)

  • Jo, Jun-Mo
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.2 / pp.313-318 / 2021
  • The advent of deep learning has led to its application in many industrial and general domains, with an impact on our daily lives. A specific type of machine learning model needs to be designed for each problem domain, and recently there have been many cases of solving various COVID-19 related problems using deep learning models. Therefore, in this paper, a deep learning model for predicting the number of outpatients of a hospital in advance is suggested. The suggested deep learning model is designed using Keras in a Jupyter Notebook. The prediction results are compared against the real data in a graph, and the loss on validation data is examined to check for underfitting or overfitting.
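
The abstract names Keras in a Jupyter Notebook but not the network layout, so the following is only a generic sliding-window forecasting sketch for a daily count series in Keras; the window length, layer sizes, and the synthetic data are assumptions, not the paper's configuration.

```python
# Generic Keras sketch of a sliding-window time series forecaster, in the spirit
# of the paper's setup (Keras, daily outpatient counts). All hyperparameters and
# the synthetic series are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, window=14):
    """Turn a 1-D series into (samples, window, 1) inputs and next-day targets."""
    x = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return x[..., None], y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    daily_outpatients = 200 + 30 * np.sin(np.arange(730) * 2 * np.pi / 7) + rng.normal(0, 10, 730)
    x, y = make_windows(daily_outpatients)
    split = int(0.8 * len(x))
    model = keras.Sequential([keras.Input(shape=x.shape[1:]), layers.LSTM(32), layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    # training vs. validation loss is what the paper inspects for under/overfitting
    model.fit(x[:split], y[:split], validation_data=(x[split:], y[split:]),
              epochs=20, batch_size=32, verbose=0)
    print("validation MSE:", model.evaluate(x[split:], y[split:], verbose=0))
```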

Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.307-313 / 2024
  • This study explores the integration of Explainable Artificial Intelligence (XAI) and network science in healthcare, focusing on enhancing healthcare data interpretation and improving diagnostic and treatment methods. Key methodologies such as Graph Neural Networks, Community Detection, Overlapping Network Models, and Time-Series Network Analysis are examined in depth for their potential in patient health management. The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that, despite existing challenges, these advancements hold immense promise for healthcare, and it underscores the need for ongoing research to fully realize the potential of AI in this field.

A Real-time Resource Allocation Algorithm for Minimizing the Completion Time of Workflow (워크플로우 완료시간 최소화를 위한 실시간 자원할당 알고리즘)

  • Yoon, Sang-Hum; Shin, Yong-Seung
    • Journal of Korean Society of Industrial and Systems Engineering / v.29 no.1 / pp.1-8 / 2006
  • This paper proposes a real-time resource allocation algorithm for minimizing the completion time of an overall workflow process. The jobs in a workflow process are interrelated through a precedence graph including Sequence, AND, OR, and Loop control structures. A resource must be allocated for the processing of each job, and the required processing time of a job can vary with the resource allocation decision. Each resource has several inherent restrictions, such as functional, geographical, positional, and other operational characteristics. The algorithm suggested in this paper selects an effective resource for each job by considering the precedence constraints and the resource characteristics, such as processing time and the inherent restrictions. To investigate the performance of the proposed algorithm, several numerical tests are performed on four different workflow graphs, including standard, parallel, and two series-parallel structures. In the tests, the solutions of the proposed algorithm are compared with random and optimal solutions, obtained by a random selection rule and by full enumeration, respectively.
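
The abstract does not give the selection rule in detail, so the snippet below is only a hedged sketch of one plausible real-time policy: walk the precedence graph in topological order and, for each job, pick the eligible resource that finishes it earliest. The data model and the AND-style join handling are assumptions, and OR/Loop structures are not modeled.

```python
# Hedged sketch of a greedy real-time allocation policy over a precedence graph:
# process jobs in topological order and give each job the eligible resource with
# the earliest finish time. The data model is assumed; OR/Loop semantics are omitted.
from collections import deque

def allocate(jobs, edges, proc_time, eligible):
    """
    jobs:      iterable of job ids
    edges:     list of (pred, succ) precedence pairs (AND-style joins assumed)
    proc_time: {(job, resource): processing time, which varies by resource}
    eligible:  {job: set of resources allowed by functional/geographical restrictions}
    Returns ({job: (resource, start, finish)}, overall completion time).
    """
    succs = {j: [] for j in jobs}
    indeg = {j: 0 for j in jobs}
    for a, b in edges:
        succs[a].append(b)
        indeg[b] += 1
    resource_free = {}                       # when each resource next becomes available
    ready_time = {j: 0.0 for j in jobs}      # when all predecessors of a job are done
    schedule, makespan = {}, 0.0
    queue = deque(j for j in jobs if indeg[j] == 0)
    while queue:
        job = queue.popleft()
        best = None
        for r in eligible[job]:
            start = max(ready_time[job], resource_free.get(r, 0.0))
            finish = start + proc_time[(job, r)]
            if best is None or finish < best[2]:
                best = (r, start, finish)
        r, start, finish = best
        schedule[job] = best
        resource_free[r] = finish
        makespan = max(makespan, finish)
        for nxt in succs[job]:
            ready_time[nxt] = max(ready_time[nxt], finish)
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return schedule, makespan
```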

A Study on Time-Expanded Network Approach for Finding Maximal Capacity of Extra Freight on Railway Network (시간전개형 네트워크 접근법을 이용한 기존 열차시각표를 고려한 추가적 철도화물 최대수송량 결정에 관한 연구)

  • Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.8 / pp.3706-3714 / 2011
  • This study presents an algorithm for finding the maximum capacity, and the corresponding schedule, of extra freight while honoring the planned timetable of trains on a railway network. A time-expanded network, a kind of space-time graph, can represent both the planned train timetable and the dynamic features of the given problem. A pre-processing procedure removes infeasible arcs from the time-expanded network so that the planned timetable is honored. As a result, this pre-processing transforms the dynamic problem into a static maximal flow problem, which can be solved easily.
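
As a hedged illustration of the approach (build a time-expanded graph whose nodes are station-time copies, keep only arcs compatible with the fixed timetable, then solve a static maximum flow problem), the sketch below uses networkx; the node naming, capacities, and example arcs are assumptions, not the paper's network.

```python
# Hedged sketch of the time-expanded-network idea: nodes are (station, time) copies,
# arcs are train movements or waiting, arcs conflicting with the fixed passenger
# timetable are simply not added (the paper's pre-processing), and the residual
# capacity for extra freight is found by a static max-flow computation.
import networkx as nx

def extra_freight_capacity(stations, horizon, run_arcs, source, sink):
    """
    run_arcs: list of (station_a, t_depart, station_b, t_arrive, residual_capacity),
              already filtered so that arcs infeasible under the planned timetable
              are removed.
    """
    g = nx.DiGraph()
    for s in stations:                       # waiting arcs: stay at a station one step
        for t in range(horizon - 1):
            g.add_edge((s, t), (s, t + 1), capacity=float("inf"))
    for a, td, b, ta, cap in run_arcs:       # movement arcs with leftover capacity
        g.add_edge((a, td), (b, ta), capacity=cap)
    g.add_edge("SRC", (source, 0), capacity=float("inf"))
    for t in range(horizon):                 # allow arrival at the sink at any time
        g.add_edge((sink, t), "SINK", capacity=float("inf"))
    value, flow = nx.maximum_flow(g, "SRC", "SINK")
    return value, flow

if __name__ == "__main__":
    arcs = [("A", 0, "B", 2, 3), ("A", 1, "B", 3, 2), ("B", 2, "C", 4, 4)]
    print(extra_freight_capacity(["A", "B", "C"], 6, arcs, "A", "C")[0])
```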

Internal Information Leakage Detection System using Time Series Graph (시계열 그래프를 이용한 내부 데이터 유출 탐지 시스템)

  • Seo, Min Ji; Shin, Hee Jin; Kim, Myung Ho; Park, Jin Ho
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.769-770 / 2017
  • With recent advances in data technology, companies store important data on storage devices such as servers. However, because confidential corporate data can be leaked by internal employees, it is necessary to detect and prevent data leakage by insiders. This paper therefore proposes a system that plots the security logs collected from each security solution as time series graphs, based on data leakage scenarios, and detects data leakage using a convolutional neural network, which performs well at image recognition. Experimental results showed an accuracy of more than 95% regardless of the size of the leaked data, and an accuracy of more than 97% even when data leakage was attempted through a combination of behaviors.

The Analysis of EU Carbon Prices Using SVECM Approach (SVECM 모형을 이용한 탄소배출권 가격 연구)

  • Bu, Gi-Duck; Jeong, Kiho
    • Environmental and Resource Economics Review / v.20 no.3 / pp.531-565 / 2011
  • Previous studies analyzing multivariate time series data of the EUA (European Union Allowance) price commonly used endogenous variables drawn from a set of only four variables and included in the analysis the period from April to June 2006, when a price distortion occurred. This study uses graph theory and a structural vector error correction model (SVECM) to analyze daily time series data of the EUA price. Five endogenous variables are considered: the prices of crude oil, natural gas, electricity, and coal, in addition to the carbon price. The data period is Phase 2 (April 21, 2008 to March 31, 2010), chosen to avoid the EUA price distortion of Phase 1 (2005-2007). In addition, monthly data including economic variables as endogenous variables are analyzed.
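
For readers who want to reproduce the reduced-form part of such an analysis, a hedged sketch using statsmodels' VECM is shown below. The structural identification via graph-theoretic causal search, which is what makes the model an SVECM, is the paper's contribution and is not reproduced here; the column names, lag order, and deterministic term are illustrative assumptions.

```python
# Hedged sketch: estimate a reduced-form VECM on a five-variable energy/carbon
# price panel with statsmodels. The graph-theory-based structural identification
# (the SVECM step) is NOT shown; column names, lag order and the deterministic
# specification are illustrative assumptions, not the paper's specification.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

def fit_vecm(df: pd.DataFrame, lags: int = 2):
    """df columns assumed: ['eua', 'oil', 'gas', 'electricity', 'coal'] (daily prices)."""
    rank = select_coint_rank(df, det_order=0, k_ar_diff=lags).rank  # Johansen trace test
    model = VECM(df, k_ar_diff=lags, coint_rank=max(rank, 1), deterministic="ci")
    return model.fit()

# res = fit_vecm(prices_df)   # prices_df holds the user's Phase 2 daily data
# print(res.summary())        # cointegration and short-run coefficient estimates
```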
