• Title/Summary/Keyword: series model

Search Result 5,386

DEVELOPMENT OF FINITE ELEMENT HUMAN NECK MODEL FOR VEHICLE SAFETY SIMULATION

  • Lee, I.H.;Choi, H.Y.;Lee, J.H.;Han, D.C.
    • International Journal of Automotive Technology
    • /
    • v.5 no.1
    • /
    • pp.33-46
    • /
    • 2004
  • A finite element model of the 50th percentile male cervical spine is presented in this paper. The model consists of rigid, geometrically accurate vertebrae held together by deformable intervertebral disks, facet joints, and ligaments modeled as a series of nonlinear springs. These deformable structures were rigorously tuned, through failure, to mimic existing experimental data: first as functional-unit characterizations at three cervical levels, and then as a fully assembled c-spine using experimental data from Duke University and other data in the NHTSA database. After satisfactory validation of the assembled ligamentous cervical spine against available experimental data, 22 cervical muscle pairs, representing the majority of the neck's musculature, were added to the model. Hill's muscle model was used to generate muscle forces within the assembled cervical model. The activation level was assumed to be the same for all modeled muscles, and the degree of activation was set to correctly predict available human-volunteer experimental data from NBDL. The validated model is intended for use as a post-processor of dummy measurements within the simulated injury monitor (SIMon) concept being developed by NHTSA, where kinematic and kinetic data measured on a dummy during a crash test serve as the boundary conditions that "drive" the finite element model of the neck. The post-processor then interrogates the model to determine whether any ligament has exceeded its known failure limit. The model thus allows a direct assessment of potential injury, its degree, and its location, eliminating the need for global correlates such as Nij.
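Hill-type muscle elements of the kind described above typically compute active force as the product of activation, maximum isometric force, a force-length curve, and a force-velocity curve. A minimal Python sketch; the Gaussian and hyperbolic curve shapes and the parameters `l_opt` and `v_max` are illustrative assumptions, not values from the paper:

```python
import math

def hill_muscle_force(activation, f_max, length, velocity,
                      l_opt=1.0, v_max=10.0):
    """Active force of a Hill-type muscle element:
    F = a(t) * F_max * f_L(l) * f_V(v).
    The force-length and force-velocity curves below are illustrative
    shapes, not the paper's fitted curves.
    """
    # Force-length: peak at the optimal fibre length l_opt.
    f_l = math.exp(-(((length - l_opt) / (0.5 * l_opt)) ** 2))
    # Force-velocity: force falls off with shortening velocity, floored at 0.
    f_v = max(0.0, (1.0 - velocity / v_max) / (1.0 + 4.0 * velocity / v_max))
    return activation * f_max * f_l * f_v

# Full activation at optimal length and zero velocity yields F_max.
print(hill_muscle_force(1.0, 100.0, 1.0, 0.0))   # 100.0
```

With both curves equal to one at the operating point, the force scales linearly with the assumed common activation level, which is what lets a single activation parameter be tuned against the NBDL volunteer data.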

Hidden Markov model with stochastic volatility for estimating bitcoin price volatility (확률적 변동성을 가진 은닉마르코프 모형을 통한 비트코인 가격의 변동성 추정)

  • Tae Hyun Kang;Beom Seuk Hwang
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.1
    • /
    • pp.85-100
    • /
    • 2023
  • The stochastic volatility (SV) model is one of the main methods of modeling time-varying volatility. In particular, the SV model is widely used in the estimation and prediction of financial-market volatility and in option pricing. This paper models the time-varying volatility of the bitcoin market price using an SV model. A hidden Markov model (HMM) is combined with the SV model to capture the regime-switching behavior of the market; the HMM is useful for recognizing patterns in a time series and thereby dividing market volatility into regimes. This study estimated the volatility of bitcoin using data from Upbit, a cryptocurrency exchange, and analyzed it by dividing the market into volatility regimes to improve the performance of the SV model. MCMC is used to estimate the parameters of the SV model, and the model's performance is verified with evaluation criteria such as MAPE and MSE.
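The combination of a regime-switching chain with stochastic volatility can be sketched generatively: the canonical SV recursion is h_t = μ + φ(h_{t-1} − μ) + σ_η η_t with returns y_t = exp(h_t/2) ε_t, and here the level μ switches with a 2-state Markov chain. All parameter values below are illustrative, not estimates from the paper:

```python
import math
import random

def simulate_regime_sv(n, mu=(-1.0, 0.5), phi=0.95, sigma_eta=0.2,
                       p_stay=0.98, seed=1):
    """Simulate returns from a 2-regime stochastic-volatility model.

    Log-volatility: h_t = mu[s_t] + phi * (h_{t-1} - mu[s_t]) + sigma_eta * eta_t
    Returns:        y_t = exp(h_t / 2) * eps_t
    where s_t is a 2-state Markov chain with stay probability p_stay.
    """
    rng = random.Random(seed)
    s, h = 0, mu[0]
    ys, states = [], []
    for _ in range(n):
        if rng.random() > p_stay:        # rare regime switch
            s = 1 - s
        h = mu[s] + phi * (h - mu[s]) + sigma_eta * rng.gauss(0.0, 1.0)
        ys.append(math.exp(h / 2.0) * rng.gauss(0.0, 1.0))
        states.append(s)
    return ys, states

returns, regimes = simulate_regime_sv(1000)
```

Estimation then runs in the opposite direction: MCMC infers the latent h_t and s_t paths and the parameters from observed returns, which this forward simulation only illustrates.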

Development and Verification of an AI Model for Melon Import Prediction

  • KHOEURN SAKSONITA;Jungsung Ha;Wan-Sup Cho;Phyoungjung Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.7
    • /
    • pp.29-37
    • /
    • 2023
  • Due to climate change, interest in crop production and distribution is increasing, and attempts are being made to use big data and AI to predict production volume and to control shipments and distribution stages. Predicting agricultural imports not only affects prices but also governs farm shipments and distributors' logistics, making it important information for establishing marketing strategies. In this paper, we build an artificial-intelligence model that predicts future import volume from the wholesale-market melon import data published by the agricultural statistics information system, and we evaluate its accuracy. We build prediction models using three approaches: the Neural Prophet technique, an ensembled Neural Prophet model, and a GRU model. Comparing two major indicators, MAE and RMSE, the ensembled Neural Prophet model predicted most accurately, and the GRU model showed performance similar to the ensemble model. The model developed in this study has been published on the web and used in the field for a year and a half to predict near-term melon production and to establish marketing and distribution strategies.
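The two evaluation criteria used above can be stated directly. A small self-contained sketch with hypothetical import figures:

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical daily melon import volumes (tonnes) vs. model predictions.
actual    = [120.0, 135.0, 150.0]
predicted = [118.0, 140.0, 147.0]
print(mae(actual, predicted))    # ≈ 3.33
print(rmse(actual, predicted))   # ≈ 3.56
```

RMSE exceeds MAE whenever the errors are unequal, so comparing both indicators, as the paper does, reveals whether a model's errors are dominated by a few large misses.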

An Evaluation of Software Quality Using Phase-based Defect Profile (단계기반 결점 프로파일을 이용한 소프트웨어 품질 평가)

  • Lee, Sang-Un
    • The KIPS Transactions:PartD
    • /
    • v.15D no.3
    • /
    • pp.313-320
    • /
    • 2008
  • A typical software development life cycle consists of a series of phases, each of which can both insert and detect defects. To achieve the desired quality, defect removal must proceed through all phases of development. The best-known phase-based defect-profile model is the Gaffney model, which assumes that the defect-removal profile follows a Rayleigh curve parameterized by the phase index number. However, when the Gaffney model is applied in practice, its location parameter cannot represent the peak of the removed-defect profile, so the model fails to reproduce actual defect profiles. This paper suggests two alternative models: a modified Gaffney model that replaces the location parameter with the corresponding parameter of Putnam's SLIM model, and a growth-function model motivated by the S-shape of the cumulative defect profile. The suggested models are analyzed and verified against defect-profile data sets obtained from five different software projects. The experiments show that the suggested models perform better than the Gaffney model.
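The Rayleigh-curve assumption behind Gaffney-style models can be made concrete: cumulative removed defects follow V(t) = E(1 − exp(−B t²)), with E the lifetime defect total and B a shape parameter, and the per-phase profile is the difference of consecutive cumulative values. A sketch with illustrative E and B (not values from the five projects):

```python
import math

def cumulative_defects(t, E, B):
    """Cumulative defects removed through phase index t under a
    Rayleigh-curve (Gaffney-style) profile: V(t) = E * (1 - exp(-B*t^2)).
    E is the lifetime defect total; the removal *rate* peaks at
    t = 1/sqrt(2B).  E and B here are illustrative values.
    """
    return E * (1.0 - math.exp(-B * t * t))

def phase_defects(t, E, B):
    """Defects removed within phase t alone: V(t) - V(t - 1)."""
    return cumulative_defects(t, E, B) - cumulative_defects(t - 1, E, B)

# E = 1000 defects, B = 0.1 -> removal peaks near phase 1/sqrt(0.2) ≈ 2.2.
profile = [phase_defects(t, 1000.0, 0.1) for t in range(1, 7)]
```

The paper's criticism is visible here: the peak phase is pinned by the shape parameter, so if the observed peak falls elsewhere, the curve cannot match the data, motivating the modified location parameter and the S-shaped growth-function alternative.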

Analysis of the Macroscopic Traffic Flow Changes using the Two-Fluid Model by the Improvements of the Traffic Signal Control System (Two-Fluid Model을 이용한 교통신호제어시스템 개선에 따른 거시적 교통류 변화 분석)

  • Jeong, Yeong-Je;Kim, Yeong-Chan;Kim, Dae-Ho
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.1
    • /
    • pp.27-34
    • /
    • 2009
  • The operational effect of improving traffic signal control was evaluated using the Two-Fluid Model. The parameters of the Two-Fluid Model are good indicators of the quality of traffic flow resulting from improved traffic signal operation. A series of experiments was conducted for 31 signalized intersections in Uijeongbu City. To estimate the Two-Fluid Model parameters, the trajectories of individual vehicles were collected using CORSIM and its Run Time Extension. The test results showed a 35 percent decrease in the average minimum trip time per unit distance. One of the Two-Fluid Model parameters measures the network's resistance to degraded operation as demand increases; the tests showed a 28 percent decrease in this parameter. Although the results are based on simulated arterial flow, it was concluded that the Two-Fluid Model is a useful tool for evaluating improvements to the traffic signal control system from a macroscopic perspective.
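The two parameters cited above come from the classic Herman-Prigogine two-fluid relation T_r = T_m^{1/(n+1)} · T^{n/(n+1)}, where T is trip time per unit distance, T_r is running (moving) time, T_m is the minimum trip time, and n is the network's resistance parameter. The relation is linear in logs, so both parameters can be estimated by ordinary least squares. A sketch on synthetic data (not the Uijeongbu measurements):

```python
import math

def fit_two_fluid(trip_times, running_times):
    """Estimate (T_m, n) from per-unit-distance trip and running times.

    Herman-Prigogine relation: T_r = T_m**(1/(n+1)) * T**(n/(n+1)),
    i.e. ln T_r = (1/(n+1)) ln T_m + (n/(n+1)) ln T.  Fit the log-log
    line by ordinary least squares, then back out n and T_m.
    """
    xs = [math.log(t) for t in trip_times]
    ys = [math.log(t) for t in running_times]
    m = len(xs)
    xb, yb = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
             / sum((x - xb) ** 2 for x in xs))
    intercept = yb - slope * xb
    n = slope / (1.0 - slope)
    t_m = math.exp(intercept * (n + 1.0))
    return t_m, n

# Synthetic check: times generated with T_m = 1.5, n = 2 are recovered.
trip = [2.0, 3.0, 4.0, 5.0]
running = [1.5 ** (1 / 3) * t ** (2 / 3) for t in trip]
t_m, n = fit_two_fluid(trip, running)   # t_m ≈ 1.5, n ≈ 2.0
```

This makes the paper's two findings concrete: the 35 percent drop corresponds to T_m, and the 28 percent drop to n, the slope-derived resistance parameter.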

A Multiple Regression Model for the Estimation of Monthly Runoff from Ungaged Watersheds (미계측 중소유역의 월유출량 산정을 위한 다중회귀모형 연구)

  • 윤용남;원석연
    • Water for future
    • /
    • v.24 no.3
    • /
    • pp.71-82
    • /
    • 1991
  • Methods of predicting the water-resources availability of a river basin can be classified as empirical formulas, water-budget analysis, and regression analysis. The purpose of this study is to develop a method to estimate the monthly runoff required for long-term water-resources development projects. Using monthly runoff data series at gaging stations, alternative multiple regression models were constructed and evaluated. Monthly runoff volumes along with the meteorological and physiographic parameters of 48 gaging stations are used: those of 43 stations to construct the models and the remaining 5 stations to verify them. The regression models, named Model-1 through Model-4, differ in how the data were processed for the multiple regressions. From the verification, Model-2 is found to be the best-fit model. A comparison of the selected regression model with Kajiyama's formula, based on the predicted monthly and annual runoff of the 5 verification watersheds, showed that the present model is reasonably accurate and convenient to apply in practice.
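A multiple regression of monthly runoff on basin parameters can be fitted via the normal equations X'Xb = X'y. A self-contained sketch with two hypothetical predictors (the paper's models use more meteorological and physiographic variables than shown here):

```python
def solve_linear_system(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_multiple_regression(X, y):
    """OLS fit of y = b0 + b1*x1 + ... via the normal equations X'Xb = X'y."""
    Z = [[1.0] + list(row) for row in X]    # prepend intercept column
    k = len(Z[0])
    XtX = [[sum(Z[i][a] * Z[i][b] for i in range(len(Z))) for b in range(k)]
           for a in range(k)]
    Xty = [sum(Z[i][a] * y[i] for i in range(len(Z))) for a in range(k)]
    return solve_linear_system(XtX, Xty)

# Hypothetical data generated from runoff = 2 + 0.4*rainfall + 0.1*area.
X = [[100, 50], [200, 50], [150, 80], [250, 80], [300, 60]]
y = [47.0, 87.0, 70.0, 110.0, 128.0]
coeffs = fit_multiple_regression(X, y)   # ≈ [2.0, 0.4, 0.1]
```

The train/verify split in the study plays the same role as holding the 5 stations out of this fit and checking predicted against observed runoff.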


Vacant House Prediction and Important Features Exploration through Artificial Intelligence: In Case of Gunsan (인공지능 기반 빈집 추정 및 주요 특성 분석)

  • Lim, Gyoo Gun;Noh, Jong Hwa;Lee, Hyun Tae;Ahn, Jae Ik
    • Journal of Information Technology Services
    • /
    • v.21 no.3
    • /
    • pp.63-72
    • /
    • 2022
  • The extinction crisis of local cities, driven by the concentration of population in the capital region, directly increases the number of vacant houses in those cities. According to the Population and Housing Census, Gunsan-si showed a continuously increasing trend of vacant houses from 2015 to 2019. In particular, since Gunsan-si suffers from a doughnut effect and industrial decline, its vacant-house problems seem likely to worsen. This study aims to provide the foundation of a system that can predict and deal with buildings at high risk of becoming vacant by implementing a data-driven machine learning model for vacant-house prediction. Methodologically, this study analyzes three machine learning models that differ in their data components. The first model is trained on the building register, individually declared land value, house price, and socioeconomic data; the second is trained on the same data plus POI (Point of Interest) data; the third is trained on the same data as the second but excludes water-usage and electricity-usage data. The second model shows the best performance based on F1-score. Random Forest, Gradient Boosting Machine, XGBoost, and LightGBM, all tree-ensemble methods, show the best performance overall. Additionally, model complexity can be reduced by eliminating independent variables whose absolute correlation coefficient with vacant-house status is below 0.1. Finally, this study suggests XGBoost- and LightGBM-based models, which can handle missing values, as the final vacant-house prediction models.
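The screening rule described above, dropping variables whose absolute correlation with vacant-house status is below 0.1, can be sketched directly. Feature names and values here are hypothetical toy data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def screen_features(features, target, threshold=0.1):
    """Keep features whose |correlation| with the target is >= threshold."""
    return [name for name, values in features.items()
            if abs(pearson_r(values, target)) >= threshold]

# Hypothetical toy data: 6 buildings, target 1 = vacant.
target = [0, 0, 1, 1, 0, 1]
features = {
    "building_age":  [5, 10, 40, 35, 8, 50],   # ages track vacancy here
    "noise_feature": [2, 0, 2, 0, 1, 1],       # uncorrelated with vacancy
}
kept = screen_features(features, target)       # ["building_age"]
```

Such univariate screening is a coarse filter; it cannot detect features useful only in combination, which is one reason the study still relies on tree ensembles for the final model.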

KOSPI index prediction using topic modeling and LSTM

  • Jin-Hyeon Joo;Geun-Duk Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.7
    • /
    • pp.73-80
    • /
    • 2024
  • In this paper, we propose a method to improve the accuracy of predicting the Korea Composite Stock Price Index (KOSPI) by combining topic modeling and Long Short-Term Memory (LSTM) neural networks. We use Latent Dirichlet Allocation (LDA) to extract ten major topics related to interest-rate increases and decreases from financial news data. The extracted topics, along with historical KOSPI index data, are input into an LSTM model to predict the KOSPI index. The proposed model combines a time-series prediction component, which feeds the historical KOSPI index into the LSTM, with a topic-modeling component, which feeds in the news data. To verify its performance, this paper designs four models (LSTM_K, LSTM_KNS, LDA_K, and LDA_KNS) according to the type of input data given to the LSTM and presents the predictive performance of each. The comparison shows that the LDA_K model, which uses financial-news topic data and historical KOSPI index data as inputs, recorded the lowest RMSE (Root Mean Square Error), demonstrating the best predictive performance.
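One way to combine the two input streams is to attach each day's topic proportions to that day's index value inside every lookback window, giving (sample, timestep, feature) tensors for the LSTM. A sketch of that input construction; the window length and all data are hypothetical, not the paper's configuration:

```python
def build_lstm_inputs(kospi, topic_props, window=5):
    """Build (samples, targets) for an LSTM: each sample is `window`
    timesteps, each timestep a vector [KOSPI value] + topic proportions.
    """
    samples, targets = [], []
    for t in range(window, len(kospi)):
        steps = [[kospi[i]] + list(topic_props[i])
                 for i in range(t - window, t)]
        samples.append(steps)
        targets.append(kospi[t])          # predict the next close
    return samples, targets

# Toy data: 10 trading days, 3 topic proportions per day.
kospi = [2500.0, 2510.0, 2495.0, 2520.0, 2530.0,
         2525.0, 2540.0, 2550.0, 2545.0, 2560.0]
topics = [[0.2, 0.3, 0.5]] * 10
samples, targets = build_lstm_inputs(kospi, topics)
# 5 samples, each 5 timesteps of 4 features; targets are next-day closes.
```

The four model variants in the paper then differ only in which columns are present in each timestep vector (index only, index plus news, topics only, and so on).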

Long-Term Memory and Correct Answer Rate of Foreign Exchange Data (환율데이타의 장기기억성과 정답율)

  • Weon, Sek-Jun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.12
    • /
    • pp.3866-3873
    • /
    • 2000
  • In this paper, we investigate the long-term memory and the correct-answer rate of foreign exchange data (Yen/Dollar), one of many economic time series. In time series generated from dynamical systems such as AR models, which are typical short-term-memory models, two distinct fractal dimensions often appear; the sample interval separating them is denoted $k^{crossover}$. Let the fractal dimension be $D_1$ for $k < k^{crossover}$ and $D_2$ for $k > k^{crossover}$. Statistical models usually have dimensions such that $D_1 < D_2$ and $D_2 \cong 2$, but real time series such as the NIKKEI show the opposite. The exchange data, a real time series, satisfy $D_1 > D_2$: as the interval between data points increases, the correlation between them increases, which is quite a peculiar phenomenon. We predict the exchange data with neural networks and confirm that the $\beta$ obtained from the prediction errors and the $D$ calculated from the time-series data precisely satisfy the relationship $\beta = 2 - 2D$, which is derived from a nonlinear model having a fractal dimension. We also identified that the difference in fractal dimension appeared in the correct-answer rate.


An Empirical Study for the Existence of Long-term Memory Properties and Influential Factors in Financial Time Series (주식가격변화의 장기기억속성 존재 및 영향요인에 대한 실증연구)

  • Eom, Cheol-Jun;Oh, Gab-Jin;Kim, Seung-Hwan;Kim, Tae-Hyuk
    • The Korean Journal of Financial Management
    • /
    • v.24 no.3
    • /
    • pp.63-89
    • /
    • 2007
  • This study aims to verify empirically whether long-memory properties exist in the returns and volatility of financial time series, and then to observe the factors that influence those properties. The presence of long memory is examined with the Hurst exponent, measured by DFA (detrended fluctuation analysis). The empirical results are summarized as follows. First, no significant long-memory property is identified in the return series. In the volatility series, however, the Hurst exponent is high on average, indicating a strong presence of long memory. Then, examining influential factors, the Hurst exponent measured on the volatility of residual returns filtered by a GARCH(1, 1) model, which captures volatility clustering, falls to $H{\approx}0.5$ on average, so the long memory present before filtering is no longer observed. That is, we find that the observed long-memory properties are largely due to the volatility-clustering effect.
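The DFA estimate of the Hurst exponent used above works by integrating the demeaned series, linearly detrending it in boxes of increasing size, and reading H off the log-log slope of the residual fluctuation; $H \approx 0.5$ indicates no long memory, $H > 0.5$ persistence. A compact sketch (the box sizes are an illustrative choice):

```python
import math
import random

def _linear_fit(xs, ys):
    """Least-squares slope and intercept of ys on xs."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
             / sum((x - xb) ** 2 for x in xs))
    return slope, yb - slope * xb

def dfa_hurst(series, scales=(8, 16, 32, 64)):
    """Hurst exponent via detrended fluctuation analysis (DFA-1)."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                       # integrate the demeaned series
        acc += x - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        n_boxes = len(profile) // s
        xs = list(range(s))
        sq = 0.0
        for b in range(n_boxes):           # detrend each box, keep residual
            box = profile[b * s:(b + 1) * s]
            slope, intercept = _linear_fit(xs, box)
            sq += sum((y - (slope * x + intercept)) ** 2
                      for x, y in zip(xs, box)) / s
        log_s.append(math.log(s))
        log_f.append(0.5 * math.log(sq / n_boxes))  # log F(s)
    h, _ = _linear_fit(log_s, log_f)       # H = slope of log F vs log s
    return h

# Uncorrelated noise should sit near H = 0.5; persistent series give H > 0.5.
rng = random.Random(42)
h_noise = dfa_hurst([rng.gauss(0.0, 1.0) for _ in range(2000)])
```

Applying this estimator to GARCH-filtered residual volatility, as the study does, is what separates genuine long memory from the clustering effect.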
