• Title/Summary/Keyword: data-based model

Estimation of Cable Tension Force by ARX Model-Based Virtual Sensing (ARX모델기반 가상센싱을 통한 사장교 케이블의 장력 추정)

  • Choi, Gahee;Shin, Soobong
    • Journal of the Earthquake Engineering Society of Korea / v.21 no.6 / pp.287-293 / 2017
  • Sometimes it is impossible to install a sensor at a certain location on a structure because of the structure's size or a poor surrounding environment. Even when installation is possible, sensors frequently malfunction or operate improperly due to a lack of adequate maintenance. These kinds of problems are addressed by virtual sensing methods in various engineering fields. Virtual sensing is a technology that estimates data at locations where no physical sensor is present, and it is expected to be applied effectively in the construction field as well. In this study, a virtual sensing technology based on an ARX model is proposed. The ARX model is identified from data simulated through structural analysis rather than from actually measured data. The ARX-based virtual sensing model can then estimate an unmeasured response using a transfer function that defines the relationship between data at two points. A simulation and an experimental study on a laboratory cable-stayed model bridge were carried out to examine the proposed method. Acceleration measured at a girder is transformed through the ARX model-based virtual sensing to estimate the cable tension.
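
The abstract does not spell out the identification step; below is a minimal Python sketch of how an ARX transfer model between two measurement points might be fitted on analysis-simulated data and then driven by measured girder acceleration. The array names (`u_sim`, `y_sim`, `u_measured`) and model orders are hypothetical, not taken from the paper, and the final conversion from the estimated response to a tension value is not shown.

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Least-squares fit of an ARX model
    y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    n0 = max(na, nb)
    rows, targets = [], []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb+1:k+1][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

def simulate_arx(a, b, u, y_init):
    """Estimate an unmeasured response series from a measured input series."""
    na, nb = len(a), len(b)
    y = list(y_init[:na])
    for k in range(na, len(u)):
        y_k = a @ np.array(y[k-na:k][::-1]) + b @ u[k-nb+1:k+1][::-1]
        y.append(y_k)
    return np.array(y)

# Identification on analysis-simulated data, application to measured girder
# acceleration (all arrays below are hypothetical placeholders):
# a, b = fit_arx(u_sim, y_sim)
# est_response = simulate_arx(a, b, u_measured, np.zeros(len(a)))
```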

Estimation Model-based Verification and Validation of Fossil Power Plant Performance Measurement Data (추정모델에 의한 화력발전 플랜트 계측데이터의 검증 및 유효화)

  • 김성근;윤문철;최영석
    • Journal of the Korean Society for Precision Engineering / v.17 no.2 / pp.114-120 / 2000
  • Fossil power plant availability is significantly affected by the gradual degradation of equipment as operation continues. It is therefore important to determine whether and when to replace equipment, and performance calculation and analysis can provide this information. Robustness of the performance calculation can be increased by verifying and validating the measured input data. We suggest a new algorithm in which an estimation relation for a validated measurement is obtained using the correlation between measurements. The input estimation model is built from design data and acceptance test data of 16 domestic fossil power plants. The model consists of finding the most strongly correlated state variable for each measurement and a mapping relation based on the model and the current state of the power plant.
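
The abstract does not give the exact form of the mapping relation; the rough Python sketch below picks the most correlated reference channel for each measurement from historical data, fits a linear mapping, and flags current readings that deviate from the estimate. The linear form and the threshold are assumptions.

```python
import numpy as np

def build_estimators(history):
    """For each channel, pick the most correlated other channel from
    historical (design/acceptance) data and fit a linear mapping."""
    corr = np.corrcoef(history, rowvar=False)
    np.fill_diagonal(corr, 0.0)
    models = {}
    for j in range(history.shape[1]):
        ref = int(np.argmax(np.abs(corr[:, j])))
        slope, intercept = np.polyfit(history[:, ref], history[:, j], 1)
        resid = history[:, j] - (slope * history[:, ref] + intercept)
        models[j] = (ref, slope, intercept, resid.std())
    return models

def validate(sample, models, n_sigma=3.0):
    """Flag measurements that deviate from their correlation-based estimate."""
    flags = {}
    for j, (ref, slope, intercept, sigma) in models.items():
        estimate = slope * sample[ref] + intercept
        flags[j] = abs(sample[j] - estimate) > n_sigma * sigma
    return flags
```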

CIM based Distribution Automation Simulator (CIM 기반의 배전자동화 시뮬레이터)

  • Park, Ji-Seung;Lim, Seong-Il
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.27 no.3 / pp.87-94 / 2013
  • The main purpose of the distribution automation system (DAS) is to achieve efficient operation of primary distribution systems by monitoring and controlling the feeder remote terminal units (FRTUs) deployed on the distribution feeders. DAS simulators are introduced to verify the functions of the application software installed in the central control unit (CCU) of the DAS. Because each DAS is developed on the basis of its own specific data model, power system data cannot be easily transferred from a DAS to the simulator or vice versa. This paper presents a common information model (CIM)-based DAS simulator to achieve interoperability between the simulator and DASs developed by different vendors. A CIM-based data model conversion between Smart DMS (SDMS) and Total DAS (TDAS) has been performed to establish the feasibility of the proposed scheme.
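
As a rough illustration of the kind of CIM-based conversion the paper describes, the Python sketch below maps a vendor-specific switch record to a neutral CIM-style object and back out to another vendor's format. The class and field names are hypothetical; the paper's actual CIM profile and SDMS/TDAS schemas are not given in the abstract.

```python
from dataclasses import dataclass

# Minimal CIM-style class standing in for an IEC 61970/61968 object.
@dataclass
class CimSwitch:
    mrid: str          # CIM master resource identifier
    name: str
    normal_open: bool

def sdms_to_cim(record: dict) -> CimSwitch:
    """Map a vendor-specific (hypothetical SDMS-style) switch record to CIM."""
    return CimSwitch(
        mrid=record["switch_id"],
        name=record["switch_name"],
        normal_open=(record["normal_status"] == "OPEN"),
    )

def cim_to_tdas(sw: CimSwitch) -> dict:
    """Export the neutral CIM object to another vendor's (TDAS-style) format."""
    return {
        "ID": sw.mrid,
        "NAME": sw.name,
        "NORMAL_STATE": 1 if sw.normal_open else 0,
    }

print(cim_to_tdas(sdms_to_cim(
    {"switch_id": "SW-001", "switch_name": "Feeder 1 tie", "normal_status": "OPEN"})))
```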

Accounting Information Processing Model Using Big Data Mining (빅데이터마이닝을 이용한 회계정보처리 모형)

  • Kim, Kyung-Ihl
    • Journal of Convergence for Information Technology / v.10 no.7 / pp.14-19 / 2020
  • This study suggests an accounting information processing model based on XBRL (eXtensible Business Reporting Language), an Internet standard built on XML technology. Because document characteristics differ among companies, it is essential to the purpose of accounting that the system provide useful information to decision makers. This study develops a data mining model based on the XML hierarchy, which is stored as XBRL in the X-Hive database. The data mining analysis is carried out using association rules, and the DC-Apriori data mining method is suggested, combining the Apriori algorithm with XQuery on the XBRL data. Finally, the validity and effectiveness of the suggested model are investigated through experiments.
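
The DC-Apriori method itself combines Apriori with XQuery over the X-Hive store, which the abstract does not detail; the Python sketch below shows only the plain Apriori core, applied to filings flattened into sets of XBRL element names (the element names are purely illustrative).

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Minimal Apriori: return frequent itemsets with their support."""
    n = len(transactions)
    k_sets = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        level = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

# Each "transaction" is the set of XBRL element names appearing in one filing.
filings = [
    {"Assets", "Liabilities", "Equity"},
    {"Assets", "Revenue", "Equity"},
    {"Assets", "Liabilities", "Revenue", "Equity"},
]
print(apriori(filings, min_support=0.6))
```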

Conditional Variational Autoencoder-based Generative Model for Gene Expression Data Augmentation (유전자 발현량 데이터 증대를 위한 Conditional VAE 기반 생성 모델)

  • Hyunsu Bong;Minsik Oh
    • Journal of Broadcast Engineering / v.28 no.3 / pp.275-284 / 2023
  • Gene expression data can be utilized in various studies, including the prediction of disease prognosis. However, collecting enough data is challenging due to cost constraints. In this paper, we propose a gene expression data generation model based on a Conditional Variational Autoencoder. Our results demonstrate that the proposed model generates synthetic data of superior quality compared to other state-of-the-art models for gene expression data generation, namely a model based on the Wasserstein Generative Adversarial Network with Gradient Penalty and the structured data generation models CTGAN and TVAE.
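
The paper's network sizes and conditioning scheme are not given in the abstract; the following is a minimal PyTorch sketch of a conditional VAE for expression vectors, with the condition label concatenated to both the encoder and decoder inputs. The dimensions and the MSE reconstruction term are assumptions.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Minimal conditional VAE: the condition (e.g., a disease/tissue label)
    is concatenated to both the encoder input and the decoder input."""
    def __init__(self, n_genes, n_cond, latent=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes + n_cond, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent + n_cond, hidden), nn.ReLU(), nn.Linear(hidden, n_genes))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def loss_fn(x_hat, x, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# Augmentation: sample z ~ N(0, I), pick the desired condition label, decode.
# model = CVAE(n_genes=2000, n_cond=2)
# z = torch.randn(100, 32); c = torch.zeros(100, 2); c[:, 1] = 1.0
# synthetic = model.dec(torch.cat([z, c], dim=1))
```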

Role of Scientific Reasoning in Elementary School Students' Construction of Food Pyramid Prediction Models (초등학생들의 먹이 피라미드 예측 모형 구성에서 과학적 추론의 역할)

  • Han, Moonhyun
    • Journal of Korean Elementary Science Education / v.38 no.3 / pp.375-386 / 2019
  • This study explores how elementary school students construct food pyramid prediction models using scientific reasoning. Thirty small groups of sixth-grade students in Kyoungki province (n=138) participated in this study; each small group constructed a food pyramid prediction model based on scientific reasoning, drawing on prior knowledge of topics such as biotic and abiotic factors, food chains, food webs, and food pyramid concepts. To understand the scientific reasoning applied by the students during the modeling process, three forms of qualitative data were collected and analyzed: each small group's discourse, their representations, and the researcher's field notes. Based on these data, the researcher categorized the students' models into three patterns and identified how the students used scientific reasoning in each. The patterns consisted of the population number variation model, the biotic and abiotic factors change model, and the equilibrium model. In the population number variation model, students used phenomenon-based and relation-based reasoning to predict variations in the number of producers and consumers. In the biotic and abiotic factors change model, students used relation-based reasoning to predict the effects on producers and consumers as well as on decomposers and abiotic factors. In the equilibrium model, students predicted that "the food pyramid would reach equilibrium," using relation-based and model-based reasoning. This study demonstrates that elementary school students can systematically elaborate on complicated ecology concepts through scientific reasoning and modeling processes.

Precision Evaluation of Recent Global Geopotential Models based on GNSS/Leveling Data on Unified Control Points

  • Lee, Jisun;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.153-163 / 2020
  • Since the launch of GOCE (Gravity Field and Steady-State Ocean Circulation Explorer), which observes high-frequency gravity signals using a gravity gradiometer, many research institutes have concentrated on developing GGMs (Global Geopotential Models) based on GOCE data and evaluating their precision. The precision of some GGMs has also been evaluated in Korea; however, those studies dealt with GGMs constructed from initial GOCE data or used only part of the GNSS (Global Navigation Satellite System)/leveling data on UCPs (Unified Control Points) for the evaluation. GGMs of higher degree than EGM2008 (Earth Gravitational Model 2008) are now available, and the UCPs were fully established at the end of 2019. Thus, EIGEN-6C4 (European Improved Gravity Field of the Earth by New techniques - 6C4), GECO (GOCE and EGM2008 Combined model), XGM2016 (Experimental Gravity Field Model 2016), SGG-UGM-1, and XGM2019e_2159 were collected together with EGM2008, and their precision was assessed against the GNSS/leveling data on the UCPs. Among the GGMs, XGM2019e_2159 showed the smallest differences against a total of 5,313 GNSS/leveling points, an improvement of about 1.5 cm and 0.6 cm over EGM2008 and EIGEN-6C4, respectively. In particular, the local biases shown by EGM2008 in the northern part of Gyeonggi-do and on Jeju Island were removed, so that both the mean and the standard deviation of the differences between XGM2019e_2159 and the GNSS/leveling data are homogeneous regardless of region (mountainous or plain). The NGA (National Geospatial-Intelligence Agency) is currently developing EGM2020, of which XGM2019e_2159 is an experimentally published precursor. An improved GGM is therefore expected to become available shortly, and the precision of new GGMs should be verified consistently.
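
The comparison itself reduces to differencing GGM geoid heights against geometric geoid heights from GNSS/leveling (N = h - H) and summarizing the residuals; a minimal Python sketch follows, with hypothetical array names. Any bias or trend fitting the authors may have applied is not shown.

```python
import numpy as np

def evaluate_ggm(h_ellipsoidal, H_orthometric, N_ggm):
    """Compare GGM geoid heights with geometric geoid heights from
    GNSS/leveling: N_geo = h - H, residual = N_geo - N_ggm."""
    n_geo = np.asarray(h_ellipsoidal) - np.asarray(H_orthometric)
    resid = n_geo - np.asarray(N_ggm)
    return resid.mean(), resid.std(ddof=1)

# Hypothetical arrays over the 5,313 UCPs, one entry per benchmark:
# mean_bias, sigma = evaluate_ggm(h_ucp, H_ucp, n_xgm2019e)
```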

Prediction Model of Real Estate Transaction Price with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.274-283 / 2022
  • Korea is facing a number of difficulties arising from rising housing prices. As housing takes the lion's share of personal assets, fluctuating housing prices are expected to cause many difficulties. The purpose of this study is to create a housing price prediction model to prevent such risks and to induce reasonable real estate purchases. This study made many attempts to understand real estate instability and to create an appropriate housing price prediction model. Housing prices were predicted and validated using the LSTM technique, a type of artificial intelligence deep learning technology. LSTM is a network in which a cell state and a hidden state are calculated recursively; the cell state, which plays the role of a conveyor belt, is added to the hidden state of a conventional RNN. The real sale prices of apartments in autonomous districts from January 2006 to December 2019 were collected through the Ministry of Land, Infrastructure and Transport's real sale price open system, and basic apartment and commercial district information was collected through the Public Data Portal and Seoul Metropolitan City data. The collected sale price data were scaled based on the monthly average sale price, and a total of 168 records were organized by preprocessing the data by address. For prediction, the LSTM implementation set the training period to 29 months (April 2015 to August 2017), the validation period to 13 months (September 2017 to September 2018), and the test period to 13 months (December 2018 to December 2019) according to the time series data set. A prediction model of real estate transaction prices was thus designed with an LSTM model based on AI and big data, and the final model built from the collected time series data achieved a prediction similarity of 76 percent. This validates that predicting the rate of return through the LSTM method can be done reliably.
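
A minimal PyTorch sketch of the windowed LSTM setup described above follows; the look-back length, network size, and training loop are assumptions rather than the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, lookback=12):
    """Turn a scaled monthly price series into (lookback -> next month) samples."""
    x, y = [], []
    for i in range(len(series) - lookback):
        x.append(series[i:i + lookback])
        y.append(series[i + lookback])
    x = torch.tensor(np.array(x), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.array(y), dtype=torch.float32).unsqueeze(-1)
    return x, y

class PriceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # predict the next month from the last step

# series = ...  # 168 monthly averaged, scaled sale prices (per the abstract)
# x, y = make_windows(series)
# model = PriceLSTM(); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(200):
#     opt.zero_grad(); loss = nn.functional.mse_loss(model(x), y)
#     loss.backward(); opt.step()
```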

Pavement Performance Model Development Using Bayesian Algorithm (베이지안 기법을 활용한 공용성 모델개발 연구)

  • Mun, Sungho
    • International Journal of Highway Engineering / v.18 no.1 / pp.91-97 / 2016
  • PURPOSES : The objective of this paper is to develop a pavement performance model based on the Bayesian algorithm, and compare the measured and predicted performance data. METHODS : In this paper, several pavement types such as SMA (stone mastic asphalt), PSMA (polymer-modified stone mastic asphalt), PMA (polymer-modified asphalt), SBS (styrene-butadiene-styrene) modified asphalt, and DGA (dense-graded asphalt) are modeled in terms of the performance evaluation of pavement structures, using the Bayesian algorithm. RESULTS : From case studies related to the performance model development, the statistical parameters of the mean value and standard deviation can be obtained through the Bayesian algorithm, using the initial performance data of two different pavement cases. Furthermore, an accurate performance model can be developed, based on the comparison between the measured and predicted performance data. CONCLUSIONS : Based on the results of the case studies, it is concluded that the determined coefficients of the nonlinear performance models can be used to accurately predict the long-term performance behaviors of DGA and modified asphalt concrete pavements. In addition, the developed models were evaluated through comparison studies between the initial measurement and prediction data, as well as between the final measurement and prediction data. In the model development, the initial measured data were used.
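
The abstract does not state the likelihood or prior used; as a generic illustration of obtaining posterior mean and standard deviation by Bayesian updating, the Python sketch below performs a conjugate normal update of a single performance-model coefficient. All numbers are hypothetical.

```python
import numpy as np

def normal_bayes_update(prior_mean, prior_std, data, obs_std):
    """Conjugate normal update of a model coefficient: combine a prior
    (e.g., from a comparable pavement section) with new performance data."""
    n = len(data)
    prior_prec = 1.0 / prior_std**2
    data_prec = n / obs_std**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

# Hypothetical example: a prior coefficient from an SMA section, updated with
# early performance measurements from the section being modelled.
post_mean, post_std = normal_bayes_update(
    prior_mean=0.8, prior_std=0.2, data=[0.72, 0.75, 0.70], obs_std=0.1)
print(post_mean, post_std)
```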

Incorporating BERT-based NLP and Transformer for An Ensemble Model and its Application to Personal Credit Prediction

  • Sophot Ky;Ju-Hong Lee;Kwangtek Na
    • Smart Media Journal / v.13 no.4 / pp.9-15 / 2024
  • Tree-based algorithms have been the dominant methods used to build prediction models for tabular data, including personal credit data. However, they are compatible only with categorical and numerical data and do not capture information about the relationships between features. In this work, we propose an ensemble model using the Transformer architecture that incorporates text features and harnesses the self-attention mechanism to address the feature-relationship limitation. We describe a text formatter module that converts the original tabular data into sentences, which are fed into FinBERT along with other text features. Furthermore, we employ FT-Transformer, which is trained on the original tabular data. We evaluate this multi-modal approach against two popular tree-based algorithms, Random Forest and Extreme Gradient Boosting (XGBoost), as well as TabTransformer. Our proposed method shows superior default recall, F1 score, and AUC results across two public data sets. These results can help financial institutions reduce the risk of financial loss from defaulters.
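
The text formatter is the part of the pipeline most easily sketched from the abstract: the Python snippet below renders one tabular credit record as a sentence that a FinBERT checkpoint could encode. The field names and the checkpoint reference are illustrative assumptions, not the paper's schema.

```python
def row_to_sentence(row: dict) -> str:
    """Render one tabular credit record as a sentence for a BERT-style encoder.
    The field names used here are illustrative placeholders."""
    return (
        f"The applicant is {row['age']} years old, earns {row['income']} per year, "
        f"has {row['open_accounts']} open accounts and a credit history of "
        f"{row['history_years']} years."
    )

sentence = row_to_sentence(
    {"age": 34, "income": 52000, "open_accounts": 3, "history_years": 7})

# The sentence would then be tokenized and encoded by a FinBERT checkpoint,
# e.g. with Hugging Face transformers (assuming the library is installed):
# from transformers import AutoTokenizer, AutoModel
# tok = AutoTokenizer.from_pretrained("ProsusAI/finbert")
# inputs = tok(sentence, return_tensors="pt", truncation=True)
```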