• Title/Summary/Keyword: data-based model

Research on prediction and analysis of supercritical water heat transfer coefficient based on support vector machine

  • Ma Dongliang; Li Yi; Zhou Tao; Huang Yanping
    • Nuclear Engineering and Technology / v.55 no.11 / pp.4102-4111 / 2023
  • To support thermal-hydraulic calculation and analysis of supercritical water reactors, a support vector machine (SVM) model was trained on experimental supercritical water data to predict the heat transfer coefficient. Changes in prediction accuracy were analyzed with respect to the regularization penalty parameter C, the slack variable epsilon, and the Gaussian kernel parameter gamma. The predictions of the SVM model obtained after parameter optimization were verified against experimental test data. The results show that normalization of the data has a great influence on the prediction results; the slack variable has a relatively small influence on the accuracy of the predicted heat transfer coefficient; and the change of gamma has the greatest impact on accuracy. Compared with traditional empirical correlations, the trained SVM model has a smaller average error and standard deviation. The trained SVM model can therefore be used to effectively predict and analyze the heat transfer coefficient of supercritical water.
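A minimal sketch of the kind of SVM regression workflow this abstract describes, using scikit-learn's SVR with an RBF kernel; the feature layout and the grid of C, epsilon, and gamma values are illustrative assumptions, not the paper's actual settings or data:

```python
# Sketch: SVR with RBF kernel for heat transfer coefficient prediction.
# Features and parameter grid are hypothetical, not from the paper.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split

# X: operating conditions (e.g. pressure, mass flux, heat flux, bulk enthalpy)
# y: measured heat transfer coefficient -- replace with real experimental data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = rng.uniform(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Normalization matters (the abstract stresses this), so scale inside a pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    param_grid={
        "svr__C": [1, 10, 100],          # regularization penalty C
        "svr__epsilon": [0.01, 0.1],     # slack variable epsilon
        "svr__gamma": [0.1, 1, 10],      # RBF kernel parameter gamma
    },
    scoring="neg_root_mean_squared_error",
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```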

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung; Chung, Jaekwon; Chung, Yeo Jin; Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service over a specific selling period. From the perspective of the enterprise, accurate market forecasting helps determine the timing of new product introduction and product design, and supports production planning and marketing strategies, enabling more efficient decision-making. It also enables governments to budget efficiently. This study generates market growth curves for ICT (information and communication technology) goods from past time series data, categorizes products showing similar growth patterns, characterizes the markets in the industry, and forecasts the future outlook of such products; it proposes a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm. The methodology proceeds in five stages. In the first stage, past time series data, such as sales volume and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other government organizations for the target products or services of each categorized industry; data that cannot be analyzed because of missing history or altered code names require pre-processing. In the second stage, an optimal model for market forecasting is selected, and this choice can vary with the characteristics of each industry. Because this study focuses on the ICT industry, where new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered; the hybrid model considered here estimates the size of the market potential with the Logistic and Gompertz models and feeds those figures into the Bass model. The third stage evaluates which model explains the data most accurately: parameters are estimated from the collected time series, predictive values are generated, and the root-mean-squared error (RMSE) is calculated; the model with the lowest average RMSE across all product types is taken as the best model. In the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm: a self-organizing map is trained with the market pattern parameters of all products or services as input, organizing them onto an N x N map. The number of clusters increases from 2 to M depending on the characteristics of the nodes on the map; the clusters are divided into zones, and the clusters providing the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are drawn and the market growth pattern map is completed. In the last stage, the final characteristics of the clusters and the market growth curves are determined: the average of the market growth pattern parameters within a cluster is taken as its representative figure, a growth curve is drawn for each cluster, and its characteristics are analyzed. Considering the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
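A compact sketch of the model-selection step (the third stage), fitting Logistic, Gompertz, and Bass growth curves with SciPy and picking the one with the lowest RMSE; the functional forms are the standard textbook ones, while the data and initial guesses are placeholders rather than the paper's ICT series:

```python
# Sketch: fit Logistic, Gompertz, and Bass growth models to a sales series
# and select the one with the lowest RMSE. Data and initial guesses are
# illustrative placeholders, not the paper's ICT data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, a, b):
    return m / (1.0 + np.exp(-b * (t - a)))

def gompertz(t, m, b, c):
    return m * np.exp(-b * np.exp(-c * t))

def bass(t, m, p, q):
    # Cumulative adopters under the Bass diffusion model.
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(1, 21, dtype=float)
sales = 1000 / (1 + np.exp(-0.5 * (t - 10)))   # fake cumulative sales series

models = {
    "logistic": (logistic, [1000, 10, 0.5]),
    "gompertz": (gompertz, [1000, 5, 0.3]),
    "bass": (bass, [1000, 0.03, 0.4]),
}
for name, (f, p0) in models.items():
    params, _ = curve_fit(f, t, sales, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((f(t, *params) - sales) ** 2))
    print(f"{name}: RMSE={rmse:.3f}, params={np.round(params, 3)}")
```

In the paper's process, the per-product parameters of the winning model would then become the input vectors for the self-organizing map clustering.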

A Fast and Exact Verification of Inter-Domain Data Transfer based on PKI

  • Jung, Im-Y.; Eom, Hyeon-Sang; Yeom, Heon-Y.
    • Journal of Information Technology Applications and Management / v.18 no.3 / pp.61-72 / 2011
  • Trust in the data created, processed, and transferred in e-Science environments can be estimated with provenance. The information that forms provenance, which records how the data was created and reached its current state, grows as the data evolves, and tracing and verifying this massive provenance in order to trust the data is a heavy burden. How to trust the verification of data with provenance is another issue. This paper proposes a fast and exact verification of inter-domain data transfer and data origin for e-Science environments based on PKI. The verification, called two-way verification, cuts down the overhead of tracking data along the causality presented in the Open Provenance Model by exploiting the domain specialty of e-Science environments supported by the Grid Security Infrastructure (GSI). The proposed scheme is easily applicable without extra infrastructure, scalable irrespective of the number of provenance records, transparent, low-overhead, and secured with cryptography.
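A minimal sketch of PKI-style verification of a transfer record using the Python cryptography package; this illustrates a generic sign-and-verify of a provenance record, not the paper's two-way protocol or GSI specifics, and the record format is invented:

```python
# Sketch: sign a provenance record at the source domain and verify it at the
# destination. Generic RSA sign/verify, not the paper's two-way verification.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Source domain's key pair (in practice, certified by a CA in the PKI).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b"data-id=42;from=domainA;to=domainB"   # hypothetical transfer record

# Source domain signs the transfer record.
signature = private_key.sign(
    record,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Destination domain verifies it; raises InvalidSignature on tampering.
public_key.verify(
    signature,
    record,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("transfer record verified")
```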

A Survey on Prognostics and Comparison Study on the Model-Based Prognostics (예지기술의 연구동향 및 모델기반 예지기술 비교연구)

  • Choi, Joo-Ho; An, Da-Wn; Gang, Jin-Hyuk
    • Journal of Institute of Control, Robotics and Systems / v.17 no.11 / pp.1095-1100 / 2011
  • In this paper, PHM (Prognostics and Health Management) techniques are briefly outlined. Prognostics, the central step within PHM, is explained in more detail; there are three approaches: experience-based, data-driven, and model-based. Representative articles in the field of prognostics are also given in terms of the type of fault. The model-based method is illustrated with a case study applied to the crack growth of the gear plate in a UH-60A helicopter. The paper also compares the OBM (Overall Bayesian Method), developed by the authors, with the PF (Particle Filtering) method, which has recently drawn great attention in prognostics, through a study on a simple crack growth problem. Their performances are examined by evaluating the metrics introduced by the PHM society.
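A compact sketch of particle filtering applied to Paris-law crack growth, the kind of simple problem such comparisons use; the Paris constants, noise levels, and "measurements" below are invented for illustration and are not the paper's case study:

```python
# Sketch: particle filter prognostics on Paris-law crack growth
# da/dN = C * (dK)^m with dK = dS * sqrt(pi * a).
# All constants and "measurements" are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
N_P, dS = 2000, 78.0                       # particles, stress range [MPa]
C, m = 1.5e-10, 3.8                        # "true" Paris-law constants

def grow(a, C, m, dN=100):
    return a + C * (dS * np.sqrt(np.pi * a)) ** m * dN

# Particles: crack size a plus uncertain log10(C) and m per particle.
a = np.full(N_P, 0.01) + rng.normal(0, 1e-4, N_P)
logC = rng.normal(np.log10(C), 0.2, N_P)
mm = rng.normal(m, 0.2, N_P)
w = np.full(N_P, 1.0 / N_P)

a_true, sigma_meas = 0.01, 5e-4
for step in range(10):
    a_true = grow(a_true, C, m)
    z = a_true + rng.normal(0, sigma_meas)           # noisy crack measurement
    a = grow(a, 10.0 ** logC, mm)                    # propagate particles
    w *= np.exp(-0.5 * ((z - a) / sigma_meas) ** 2)  # Gaussian likelihood
    w /= w.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(w ** 2) < N_P / 2:
        idx = rng.choice(N_P, N_P, p=w)
        a, logC, mm = a[idx], logC[idx], mm[idx]
        w = np.full(N_P, 1.0 / N_P)
    print(f"cycle {100 * (step + 1)}: est a = {np.sum(w * a):.5f}, true a = {a_true:.5f}")
```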

Multivariate Congestion Prediction using Stacked LSTM Autoencoder based Bidirectional LSTM Model

  • Vijayalakshmi, B; Thanga, Ramya S; Ramar, K
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.1 / pp.216-238 / 2023
  • In intelligent transportation systems, traffic management is an important task. Accurate forecasting of traffic characteristics such as flow, congestion, and density remains an active research area because of the non-linear nature and uncertainty of the spatiotemporal data. Inclement weather, such as rain and snow, and special events such as holidays, accidents, and road closures have a significant impact on driving and on the average speed of vehicles, which lowers traffic capacity and causes widespread congestion. This work designs a model for multivariate short-term traffic congestion prediction, SLSTM_AE-BiLSTM. The proposed design consists of a Bidirectional Long Short-Term Memory (BiLSTM) network to predict the traffic flow value and a Convolutional Neural Network (CNN) model for detecting the congestion status. The model uses spatially static, temporally dynamic data. A stacked Long Short-Term Memory Autoencoder (SLSTM AE) encodes the weather features into a reduced, more informative feature space. The BiLSTM model captures features from past and present traffic data simultaneously and identifies long-term dependencies, using the traffic data and the encoded weather data to perform the traffic flow prediction. The CNN model predicts the recurring congestion status from the predicted traffic flow value for a particular urban traffic network. The publicly available Caltrans PeMS dataset with traffic parameters is used. The proposed model predicts congestion with an accuracy of 92.74%, slightly better than other deep learning models for congestion prediction.
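A rough Keras sketch of the two pieces the abstract names: a stacked LSTM autoencoder that compresses weather features, and a BiLSTM regressor over traffic plus the encoded weather. Layer sizes, window length, and feature counts are guesses for illustration, and the CNN congestion classifier is omitted for brevity:

```python
# Sketch: stacked LSTM autoencoder (weather encoder) + BiLSTM flow predictor.
# Layer sizes, window length, and feature counts are illustrative guesses.
import tensorflow as tf
from tensorflow.keras import layers, Model

T, WEATHER_F, TRAFFIC_F, LATENT = 12, 8, 3, 4     # window and feature dims

# Stacked LSTM autoencoder: encode a weather window to a small latent space.
w_in = layers.Input(shape=(T, WEATHER_F))
h = layers.LSTM(32, return_sequences=True)(w_in)
latent = layers.LSTM(LATENT)(h)                   # encoded weather features
d = layers.RepeatVector(T)(latent)
d = layers.LSTM(32, return_sequences=True)(d)
w_out = layers.TimeDistributed(layers.Dense(WEATHER_F))(d)
autoencoder = Model(w_in, w_out)
encoder = Model(w_in, latent)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(weather_windows, weather_windows, epochs=...)

# BiLSTM: traffic window + encoded weather -> next traffic flow value.
t_in = layers.Input(shape=(T, TRAFFIC_F))
e_in = layers.Input(shape=(LATENT,))
x = layers.Bidirectional(layers.LSTM(64))(t_in)
x = layers.Concatenate()([x, e_in])
flow = layers.Dense(1)(x)
predictor = Model([t_in, e_in], flow)
predictor.compile(optimizer="adam", loss="mse")
predictor.summary()
```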

Template-based Automatic 3D Model Generation from Automotive Freehand Sketch (템플릿을 이용한 자동차 프리핸드 스케치의 3D 모델로 자동변환)

  • Cheon, S.U.; Han, S.H.
    • Korean Journal of Computational Design and Engineering / v.12 no.4 / pp.283-297 / 2007
  • Seamless data integration in the CAx chain of CAD/CAPP/CAM/CNC has been achieved to a high degree, but transferring data from conceptual sketches into a CAD system still needs further research. This paper presents a method for reconstructing a 3D model from a freehand sketch. Sketch-based modeling research can be classified into gestural and reconstructional modeling methods; this research follows the reconstructional approach. Mitani's seminal work, designed for box-shaped 3D models using a predefined template, is improved by leveraging a relational template and specialized for automotive design. Matching between the edge graphs of the relational template and the sketch is formulated and solved as an assignment problem using the feature vectors of the edges. The procedures and techniques needed to implement the template-based modeling method, including the stroke pre-processing required to generate an edge graph from a sketch, are described, and examples from a working implementation are given.
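The edge-matching step can be illustrated with SciPy's Hungarian-algorithm solver; the edge feature vectors below (length, orientation, midpoint x/y) are made up, since the paper's exact features are not given in the abstract:

```python
# Sketch: match template edges to sketch edges as an assignment problem.
# Feature vectors (length, angle, midpoint x/y) are illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

template_edges = np.array([   # one row per template edge
    [1.0, 0.00, 0.5, 0.0],
    [0.5, 1.57, 1.0, 0.3],
    [1.2, 0.80, 0.4, 0.6],
])
sketch_edges = np.array([     # one row per stroke-derived sketch edge
    [1.1, 0.05, 0.5, 0.1],
    [1.3, 0.75, 0.4, 0.6],
    [0.4, 1.50, 1.0, 0.3],
])

# Cost matrix: distance between edge feature vectors.
cost = cdist(template_edges, sketch_edges)

# Hungarian algorithm: minimum-cost one-to-one matching.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"template edge {r} -> sketch edge {c} (cost {cost[r, c]:.3f})")
```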

The Effects of Subjective Norm and Social Interactivity on Usage Intention in WBC Learning Systems (웹기반 협동학습 시스템에서의 주관적 규범과 사회적 상호작용이 지속적 사용의도에 미치는 영향)

  • Lee, Dong-Hoon; Lee, Sang-Kon; Lee, Ji-Yeon
    • Journal of Information Technology Services / v.7 no.4 / pp.21-43 / 2008
  • This paper develops a research model for understanding learners' intention to continue using web-based collaborative learning (WBCL) systems. The model is based on Davis' Technology Acceptance Model (TAM) and social interactivity theory. Data were collected from 225 university students at two institutions, who were divided into 46 groups and asked to complete an online TOEIC preparation module using a WBCL system over 4 weeks. Data were collected at three points for each participant: before, 3 weeks after, and at the end of the online module. The results show that the TAM-based belief factors (usefulness, ease of use, and playfulness) are important determinants of usage intention in WBCL systems. The study also found subjective norm and the leader's enthusiasm to be external factors of the extended TAM in the WBCL context.

Fundamentals of Numerical Modeling of the Mid-latitude Ionosphere

  • Geonhwa Jee
    • Journal of Astronomy and Space Sciences / v.40 no.1 / pp.11-18 / 2023
  • The ionosphere is one of the key components of the near-Earth space environment and, as the region of that environment nearest to the Earth, has practical consequences for human society. It is therefore essential to specify and forecast the state of the ionosphere using both observations and numerical models. In particular, numerical modeling of the ionosphere is a prerequisite not only for a better understanding of the physical processes occurring within it but also for the specification and forecasting of space weather. There are several approaches to modeling the ionosphere, including data-based empirical modeling, physics-based theoretical modeling, and data assimilation modeling. In this review, these three types of ionospheric model are briefly introduced along with recently available models. Among these approaches, the fundamental aspects of the physics-based model are described using the basic equations governing the mid-latitude ionosphere, and a numerical solution of the equations is discussed together with the required boundary conditions.
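For reference, the core of such a physics-based model is the set of fluid equations solved for each ion species; a representative one is the ion continuity equation (the momentum and energy equations are treated analogously), written here in its standard textbook form:

```latex
% Ion continuity equation for species i: the local density change balances
% transport, photochemical production P_i, and loss L_i.
\frac{\partial n_i}{\partial t} + \nabla \cdot \left( n_i \mathbf{v}_i \right) = P_i - L_i
```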

Enhanced deep soft interference cancellation for multiuser symbol detection

  • Jihyung Kim; Junghyun Kim; Moon-Sik Lee
    • ETRI Journal / v.45 no.6 / pp.929-938 / 2023
  • Detecting all the symbols transmitted simultaneously in multiuser systems over limited wireless resources is challenging. Traditional model-based methods show high performance with perfect channel state information (CSI), but degrade severely when perfect CSI cannot be acquired. In contrast, data-driven methods perform slightly worse than model-based methods in terms of symbol error ratio under perfect CSI, yet avoid the extreme degradation seen under imperfect CSI. This study proposes a novel deep learning-based method that improves a state-of-the-art data-driven technique called deep soft interference cancellation (DSIC). The enhanced DSIC (EDSIC) method detects multiuser symbols in a fully sequential manner and uses an efficient neural network structure to ensure high performance. Additionally, error-propagation mitigation techniques provide robustness against channel uncertainty. EDSIC achieves performance very close to that of the optimal model-based methods in perfect-CSI environments and the best performance in imperfect-CSI environments.
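A toy numpy sketch of the classical soft interference cancellation idea that DSIC learns to improve: detect one user after subtracting the other user's expected (soft) symbol. The two-user BPSK setup and all parameters are invented, and no neural network is involved:

```python
# Toy sketch of soft interference cancellation (soft IC) for two BPSK users
# sharing one channel: y = h1*s1 + h2*s2 + noise. Setup is invented.
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 10000, 0.5
h1, h2 = 1.0, 0.8
s1 = rng.choice([-1.0, 1.0], n)
s2 = rng.choice([-1.0, 1.0], n)
y = h1 * s1 + h2 * s2 + rng.normal(0, sigma, n)

# Soft estimate of user 2's symbol: posterior mean E[s2|y], treating
# user 1's contribution as additional Gaussian interference.
eff_var = sigma**2 + h1**2            # noise + user-1 interference power
soft_s2 = np.tanh(h2 * y / eff_var)   # E[s2|y] for BPSK under Gaussian noise

# Cancel the expected interference, then detect user 1.
y_clean = y - h2 * soft_s2
s1_hat = np.sign(y_clean)
print("user-1 SER with soft IC:", np.mean(s1_hat != s1))
print("user-1 SER without IC: ", np.mean(np.sign(y) != s1))
```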

Real-time Oil Spill Dispersion Modelling (실시간 유출유 확산모델링)

  • 정연철
    • Journal of the Korean Society of Marine Environment & Safety / v.5 no.1 / pp.9-18 / 1999
  • To predict oil spill dispersion in the ocean, an oil spill response model based on a Lagrangian particle-tracking method, usable for strategic purposes at a spill site, was formulated and applied to the area around Pusan port, where the tanker No.1 Youil struck a small rock near Namhyungjeto on September 21, 1995. The real-time tidal currents required as input to the oil spill model were obtained from a two-dimensional hydrodynamic model and a tide prediction model; evaluation of the tidal currents against observation data was successful. For wind input, data observed on the spot were used. To verify the oil spill model, the modelling results were compared with field data from the spill site; some discrepancies exist, but the general pattern of the modelling results was similar to that of the field observations. The modelling results for 7 days after the spill show that 40% of the spilled oil was floating, 36% had evaporated, 23% had reached the shore, and 1% had left the model boundary. According to the evaluation of the weighting of the components driving the dispersion, winds contributed 37%, turbulent diffusion 39.5%, and tidal currents 23.5%. If more accurate wind data were available, more favorable results could be obtained.
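A minimal numpy sketch of the Lagrangian particle-tracking core: particles are advected by the current plus a wind drift term and spread by a random walk representing turbulent diffusion. The velocity fields, wind drift factor, and diffusivity are placeholder values, not the paper's calibrated hydrodynamic inputs:

```python
# Sketch: Lagrangian particle tracking for spilled oil. Particles drift with
# the tidal current plus a fraction of the wind, and spread by a random walk.
# Velocity fields, drift factor, and diffusivity are placeholder values.
import numpy as np

rng = np.random.default_rng(3)
n_particles, dt, n_steps = 5000, 600.0, 144   # 600 s steps, 24 h total
K = 1.0                                        # turbulent diffusivity [m^2/s]
wind_drift = 0.03                              # ~3% of wind speed, a common rule

x = np.zeros(n_particles)                      # spill starts at the origin [m]
y = np.zeros(n_particles)

def current(t):                                # fake rotating tidal current [m/s]
    w = 2 * np.pi / (12.42 * 3600)             # M2 tidal frequency
    return 0.3 * np.cos(w * t), 0.3 * np.sin(w * t)

def wind(t):                                   # fake steady wind [m/s]
    return 8.0, 2.0

t = 0.0
for _ in range(n_steps):
    ucur, vcur = current(t)
    uw, vw = wind(t)
    # Advection by current + wind drift; diffusion as a Gaussian random walk.
    x += (ucur + wind_drift * uw) * dt + rng.normal(0, np.sqrt(2 * K * dt), n_particles)
    y += (vcur + wind_drift * vw) * dt + rng.normal(0, np.sqrt(2 * K * dt), n_particles)
    t += dt

print(f"slick centre after 24 h: ({x.mean() / 1000:.1f} km, {y.mean() / 1000:.1f} km)")
```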
