• Title/Summary/Keyword: data-based model

Search Results: 21,096

Development of Coil Breakage Prediction Model In Cold Rolling Mill

  • Park, Yeong-Bok;Hwang, Hwa-Won
    • 제어로봇시스템학회:학술대회논문집 / 2005.06a / pp.1343-1346 / 2005
  • In a cold rolling mill, coil breakage generated during the rolling process causes various problems such as reduced productivity and equipment damage. Recent research has relied on mechanical analysis, such as the analysis of roll chattering or strip inclining, and on breakage prevention that detects cracks in the coil, but these approaches cover only certain kinds of breakage. Coil breakage is difficult to predict because it occurs rarely and under complicated conditions. We propose effective prediction models for coil breakage in the rolling process based on data mining. We built three prediction models: (1) a decision tree based model, (2) a regression based model, and (3) a neural network based model. To reduce the number of model parameters, we selected important variables related to the occurrence of coil breakage from the coil-setup attributes using decision trees, variable selection methods, and the judgment of domain experts. We developed these prediction models and chose the best among them using the SEMMA process provided in the SAS E-Miner environment. We estimated model accuracy by scoring the prediction model with the posterior probability. We also developed a software tool that analyzes the data and generates the proposed prediction models either automatically or in a user-driven manner; it also provides an effective visualization feature based on PCA (Principal Component Analysis). (A rough code sketch of this kind of pipeline follows below.)

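As a rough illustration of the pipeline the abstract describes, the sketch below fits a decision-tree breakage classifier with simple variable selection in scikit-learn rather than SAS E-Miner; the file name, column names, and number of selected variables are assumptions, not the authors' setup.

```python
# Hypothetical sketch: tree-based coil-breakage prediction with simple
# variable selection, loosely following a SEMMA-style flow
# (sample -> explore/select -> model -> assess). Column names are made up.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score

df = pd.read_csv("coil_setup.csv")          # assumed file of coil-setup attributes
X, y = df.drop(columns=["breakage"]), df["breakage"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Reduce the attribute set to the variables most related to breakage.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

# One of the three candidate model families; regression and neural-network
# models could be fit the same way and compared on held-out scores.
tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
tree.fit(X_train_sel, y_train)

posterior = tree.predict_proba(X_test_sel)[:, 1]   # scored posterior probability
print("AUC:", roc_auc_score(y_test, posterior))
```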

A Prediction Model for Stage of Change of Exercise In the Korean Elderly -Based on the Transtheoretical Model- (한국노인의 운동행위 변화단계의 예측모형구축 -범이론적 모델(Transtheoretical Model)을 기반으로-)

  • 김순용;김소인;전영자;이평숙;이숙자;박은숙;장성옥
    • Journal of Korean Academy of Nursing / v.30 no.2 / pp.366-379 / 2000
  • The purpose of this study was to identify causal relationships among the variables of the transtheoretical model for exercise in the elderly. A predictive model explaining the stage of change was constructed based on the transtheoretical model. Empirical data for testing the hypothetical model were collected from 198 adults over 60 years of age in a community setting in Seoul, Korea, in April and May 1999. Data were analyzed by descriptive statistics and correlational analysis using the PC-SAS program. The Linear Structural Relations (LISREL) 8.0 program was used to find the best-fit model predicting the causal relationships among the variables. The fit of the hypothetical model to the data was not favorable (χ²=132.85, df=22, p=.000; GFI=.88, NNFI=.35, NFI=.77, AGFI=.59), but the fit of the modified model was more than moderate (χ²=46.90, df=27, p=.01; GFI=.95, NNFI=.91, NFI=.92, AGFI=.87). The variables predicting the stage of change for exercise in the Korean elderly were helping relationships, self-cognitive determination, conversion of negative conditions in the process of change, and exercise self-efficacy. These variables explained 68% of the stage of change for exercise in the Korean elderly. (A sketch of this kind of structural model in open-source software follows below.)

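For readers without LISREL, the same kind of stage-of-change path model can be sketched with the open-source semopy package; the construct names below are placeholders for the study's actual measures, and this is not the authors' model specification.

```python
# Hypothetical path-model sketch in semopy (lavaan-style syntax), standing in
# for the LISREL 8.0 analysis described in the abstract. Variable names are
# placeholders for the study's measured constructs.
import pandas as pd
import semopy

# Stage of change regressed on process-of-change variables and self-efficacy.
desc = "stage ~ helping + self_determination + negative_conversion + efficacy"

data = pd.read_csv("elderly_exercise.csv")   # assumed survey data file
model = semopy.Model(desc)
model.fit(data)

print(model.inspect())           # path coefficients
print(semopy.calc_stats(model))  # chi-square, GFI, AGFI, NFI, TLI, RMSEA, ...
```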

Automatic Local Update of Triangular Mesh Models Based on Measurement Point Clouds (측정된 점데이터 기반 삼각형망 곡면 메쉬 모델의 국부적 자동 수정)

  • Woo, Hyuck-Je;Lee, Jong-Dae;Lee, Kwan-H.
    • Korean Journal of Computational Design and Engineering / v.11 no.5 / pp.335-343 / 2006
  • Design changes to an original surface model are frequently required in manufacturing, for example when physical parts are modified or when parts are partially manufactured from analogous shapes. In such cases, an efficient method for updating a 3D model by locally adding scan data for the modified area is highly desirable. For this purpose, this paper presents a new procedure to update an initial model composed of combinatorial triangular facets based on a set of locally added point data. The initial surface model is first created from the initial point set by Tight Cocone, a water-tight surface reconstructor; the point cloud data for the update is then locally added onto the initial model while maintaining the same coordinate system. To update the initial model, the region on the initial surface that needs to be updated is recognized by detecting the overlapping area between the initial model and the boundary of the newly added point cloud. The initial surface model is then updated to the final output by replacing the recognized region with the newly added point cloud. The proposed method has been implemented and tested with several examples. The algorithm is practically useful for modifying surface models after physical part changes and for free-form surface design.
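
The overlap-detection step outlined above can be prototyped with a nearest-neighbor query; the following is a minimal sketch, not the authors' implementation, and the distance threshold and array layouts are assumptions.

```python
# Hypothetical sketch of the overlap-detection step: mark the vertices of the
# initial triangular mesh that lie near the newly added point cloud, then
# drop the triangles in that region so it can be re-meshed from the new data.
# Thresholds and the surrounding re-meshing step are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def find_region_to_replace(vertices, faces, new_points, radius=1.0):
    """vertices: (n,3) mesh vertices, faces: (m,3) vertex indices,
    new_points: (k,3) locally added scan data in the same coordinate system."""
    tree = cKDTree(new_points)
    # Distance from every initial-mesh vertex to the nearest new scan point.
    dist, _ = tree.query(vertices)
    covered = dist < radius                     # vertices overlapped by the new cloud

    # Triangles whose three vertices are all covered belong to the update region.
    face_covered = covered[faces].all(axis=1)
    kept_faces = faces[~face_covered]           # remainder of the initial model
    return kept_faces, np.where(face_covered)[0]

# Usage (hypothetical arrays):
# kept_faces, replaced = find_region_to_replace(V, F, P_new, radius=2.0 * avg_edge)
```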

Stream-based Biomedical Classification Algorithms for Analyzing Biosignals

  • Fong, Simon;Hang, Yang;Mohammed, Sabah;Fiaidhi, Jinan
    • Journal of Information Processing Systems / v.7 no.4 / pp.717-732 / 2011
  • Classification in biomedical applications is an important task that predicts or classifies an outcome based on a given set of input variables, such as diagnostic tests or the symptoms of a patient. Traditionally, classification algorithms have to digest a stationary set of historical data in order to train a decision-tree model, and the learned model can then be used for testing new samples. However, a newer breed of classification, called stream-based classification, can handle continuous data streams that are ever-evolving, unbounded, and unstructured, such as live biosignal feeds. These emerging algorithms can potentially be used for real-time classification over biosignal data streams such as EEG and ECG. This paper presents a pioneering effort that studies the feasibility of classification algorithms for analyzing biosignals in the form of infinite data streams. First, a performance comparison is made between traditional and stream-based classification. The results show that accuracy declines intermittently for traditional classification due to the need to re-learn the model as new data arrive. Second, we show by simulation that biosignal data streams can be processed with a satisfactory level of accuracy, memory requirement, and speed by using a collection of stream-mining algorithms called Optimized Very Fast Decision Trees. These algorithms can effectively serve as a cornerstone technology for real-time classification in future biomedical applications.
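
A minimal sketch of the test-then-train (prequential) stream classification the paper evaluates, using a Hoeffding tree from the open-source river library; the Phishing dataset is only a stand-in for a live biosignal feed, and the hyperparameter value is an assumption.

```python
# Hypothetical sketch of prequential (test-then-train) stream classification
# with a Hoeffding tree, standing in for the Optimized VFDT family discussed
# in the paper. In a biosignal setting, each x would be a dict of windowed
# EEG/ECG features; datasets.Phishing() is used here only as a stand-in stream.
from river import datasets, metrics, tree

model = tree.HoeffdingTreeClassifier(grace_period=200)
acc = metrics.Accuracy()

for x, y in datasets.Phishing():       # stand-in for a live biosignal feed
    y_pred = model.predict_one(x)      # test first ...
    if y_pred is not None:
        acc.update(y, y_pred)
    model.learn_one(x, y)              # ... then train on the same sample

print(acc)                             # running prequential accuracy
```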

A Flexible Modeling Approach for Current Status Survival Data via Pseudo-Observations

  • Han, Seungbong;Andrei, Adin-Cristian;Tsui, Kam-Wah
    • The Korean Journal of Applied Statistics / v.25 no.6 / pp.947-958 / 2012
  • When modeling event times in biomedical studies, the outcome might be incompletely observed. In this paper, we assume that the outcome is recorded as current status failure time data. Despite a well-developed literature, the routine practical use of many current status data modeling methods remains infrequent due to the lack of specialized statistical software, the difficulty of assessing model goodness-of-fit, and the possible loss of information caused by covariate grouping or discretization. We propose a model based on pseudo-observations that is convenient to implement and allows flexibility in the choice of the outcome. Parameter estimates are obtained from generalized estimating equations. Examples from studies of bile duct hyperplasia and breast cancer, together with simulated data, illustrate the practical advantages of this model.
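
The pseudo-observation idea can be sketched generically as jackknife pseudo-values fed into a GEE; the crude estimator and the data columns below are stand-ins for the authors' current-status estimator, not their method.

```python
# Hypothetical sketch of the pseudo-observation idea: jackknife pseudo-values
# for a survival probability at a fixed time t0, regressed on covariates via
# GEE. theta() is a crude stand-in for a proper current-status estimator, and
# the data columns are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def theta(monitor_time, status, t0):
    """Stand-in estimate of S(t0) from current status data
    (status = 1 if the event had already occurred at the monitoring time)."""
    at_t0 = monitor_time >= t0
    return 1.0 - status[at_t0].mean() if at_t0.any() else 1.0

df = pd.read_csv("current_status.csv")        # assumed columns: time, status, age, dose
n = len(df)
full = theta(df["time"].values, df["status"].values, t0=5.0)

pseudo = np.empty(n)
for i in range(n):                             # leave-one-out pseudo-observations
    rest = df.drop(df.index[i])
    loo = theta(rest["time"].values, rest["status"].values, t0=5.0)
    pseudo[i] = n * full - (n - 1) * loo       # jackknife pseudo-value

X = sm.add_constant(df[["age", "dose"]])
gee = sm.GEE(pseudo, X, groups=np.arange(n),   # each subject is its own cluster
             family=sm.families.Gaussian())
print(gee.fit().summary())
```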

Study on Object Modelling for Oceanic Data (해양자료 객체 DB 모델링 연구)

  • 박종민;서상현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.454-457 / 1999
  • Most oceanic information is related to spatial properties, directly or implicitly, and is presented differently depending on the application. Without an efficient integrated data model and strategy, redundant development of both the databases themselves and the application systems inevitably occurs. To avoid these inefficiencies, the most basic need is to establish an ocean data infrastructure based on a unified data model. In this paper we suggest a guideline for an object data model based on the ocean GIS concept, followed by a sample primitive object data model implementing the proposed guideline. With such a unified data model we can expect improvement in every phase of the ocean data environment, from acquisition through translation and utilization to service and exchange. (An illustrative object-model sketch follows below.)

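An entirely illustrative sketch of what a "primitive object data model" for ocean GIS data might look like; the class and field names are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch in the spirit of the paper's ocean-GIS guideline:
# a spatial base object specialized by a few oceanic feature types.
# Class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SpatialObject:
    object_id: str
    geometry: List[Tuple[float, float]]        # lon/lat vertices
    acquired_at: str                           # ISO timestamp of acquisition
    source: str                                # survey, sensor, chart, ...

@dataclass
class ObservationStation(SpatialObject):
    parameters: List[str] = field(default_factory=list)   # e.g. temperature, salinity

@dataclass
class Coastline(SpatialObject):
    scale: int = 50000                         # source chart scale
```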

A Human Movement Stream Processing System for Estimating Worker Locations in Shipyards

  • Duong, Dat Van Anh;Yoon, Seokhoon
    • International Journal of Internet, Broadcasting and Communication / v.13 no.4 / pp.135-142 / 2021
  • Estimating the locations of workers in a shipyard is beneficial for a variety of applications, such as selecting potential forwarders for transferring data in IoT services and quickly rescuing workers in the event of industrial disasters or accidents. In this work, we propose a human movement stream processing system for estimating worker locations in shipyards based on Apache Spark and TensorFlow Serving. First, we use Apache Spark to process location data streams. Then, we design a worker location prediction model to estimate the locations of workers. TensorFlow Serving manages and executes the worker location prediction model. When requests arrive from clients, Apache Spark extracts input data from the processed data for the prediction model and sends it to TensorFlow Serving, which estimates the workers' locations. Worker movement data are needed to evaluate the proposed system, but no worker movement traces are available for shipyards, so we also develop a mobility model for generating worker movements in shipyards. The proposed system is evaluated on this synthetic data; it achieves high performance and could be used for a variety of tasks in shipyards.
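
The hand-off between the stream processor and the model server can be sketched with TensorFlow Serving's standard REST predict API; the host, model name, and feature layout below are assumptions about a deployment like the one described, not the authors' code.

```python
# Hypothetical sketch: features derived from the processed location stream are
# sent to a TensorFlow Serving REST endpoint for location prediction. Host,
# model name, and feature layout are assumptions.
import json
import requests

def predict_locations(feature_rows, host="localhost", model="worker_location"):
    """feature_rows: list of fixed-length feature vectors extracted by Spark."""
    url = f"http://{host}:8501/v1/models/{model}:predict"
    payload = {"instances": feature_rows}
    resp = requests.post(url, data=json.dumps(payload), timeout=5)
    resp.raise_for_status()
    return resp.json()["predictions"]          # one predicted location per row

# Example (hypothetical features):
# predictions = predict_locations([[12.3, 45.6, 0.8], [11.9, 44.2, 0.5]])
```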

Construction of an Internet of Things Industry Chain Classification Model Based on IRFA and Text Analysis

  • Zhimin Wang
    • Journal of Information Processing Systems / v.20 no.2 / pp.215-225 / 2024
  • With the rapid development of the Internet of Things (IoT) and big data technology, a large amount of data is generated during the operation of related industries. Accurately classifying this data has become the core problem in data mining and processing for the IoT industry chain. This study constructs a classification model for the IoT industry chain based on an improved random forest algorithm and text analysis, aiming to achieve efficient and accurate classification of IoT industry chain big data by improving the traditional algorithm. The accuracy, precision, recall, and AUC of the traditional random forest algorithm and the proposed algorithm are compared on different datasets. The experimental results show that the proposed model performs better across the datasets: its accuracy and recall on four datasets exceed those of the traditional algorithm, and its accuracy on two datasets, P-I Diabetes and Loan Default, exceeds that of the random forest model, yielding better final classification results. This model makes it possible to accurately classify the massive data generated in the IoT industry chain, providing more research value for IoT industry chain data mining and processing technology.
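
As a point of reference for the comparison the abstract reports, a plain (non-improved) random forest over TF-IDF text features might look like the following; the data file and columns are assumptions, and this is the baseline, not the proposed IRFA model.

```python
# Baseline sketch (not the paper's improved algorithm): a plain random forest
# over TF-IDF text features, the kind of model the proposed IRFA variant is
# compared against. Data file and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("iot_chain_records.csv")       # assumed columns: 'text', 'segment'
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["segment"], test_size=0.2, stratify=df["segment"], random_state=0)

clf = make_pipeline(
    TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```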

A Study on the Intention to Use MyData Service based on Open Banking (오픈뱅킹 기반의 마이데이터 서비스 이용의도에 관한 연구)

  • Lee, Jongsub;Choi, Jaeseob;Choi, Jeongil
    • Journal of Information Technology Services / v.21 no.1 / pp.1-19 / 2022
  • With the revision of the Credit Information Use and Protection Act in August 2020, the MyData service based on the open banking policy takes effect in January 2022. Nonetheless, previous studies have focused on the legal system or security-related issues of such services. This paper therefore conducted an empirical study of financial consumers aged 20 or older nationwide to analyze the factors that influence the intention to use open banking-based MyData services. Five characteristics representing the open banking-based MyData service were derived from prior research, and a research model combining the value-based adoption model and privacy calculus theory was presented. The proposed research model and the relationships among its variables were analyzed using a randomly selected sample of 400 users. The results of the empirical analysis showed that, among the service characteristics, personalization had the greatest influence on benefits and reliability had the greatest influence on sacrifice. They also suggest that MyData operators should devote themselves to providing customized services optimized for customers and to establishing trust relationships. Both usefulness and enjoyment had a strong influence on perceived value, and on the sacrifice side, the burden of financial costs had a greater influence than privacy concerns. This study is meaningful in that it explored the psychological propensities of financial consumers to identify service utilization factors and presented a new approach that can contribute to the successful settlement of the domestic MyData industry.

Reliability Estimation in Bivariate Pareto Model with Bivariate Type I Censored Data

  • Cho, Jang-Sik;Cho, Kil-Ho;Kang, Sang-Gil
    • Journal of the Korean Data and Information Science Society / v.14 no.4 / pp.837-844 / 2003
  • In this paper, we obtain estimators of system reliability for the bivariate Pareto model with bivariate type I censored data. We derive the estimators and approximate confidence intervals for the reliability of a parallel system based on the likelihood function and on relative frequency, respectively. We also present a numerical example using a data set generated by computer. (A simulation sketch of the relative-frequency estimator follows below.)

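The relative-frequency estimator for parallel-system reliability can be illustrated by simulation; the bivariate Pareto sampler, parameter values, and censoring time below are assumptions chosen for illustration, not the paper's setup.

```python
# Hypothetical sketch of the relative-frequency estimator for parallel-system
# reliability R(t) = P(max(X, Y) > t) under bivariate type I censoring, where
# both components are observed only up to a fixed censoring time c (with t < c).
# The gamma-frailty bivariate Pareto sampler is illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def sample_bivariate_pareto(n, a=2.0, theta1=1.0, theta2=1.0):
    """Illustrative sampler: a shared gamma frailty gives Lomax (Pareto II) margins."""
    z = rng.gamma(a, 1.0, size=n)                 # shared frailty
    x = rng.exponential(1.0, size=n) * theta1 / z
    y = rng.exponential(1.0, size=n) * theta2 / z
    return x, y

n, t, c = 2000, 1.5, 4.0                          # sample size, mission time, censoring time
x, y = sample_bivariate_pareto(n)
x_obs, y_obs = np.minimum(x, c), np.minimum(y, c) # type I censoring at c

alive = np.maximum(x_obs, y_obs) > t              # parallel system survives past t
r_hat = alive.mean()                              # relative-frequency estimate
se = np.sqrt(r_hat * (1 - r_hat) / n)
print(f"R_hat({t}) = {r_hat:.3f}  "
      f"(approx. 95% CI: {r_hat - 1.96 * se:.3f}, {r_hat + 1.96 * se:.3f})")
```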