• Title/Summary/Keyword: Data-based model


Concrete Crack Detection and Visualization Method Using CNN Model (CNN 모델을 활용한 콘크리트 균열 검출 및 시각화 방법)

  • Choi, Ju-hee;Kim, Young-Kwan;Lee, Han-Seung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2022.04a
    • /
    • pp.73-74
    • /
    • 2022
  • Concrete structures account for the largest share of modern infrastructure, and they frequently suffer from cracking. Existing concrete crack diagnosis methods rely on expert visual inspection and are therefore limited in crack evaluation. In this study, we design a deep learning model that detects, visualizes, and outputs cracks on the surface of RC structures from image data, using a CNN (Convolutional Neural Network), which can process two- and three-dimensional data such as video and images. An experimental study was conducted on an algorithm that automatically detects and visualizes concrete cracks with a CNN model. All three deep learning models trained in this study reached at least 90% crack prediction accuracy, with the 'InceptionV3'-based CNN model showing the highest accuracy. The crack detection visualization model achieved an average prediction accuracy above 95% for data with crack widths of 0.2 mm or more.

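At the core of any CNN crack detector is the 2D convolution that makes thin intensity discontinuities stand out. The sketch below illustrates that operation only; the 3x3 Laplacian-style kernel and the tiny synthetic patch are illustrative assumptions, not the authors' InceptionV3 pipeline.

```python
# Minimal sketch of the 2D convolution at the heart of a CNN crack detector.
# Kernel and "image" are hypothetical; this is not the paper's trained model.

def conv2d(image, kernel):
    """Valid-mode 2D convolution over a list-of-lists grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A Laplacian-style kernel responds strongly at intensity discontinuities,
# which is how early CNN layers can pick up thin crack edges.
edge_kernel = [[0, -1, 0],
               [-1, 4, -1],
               [0, -1, 0]]

# 5x5 synthetic patch: a dark vertical "crack" (0) in bright concrete (1).
patch = [[1, 1, 0, 1, 1] for _ in range(5)]

response = conv2d(patch, edge_kernel)  # strongest magnitude along the crack
```

In a real detector these kernels are learned rather than hand-set, and dozens of them are stacked per layer; the mechanics of the sliding window are the same.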

A Study On Product Data Model for Central Database in an Integrated System for Structural Design of Building (구조설계 통합 시스템에서 중앙 데이터베이스를 위한 데이터 모델에 관한 연구)

  • 안계현;신동철;이병해
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 1999.10a
    • /
    • pp.444-451
    • /
    • 1999
  • The purpose of this study is to propose data models for the central database of an integrated system for the structural design of buildings. To express structure-related data efficiently, we analyzed the structural design process and classified the data by design step. We used an object-oriented modeling methodology for the logical data model and relational modeling for the physical data model. Based on this model, we will develop an integrated system with several applications for structural design, each communicating through the central database.

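A physical (relational) data model shared by several design applications can be sketched with the standard library's `sqlite3`; the table and column names below are illustrative assumptions, not the authors' schema.

```python
import sqlite3

# Hypothetical sketch of a central relational store for structural-design
# data, in the spirit of the paper's physical data model. Table and column
# names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE member (
    member_id   INTEGER PRIMARY KEY,
    kind        TEXT NOT NULL,   -- e.g. 'beam', 'column'
    design_step TEXT NOT NULL    -- data classified by design step
)""")
conn.execute("""CREATE TABLE section (
    member_id INTEGER REFERENCES member(member_id),
    depth_mm  REAL,
    width_mm  REAL
)""")

# Separate applications sharing the central database would read and write
# these tables rather than exchanging files directly.
conn.execute("INSERT INTO member VALUES (1, 'beam', 'preliminary')")
conn.execute("INSERT INTO section VALUES (1, 600.0, 400.0)")
row = conn.execute(
    "SELECT m.kind, s.depth_mm FROM member m JOIN section s USING (member_id)"
).fetchone()
```

The object-oriented logical model would map each such table to a class; the join above is what "each application communicates through the central database" looks like at the physical level.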

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing (내용기반 질의 처리를 위한 동영상 질의 처리기의 설계 및 구현)

  • Jo, Eun-Hui;Kim, Yong-Geol;Lee, Hun-Sun;Jeong, Yeong-Eun;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.3
    • /
    • pp.603-614
    • /
    • 1999
  • As multimedia application services on high-speed information networks have developed rapidly, the need for a video information management system that lets users retrieve video data efficiently is growing. In this paper, we propose a video data model that integrates free annotations, image features, and spatial-temporal features for the purpose of improving content-based retrieval of video data. The proposed model can act as a generic video data model for multimedia applications, supporting free annotations, image features, spatial-temporal features, and the structural information of video data within the same framework. We also propose a video query language for efficiently specifying queries that access video clips; it can formalize various kinds of queries based on video content. Finally, we design and implement a query processing engine for efficient video data retrieval on the proposed metadata model and query language.

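The combination of free annotations and temporal structure can be illustrated with a tiny retrieval sketch. The clip records and the `query` helper below are hypothetical stand-ins, not the paper's query language or engine.

```python
# Hedged sketch of annotation-plus-temporal video retrieval: each clip
# carries free-text annotations and a time interval, and a query filters
# on both. Records and field names are invented for illustration.

clips = [
    {"id": "c1", "annotations": {"goal", "soccer"}, "start": 0.0, "end": 12.5},
    {"id": "c2", "annotations": {"interview"},      "start": 12.5, "end": 40.0},
    {"id": "c3", "annotations": {"goal", "replay"}, "start": 40.0, "end": 55.0},
]

def query(clips, keywords=None, before=None):
    """Return ids of clips whose annotations contain all keywords
    and whose interval ends before `before` (if given)."""
    result = []
    for c in clips:
        if keywords and not keywords <= c["annotations"]:
            continue  # missing at least one required annotation
        if before is not None and c["end"] > before:
            continue  # fails the temporal predicate
        result.append(c["id"])
    return result

hits = query(clips, keywords={"goal"}, before=30.0)  # annotation AND time
```

A real engine would add image-feature similarity and spatial predicates as further filter stages over the same clip records.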

The study on a plan for applying UNeDocs to Maritime Logistics to achieve its paperless logistics (Paperless 해운 물류를 위한 UNeDocs 적용 방안 연구)

  • Ahn, Kyeong Rim
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.5 no.2
    • /
    • pp.199-208
    • /
    • 2009
  • Most export/import cargo moves by maritime transport. Korea has pursued system automation projects using EDI documents since the mid-1990s. However, this automation covers only about 40-50% of the overall maritime business process; manual or paper-document processing persists. The international e-business environment has also been shifting from paper-based to electronic-form document transactions. The international standardization body UN/CEFACT proposed UNeDocs for paperless transactions. UNeDocs is a specification that defines an XML data model as well as an electronic form. With UNeDocs, duplicated data need not be generated, and it supports user convenience while guaranteeing flexibility. This paper defines a UNeDocs data model for EDI and offline processing in current maritime business, then validates the XML syntax and structure of the defined data model through a document quality check system. It also explains how to apply the defined UNeDocs data model. Paperless transactions become possible by defining a UNeDocs-based standard data model and converting between paper, XML, and EDI documents with it.
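Checking XML syntax and structure, as the document quality check step above requires, can be sketched with the standard library. The shipping-document fragment below is hypothetical, not an actual UNeDocs message, and real validation would use the official UNeDocs schemas rather than a bare well-formedness test.

```python
# Minimal well-formedness check, a first step of XML document validation.
# The sample fragment is invented; real UNeDocs checks are schema-based.
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """True if the text parses as XML (tags balanced, syntax valid)."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = is_well_formed(
    "<Consignment><GrossWeight unit='KGM'>1200</GrossWeight></Consignment>"
)
bad = is_well_formed(
    "<Consignment><GrossWeight></Consignment>"  # unclosed element
)
```

Schema validation against an XSD (e.g. with an external library such as `lxml`) would then check that the structure matches the defined data model, not just that the syntax parses.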

Anomaly Detection of Machining Process based on Power Load Analysis (전력 부하 분석을 통한 절삭 공정 이상탐지)

  • Jun Hong Yook;Sungmoon Bae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.173-180
    • /
    • 2023
  • Smart factory companies install various sensors in production facilities and collect field data. However, relatively few companies actively use the collected data, although academic research using field data is actively under way. This study develops a model that detects process anomalies by analyzing spindle power data from a company that machines shafts for automobile throttle valves. Since the data collected during machining are time series, the model was developed through unsupervised learning by applying the Holt-Winters technique and deep learning algorithms such as RNN, LSTM, GRU, BiRNN, BiLSTM, and BiGRU. To evaluate each model, the difference between predicted and actual values was compared using MSE and RMSE; the BiLSTM model showed the best results by RMSE. To diagnose abnormalities with the developed model, the critical point was set using statistical techniques in consultation with domain experts and then verified. By collecting and preprocessing real-world data and developing a model, this study serves as a case study of utilizing time-series data in small and medium-sized enterprises.
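The forecast-and-threshold idea behind this kind of anomaly detection can be shown in miniature with simple exponential smoothing (a special case of the Holt-Winters family the study applies). The power series, smoothing factor, and threshold below are illustrative assumptions, not the paper's tuned values or its BiLSTM model.

```python
# Toy anomaly detector: smooth the series, flag points whose deviation from
# the smoothed forecast exceeds a fixed critical point. Data and parameters
# are invented for illustration.

def detect_anomalies(series, alpha=0.5, threshold=3.0):
    """Return indices where |observation - smoothed level| > threshold."""
    level = series[0]
    anomalies = []
    for i, x in enumerate(series[1:], start=1):
        if abs(x - level) > threshold:
            anomalies.append(i)
        else:
            # only fold normal readings into the state, so one outlier
            # does not drag the forecast and flag its neighbors
            level = alpha * x + (1 - alpha) * level
    return anomalies

power = [10.0, 10.2, 9.9, 10.1, 18.0, 10.0, 9.8]  # spike at index 4
flagged = detect_anomalies(power)
```

The study's recurrent models play the role of `level` here, producing a far richer one-step forecast; the residual-versus-critical-point comparison is the same.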

Optimal Fuzzy Models with the Aid of SAHN-based Algorithm

  • Lee Jong-Seok;Jang Kyung-Won;Ahn Tae-Chon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.2
    • /
    • pp.138-143
    • /
    • 2006
  • In this paper, we present a Sequential Agglomerative Hierarchical Nested (SAHN) algorithm-based data clustering method for fuzzy inference systems to achieve optimal fuzzy-model performance. The SAHN-based algorithm gives a possible range for the number of clusters, with cluster centers, for system identification. The axes of the fuzzy model's membership functions are optimized using the cluster centers obtained from the clustering method, and the consequent parameters of the fuzzy model are identified by the standard least-squares method. Finally, we evaluate the model's output performance on Box and Jenkins's gas furnace data and Sugeno's nonlinear process data.
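The agglomerative merge sequence that yields candidate cluster counts and centers can be sketched on 1-D data. This single-linkage toy shows the SAHN-style merging only; the data and the use of cluster means as centers are illustrative assumptions, not the authors' procedure for placing membership-function axes.

```python
# Single-linkage agglomerative (SAHN-style) clustering on 1-D points:
# start with singletons, repeatedly merge the closest pair of clusters.
# Data are invented for illustration.

def agglomerate(points, k):
    """Merge nearest clusters until k remain; returns sorted clusters."""
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single-linkage distance: closest pair across clusters
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = sorted(clusters[i] + clusters[j])
        del clusters[j]
    return clusters

# Stopping at each k along the way gives the "possible range of number of
# clusters with cluster centers" used to seed the fuzzy model.
groups = agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], 3)
centers = [sum(c) / len(c) for c in groups]
```

In the paper, centers like these position the membership functions, and the consequent parameters are then fitted by least squares.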

Development of ML and IoT Enabled Disease Diagnosis Model for a Smart Healthcare System

  • Mehra, Navita;Mittal, Pooja
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.1-12
    • /
    • 2022
  • Recent progress in Internet of Things (IoT) and Machine Learning (ML) based technologies has converted the traditional healthcare system into a smart healthcare system. The incorporation of IoT and ML has changed the way patients are treated and offers many opportunities in the healthcare domain. This article presents a new IoT- and ML-based model for diagnosing different diseases. In the proposed model, vital signs are collected via IoT-based smart medical devices and analyzed with different data mining techniques to detect possible risks to a person's health; recommendations are made based on the results, and for high-risk patients an emergency alert is sent to healthcare service providers and family members. The model was implemented in an Anaconda Jupyter notebook using various Python libraries. Among all the data mining techniques, SVM achieved the highest accuracy, 0.897, for classifying Parkinson's disease on the same dataset.
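The vital-signs-to-alert path can be shown with a minimal rule-based sketch. The thresholds, record format, and risk rules below are hypothetical stand-ins for the paper's mined classifiers and are not clinical guidance.

```python
# Toy triage: classify each reading, collect patients whose readings
# should trigger an emergency alert. All thresholds are invented.

def risk_level(vitals):
    """Classify one reading by illustrative heart-rate/SpO2 cutoffs."""
    if vitals["heart_rate"] > 120 or vitals["spo2"] < 90:
        return "high"
    if vitals["heart_rate"] > 100 or vitals["spo2"] < 94:
        return "elevated"
    return "normal"

def triage(readings):
    """Return ids of patients needing an emergency alert."""
    return [r["patient"] for r in readings if risk_level(r) == "high"]

readings = [
    {"patient": "p1", "heart_rate": 72,  "spo2": 98},
    {"patient": "p2", "heart_rate": 130, "spo2": 96},  # tachycardic
    {"patient": "p3", "heart_rate": 95,  "spo2": 88},  # low oxygen
]
alerts = triage(readings)
```

In the paper, `risk_level` is replaced by trained models such as the SVM; the surrounding collect-classify-alert loop is the same shape.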

Proposal of a Conceptual Model for Research Data Curation based on Activity Theory (활동이론을 중심으로 한 연구데이터 큐레이션 개념 모델 제안)

  • Na-eun Han
    • Journal of Korean Library and Information Science Society
    • /
    • v.54 no.1
    • /
    • pp.167-190
    • /
    • 2023
  • This literature study analyzed research data curation models using activity theory as a theoretical framework. Based on the factors of activity used in activity theory, it analyzed various research data curation models, as well as issues that need discussion in the library field when carrying out research data curation activities, and on this basis proposed a new research data curation conceptual model. The study examined how five previously proposed digital curation lifecycle models are configured, analyzed the actions presented sporadically in each model, extracted common factors, and integrated them into the new model. In addition, six issues to consider when carrying out research data curation activities in libraries and repositories were analyzed and discussed. The proposed research data curation conceptual model consists of ten steps and addresses the practical issues and contradictions to consider at each stage of activity.

Effects of Turbulent Mixing and Void Drift Models on the Predictions of COBRA-IV-I

  • Yoo, Yeon-Jong;Hwang, Dae-Hyun;Nahm, Kee-Yil;Sohn, Dong-Seong
    • Proceedings of the Korean Nuclear Society Conference
    • /
    • 1996.05b
    • /
    • pp.284-289
    • /
    • 1996
  • The predictions of the COBRA-IV-I code with modified turbulent mixing and void drift models have been compared with diabatic two-phase flow data on equilibrium quality. The turbulent mixing model of the existing COBRA-IV-I code, based on an equal mass exchange, was modified to one based on an equal volume exchange between adjacent subchannels, and a void drift model was newly incorporated into the code. To evaluate the performance of the equal-volume-exchange turbulent mixing model and the effects of the void drift model, diabatic steam-water two-phase flow data obtained by General Electric (GE) for a 9-rod bundle under typical boiling water reactor (BWR) operating conditions were analyzed with the modified code. The analysis indicates that the equal volume exchange turbulent mixing model with void drift predicts the observed two-phase flow trends better than the equal mass exchange model, and that a more physically based void drift model needs to be developed to predict the correct data trends.


Vulnerability Threat Classification Based on XLNET AND ST5-XXL model

  • Chae-Rim Hong;Jin-Keun Hong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.262-273
    • /
    • 2024
  • We provide a detailed analysis of the data processing and model training process for vulnerability classification using Transformer-based language models, specifically the sentence text-to-text transformer (ST5)-XXL and XLNet. The main purpose of this study is to compare the performance of the two models, identify the strengths and weaknesses of each, and determine the optimal learning rate to increase the efficiency and stability of model training. We preprocessed the data, constructed and trained the models, and evaluated performance on datasets with various characteristics. The XLNet model performed well at learning rates of 1e-05 and 1e-04 and had a significantly lower loss value than the ST5-XXL model, indicating that XLNet learns more efficiently. We also confirmed that the learning rate has a significant impact on model performance. The results highlight the usefulness of the ST5-XXL and XLNet models for classifying security vulnerabilities and the importance of setting an appropriate learning rate. Future research should include more comprehensive analyses using diverse datasets and additional models.
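Why the learning rate dominates final loss can be illustrated in miniature with plain gradient descent on a quadratic. This is a didactic sketch only, not the Transformer fine-tuning setup of the paper; the function, starting point, and step counts are invented.

```python
# Toy illustration of learning-rate impact: minimize f(w) = w^2 from w = 5
# with plain gradient descent and compare final losses at two rates.

def final_loss(lr, steps=50):
    """Run `steps` gradient-descent updates on f(w) = w^2; return f(w)."""
    w = 5.0
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of w^2 is 2w
    return w * w

small = final_loss(1e-05)  # barely moves: loss stays near the start
large = final_loss(1e-01)  # still stable, converges fast to near zero
```

With too small a rate the optimizer makes negligible progress in a fixed budget, while a larger (but still stable) rate drives the loss near zero, mirroring why learning-rate selection mattered for both models above; rates past the stability limit would instead diverge.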