• Title/Summary/Keyword: Algorithm Comparison


Role of unstructured data on water surface elevation prediction with LSTM: case study on Jamsu Bridge, Korea (LSTM 기법을 활용한 수위 예측 알고리즘 개발 시 비정형자료의 역할에 관한 연구: 잠수교 사례)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1195-1204 / 2021
  • Recently, local torrential rains have become more frequent and severe due to abnormal climate conditions, causing a surge in damage to people and property, including infrastructure along rivers. In this study, a water surface elevation prediction algorithm was developed using LSTM (Long Short-Term Memory), a machine learning technique specialized for time series data, to estimate and prevent flooding of riverside facilities. The study area is Jamsu Bridge, the study period is June, July, and August over six years (2015~2020), and the water surface elevation of Jamsu Bridge 3 hours ahead was predicted. The input data set is composed of the water surface elevation of Jamsu Bridge (EL.m), the discharge from Paldang Dam (m3/s), the tide level at Ganghwa Bridge (cm), and the number of tweets in Seoul. Complementary data were constructed using not only the structured data mainly used in previous research but also unstructured data processed through a word cloud, and the role of the unstructured data was assessed by comparing predictions with and without it. When the unstructured data were included, prediction accuracy improved, and the complementary data could serve as conservative alerts to reduce casualties. It was concluded that the use of complementary data is relatively effective in ensuring the safety and convenience of users of riverside infrastructure. In the future, more accurate water surface elevation prediction is expected through additional types of unstructured data or more detailed pre-processing of the input data.
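A minimal sketch of the input/target pairing the abstract describes: hourly records of the four features (bridge stage in EL.m, dam discharge in m3/s, tide level in cm, tweet count), with the stage 3 hours ahead of each window as the prediction target. The window length and column order here are assumptions for illustration, not taken from the paper.

```python
def make_lstm_samples(records, window=6, lead=3):
    """Build LSTM training pairs from hourly records.

    records: list of [stage, discharge, tide, tweets] per hour.
    Returns (X, y): X is a list of `window`-step feature sequences,
    y the stage `lead` hours after each window ends."""
    X, y = [], []
    for start in range(len(records) - window - lead + 1):
        X.append(records[start:start + window])
        y.append(records[start + window + lead - 1][0])  # stage column
    return X, y
```

Each sequence in `X` would then feed an LSTM with input shape `(window, 4)`.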

A study on EPB shield TBM face pressure prediction using machine learning algorithms (머신러닝 기법을 활용한 토압식 쉴드TBM 막장압 예측에 관한 연구)

  • Kwon, Kibeom;Choi, Hangseok;Oh, Ju-Young;Kim, Dongku
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.2 / pp.217-230 / 2022
  • Adequate control of TBM face pressure is of vital importance for maintaining face stability by preventing face collapse and surface settlement. An EPB shield TBM excavates the ground by applying face pressure with the excavated soil in the pressure chamber. One of the challenges during EPB shield TBM operation is controlling the face pressure, owing to the difficulty of managing the excavated soil. In this study, the face pressure of an EPB shield TBM was predicted using geological and operational data acquired from a domestic TBM tunnel site. Four machine learning algorithms, KNN (K-Nearest Neighbors), SVM (Support Vector Machine), RF (Random Forest), and XGB (eXtreme Gradient Boosting), were applied to predict the face pressure. The model comparison showed that the RF model yielded the lowest RMSE (Root Mean Square Error), 7.35 kPa, so the RF model was selected as the optimal machine learning algorithm. In addition, the feature importances of the RF model were analyzed to evaluate the influence of each feature on the face pressure. Water pressure showed the highest influence, and the importance of the geological conditions was generally higher than that of the operational features at the considered site.
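The model-selection step above (pick the candidate with the lowest RMSE, as RF was chosen at 7.35 kPa) can be sketched as follows; the numbers in the usage are illustrative, not the paper's data.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between measured and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def select_best(y_true, predictions):
    """predictions: dict of model name -> predicted face pressures (kPa).
    Returns the name of the model with the lowest RMSE and all scores."""
    scores = {name: rmse(y_true, pred) for name, pred in predictions.items()}
    best = min(scores, key=scores.get)
    return best, scores
```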

Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery

  • Hong, Mihee;Kim, Inhwan;Cho, Jin-Hyoung;Kang, Kyung-Hwa;Kim, Minji;Kim, Su-Jung;Kim, Yoon-Ji;Sung, Sang-Jin;Kim, Young Ho;Lim, Sung-Hoon;Kim, Namkug;Baek, Seung-Hak
    • The Korean Journal of Orthodontics / v.52 no.4 / pp.287-297 / 2022
  • Objective: To investigate the pattern of accuracy change in artificial intelligence-assisted landmark identification (LI) using a convolutional neural network (CNN) algorithm in serial lateral cephalograms (Lat-cephs) of Class III (C-III) patients who underwent two-jaw orthognathic surgery. Methods: A total of 3,188 Lat-cephs of C-III patients were allocated into the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients, subdivided into genioplasty and non-genioplasty groups, n = 23 per group) for LI. Each C-III patient in the test set had four Lat-cephs: initial (T0), pre-surgery (T1, presence of orthodontic brackets [OBs]), post-surgery (T2, presence of OBs and surgical plates and screws [S-PS]), and debonding (T3, presence of S-PS and fixed retainers [FR]). After the mean errors of 20 landmarks between the human gold standard and the CNN model were calculated, statistical analysis was performed. Results: The total mean error was 1.17 mm, with no significant difference among the four time-points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). In the comparison of time-point pairs ([T0, T1] vs. [T2, T3]), ANS, A point, and B point showed an increase in error (p < 0.01, 0.05, and 0.01, respectively), while Mx6D and Md6D showed a decrease in error (all p < 0.01). No difference in errors existed at B point, Pogonion, Menton, Md1C, and Md1R between the genioplasty and non-genioplasty groups. Conclusions: The CNN model can be used for LI in serial Lat-cephs despite the presence of OBs, S-PS, FRs, genioplasty, and bone remodeling.
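The landmark-error metric implied above is the mean Euclidean distance (in mm) between CNN-predicted and gold-standard positions, averaged over the 20 landmarks; a sketch with made-up coordinates:

```python
import math

def mean_radial_error(gold, pred):
    """gold, pred: lists of (x, y) landmark coordinates in mm.
    Returns the mean point-to-point Euclidean distance."""
    dists = [math.hypot(gx - px, gy - py)
             for (gx, gy), (px, py) in zip(gold, pred)]
    return sum(dists) / len(dists)
```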

Parallel Computation on the Three-dimensional Electromagnetic Field by the Graph Partitioning and Multi-frontal Method (그래프 분할 및 다중 프론탈 기법에 의거한 3차원 전자기장의 병렬 해석)

  • Kang, Seung-Hoon;Song, Dong-Hyeon;Choi, JaeWon;Shin, SangJoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.12 / pp.889-898 / 2022
  • In this paper, a parallel computing method for the three-dimensional electromagnetic field is proposed. The electromagnetic scattering analysis is conducted based on the time-harmonic vector wave equation and the finite element method, using edge-based elements and a second-order absorbing boundary condition. Parallelization of the elemental numerical integration and the matrix assembly is accomplished by allocating a partitioned finite element subdomain to each processor. The graph partitioning library METIS is employed for subdomain generation, and the large sparse matrix computation is conducted by MUMPS, a parallel computing library based on the multi-frontal method. The accuracy of the program is validated by comparison against the Mie-series analytical solution and results from ANSYS HFSS. In addition, scalability is verified by measuring the speed-up with respect to the number of processors used. The electromagnetic scattering analysis is performed for a perfect electric conductor sphere, isotropic/anisotropic dielectric spheres, and a missile configuration. The algorithm will be applied to the finite element tearing and interconnecting method, aiming for further extended parallel computing performance.
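The scalability check mentioned above reduces to computing speed-up S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p from measured wall-clock times; a sketch with illustrative timings:

```python
def scalability(times):
    """times: dict mapping processor count -> wall-clock seconds.
    Returns speed-up and parallel efficiency per processor count."""
    t1 = times[1]  # serial baseline
    return {p: {"speedup": t1 / t, "efficiency": t1 / (t * p)}
            for p, t in times.items()}
```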

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech entirely. The second type is misinterpretation errors, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Of these, misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each text separation method, the similarity of consecutive speech pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model by applying patterns of misinterpretation error causes. The most significant result was obtained through initial consonant extraction for detecting misinterpretation errors caused by the use of unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are logged as successes, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors arising from the voice recognition stage can be specified. The study also proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results users actually intended.
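The "initial consonant extraction" step described above can be sketched using the arithmetic layout of precomposed Hangul syllables in Unicode (U+AC00..U+D7A3), where the initial consonant index of a syllable is (code point − 0xAC00) // 588; characters outside that range pass through unchanged.

```python
# The 19 initial consonants (choseong) in Unicode order.
CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def initial_consonants(text):
    """Replace each Hangul syllable with its initial consonant."""
    out = []
    for ch in text:
        cp = ord(ch)
        if 0xAC00 <= cp <= 0xD7A3:  # precomposed Hangul syllable block
            out.append(CHOSEONG[(cp - 0xAC00) // 588])
        else:
            out.append(ch)
    return "".join(out)
```

Two utterances can then be compared by the similarity of their initial-consonant strings, which is robust to vowel-level misrecognition of neologisms.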

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo;Ju-Hyeong Lee;Jong-Won Lim;Jae-Hee Lee;Leen-Seok Kang
    • Land and Housing Review / v.14 no.3 / pp.145-155 / 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify errors between 3D models and actual construction elements. Since the characteristics of objects vary depending on the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Object detection also requires initial object images, which can be acquired by various means such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Various object detection methodologies, including the YOLO, SSD, and R-CNN algorithms, are applied to detect actual objects in the captured images. The Faster R-CNN algorithm showed a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the small difference between its minimum and maximum recognition rates indicated consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.
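Detector metrics like the recognition rate and mAP cited above are built on box overlap; a sketch of the standard intersection-over-union measure between a predicted and a ground-truth box, each given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as correct when IoU with a ground-truth box exceeds a threshold such as 0.5.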

Water leakage accident analysis of water supply networks using big data analysis technique (R기반 빅데이터 분석기법을 활용한 상수도시스템 누수사고 분석)

  • Hong, Sung-Jin;Yoo, Do-Guen
    • Journal of Korea Water Resources Association / v.55 no.spc1 / pp.1261-1270 / 2022
  • The purpose of this study is to collect and analyze information related to water leakage accidents, which is otherwise not easily accessible, by using news search results that the public can readily access. We applied a web crawling technique for extracting big-data news on water leakage accidents in water supply systems and presented a step-by-step algorithm for obtaining accurate leak accident news. In addition, a data analysis technique suitable for water leakage accident information was developed so that additional information, such as the date and time of occurrence, cause, location, damaged facilities, and damage effects, could be extracted. The primary goal of the big-data-based leak analysis proposed in this study is to extract meaningful values through comparison with existing waterworks statistics. The proposed method can also be used to respond effectively to consumers or to determine the service level of water supply networks. In other words, the analysis results suggest the need to provide the public with more information about such accidents, and they can be used to prepare a dissemination and response system that can react quickly when an accident occurs.
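A hypothetical sketch of the filtering step that would follow crawling: keep only headlines mentioning a leak and pull out a date when one is present. The keyword list and date pattern here are assumptions for illustration, not the paper's actual rules.

```python
import re

# Hypothetical keyword list; 누수 = "water leak" in Korean.
LEAK_KEYWORDS = ("누수", "상수도 파열", "water leak")
DATE_RE = re.compile(r"(\d{4})[.\-/](\d{1,2})[.\-/](\d{1,2})")

def filter_leak_news(headlines):
    """Return (headline, date-or-None) pairs for leak-related headlines."""
    hits = []
    for h in headlines:
        if any(k in h for k in LEAK_KEYWORDS):
            m = DATE_RE.search(h)
            hits.append((h, m.group(0) if m else None))
    return hits
```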

A Study on Feature Selection and Feature Extraction for Hyperspectral Image Classification Using Canonical Correlation Classifier (정준상관분류에 의한 하이퍼스펙트럴영상 분류에서 유효밴드 선정 및 추출에 관한 연구)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.3D / pp.419-431 / 2009
  • The core of this study is finding an efficient band selection or extraction method that discovers the optimal spectral bands when applying the canonical correlation classifier (CCC) to hyperspectral data. The optimal bands based on each separability decision technique are selected using the MultiSpec software developed by Purdue University, USA. Six separability decision techniques are used: Divergence, Transformed Divergence, Bhattacharyya, Mean Bhattacharyya, Covariance Bhattacharyya, and Noncovariance Bhattacharyya. For feature extraction, PCA and MNF transformations are performed with the ERDAS Imagine and ENVI software. To compare and assess the effects of feature selection and feature extraction, land cover classification is performed by CCC. The overall accuracy of CCC using the initially selected 60 bands is 71.8%; the highest classification accuracy, 79.0%, is obtained by executing CCC after applying Noncovariance Bhattacharyya. In conclusion, only the Noncovariance Bhattacharyya separability decision method proved valuable as a feature selection algorithm for CCC-based hyperspectral image classification. The classification accuracy using the other feature selection and extraction algorithms, except Divergence, actually declined with CCC.
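The separability measures named above are distances between class distributions; a sketch of the Bhattacharyya distance for the one-dimensional Gaussian case (one band, two classes characterized by mean and variance), which is zero for identical classes and grows with separability:

```python
import math

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussian class distributions."""
    avg_var = (var1 + var2) / 2
    term_mean = (mu1 - mu2) ** 2 / (8 * avg_var)
    term_var = 0.5 * math.log(avg_var / math.sqrt(var1 * var2))
    return term_mean + term_var
```

In practice the multivariate form (with mean vectors and covariance matrices) is computed per candidate band subset, and the subset maximizing separability is kept.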

Development of a window-shifting ANN training method for a quantitative rock classification in unsampled rock zone (미시추 구간의 정량적 지반 등급 분류를 위한 윈도우-쉬프팅 인공 신경망 학습 기법의 개발)

  • Shin, Hyu-Soung;Kwon, Young-Cheul
    • Journal of Korean Tunnelling and Underground Space Association / v.11 no.2 / pp.151-162 / 2009
  • This study proposes a new methodology for quantitative rock classification in unsampled rock zones, which occupy most of a tunnel design area. The methodology trains an ANN (artificial neural network) on results from a drilling investigation combined with an electric resistivity survey in the sampled zone, and then applies the trained ANN to predict the rock classification grade in the unsampled zone. The prediction is made at the center point of a shifting window, using the electric resistivity values within the window as input reference information. The ANN training was carried out with the RPROP (resilient backpropagation) algorithm and an early-stopping method to achieve generalized training. The proposed methodology is then applied to generate a rock grade distribution for a real tunnel site where a drilling investigation and resistivity survey were undertaken. The ANN-based prediction is compared with one from a conventional kriging method; the proposed ANN method shows better agreement with the electric resistivity distribution obtained by the field survey and produces a more realistic and more understandable rock grade distribution.
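The window-shifting scheme described above can be sketched as follows: a fixed-size window slides along the resistivity profile, and the values inside each window become the ANN input for a prediction at the window centre. The window size is an assumption for illustration.

```python
def shifting_windows(resistivity, window=5):
    """Yield (centre_index, window_values) pairs for an odd-sized window
    sliding one position at a time along the resistivity profile."""
    half = window // 2
    for c in range(half, len(resistivity) - half):
        yield c, resistivity[c - half:c + half + 1]
```

Each yielded pair maps one ANN input vector to the grade predicted at the centre point, so a full pass over the profile fills in the unsampled zone.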

A Study on the Classification Model of Overseas Infringing Websites based on Web Hierarchy Similarity Analysis using GNN (GNN을 이용한 웹사이트 Hierarchy 유사도 분석 기반 해외 침해 사이트 분류 모델 연구)

  • Ju-hyeon Seo;Sun-mo Yoo;Jong-hwa Park;Jin-joo Park;Tae-jin Lee
    • Convergence Security Journal / v.23 no.2 / pp.47-54 / 2023
  • The global popularity of K-content (the Korean Wave) has led to a continuous increase in copyright infringement of domestic works, not only within the country but also overseas. In response, technologies for detecting illegal distribution sites of domestic copyrighted materials are being actively researched; recent studies exploit the fact that domestic illegal distribution sites often carry a large number of advertising banners. However, such detection techniques are of limited use for overseas illegal distribution sites, which may carry no advertising banners or significantly fewer than domestic sites. In this study, we propose a detection technique based on similarity comparison of link and text trees, leveraging the characteristic that such sites arrange illegal sharing posts and images of copyrighted materials in a similar hierarchical structure. In addition, to accurately compare the similarity of large-scale trees composed of massive numbers of links, we utilize a Graph Neural Network (GNN). The experiments conducted in this study demonstrated an accuracy of over 95% in classifying regular sites and sites involved in the illegal distribution of copyrighted materials. Applying this algorithm to automate the detection of illegal distribution sites is expected to enable swift responses to copyright infringement.
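A much-simplified structural-similarity baseline, not the paper's GNN: summarize each site's link tree by a histogram of node counts per depth, then compare two trees with cosine similarity. Tree representation and names here are hypothetical.

```python
import math

def depth_histogram(tree, root, max_depth=8):
    """tree: {node: [child, ...]} adjacency dict.
    Returns node counts per depth level, truncated at max_depth."""
    hist = [0] * max_depth
    stack = [(root, 0)]
    while stack:
        node, d = stack.pop()
        if d < max_depth:
            hist[d] += 1
            stack.extend((c, d + 1) for c in tree.get(node, []))
    return hist

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

A GNN replaces the hand-made histogram with learned node embeddings aggregated over the same hierarchy, which is what lets the paper's model scale the comparison to massive link trees.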