• Title/Summary/Keyword: Learning/Training Algorithms


Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies
    • /
    • v.27 no.2
    • /
    • pp.205-218
    • /
    • 2022
  • Recently, digital transformation in manufacturing has been accelerating. As a result, technologies for collecting data from the shop floor are becoming increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. In order to expand the channels of field data collection, this study proposes a method to automatically collect manufacturing data based on vision-based artificial intelligence. The idea is to analyze real-time image information with object detection and tracking technologies and thereby obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. Thereafter, the motion information is converted into two pieces of manufacturing data (production performance and time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning. In addition, operating scenarios are proposed to reproduce real-world shop-floor situations. The operating scenario assumes a flow shop consisting of six facilities. As a result of collecting manufacturing data according to the operating scenarios, the accuracy was 96.3%.
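
As an illustration of the post-processing step described in the abstract, the following is a minimal sketch (not the authors' implementation) that converts per-frame track records into production counts and cycle times, assuming a detector/tracker such as YOLO and DeepSORT has already produced (frame, track ID, zone) tuples; the zone names, frame rate, and record format are hypothetical.

```python
from collections import defaultdict

FPS = 30  # assumed camera frame rate

def summarize_tracks(records):
    """records: iterable of (frame_idx, track_id, zone) tuples, e.g. zone in
    {"facility_1", ..., "facility_6", "outlet"} for a six-facility flow shop."""
    first_seen = {}                      # track_id -> first frame observed
    last_zone = {}                       # track_id -> most recent zone
    finished = defaultdict(int)          # zone -> number of parts that left it
    cycle_times = []                     # seconds from first sighting to outlet

    for frame_idx, track_id, zone in sorted(records):
        first_seen.setdefault(track_id, frame_idx)
        prev = last_zone.get(track_id)
        if prev is not None and prev != zone:
            finished[prev] += 1          # the part moved on: count one completion
        if zone == "outlet":
            cycle_times.append((frame_idx - first_seen[track_id]) / FPS)
        last_zone[track_id] = zone

    return dict(finished), cycle_times

# usage with three toy records for one tracked part
counts, times = summarize_tracks([(0, 7, "facility_1"), (90, 7, "facility_2"), (300, 7, "outlet")])
print(counts, times)   # {'facility_1': 1, 'facility_2': 1} [10.0]
```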

Application of neural network for airship take-off and landing mode by buoyancy control (기낭 부력 제어에 의한 비행선 이착륙의 인공신경망 적용)

  • Chang, Yong-Jin;Woo, Gui-Ae;Kim, Jong-Kwon;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.33 no.2
    • /
    • pp.84-91
    • /
    • 2005
  • For a long time, airship takeoff and landing were controlled by human operators. With the development of autonomous control systems, precise control during takeoff and landing became necessary, and many methods and algorithms have been suggested. This paper presents results for airship take-off and landing by buoyancy control using air ballonet volume change, together with pitch-angle control for stable flight within the desired altitude. Because of the complexity of the airship's dynamics, a simple PID controller was applied first. Under varying atmospheric conditions, this controller did not give satisfactory results. Therefore, a new control method was designed to rapidly reduce the error between the designed trajectory and the actual trajectory by means of a learning algorithm using an artificial neural network. In general, ANNs have weaknesses such as long training times and the need to select the numbers of neurons and hidden layers required to deal with complex problems. To overcome these drawbacks, an RBFN (radial basis function network) controller was developed in this paper. The RBFN weights are acquired by learning so as to reduce the error between the desired output and the output obtained through the airship dynamics under disturbances. Simulation results show that the controller using the RBFN is superior to the PID controller, whose maximum error is 15 m.
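
As a rough illustration of the RBFN idea in the abstract, the sketch below (not the paper's controller) shows a Gaussian radial basis function network whose output weights are adapted online to reduce the error between a desired output and the network output; the centers, width, learning rate, and toy target are assumptions.

```python
import numpy as np

class RBFN:
    def __init__(self, centers, sigma=1.0, lr=0.05):
        self.c = np.asarray(centers, dtype=float)    # (n_hidden, n_in) RBF centers
        self.sigma = sigma                           # common RBF width (assumed)
        self.w = np.zeros(len(centers))              # output weights, learned online
        self.lr = lr                                 # learning rate (assumed)

    def _phi(self, x):
        d2 = np.sum((self.c - x) ** 2, axis=1)       # squared distances to centers
        return np.exp(-d2 / (2.0 * self.sigma ** 2)) # Gaussian basis activations

    def output(self, x):
        return float(self._phi(np.asarray(x)) @ self.w)

    def train_step(self, x, desired):
        """One gradient step on the squared tracking error (desired - output)^2."""
        phi = self._phi(np.asarray(x))
        err = desired - phi @ self.w
        self.w += self.lr * err * phi                # delta rule on the output layer
        return err

# toy usage: learn a target pitch command for a given (altitude error, rate) state
net = RBFN(centers=np.random.default_rng(0).uniform(-1, 1, size=(10, 2)))
for _ in range(200):
    net.train_step([0.3, -0.1], desired=0.5)
print(round(net.output([0.3, -0.1]), 3))             # approaches 0.5
```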

The Development of Dynamic Forecasting Model for Short Term Power Demand using Radial Basis Function Network (Radial Basis 함수를 이용한 동적 - 단기 전력수요예측 모형의 개발)

  • Min, Joon-Young;Cho, Hyung-Ki
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.7
    • /
    • pp.1749-1758
    • /
    • 1997
  • This paper suggests the development of a dynamic forecasting model for short-term power demand based on a Radial Basis Function Network and Pal's GLVQ algorithm. Radial basis function methods are often compared with backpropagation-trained feed-forward networks, the most widely used neural network paradigm. The Radial Basis Function Network is a single-hidden-layer feed-forward neural network. Each node of the hidden layer has a parameter vector called a center, which is determined by a clustering algorithm. Classical approaches to clustering include those of Hartigan (the K-means algorithm), Kohonen (Self-Organizing Feature Maps: SOFM, and Learning Vector Quantization: LVQ), and Carpenter and Grossberg (the ART-2 model). In this model, the first step organizes the load patterns into two clusters using Pal's GLVQ clustering algorithm, which is chosen because it classifies the patterns better than the other algorithms. The second step forecasts hourly load patterns with a radial basis function network constructed with two hidden nodes, which are determined from the GLVQ cluster centers of the first step. This model was applied to forecast the hourly loads on Mar. 4th, Jun. 4th, Jul. 4th, Sep. 4th, and Nov. 4th, 1995, after training on data for Mar. 1st-3rd, Jun. 1st-3rd, Jul. 1st-3rd, Sep. 1st-3rd, and Nov. 1st-3rd, 1995, respectively. In the experiments, the average absolute error of one-hour-ahead forecasts on actual utility data was 1.3795%.
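
The two-step structure described above can be illustrated with the following sketch, which is not the paper's model: K-means stands in for Pal's GLVQ to obtain two cluster centers, which then serve as the RBF hidden nodes whose output weights are fit by least squares; the data shapes and values are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.random((96, 4))          # 96 samples of 4 recent hourly loads (illustrative)
y = X @ np.array([0.1, 0.2, 0.3, 0.4]) + 0.05 * rng.standard_normal(96)  # next-hour load

# Step 1: two cluster centers (K-means stands in for the paper's GLVQ step).
centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).cluster_centers_

# Step 2: RBF network with the two centers as hidden nodes; fit output weights.
sigma = 0.5
def hidden(data):
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

H = np.hstack([hidden(X), np.ones((len(X), 1))])     # add a bias column
w, *_ = np.linalg.lstsq(H, y, rcond=None)            # least-squares output weights

pred = np.hstack([hidden(X[:5]), np.ones((5, 1))]) @ w
print(np.round(pred, 3), np.round(y[:5], 3))         # compare forecasts with targets
```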


Investigation of the Super-resolution Algorithm for the Prediction of Periodontal Disease in Dental X-ray Radiography (치주질환 예측을 위한 치과 X-선 영상에서의 초해상화 알고리즘 적용 가능성 연구)

  • Kim, Han-Na
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.2
    • /
    • pp.153-158
    • /
    • 2021
  • X-ray image analysis is a very important field for improving the early diagnosis rate and prediction accuracy of periodontal disease. Research on the development and application of artificial intelligence-based algorithms to improve the quality of such dental X-ray images is being widely conducted worldwide. Thus, the aim of this study was to design a super-resolution algorithm for predicting periodontal disease and to evaluate its applicability to dental X-ray images. The super-resolution algorithm was constructed from convolution layers and ReLU activations, and an image obtained by up-sampling a low-resolution image by a factor of two was used as the input. A total of 1,500 dental X-ray images were used for deep learning training. Quantitative image evaluation used root mean square error and structural similarity, metrics that measure similarity by comparing two images. In addition, the recently developed no-reference natural image quality evaluator and blind/referenceless image spatial quality evaluator were also analyzed. According to the results, the average similarity and no-reference-based evaluation values improved by 1.86 and 2.14 times, respectively, compared with the existing bicubic-based upsampling method when the proposed method was used. In conclusion, the super-resolution algorithm for predicting periodontal disease proved useful for dental X-ray images and is expected to be highly applicable in various fields in the future.
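
A minimal PyTorch sketch of the kind of convolution-plus-ReLU super-resolution network described above, operating on a 2x bicubic-upsampled input; the layer counts, channel widths, and kernel sizes are illustrative assumptions rather than the study's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSR(nn.Module):
    """Conv + ReLU super-resolution network operating on a pre-upsampled image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, lr_image):
        # 2x bicubic upsampling of the low-resolution input, then refinement.
        up = F.interpolate(lr_image, scale_factor=2, mode="bicubic", align_corners=False)
        return self.body(up)

model = SimpleSR()
lr = torch.rand(1, 1, 64, 64)             # a dummy 64x64 low-resolution radiograph patch
sr = model(lr)
print(sr.shape)                           # torch.Size([1, 1, 128, 128])

# Training would minimize a pixel loss against the high-resolution reference, e.g.:
loss = F.mse_loss(sr, torch.rand(1, 1, 128, 128))
```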

Diagnosis and Visualization of Intracranial Hemorrhage on Computed Tomography Images Using EfficientNet-based Model (전산화 단층 촬영(Computed tomography, CT) 이미지에 대한 EfficientNet 기반 두개내출혈 진단 및 가시화 모델 개발)

  • Youn, Yebin;Kim, Mingeon;Kim, Jiho;Kang, Bongkeun;Kim, Ghootae
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.4
    • /
    • pp.150-158
    • /
    • 2021
  • Intracranial hemorrhage (ICH) refers to acute bleeding inside the intracranial vault. Not only does this devastating disease record a very high mortality rate, but it can also cause serious chronic impairment of sensory, motor, and cognitive functions. Therefore, a prompt and professional diagnosis of the disease is highly critical. Noninvasive brain imaging data are essential for clinicians to efficiently diagnose the locus of the brain lesion, the volume of bleeding, and subsequent cortical damage, and to take clinical interventions. In particular, computed tomography (CT) images are used most often for the diagnosis of ICH. In order to diagnose ICH from CT images, not only are medical specialists with sufficient diagnostic experience required, but even when this condition is met, there are many cases where bleeding cannot be successfully detected due to factors such as the low signal ratio and artifacts of the image itself. In addition, discrepancies between interpretations or even misinterpretations may occur, causing critical clinical consequences. To resolve these clinical problems, we developed a diagnostic model that predicts intracranial bleeding and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and epidural) by applying deep learning algorithms to CT images. We also constructed a visualization tool highlighting the regions of a CT image that are important for predicting ICH. Specifically, 1) 27,758 CT brain images from RSNA were pre-processed to minimize the computational load. 2) Three different CNN-based models (ResNet, EfficientNet-B2, and EfficientNet-B7) were trained on a training image dataset. 3) The diagnostic performance of each of the three models was evaluated on an independent test image dataset: as a result of the model comparison, EfficientNet-B7's performance (classification accuracy = 91%) was considerably better than that of the other models. 4) Finally, based on the result of EfficientNet-B7, we visualized the lesions of internal bleeding using Grad-CAM. Our research suggests that artificial intelligence-based diagnostic systems can help diagnose and treat brain diseases, resolving various problems in clinical situations.
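
A minimal sketch of how an EfficientNet-B7 backbone could be set up for multi-label ICH subtype classification, assuming a recent torchvision; the six-output head (any hemorrhage plus five subtypes), the loss, and the dummy input size are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 6  # assumed: "any hemorrhage" + 5 subtypes (intraparenchymal, intraventricular,
               # subarachnoid, subdural, epidural) as a multi-label target

model = models.efficientnet_b7(weights=None)            # backbone; pretrained weights optional
in_features = model.classifier[1].in_features           # final linear layer of torchvision's head
model.classifier[1] = nn.Linear(in_features, N_CLASSES)

criterion = nn.BCEWithLogitsLoss()                       # independent sigmoid per subtype
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.rand(2, 3, 256, 256)                           # small dummy batch of windowed CT slices
target = torch.randint(0, 2, (2, N_CLASSES)).float()
loss = criterion(model(x), target)
loss.backward()
optimizer.step()
print(loss.item())
```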

An Accurate Cryptocurrency Price Forecasting using Reverse Walk-Forward Validation (역순 워크 포워드 검증을 이용한 암호화폐 가격 예측)

  • Ahn, Hyun;Jang, Baekcheol
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.45-55
    • /
    • 2022
  • The size of the cryptocurrency market is growing; for example, the market capitalization of Bitcoin has exceeded 500 trillion won. Accordingly, many studies have been conducted to predict the price of cryptocurrency, and most of them adopt methodologies similar to those used for stock price prediction. However, unlike stock price prediction, machine learning models have become the best performers for cryptocurrency price prediction; conceptually, cryptocurrency yields no passive income from ownership, and statistically, cryptocurrency has at least three times higher liquidity than stocks. That is why we argue that a methodology different from stock price prediction should be applied to cryptocurrency price prediction studies. We propose Reverse Walk-forward Validation (RWFV), which modifies Walk-forward Validation (WFV). Unlike WFV, RWFV measures validation accuracy by pinning the validation dataset directly before the test dataset in the time series and gradually increasing the size of the training dataset that precedes it. The training data are then cut to the training-set size that yielded the highest validation accuracy among all measured validation accuracies, combined with the validation data, and used to measure accuracy on the test data. Logistic regression and Support Vector Machine (SVM) were used as the analysis models, and various penalties and kernels such as L1, L2, rbf, and poly were applied to verify the reliability of the proposed RWFV. As a result, all analysis models showed improved accuracy compared with existing studies, and on average, the accuracy increased by 1.23%p. This is a significant improvement, given that the accuracy of cryptocurrency price prediction in previous studies mostly remains between 50% and 60%.
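
The RWFV procedure as described can be sketched as follows (an illustrative implementation, not the authors' code): the validation window is pinned directly before the test window, the training window grows backward from it, the training size with the best validation accuracy is kept, and the model is refit on training plus validation before scoring the test window; the window sizes, step, and logistic-regression model are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rwfv_test_accuracy(X, y, val_size=50, test_size=50, step=50):
    """Reverse walk-forward validation on a time-ordered dataset (X, y)."""
    n = len(X)
    test_s, val_s = n - test_size, n - test_size - val_size
    best_acc, best_train_size = -1.0, step

    # Grow the training window backward from the validation window.
    for train_size in range(step, val_s + 1, step):
        tr = slice(val_s - train_size, val_s)
        model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        acc = model.score(X[val_s:test_s], y[val_s:test_s])
        if acc > best_acc:
            best_acc, best_train_size = acc, train_size

    # Refit on the chosen training window plus the validation window, score the test window.
    fit = slice(val_s - best_train_size, test_s)
    final = LogisticRegression(max_iter=1000).fit(X[fit], y[fit])
    return best_train_size, final.score(X[test_s:], y[test_s:])

# toy usage with random features and an up/down label
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 5))
y = (X[:, 0] + 0.3 * rng.standard_normal(600) > 0).astype(int)
print(rwfv_test_accuracy(X, y))
```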

A Comparison of Pan-sharpening Algorithms for GK-2A Satellite Imagery (천리안위성 2A호 위성영상을 위한 영상융합기법의 비교평가)

  • Lee, Soobong;Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.275-292
    • /
    • 2022
  • In order to detect climate change using satellite imagery, the GCOS (Global Climate Observing System) defines requirements such as spatio-temporal resolution, stability over time, and uncertainty. Due to limitations in GK-2A sensor performance, the level-2 products cannot satisfy these requirements, especially for spatial resolution. In this paper, we identify the optimal pan-sharpening algorithm for GK-2A products. Six pan-sharpening methods belonging to the CS (Component Substitution), MRA (Multi-Resolution Analysis), VO (Variational Optimization), and DL (Deep Learning) categories were used. In the case of DL, the synthesis-property-based method was used to generate the training dataset. In the synthesis-property protocol, the pan-sharpening model is applied to PAN (panchromatic) and MS (multispectral) images with reduced spatial resolution, and the fused image is compared with the original MS image. In the synthesis-property-based method, a fused image of the level desired by the user can be produced only when the geometric characteristics of the spatially degraded PAN image and the MS image are similar. However, since such dissimilarity exists, RD (Random Down-sampling) was additionally used to minimize it. Among the pan-sharpening methods, PSGAN was applied with RD (PSGAN_RD). The fused images were qualitatively and quantitatively validated with the consistency property and the synthesis property. The validation shows that the GSA algorithm performs well on the evaluation indices representing spatial characteristics. For spectral characteristics, PSGAN_RD has the best accuracy with respect to the original MS image. Therefore, considering both the spatial and spectral characteristics of the fused images, we find that PSGAN_RD is suitable for GK-2A products.
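
The synthesis-property (reduced-resolution) check described above can be sketched as follows; this is a toy illustration rather than the paper's pipeline, using block averaging in place of an MTF-matched filter and a simple component-substitution fusion in place of GSA or PSGAN_RD, with synthetic PAN/MS arrays.

```python
import numpy as np

def degrade(img, ratio):
    """Crude spatial degradation by block averaging (stands in for an MTF filter)."""
    h, w = img.shape[:2]
    return img[:h - h % ratio, :w - w % ratio].reshape(
        h // ratio, ratio, w // ratio, ratio, -1).mean(axis=(1, 3)).squeeze()

def cs_fuse(pan, ms_up):
    """Simple component-substitution fusion: inject PAN detail into each MS band."""
    intensity = ms_up.mean(axis=2)
    detail = pan - intensity
    gains = [np.cov(ms_up[..., b].ravel(), intensity.ravel())[0, 1] / intensity.var()
             for b in range(ms_up.shape[2])]
    return np.stack([ms_up[..., b] + gains[b] * detail for b in range(ms_up.shape[2])], axis=2)

# synthesis-property check: fuse at reduced resolution, compare with the original MS image
rng = np.random.default_rng(0)
pan = rng.random((64, 64))                                            # toy PAN, ratio 4
ms = np.stack([degrade(pan, 4) + 0.05 * rng.random((16, 16)) for _ in range(3)], axis=2)

pan_lr = degrade(pan, 4)                                              # degraded PAN
ms_lr = np.stack([degrade(ms[..., b], 4) for b in range(3)], axis=2)  # degraded MS
ms_lr_up = np.repeat(np.repeat(ms_lr, 4, axis=0), 4, axis=1)          # naive upsampling back

fused = cs_fuse(pan_lr, ms_lr_up)
rmse = np.sqrt(np.mean((fused - ms) ** 2))                            # compare with reference MS
print(round(float(rmse), 4))
```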

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
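
A compact sketch of the optimization idea in this paper, under illustrative assumptions (synthetic data, a tiny GA, sklearn's KNeighborsClassifier): each individual encodes one (k, feature mask) pair per base KNN classifier, fitness is the majority-vote accuracy on a held-out set, and selection plus mutation search over both k values and feature subsets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=24, n_informative=8, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

N_BASE, N_FEAT, K_CHOICES = 10, X.shape[1], (1, 3, 5, 7, 9)

def random_individual():
    # one (k, feature mask) pair per base KNN classifier
    return [(rng.choice(K_CHOICES), rng.random(N_FEAT) < 0.5) for _ in range(N_BASE)]

def fitness(ind):
    votes = np.zeros((len(X_va), 2))
    for k, mask in ind:
        if mask.sum() == 0:
            continue
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, mask], y_tr)
        pred = clf.predict(X_va[:, mask])
        votes[np.arange(len(pred)), pred] += 1        # majority-vote aggregation
    return (votes.argmax(axis=1) == y_va).mean()

def mutate(ind):
    out = []
    for k, mask in ind:
        k = rng.choice(K_CHOICES) if rng.random() < 0.2 else k
        flip = rng.random(N_FEAT) < 0.05               # flip a few feature bits
        out.append((k, mask ^ flip))
    return out

# a deliberately tiny GA loop: keep the best half, refill by mutating the survivors
pop = [random_individual() for _ in range(10)]
for gen in range(5):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:5] + [mutate(p) for p in pop[:5]]
print(round(fitness(pop[0]), 3))
```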

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1109-1123
    • /
    • 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is important to identify the severity of a wildfire as well as the burned area in order to manage forests sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar-C data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used for developing wildfire severity mapping models with three machine learning algorithms (i.e., Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. The cross-site validation conducted to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
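
A minimal sketch of the per-pixel supervised setup implied by the abstract, with synthetic data: pre/post-fire index differences (dNBR, dNDVI) from Sentinel-2 and backscatter changes from Sentinel-1 are stacked as features, and a random forest is trained to predict a severity class; the feature set, class thresholds, and random split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 2000

# per-pixel features: dNBR, dNDVI (Sentinel-2) and VV/VH backscatter change (Sentinel-1)
features = np.column_stack([
    rng.normal(0.3, 0.2, n_pixels),    # dNBR
    rng.normal(0.2, 0.15, n_pixels),   # dNDVI
    rng.normal(0.0, 1.0, n_pixels),    # delta VV (dB)
    rng.normal(0.0, 1.0, n_pixels),    # delta VH (dB)
])
# severity labels 0..3 (unburned to high severity), loosely tied to dNBR for the toy data
labels = np.digitize(features[:, 0] + 0.05 * rng.standard_normal(n_pixels), [0.1, 0.27, 0.44])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(round(accuracy_score(y_te, rf.predict(X_te)), 3))

# Cross-site validation as in the abstract would instead train on pixels from two fire
# events and evaluate on the third, rather than on a random split within one event.
```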

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems such as learning and problem solving related to human intelligence. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence, and it aims to enable artificial intelligence agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task and still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy by generating knowledge from user-created, semi-structured infobox data. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. In order to demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences for triple extraction, and value selection and transformation into RDF triple structures. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences that are classified as appropriate and convert the knowledge into triples.
To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, it is possible to utilize structured knowledge by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
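
The final pipeline step, turning BIO-tagged tokens of a classified sentence into triples, can be sketched as follows; the tag scheme (relation name carried in the B-/I- suffix), the example sentence, and the subject naming are illustrative assumptions, not the paper's exact format.

```python
def bio_to_triples(subject, tokens, tags):
    """Collect contiguous B-/I- spans and emit (subject, relation, value) triples."""
    triples, relation, span = [], None, []
    for token, tag in list(zip(tokens, tags)) + [("", "O")]:   # sentinel flushes the last span
        if tag.startswith("B-"):
            if span:
                triples.append((subject, relation, " ".join(span)))
            relation, span = tag[2:], [token]
        elif tag.startswith("I-") and relation == tag[2:]:
            span.append(token)
        else:
            if span:
                triples.append((subject, relation, " ".join(span)))
            relation, span = None, []
    return triples

# toy usage: a sentence already classified under a "Person" infobox/ontology class
tokens = ["Ada", "Lovelace", "was", "born", "in", "London", "in", "1815", "."]
tags   = ["O",   "O",        "O",   "O",    "O",  "B-birthPlace", "O", "B-birthYear", "O"]
print(bio_to_triples("Ada_Lovelace", tokens, tags))
# [('Ada_Lovelace', 'birthPlace', 'London'), ('Ada_Lovelace', 'birthYear', '1815')]
```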