• Title/Summary/Keyword: 기계 학습 모델 (machine learning model)


A sea trial method of hull-mounted sonar using machine learning and numerical experiments (기계학습 및 수치실험을 활용한 선체고정형소나 해상 시운전 평가 방안)

  • Ho-seong Chang;Chang-hyun Youn;Hyung-in Ra;Kyung-won Lee;Dea-hwan Kim;Ki-man Kim
    • The Journal of the Acoustical Society of Korea / v.43 no.3 / pp.293-304 / 2024
  • In this paper, efficient and reliable methodologies for conducting sea trials to evaluate the performance of hull-mounted sonar systems are discussed. These systems undergo performance verification during ship construction via sea trials, but the evaluation procedures often lack detailed consideration of the variability in detection performance caused by seabed topography and seasonal factors. To resolve this issue, temperature and salinity structure data collected from 1967 to 2022 using ARGO floats and ocean observation data were gathered. The paper proposes an efficient and reliable sea trial method incorporating Bellhop modeling. Furthermore, a machine learning model applying a Physics-Informed Neural Network (PINN) was developed using the acquired data. This model predicts the sound speed profile at specific points within the sea trial area, reflecting the seasonal elements of performance evaluation. In this study, we predicted seasonal variations in the sound speed structure during sea trial operations at a specific location within the trial area, and then proposed a strategy to account for the variability in detection performance caused by seasonal factors, using results from Bellhop modeling.
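
As a rough illustration of the PINN idea in this abstract, the sketch below fits a small network to observed sound speeds while penalizing rough depth profiles. The inputs, architecture, and the particular physics term are illustrative assumptions, not the authors' actual model.

```python
# Minimal PINN-style sketch (hypothetical): the data loss fits ARGO-like
# observations; the "physics" term here is only a depth-smoothness penalty
# standing in for whatever governing constraint the authors imposed.
import torch
import torch.nn as nn

# Inputs: normalized (depth, day_of_year, latitude, longitude) -> sound speed (m/s).
model = nn.Sequential(
    nn.Linear(4, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def loss_fn(x, c_obs, lambda_phys=0.1):
    x = x.clone().requires_grad_(True)
    c_pred = model(x)
    data_loss = nn.functional.mse_loss(c_pred, c_obs)
    # Second derivative of predicted sound speed with respect to depth (column 0).
    dc = torch.autograd.grad(c_pred.sum(), x, create_graph=True)[0][:, 0:1]
    d2c = torch.autograd.grad(dc.sum(), x, create_graph=True)[0][:, 0:1]
    return data_loss + lambda_phys * (d2c ** 2).mean()

x = torch.rand(64, 4)                        # toy normalized inputs
c = 1500.0 + torch.randn(64, 1)              # toy observed sound speeds (m/s)
print(loss_fn(x, c))
```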

Effective Harmony Search-Based Optimization of Cost-Sensitive Boosting for Improving the Performance of Cross-Project Defect Prediction (교차 프로젝트 결함 예측 성능 향상을 위한 효과적인 하모니 검색 기반 비용 민감 부스팅 최적화)

  • Ryu, Duksan;Baik, Jongmoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.3 / pp.77-90 / 2018
  • Software Defect Prediction (SDP) is a field of study that identifies defective modules. With insufficient local data, a company can exploit Cross-Project Defect Prediction (CPDP), a way to build a classifier using datasets collected from other companies. Most machine learning algorithms for SDP use one or more parameters that significantly affect prediction performance depending on their values. The objective of this study is to propose a parameter selection technique to enhance the performance of CPDP. Using the Harmony Search (HS) algorithm, our approach tunes the parameters of cost-sensitive boosting, a method for tackling the class imbalance that makes prediction difficult. Based on distributional characteristics, parameter ranges and constraint rules between parameters are defined and applied to HS. The proposed approach is compared with three CPDP methods and a Within-Project Defect Prediction (WPDP) method over fifteen target projects. The experimental results indicate that the proposed model outperforms the other CPDP methods under class imbalance. Unlike previous studies, which showed a high probability of false alarm (PF) or a low probability of detection (PD), our approach provides acceptably high PD and low PF while maintaining high overall performance. It also performs similarly to WPDP.
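
A hedged sketch of the core loop follows: Harmony Search proposing parameter vectors for a cost-sensitive AdaBoost, scored on a validation split. The parameter names, ranges, and fitness measure are illustrative; the paper defines its own ranges and inter-parameter constraint rules.

```python
# Harmony Search tuning a cost-sensitive AdaBoost (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
BOUNDS = {"n_estimators": (10, 200), "learning_rate": (0.01, 2.0), "cost_ratio": (1.0, 20.0)}

def fitness(h, X_tr, y_tr, X_val, y_val):
    w = np.where(y_tr == 1, h["cost_ratio"], 1.0)   # up-weight the defect class
    clf = AdaBoostClassifier(n_estimators=int(h["n_estimators"]),
                             learning_rate=h["learning_rate"], random_state=0)
    clf.fit(X_tr, y_tr, sample_weight=w)
    return balanced_accuracy_score(y_val, clf.predict(X_val))

def harmony_search(X_tr, y_tr, X_val, y_val, hms=8, iters=50, hmcr=0.9, par=0.3):
    memory = [{k: rng.uniform(*BOUNDS[k]) for k in BOUNDS} for _ in range(hms)]
    scores = [fitness(h, X_tr, y_tr, X_val, y_val) for h in memory]
    for _ in range(iters):
        new = {}
        for k, (lo, hi) in BOUNDS.items():
            if rng.random() < hmcr:                  # memory consideration
                new[k] = memory[rng.integers(hms)][k]
                if rng.random() < par:               # pitch adjustment
                    new[k] = float(np.clip(new[k] + rng.normal(0, 0.05 * (hi - lo)), lo, hi))
            else:                                    # random consideration
                new[k] = rng.uniform(lo, hi)
        s = fitness(new, X_tr, y_tr, X_val, y_val)
        worst = int(np.argmin(scores))
        if s > scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return memory[int(np.argmax(scores))]

X, y = make_classification(n_samples=400, weights=[0.9], random_state=0)  # imbalanced toy data
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)
print(harmony_search(X_tr, y_tr, X_val, y_val))
```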

Research Trend Analysis for Fault Detection Methods Using Machine Learning (머신러닝을 사용한 단층 탐지 기술 연구 동향 분석)

  • Bae, Wooram;Ha, Wansoo
    • Economic and Environmental Geology / v.53 no.4 / pp.479-489 / 2020
  • A fault is a geological structure that can act as a migration path or a cap rock for hydrocarbons, such as oil and gas, formed from source rock. Faults are one of the main targets of seismic exploration for finding reservoirs in which hydrocarbons have accumulated. However, conventional fault detection methods that rely on lateral discontinuities in seismic data, such as semblance, coherence, variance, gradient magnitude, and fault likelihood, require professional interpreters to invest considerable time and computational cost. Therefore, many researchers are studying ways to reduce the cost and time of fault interpretation, and machine learning technologies have attracted attention recently. Among the various machine learning technologies, researchers have applied support vector machines, multi-layer perceptrons, deep neural networks, and convolutional neural networks to fault interpretation. In particular, researchers use not only their own convolutional networks but also networks proven in image processing to predict fault locations and fault attributes such as strike and dip. By investigating and analyzing these studies, we found that convolutional neural networks based on the U-Net architecture from image processing are the most effective for fault detection and interpretation. Further studies can expect better fault detection and interpretation results by combining convolutional neural networks with transfer learning and data augmentation.
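
Since the survey singles out U-Net-style networks, here is a deliberately tiny two-level U-Net sketch producing per-pixel fault probabilities; the real networks in the cited studies are deeper and tuned to seismic volumes, so treat the sizes here as placeholders.

```python
# A minimal two-level U-Net for binary fault masks (hypothetical sizes).
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                   # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)            # per-pixel fault logit

    def forward(self, x):
        e1 = self.enc1(x)                          # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d)

# seismic amplitude patch -> fault probability map
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
probs = torch.sigmoid(logits)                      # shape (1, 1, 128, 128)
```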

A Research about Time Domain Estimation Method for Greenhouse Environmental Factors based on Artificial Intelligence (인공지능 기반 온실 환경인자의 시간영역 추정)

  • Lee, JungKyu;Oh, JongWoo;Cho, YongJin;Lee, Donghoon
    • Journal of Bio-Environment Control / v.29 no.3 / pp.277-284 / 2020
  • To increase the utilization of intelligent methodologies in smart farm management, estimation modeling techniques are required for the real-time assessment of crops and environmental changes. For an essential environmental factor such as CO2, it is challenging to establish a reliable time-domain estimation model in indoor agricultural facilities, where various correlated variables are highly coupled. Thus, this study developed an artificial neural network that reduces time complexity by using environmental information from temporally adjacent periods as input variables and CO2 as the output variable. The environmental factors in the smart farm were measured continuously during the experiments using devices with integrated sensors. To predict CO2, Model 1, trained on the mean data of the experiment period, and Model 2, trained on day-by-day data, were constructed. Model 2, trained on the previous day's data, performed better than Model 1, trained on the 60-day average. Up to 30 days, both models mostly showed coefficients of determination between 0.70 and 0.88, with Model 2 about 0.05 higher. After 30 days, however, the coefficients of determination of both models dropped below 0.50. Comparing the coefficients of determination across the modeling approaches showed that using data from adjacent time windows gives relatively high performance at the points requiring prediction, compared with a fixed neural network model.
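
A minimal sketch of the time-domain idea, assuming a fixed lag window of adjacent environmental readings as input and CO2 as the output; the sensor count, window length, and data here are placeholders, not the paper's configuration.

```python
# MLP estimating CO2 from temporally adjacent environment readings (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(env, co2, lag=3):
    # env: (T, n_sensors) array of temperature/humidity/etc.; co2: (T,) target
    X = np.stack([env[t - lag:t].ravel() for t in range(lag, len(env))])
    y = co2[lag:]
    return X, y

env = np.random.rand(500, 4)          # stand-in for measured greenhouse factors
co2 = np.random.rand(500)             # stand-in CO2 series
X, y = make_windows(env, co2)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
print(model.score(X, y))              # coefficient of determination (R^2)
```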

Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.63-70 / 2021
  • Generating super-resolution meteorological data using a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, Lambert conformal conic projection and objective analysis were applied to observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge showed RMSE improvements of up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual data generation procedure based on domain-specific knowledge described above. Experiments were conducted to generate high-resolution data with 1 km resolution from global model data with 10 km resolution. Finally, the results generated with SRGAN have a higher resolution than the global model input data and showed an analysis pattern similar to the manually generated high-resolution analysis data, though with smoother boundaries.
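
For the SRGAN step, the sketch below shows a generator-only skeleton with residual blocks and sub-pixel upsampling; the 2x factor, layer sizes, and single-channel field are illustrative stand-ins for the paper's 10 km to 1 km setup, and the adversarial discriminator and loss are omitted.

```python
# SRGAN-style generator upscaling a coarse meteorological field (sketch).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.PReLU(),
                                  nn.Conv2d(c, c, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)        # residual connection

class Generator(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.PReLU())
        self.res = nn.Sequential(ResBlock(), ResBlock())
        # sub-pixel upsampling: conv to scale^2 * channels, then PixelShuffle
        self.up = nn.Sequential(nn.Conv2d(32, 32 * scale**2, 3, padding=1),
                                nn.PixelShuffle(scale), nn.PReLU())
        self.tail = nn.Conv2d(32, 1, 3, padding=1)
    def forward(self, x):
        return self.tail(self.up(self.res(self.head(x))))

coarse = torch.randn(1, 1, 64, 64)     # e.g., one temperature-field tile
fine = Generator()(coarse)             # -> (1, 1, 128, 128)
```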

Development of an intelligent skin condition diagnosis information system based on social media

  • Kim, Hyung-Hoon;Ohk, Seung-Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.241-251 / 2022
  • Diagnosis and management of customers' skin condition is an essential function in the cosmetics and beauty industry. As the social media environment spreads to all fields of society, questions and answers about diverse and delicate concerns regarding the diagnosis and management of skin conditions are actively exchanged in social media communities. However, since social media information is highly diverse and unstructured big data, an intelligent skin condition diagnosis system that combines appropriate skin condition information analysis with artificial intelligence technology is necessary. In this paper, we developed the skin condition diagnosis system SCDIS, which intelligently diagnoses and manages customers' skin condition by processing text analysis information from social media into learning data. In SCDIS, an artificial neural network model, AnnTFIDF, which automatically diagnoses skin condition types using deep learning, was built and used. The performance of AnnTFIDF was analyzed using test sample data, and the accuracy of skin condition type prediction reached about 95%. The experimental and performance analysis results of this paper show that SCDIS can be an efficient intelligent tool for the skin condition analysis and diagnosis process in the cosmetics and beauty industry. This study can also serve as basic research for emerging technology demands such as customized cosmetics manufacturing and the consumer-oriented beauty industry.
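
The abstract does not give AnnTFIDF's internals, but its name suggests a TF-IDF front end feeding a neural classifier; a minimal sketch of that pattern, with invented example posts and labels, might look as follows.

```python
# TF-IDF features into a small neural network classifier (illustrative sketch;
# the posts, labels, and architecture are invented, not the SCDIS data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

posts = ["my skin feels dry and tight after washing",
         "oily T-zone and frequent breakouts",
         "flaky dry patches on my cheeks",
         "shiny oily skin by midday"]
labels = ["dry", "oily", "dry", "oily"]    # toy skin-condition types

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300))
clf.fit(posts, labels)
print(clf.predict(["tight feeling and flaking after washing"]))
```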

Sorghum Field Segmentation with U-Net from UAV RGB (무인기 기반 RGB 영상 활용 U-Net을 이용한 수수 재배지 분할)

  • Kisu Park;Chanseok Ryu;Yeseong Kang;Eunri Kim;Jongchan Jeong;Jinki Park
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.521-535 / 2023
  • When rice paddies are converted into upland fields, sorghum (Sorghum bicolor L. Moench) offers excellent moisture resistance, enabling stable production alongside soybeans. It is therefore a crop expected to improve the self-sufficiency rate of domestic food crops and help resolve the rice supply-demand imbalance. However, fundamental statistics such as the extent of cultivation fields, which are required for estimating yields, are lacking because the traditional survey method takes a long time even with substantial manpower. In this study, U-Net was applied to RGB images from an unmanned aerial vehicle to examine the feasibility of non-destructive segmentation of sorghum cultivation fields. RGB images were acquired on July 28, August 13, and August 25, 2022. For each acquisition date, the data were divided into 6,000 training images and 1,000 validation images of 512 × 512 pixels. Classification models were developed for three classes, sorghum fields (sorghum), rice and soybean fields (others), and non-agricultural areas (background), and for two classes, sorghum and non-sorghum (others + background). The classification accuracy for sorghum cultivation fields was higher than 0.91 in the three-class models at all acquisition dates, but learning confusion occurred among the other classes in the August datasets. In contrast, the two-class model showed an accuracy of 0.95 or better for all classes, with stable learning on the August datasets. Consequently, a two-class model on August imagery is advantageous for calculating the cultivation area of sorghum.
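
As a sketch of the training setup, one optimization step for the two-class (sorghum vs. non-sorghum) case is shown below; the stand-in convolutional model, batch size, and random tiles are placeholders for the authors' U-Net and UAV imagery.

```python
# One training step for two-class per-pixel segmentation (sketch).
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for a real U-Net, such as the
    nn.Conv2d(3, 16, 3, padding=1),    # one sketched under the fault-detection
    nn.ReLU(),                         # entry above, widened to RGB input
    nn.Conv2d(16, 2, 1),               # 2 output classes
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

rgb = torch.rand(4, 3, 512, 512)           # UAV RGB tiles (toy)
mask = torch.randint(0, 2, (4, 512, 512))  # 0 = others+background, 1 = sorghum

opt.zero_grad()
loss = loss_fn(model(rgb), mask)       # per-pixel cross-entropy
loss.backward()
opt.step()
```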

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • An open-domain machine reading comprehension model adds a paragraph-retrieval function because no paragraph related to a given question is provided. Document retrieval suffers from reduced performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection has the issue of failing to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension suffers from slow training due to a growing number of parameters, despite much research on BERT. To address these three issues, this study used BM25, which also takes document length into account, and InferSent to capture sentence context, and proposed an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. Experiments were conducted on the SQuAD1.1 dataset. BM25 outperformed TF-IDF in document retrieval by 3.2%. InferSent showed 0.9% higher performance in paragraph selection than a Transformer. Finally, as the number of paragraphs increased in document comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
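
Since the retrieval stage hinges on BM25's length normalization, here is a compact self-contained scoring function; the k1 and b values are common defaults, not necessarily those used in the paper.

```python
# Okapi BM25 scoring (sketch): term frequency saturation via k1, document
# length normalization via b.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    N = len(tokenized)
    df = Counter(t for d in tokenized for t in set(d))  # document frequencies
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for q in query.lower().split():
            if q not in tf:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores  # higher score = more relevant

docs = ["machine reading comprehension with ALBERT answers questions",
        "BM25 ranks documents by term frequency with length normalization"]
print(bm25_scores("BM25 document ranking", docs))
```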

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies were expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet they are also harder to forecast because of the industry's distinctive capital structure and debt-to-equity ratio. The construction industry operates on high leverage, with project cash flow concentrated in the second half of a project, and is strongly influenced by the economic cycle, so downturns tend to increase construction bankruptcy rates rapidly. High leverage coupled with rising bankruptcy rates can, in turn, place a heavy burden on the banks that lend to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for many years, but their subjects were companies in general, and such models may not accurately forecast bankruptcies of firms with disproportionately large liquidity risks, such as construction companies: the industry is capital-intensive, requires large investments in long-term projects, and has comparatively long payback periods, so criteria used to judge the financial risk of companies in general are difficult to apply. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of bankruptcy with a simple formula and classifies a company's status into three categories: dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood; for companies in the "moderate" category the risk is difficult to forecast, and many of the construction firms in this study fell into that category. With the development of machine learning, recent studies have applied it to corporate bankruptcy forecasting: pattern recognition, a representative application area of machine learning, analyzes patterns in a company's financial information and judges whether the pattern belongs to the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining them. Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups, large, medium, and small, based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
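
A minimal sketch of the size-grouped AdaBoost experiment follows, with toy ratios and labels. The five features echo the classic Altman inputs only as a comparison point; the 50-billion-won threshold follows the abstract, while the paper's three capital groups are collapsed to two here for brevity.

```python
# AdaBoost on financial-ratio features, evaluated per company-size group (sketch).
# For comparison, the classic 1968 Altman Z-score combines five ratios:
#   Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5
# (working capital/TA, retained earnings/TA, EBIT/TA, market equity/total
# liabilities, sales/TA). All data below are toy stand-ins.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))              # stand-in ratios X1..X5
y = rng.integers(0, 2, 300)                # 1 = bankrupt, 0 = solvent (toy labels)
capital = rng.uniform(1, 100, 300)         # capital in billions of won (toy)

for name, idx in {"capital >= 50B won": capital >= 50,
                  "capital < 50B won": capital < 50}.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X[idx], y[idx], random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))
```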

Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

  • Song, Junga;Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.67-83 / 2018
  • The Korean film industry, which had grown significantly every year, finally exceeded 200 million cumulative admissions in 2013. Starting in 2015, however, the industry entered a period of low growth and recorded negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distributors, and multiplexes have tried to maximize market returns with strategies that predict market changes and respond to them immediately. Since a film is an experiential product, it is not easy to predict its box office record or the initial number of admissions before release, and the number of admissions fluctuates with a variety of factors after release. Production and distribution companies therefore try to secure a guaranteed number of screens from multiplex chains when a new film opens, but the multiplex chains tend to fix the screening schedule only one week at a time and then determine the number of screenings for the coming week based on the box office record and audience evaluations. Many previous studies have dealt with predicting the box office records of films. Early studies attempted to identify the factors affecting the box office record; more recent studies apply various analytic techniques to the previously identified factors to improve prediction accuracy and explain the effect of each factor, rather than identifying new factors. However, most previous studies used the total number of admissions from opening to the end of the run as the target variable, which makes it difficult to predict and respond to a market demand that changes dynamically. Therefore, the purpose of this study is to predict the weekly number of admissions of a newly released film so that stakeholders can respond flexibly and elastically to changes in the film's audience. To that end, we considered the factors used in previous studies and developed new factors not used before, such as the order of opening and sales dynamics. With these comprehensive factors, we used machine learning methods, Random Forest, Multi-Layer Perceptron, Support Vector Machine, and Naive Bayes, to predict the cumulative number of visitors from the first to the third week after release. At the first and second weeks, we predicted the cumulative number of visitors for the coming week for a released film, and at the third week we predicted the film's total number of visitors. In addition, we predicted the total number of cumulative visitors at both the first and second weeks using the same factors. As a result, we found that the accuracy of predicting the coming week's visitors was higher than that of predicting the total across all three weeks, and that Random Forest achieved the highest accuracy among the machine learning methods used.
This study has implications in that it 1) comprehensively considered factors affecting the box office record that were rarely addressed in previous studies, such as the weekly audience rating, weekly rank, and weekly sales share after release, and 2) tried to predict and respond to dynamically changing market demand by proposing models that predict the weekly number of admissions of newly released films, so that stakeholders can respond flexibly and elastically to changes in a film's audience.
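
To illustrate the weekly forecasting setup with the best-performing model, a hedged Random Forest sketch follows; the feature columns mirror factors named in the abstract, but their encoding and all data are invented.

```python
# Weekly-audience prediction with a Random Forest (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# columns: opening order, week-1 admissions, weekly rating, weekly rank, sales share
X = rng.random((200, 5))
y = rng.random(200) * 1e6                  # next-week cumulative visitors (toy)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print(model.predict(X[:1]))                # forecast for one newly released film
```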