• Title/Summary/Keyword: 학습 데이터 모델 (training data model)

Search results: 3,041

Prediction of Stacking Angles of Fiber-reinforced Composite Materials Using Deep Learning Based on Convolutional Neural Networks (합성곱 신경망 기반의 딥러닝을 이용한 섬유 강화 복합재료의 적층 각도 예측)

  • Hyunsoo Hong; Wonki Kim; Do Yoon Jeon; Kwanho Lee; Seong Su Kim
    • Composites Research / v.36 no.1 / pp.48-52 / 2023
  • Fiber-reinforced composites have anisotropic material properties, so the mechanical properties of composite structures can vary depending on the stacking sequence. Therefore, it is essential to design the proper stacking sequence of a composite structure according to its functional requirements. However, depending on the manufacturing conditions or the shape of the structure, the as-built stacking angle often deviates from the designed value, which can affect structural performance. Accordingly, it is important to analyze the stacking angle to confirm that the composite structure has been fabricated as designed. In this study, the stacking angle was predicted from real cross-sectional images of fiber-reinforced composites using convolutional neural network (CNN)-based deep learning. Carbon fiber-reinforced composite specimens with several stacking angles were fabricated, and their cross-sections were photographed at the micro-scale using an optical microscope. A CNN-based deep learning model was trained on the cross-sectional image data of the composite specimens. As a result, the stacking angle could be predicted from actual cross-sectional images of the fiber-reinforced composite with high accuracy.
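
A rough illustration of the approach described above: the sketch below trains a small CNN classifier on cross-sectional micrographs grouped into folders by stacking angle. The directory layout, image size, layer sizes, and angle classes are assumptions for illustration, not details taken from the paper.

    # Hypothetical CNN for predicting stacking-angle classes from cross-sectional images
    import torch
    import torch.nn as nn
    from torchvision import datasets, transforms

    ANGLE_CLASSES = 4  # e.g. folders 0deg/30deg/45deg/90deg (hypothetical)

    transform = transforms.Compose([
        transforms.Grayscale(),
        transforms.Resize((128, 128)),
        transforms.ToTensor(),
    ])

    # Expects data/train/<angle>/*.png with one folder per stacking angle (assumed layout)
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 32 * 32, 128), nn.ReLU(),
        nn.Linear(128, ANGLE_CLASSES),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(10):                      # short training loop
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()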

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon Kim; Jun-Yeong Kim; Jun Park; Sung-Wook Park; Se-Hoon Jung; Chun-Bo Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open-domain machine reading comprehension adds a retrieval step to a reading model because no paragraph relevant to a given question is provided in advance. Document retrieval, despite abundant research on word-frequency-based TF-IDF, suffers from degraded performance as the number of documents grows. Paragraph selection, despite much research on word-based embeddings, often fails to capture paragraph context and sentence-level characteristics accurately. Document reading comprehension, despite much research on BERT, trains slowly because of the growing number of parameters. To address these three issues, this study uses BM25, which also takes sentence length into account, for document retrieval, InferSent to obtain sentence context for paragraph selection, and ALBERT to reduce the number of parameters, and proposes an open-domain machine reading comprehension system combining them. Experiments were conducted on the SQuAD 1.1 dataset. In document retrieval, BM25 outperformed TF-IDF by 3.2%; in paragraph selection, InferSent outperformed the Transformer baseline by 0.9%; and in reading comprehension, as the number of paragraphs increased, ALBERT scored 0.4% higher in EM and 0.2% higher in F1.
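
A minimal open-domain QA sketch in the spirit of this pipeline: BM25 retrieval followed by an ALBERT reader from Hugging Face Transformers. The toy corpus and the SQuAD-fine-tuned checkpoint name are placeholders, and the InferSent-based paragraph re-ranking stage is omitted.

    # BM25 retrieval + ALBERT reader (toy example; corpus and checkpoint are placeholders)
    from rank_bm25 import BM25Okapi
    from transformers import pipeline

    corpus = [
        "Seoul is the capital and largest city of South Korea.",
        "ALBERT reduces parameters by sharing them across layers.",
    ]
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

    question = "What is the capital of South Korea?"
    scores = bm25.get_scores(question.lower().split())
    best_doc = corpus[scores.argmax()]           # top-ranked document

    # Any publicly available SQuAD-fine-tuned ALBERT checkpoint can be substituted here.
    reader = pipeline("question-answering", model="twmkn9/albert-base-v2-squad2")
    print(reader(question=question, context=best_doc))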

Development and Evaluation of Safe Route Service of Electric Personal Assistive Mobility Devices for the Mobility Impaired People (교통약자를 위한 전동 이동 보조기기 안전 경로 서비스의 개발과 평가)

  • Je-Seung WOO; Sun-Gi HONG; Sang-Kyoung YOO; Hoe Kyoung KIM
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.3 / pp.85-96 / 2023
  • This study developed and evaluated a safe route guidance service for the electric personal assistive mobility devices used mainly by mobility-impaired people, in order to improve their mobility. Thirteen underlying factors affecting the mobility of these devices were derived through a survey of mobility-impaired people and employees of related organizations in Busan Metropolitan City. After assigning safety scores to the individual factors and identifying the relevant factors along routes of interest with an object detection AI model, a safe route for electric personal assistive mobility devices was provided through an optimal path-finding algorithm. Comparing the general T-map route and the route recommended by this study for the same trips, the latter had relatively fewer obstacles and gentler slopes than the former, implying that the recommended route is safer. As future work, it is necessary to enhance the route guidance service based on the real-time location of users and to conduct spot investigations to evaluate and verify its social acceptability.
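
The safe-route idea can be sketched as a weighted shortest-path problem in which each sidewalk segment carries a safety penalty derived from detected obstacle and slope factors. The graph, penalty values, and weighting constant below are illustrative assumptions, not the study's actual network or scoring scheme.

    # Weighted shortest path over a toy sidewalk graph (all values hypothetical)
    import networkx as nx

    G = nx.Graph()
    # (node_a, node_b, length_m, safety_penalty)
    segments = [
        ("A", "B", 120, 0.1),   # flat, no obstacles
        ("B", "C", 80, 0.9),    # steep slope, bollards detected
        ("A", "C", 230, 0.2),   # longer but gentle detour
    ]
    ALPHA = 100  # converts a safety penalty into "equivalent metres"

    for a, b, length, penalty in segments:
        G.add_edge(a, b, cost=length + ALPHA * penalty)

    route = nx.shortest_path(G, "A", "C", weight="cost")
    print(route)  # ['A', 'C'] -- the safer detour beats the steep segment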

An Approach Using LSTM Model to Forecasting Customer Congestion Based on Indoor Human Tracking (실내 사람 위치 추적 기반 LSTM 모델을 이용한 고객 혼잡 예측 연구)

  • Hee-ju Chae; Kyeong-heon Kwak; Da-yeon Lee; Eunkyung Kim
    • Journal of the Korea Society for Simulation / v.32 no.3 / pp.43-53 / 2023
  • This study focuses on accurately gauging the number of visitors and their real-time locations in commercial spaces. In particular, using security cameras in a real cafe, we developed a system that offers live updates on available seating and predicts future congestion levels. YOLO, a real-time object detection and tracking algorithm, is used to monitor the number of visitors and their locations in real time. This information is then used to update the cafe's indoor map, enabling users to easily identify available seating. Moreover, we developed a model that predicts the cafe's congestion in real time. The model, designed to learn visitor counts and movement patterns over diverse time intervals, is based on Long Short-Term Memory (LSTM) to address the vanishing gradient problem and a Sequence-to-Sequence (Seq2Seq) structure for processing temporally related data. The system has the potential to significantly improve cafe management efficiency and customer satisfaction by delivering reliable congestion predictions to users. This research demonstrates the effectiveness and utility of indoor location tracking implemented with security cameras and proposes potential applications in other commercial spaces.
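
A minimal LSTM sequence-to-sequence forecaster of the kind described above might look like the sketch below, predicting the next few time slots of visitor counts from a sliding window of past counts. The window length, horizon, layer sizes, and synthetic data are assumptions for illustration.

    # Toy LSTM Seq2Seq forecaster for visitor counts
    import torch
    import torch.nn as nn

    class Seq2SeqForecaster(nn.Module):
        def __init__(self, hidden=32, horizon=6):
            super().__init__()
            self.horizon = horizon
            self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.decoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, past):                  # past: (batch, steps, 1)
            _, state = self.encoder(past)
            step = past[:, -1:, :]                # last observed count
            outputs = []
            for _ in range(self.horizon):         # autoregressive decoding
                out, state = self.decoder(step, state)
                step = self.head(out)
                outputs.append(step)
            return torch.cat(outputs, dim=1)      # (batch, horizon, 1)

    model = Seq2SeqForecaster()
    past = torch.rand(8, 24, 1)                   # 8 samples, 24 past time slots
    print(model(past).shape)                      # torch.Size([8, 6, 1])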

Analysis of the Abstract Structure in Scientific Papers by Gifted Students and Exploring the Possibilities of Artificial Intelligence Applied to the Educational Setting (과학 영재의 논문 초록 구조 분석 및 이에 대한 인공지능의 활용 가능성 탐색)

  • Bongwoo Lee; Hunkoog Jho
    • Journal of The Korean Association For Science Education / v.43 no.6 / pp.573-582 / 2023
  • This study aimed to explore the potential use of artificial intelligence in science education for gifted students by analyzing the structure of abstracts written by students at a gifted science academy and comparing how accurately AI methods extract the structural elements. The study analyzed 263 graduation theses from S Science High School over five years (2017-2021), focusing on the frequency and types of background, objectives, methods, results, and discussion included in the abstracts, followed by an evaluation of extraction accuracy using AI classification with fine-tuning and with prompts. The results revealed that the frequency of elements in the abstracts written by gifted students followed the order of objectives, methods, results, background, and discussion. However, only 57.4% of the abstracts contained all the essential elements, namely objectives, methods, and results. Fine-tuned AI classification showed the highest accuracy, with background, objectives, and results classified relatively well, while methods and discussion were often classified inaccurately. These findings suggest the need for more effective use of AI, for example by providing a better distribution of elements or appropriate datasets for training. Educational implications of these findings are also discussed.
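
The sentence-level labeling task can be sketched with a zero-shot classifier over the five abstract elements. The study itself used fine-tuning and prompting, so this stands in only to show the task framing; the example sentences are invented.

    # Labeling abstract sentences with a zero-shot classifier (illustrative only)
    from transformers import pipeline

    labels = ["background", "objective", "method", "result", "discussion"]
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    abstract_sentences = [
        "We fabricated specimens and photographed their cross-sections.",
        "The model predicted the stacking angle with high accuracy.",
    ]

    for sentence in abstract_sentences:
        out = classifier(sentence, candidate_labels=labels)
        print(out["labels"][0], "<-", sentence)   # most likely element label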

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi; Hang-sup Lim; Jung-young Choi; Oh-jin Kwon; Dong-kyoo Shin
    • Journal of Internet Computing and Services / v.24 no.6 / pp.137-144 / 2023
  • Cyberattack techniques are evolving to an unpredictable degree, and an attack is something that can happen 'at any time' rather than 'someday'. Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment where cyberattacks can be more damaging than ever, and such attacks are ongoing. Even when damage occurs due to external influences such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology to ensure sustainable cyber resilience when cyber assets fail to function properly due to a cyberattack. The original state and update history of cyber assets are managed in real time using a timeslot design and snapshot backup technology. It is necessary to secure technology that automatically detects damage in conjunction with a commercial file integrity monitoring program and minimizes the downtime of cyber assets by intelligently analyzing the correlation between backup data and damaged files so as to self-recover to an optimal state. In the future, we plan to research a pilot system that applies the distinctive functions of this self-recovery technology and an operating model that can learn and analyze self-recovery strategies appropriate for cyber assets in damaged states.
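
A toy version of the self-recovery loop, hash-based integrity checking against a snapshot with automatic restoration of any file whose hash no longer matches, is sketched below. The paths and snapshot layout are hypothetical, and the study's timeslot design and intelligent correlation analysis are not reproduced here.

    # Hash-based integrity check and restore from a snapshot (hypothetical paths)
    import hashlib
    import shutil
    from pathlib import Path

    ASSET_DIR = Path("webroot")        # monitored cyber asset (assumed)
    SNAPSHOT_DIR = Path("snapshot")    # last-known-good copy (assumed)

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_and_recover():
        for good in SNAPSHOT_DIR.rglob("*"):
            if not good.is_file():
                continue
            live = ASSET_DIR / good.relative_to(SNAPSHOT_DIR)
            if not live.exists() or sha256(live) != sha256(good):
                live.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(good, live)           # restore from snapshot
                print(f"recovered: {live}")

    check_and_recover()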

Implementation of an Automated Agricultural Frost Observation System (AAFOS) (농업서리 자동관측 시스템(AAFOS)의 구현)

  • Kyu Rang Kim; Eunsu Jo; Myeong Su Ko; Jung Hyuk Kang; Yunjae Hwang; Yong Hee Lee
    • Korean Journal of Agricultural and Forest Meteorology / v.26 no.1 / pp.63-74 / 2024
  • In agriculture, frost can be devastating, which is why frost observation and forecasting are so important. According to a recent report analyzing frost observation data from the Korea Meteorological Administration, despite global warming driven by climate change, the date of the last spring frost has not come earlier and the frequency of frost has not decreased. It is therefore important to automate and continuously operate frost observation in risk areas to prevent agricultural frost damage. In existing frost observation using leaf wetness sensors, the reference voltage drifts over long periods due to contamination of the sensor or changes in the humidity of the surrounding environment. In this study, a datalogger program was implemented to correct these problems automatically. The established frost observation system can stably and automatically accumulate time-resolved observation data over long periods. These data can be used in the future to develop frost diagnosis models based on machine learning methods and to produce frost occurrence forecasts for surrounding areas.
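
The reference-drift problem can be illustrated with a rolling dry-baseline check: instead of comparing against a fixed threshold, each reading is compared against an adaptive baseline. The window length, voltage offset, and synthetic readings below are assumptions, not the actual datalogger logic.

    # Rolling-baseline wetness flag for a drifting leaf wetness sensor (illustrative)
    import numpy as np

    def wetness_flags(voltages, window=144, offset=0.05):
        """Flag samples that rise `offset` V above the rolling minimum
        (an adaptive dry reference) over the last `window` samples."""
        voltages = np.asarray(voltages, dtype=float)
        flags = np.zeros(len(voltages), dtype=bool)
        for i in range(len(voltages)):
            baseline = voltages[max(0, i - window + 1):i + 1].min()
            flags[i] = voltages[i] > baseline + offset
        return flags

    # Synthetic 10-minute samples: slow drift plus a dew event
    readings = 0.30 + np.linspace(0, 0.02, 200)
    readings[120:150] += 0.15
    print(wetness_flags(readings)[115:155])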

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock; Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.3 / pp.149-155 / 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems compare the daily maximum wind speed against the critical wind speed that causes fruit drop and provide the resulting weather risk information to farmers. In an effort to increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, daily wind speed and other weather data measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do, as well as Gyeongsangbuk-do and part of Gyeongsangnam-do, were used to train the neural network. These include 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). Wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the neural network model, and data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after applying the neural network. The critical wind speed for damage risk was set to 11 m/s, the speed reported to cause fruit drop and damage. Furthermore, maximum wind speeds were expressed with a Weibull probability density function for wind damage warnings. The accuracy of wind damage risk prediction improved from 65.36% to 93.62% after reclassification using the artificial neural network; nevertheless, the error rate also increased, from 13.46% to 37.64%. The machine learning approach used in the present study is therefore likely to benefit cases in which a missed prediction by the risk warning system is the relatively more serious issue.
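
The Weibull-based warning idea can be sketched as fitting a Weibull distribution to daily maximum wind speeds and reporting the probability of exceeding the 11 m/s damage threshold. The sample data and the fixed location parameter below are assumptions for illustration.

    # Probability of exceeding the critical wind speed under a fitted Weibull distribution
    import numpy as np
    from scipy import stats

    CRITICAL_WIND = 11.0  # m/s, fruit-drop threshold used in the study

    daily_max_wind = np.array([4.2, 6.8, 9.5, 7.1, 12.3, 5.5, 10.2, 8.8, 13.1, 6.0])

    # Two-parameter Weibull fit (location fixed at zero, an assumption)
    shape, loc, scale = stats.weibull_min.fit(daily_max_wind, floc=0)
    p_exceed = stats.weibull_min.sf(CRITICAL_WIND, shape, loc=loc, scale=scale)

    print(f"P(max wind > {CRITICAL_WIND} m/s) = {p_exceed:.2f}")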

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong; Jun, Seung-Pyo; Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be a relative weakness of existing link prediction methods based on the number of common neighbors; as a result, it is known to perform well at capturing both community structure and structural equivalence. The embedding vectors are produced from fixed-length walks starting at designated nodes, so node sequences can easily be fed as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features, this study applies Node2vec to international trade data for the Korean food and beverage industry, aiming to contribute to diversifying Korea's extensive margin within the industry's global value chain. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, outperforming the logistic-regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based model outperformed the link prediction model of previous studies used here as a benchmark: the benchmark recorded a recall of only 0.75, whereas the proposed model reached 0.79. This performance difference stems from the training strategy: trade relations were grouped by trade value, and prediction models were trained separately for each group. Specifically, (1) trades were randomly masked with no condition on trade value, (2) only trades at or above the average trade value were randomly masked, and (3) only trades in the top 25% by trade value were randomly masked. Experiments confirmed that the model trained by masking trades at or above the average trade value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived from this model appeared appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and offers useful implications for weight-update strategies that can yield better link prediction during model training.
This study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, a domain in which such research has rarely been performed. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach is sufficiently useful as a tool for policy decision-making.
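
A minimal link-prediction sketch in the spirit of this pipeline: Node2vec embeddings of a graph, edge features formed from node vectors, and a LightGBM classifier. The toy graph, negative sampling, and hyperparameters are illustrative and do not reflect the study's trade data or masking strategy.

    # Node2vec embeddings + LightGBM edge classifier on a toy graph
    import networkx as nx
    import numpy as np
    from node2vec import Node2Vec
    from lightgbm import LGBMClassifier

    G = nx.karate_club_graph()                 # stand-in for the trade network

    n2v = Node2Vec(G, dimensions=32, walk_length=10, num_walks=50)
    emb = n2v.fit(window=5, min_count=1)       # gensim Word2Vec model

    def edge_feature(u, v):
        # Hadamard product of the two node embeddings
        return emb.wv[str(u)] * emb.wv[str(v)]

    pos = list(G.edges())
    neg = list(nx.non_edges(G))[: len(pos)]    # naive negative sampling
    X = np.array([edge_feature(u, v) for u, v in pos + neg])
    y = np.array([1] * len(pos) + [0] * len(neg))

    clf = LGBMClassifier(n_estimators=200)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))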

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung; Kim, Mintae; Kim, Wooju; Shin, Dongwook; Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources in order to expand a knowledge base. The proposed methodology consists of the following steps: (1) collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for queries separated into "subject-predicate" form and select the suitable documents; (2) determine whether each sentence is suitable for information extraction and derive a confidence score; (3) based on the predicate feature, extract information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; the proposed system achieved higher performance indices than the baseline model. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query, with which we built a robust model that maintains high recall across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types. Compared to the baseline model, the proposed methodology extracted information effectively from various types of unstructured documents, whereas previous research performs poorly when the document type differs from the training data. In addition, by predicting the extraction suitability of documents and sentences before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, and it is meaningful that precision can be maintained even in a real web environment. Because it targets unstructured documents on the real web, information extraction for knowledge base expansion cannot guarantee that a document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents without one. The policy of predicting document- and sentence-level extraction suitability is meaningful in that it helps maintain extraction performance in this environment. The limitations of this study and directions for future research are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis with the open-source KoNLPy Python package, and extraction results can suffer when this analysis is inaccurate; a more advanced morphological analyzer is needed to improve performance. Second, entity ambiguity: the information extraction system of this study cannot distinguish between different entities that share the same name.
If several people with the same name appear in the news, the system may not extract information about the intended entity; future research needs measures to disambiguate people with the same name. Third, evaluation query data: in this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and built an evaluation data set of 800 documents (400 questions with 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, although this is a costly manual activity; future research should do so. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, so that results can be evaluated more objectively.
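
A skeleton of a bidirectional LSTM-CRF tagger such as the one described above, using the pytorch-crf package for the CRF layer; the vocabulary, tag set, and sizes are placeholders, and the study's predicate-feature conditioning is not included.

    # BiLSTM-CRF sequence tagger skeleton (sizes and tag set are placeholders)
    import torch
    import torch.nn as nn
    from torchcrf import CRF   # pip install pytorch-crf

    class BiLSTMCRF(nn.Module):
        def __init__(self, vocab_size, num_tags, emb=100, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.lstm = nn.LSTM(emb, hidden // 2, bidirectional=True, batch_first=True)
            self.proj = nn.Linear(hidden, num_tags)
            self.crf = CRF(num_tags, batch_first=True)

        def loss(self, tokens, tags):
            emissions = self.proj(self.lstm(self.embed(tokens))[0])
            return -self.crf(emissions, tags)      # negative log-likelihood

        def predict(self, tokens):
            emissions = self.proj(self.lstm(self.embed(tokens))[0])
            return self.crf.decode(emissions)      # best tag path per sentence

    # Toy batch: 2 sentences of length 5, BIO-style answer-span tags (O / B-ANS / I-ANS)
    model = BiLSTMCRF(vocab_size=1000, num_tags=3)
    tokens = torch.randint(0, 1000, (2, 5))
    tags = torch.randint(0, 3, (2, 5))
    print(model.loss(tokens, tags))
    print(model.predict(tokens))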