• Title/Summary/Keyword: artificial intelligence-based model

Research on the Utilization of Recurrent Neural Networks for Automatic Generation of Korean Definitional Sentences of Technical Terms (기술 용어에 대한 한국어 정의 문장 자동 생성을 위한 순환 신경망 모델 활용 연구)

  • Choi, Garam;Kim, Han-Gook;Kim, Kwang-Hoon;Kim, You-eil;Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science / v.51 no.4 / pp.99-120 / 2017
  • To support the development of a semiautomatic system that allows researchers to efficiently analyze technical trends in ever-growing industries and markets, this paper introduces a pair of Korean sentence generation models that can automatically generate definitional statements and descriptions of technical terms and concepts. The proposed models are based on LSTM (Long Short-Term Memory), a deep learning model capable of effectively labeling textual sequences by taking into account the contextual relations of each item in the sequence. Our models take technical terms as inputs and can generate a broad range of heterogeneous textual descriptions that explain the concept of the terms. In experiments using large-scale training collections, we confirmed that more accurate and reasonable sentences are generated by the CHAR-CNN-LSTM model, a word-based LSTM that exploits character embeddings built with convolutional neural networks (CNNs). The results of this study can serve as a basis for an extension model that generates a set of sentences covering the same subject and, furthermore, for an artificial intelligence model that automatically creates technical literature.
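
To illustrate the architecture named above, here is a minimal PyTorch sketch of a word-level LSTM generator whose word vectors come from a CNN over character embeddings; all layer sizes, names, and the vocabulary size are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class CharCNNLSTM(nn.Module):
    """Word-level LSTM generator whose word vectors come from a character CNN."""
    def __init__(self, n_chars=128, char_dim=16, n_filters=64,
                 kernel=3, hidden=256, vocab=20000):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Convolve over the characters of each word
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=kernel, padding=1)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)  # next-word distribution

    def forward(self, char_ids):
        # char_ids: (batch, seq_len, max_word_len) of character indices
        b, s, w = char_ids.shape
        x = self.char_emb(char_ids.view(b * s, w))   # (b*s, w, char_dim)
        x = self.conv(x.transpose(1, 2))             # (b*s, n_filters, w)
        x = torch.max(torch.relu(x), dim=2).values   # max-over-time pooling
        x = x.view(b, s, -1)                         # word representations
        h, _ = self.lstm(x)
        return self.out(h)                           # (b, s, vocab) logits
```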

Development and Validation of AI Image Segmentation Model for CT Image-Based Sarcopenia Diagnosis (CT 영상 기반 근감소증 진단을 위한 AI 영상분할 모델 개발 및 검증)

  • Lee Chung-Sub;Lim Dong-Wook;Noh Si-Hyeong;Kim Tae-Hoon;Ko Yousun;Kim Kyung Won;Jeong Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.12 no.3 / pp.119-126 / 2023
  • As of 2021, sarcopenia had not yet been classified as a disease in Korea, but it is recognized as a social problem in developed countries that have entered an aging society. The diagnosis of sarcopenia follows the international standard guidelines presented by the European Working Group on Sarcopenia in Older People (EWGSOP) and the Asian Working Group for Sarcopenia (AWGS). Recently, it has been recommended to evaluate muscle function using physical performance evaluation, walking speed measurement, and standing tests in addition to absolute muscle mass. As a representative method for measuring muscle mass, body composition analysis using DEXA has been formally adopted in clinical practice, and various studies measuring muscle mass from abdominal MRI or CT images are being actively conducted. In this paper, we develop an AI image segmentation model based on abdominal CT images, which have a relatively short imaging time, for the diagnosis of sarcopenia, and describe its multicenter validation. We developed an artificial intelligence model using U-Net that selects the L3 region from the CT image and automatically segments muscle, subcutaneous fat, and visceral fat. To evaluate the model, internal verification was performed by calculating the intersection over union (IoU) of the segmented regions, and the results of external verification using data from other hospitals are also presented. Based on the verification results, we review the remaining problems and discuss how to address them.
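
For reference, the IoU used for internal verification can be computed per tissue class from binary masks; a minimal NumPy sketch (the class indices are assumptions):

```python
import numpy as np

def iou(pred: np.ndarray, true: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, true).sum() / union

# Per-class verification on an L3 slice; the class ids for muscle,
# subcutaneous fat, and visceral fat below are assumptions.
# scores = {c: iou(prediction == c, label == c) for c in (1, 2, 3)}
```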

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff (강수-일유출량 추정 LSTM 모형의 구축을 위한 자료 수집 방안)

  • Kim, Dongkyun;Kang, Seokkoo
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.795-805 / 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, the accuracy of the model was investigated for various combinations of model structure and input data. The model was built on a database consisting of average daily precipitation, average daily temperature, and average daily wind speed (inputs) and daily average flow rate (output) over the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the subsequent 12 years (2009.1.1-2020.12.31). The combination that showed the highest accuracy used all available input data (12 years of daily precipitation, temperature, and wind speed) on an LSTM with 64 hidden units; the NSE and RMSE over the verification period were 0.862 and 76.8 m³/s, respectively. When the number of hidden units exceeds 500, performance degradation due to overfitting begins to appear, and beyond 1,000 hidden units the overfitting problem becomes prominent. A model with very high performance (NSE = 0.80-0.84) could be obtained when only 12 years of daily precipitation were used for training, and a model with reasonably high performance (NSE = 0.63-0.85) was obtained even when only one year of input data was used. In particular, an accurate model (NSE = 0.85) could be obtained if that one year of training data contained a wide range of flow magnitudes, including extreme floods and droughts as well as normal events. Once the training data included both normal and extreme flows, using more than five years of training data did not significantly improve model performance.
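
For reference, the two validation metrics used above can be computed directly; a minimal NumPy sketch:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than the mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

def rmse(observed, simulated):
    """Root mean squared error in the units of the flow series (m³/s here)."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))
```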

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction fall into two groups: model-design studies for predicting cardiovascular disease and studies comparing disease-prediction results. In the foreign literature, studies predicting cardiovascular disease were predominant; domestic studies were broadly similar but focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of comparable importance to hypertension and diabetes, this study selected hyperlipidemia as the disease to analyze and developed a prediction model using SVM and meta-learning algorithms, which are known for their excellent predictive power. To this end, we used the 2012 data set of the Korea Health Panel, which has conducted an annual survey since 2008 and produces basic data on health expenditure, health level, and health behavior. From the hospitalization, outpatient, emergency, and chronic disease data of the 2012 panel, 1,088 patients with hyperlipidemia and 1,088 non-patients were randomly selected, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: the input variables selected for the SVM were six (age, marital status, education level, economic activity, smoking period, and physical activity status), and those selected for the artificial neural network were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, using TP rate and precision as classification performance measures. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, classification models using the input variables selected through the stepwise method were slightly more accurate than models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. With the input variables selected through the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study found that stacking, the meta-learning algorithm proposed here, performs best when the predicted outputs of the SVM and MLP are used as input variables of an SVM meta-classifier. The accuracy of hyperlipidemia classification with stacking as the meta-learner was higher than with the other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm matched that of the best single model, the SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used in the study were categorical dummy variables; with many categorical variables, the results might differ if continuous variables were used, because models such as decision trees can be better suited to categorical variables than models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. In addition, the proposed model is expected to be effective for the prevention and management of hyperlipidemia.
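
The best configuration reported above (SVM and MLP outputs fed into an SVM meta-classifier) maps naturally onto scikit-learn's StackingClassifier; a minimal sketch, with all hyperparameters assumed and X_train/y_train standing in for the Korea Health Panel features:

```python
from sklearn.ensemble import StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Base learners: an SVM and an MLP, as in the study (hyperparameters assumed).
base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
]

# Stacking: the base learners' predicted outputs become the input
# variables of an SVM meta-classifier.
stack = StackingClassifier(estimators=base_learners, final_estimator=SVC())
# stack.fit(X_train, y_train)
# stack.score(X_test, y_test)
```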

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences / v.15 no.4 / pp.71-80 / 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology, in particular a deep learning-based identification model using the convolutional neural network (CNN) algorithm for Korean speakers who stutter. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and labels of 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. In the case of prolongation, however, only five instances were found, so this class was excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, with F1-scores of 1.00 for 'fluent', 0.67 for 'blockage', and 0.74 for 'repetition'. Although the automatic classifier's effectiveness in detecting stuttered disfluencies was validated using CNNs, its performance was inadequate, especially for the blockage and prolongation types. Consequently, building a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
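
A minimal sketch of the MFCC-plus-CNN pipeline described above, using librosa for features and Keras for the classifier; the three retained classes follow the abstract, while all shapes, sizes, and hyperparameters are assumptions:

```python
import librosa
import numpy as np
from tensorflow import keras

def mfcc_features(path, sr=16000, n_mfcc=40, frames=128):
    """Load a phrase-level clip and return a fixed-size MFCC matrix."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)  # pad/trim time axis
    return m[..., np.newaxis]                            # (n_mfcc, frames, 1)

model = keras.Sequential([
    keras.layers.Input((40, 128, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(3, activation="softmax"),  # fluent / blockage / repetition
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```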

Comparative Analysis of CNN Deep Learning Model Performance Based on Quantization Application for High-Speed Marine Object Classification (고속 해상 객체 분류를 위한 양자화 적용 기반 CNN 딥러닝 모델 성능 비교 분석)

  • Lee, Seong-Ju;Lee, Hyo-Chan;Song, Hyun-Hak;Jeon, Ho-Seok;Im, Tae-ho
    • Journal of Internet Computing and Services / v.22 no.2 / pp.59-68 / 2021
  • As artificial intelligence (AI) technologies, which have grown rapidly in recent years, began to be applied to marine environments such as ships, research on CNN-based models specialized for digital video has become active. In e-Navigation services, which combine various technologies to detect floating objects posing collision risks, reduce human error, and prevent fires inside ships, real-time processing is of huge importance. Adding more functions, however, demands higher-performance processors, which raises prices and imposes a cost burden on shipowners. This study therefore proposes a method capable of processing information at a high rate while maintaining accuracy by applying quantization techniques to a deep learning model. First, videos were pre-processed for the detection of floating objects at sea to ensure the efficient delivery of video data to the deep learning model's input. Second, quantization, one of the lightweighting techniques for deep learning models, was applied to reduce memory usage and increase processing speed. Finally, the proposed deep learning model, with video pre-processing and quantization applied, was deployed on various embedded boards to measure its accuracy and processing speed and test its performance. The proposed method reduced memory usage by a factor of four and improved processing speed roughly four to five times while maintaining the original recognition accuracy.
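
A minimal sketch of one common way to realize the quantization step described above, post-training quantization with TensorFlow Lite; the placeholder model and file name are assumptions, not the authors' network:

```python
import tensorflow as tf

# Placeholder classifier standing in for the trained marine-object CNN.
model = tf.keras.Sequential([
    tf.keras.layers.Input((224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Post-training quantization: TFLite stores weights in lower precision,
# cutting memory use and speeding up inference on embedded boards.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("marine_object_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```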

A Study on the University Education Plan Using ChatGPT for University Students (ChatGPT를 활용한 대학 교육 방안 연구)

  • Hyun-ju Kim;Jinyoung Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.71-79 / 2024
  • ChatGPT, an interactive artificial intelligence (AI) chatbot developed by OpenAI in the U.S., has gained popularity with great repercussions around the world. Some academics are concerned that ChatGPT can be used by students for plagiarism, but ChatGPT is also widely used in positive ways, such as writing marketing or website copy. There is also an opinion that ChatGPT could be a new future for "search," and some analysts say that the focus should be on fostering it rather than on excessive regulation. This study analyzed college students' awareness of ChatGPT through a survey of their perceptions, and prepared an education support model that combines ChatGPT with a plagiarism inspection system. Based on this, a university education support model using ChatGPT was constructed. The model is organized around text, digital, and art domains, with the detailed strategies required for the era of the Fourth Industrial Revolution arranged beneath each. In addition, the model guides students to use ChatGPT within a permitted range: the instructor determines the allowable scope of ChatGPT-generated content according to the learning goals, and the ChatGPT detection function provided by the plagiarism inspection system enforces it. By linking ChatGPT with a plagiarism inspection system in this way, the model is expected to prevent situations in which ChatGPT's considerable capabilities are abused in education.

Data Mining using Instance Selection in Artificial Neural Networks for Bankruptcy Prediction (기업부도예측을 위한 인공신경망 모형에서의 사례선택기법에 의한 데이터 마이닝)

  • Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.10 no.1 / pp.109-123 / 2004
  • Corporate financial distress and bankruptcy prediction is one of the major application areas of artificial neural networks (ANNs) in finance and management. ANNs have shown high prediction performance in this area, but are sometimes confronted with inconsistent and unpredictable performance on noisy data. In addition, when the amount of data is very large, training an ANN may be impossible, or cannot be carried out effectively, without data reduction, since training on a large data set requires substantial processing time and additional data collection costs. Instance selection is a popular method for dimensionality reduction and is directly related to data reduction. Although some researchers have addressed the need for instance selection in instance-based learning algorithms, there is little research on instance selection for ANNs. This study proposes a genetic algorithm (GA) approach to instance selection in ANNs for bankruptcy prediction: the GA both optimizes the connection weights between layers and selects relevant instances. The globally evolved weights are expected to mitigate the well-known limitations of the gradient descent used in backpropagation, while genetically selected instances shorten learning time and enhance prediction performance. The proposed model is compared with other major data mining techniques, and experimental results show that the GA approach is a promising method for instance selection in ANNs.
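
A minimal sketch of GA-driven instance selection: a binary chromosome marks which training instances are kept, and fitness is the validation accuracy of a small network trained on that subset. This illustrates the selection idea only; the study's GA also evolves connection weights, which is omitted here, and every parameter below is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_val, y_val):
    """Validation accuracy of an MLP trained only on the selected instances."""
    if mask.sum() < 10:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    clf.fit(X_tr[mask.astype(bool)], y_tr[mask.astype(bool)])
    return clf.score(X_val, y_val)

def ga_select(X_tr, y_tr, X_val, y_val, pop=20, gens=30, p_mut=0.01):
    n = len(X_tr)
    population = rng.integers(0, 2, size=(pop, n))  # one bit per instance
    for _ in range(gens):
        scores = np.array([fitness(m, X_tr, y_tr, X_val, y_val)
                           for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep top half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut             # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        population = np.vstack([parents, children])
    best = max(population, key=lambda m: fitness(m, X_tr, y_tr, X_val, y_val))
    return best.astype(bool)                         # mask of kept instances
```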

Estimation of Fractional Urban Tree Canopy Cover through Machine Learning Using Optical Satellite Images (기계학습을 이용한 광학 위성 영상 기반의 도시 내 수목 피복률 추정)

  • Sejeong Bae;Bokyung Son;Taejun Sung;Yeonsu Lee;Jungho Im;Yoojin Kang
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.1009-1029 / 2023
  • Urban trees play a vital role in urban ecosystems, significantly reducing impervious surfaces and affecting carbon cycling within the city. Although previous research has demonstrated the efficacy of combining artificial intelligence with airborne light detection and ranging (LiDAR) data to generate urban tree information, the availability and cost constraints of LiDAR data pose limitations. Consequently, this study employed freely accessible, high-resolution multispectral satellite imagery (i.e., Sentinel-2 data) to estimate fractional tree canopy cover (FTC) within the urban confines of Suwon, South Korea, using machine learning techniques. The study leveraged a median composite image derived from a time series of Sentinel-2 images. To account for the diverse land cover found in urban areas, the model incorporated three types of input variables within each 30 m grid cell: the mean and standard deviation (std) of 10 m resolution optical indices from Sentinel-2, and the fractional coverage of distinct land cover classes from the existing level-3 land cover map. Four schemes with different combinations of input variables were compared. Notably, when all three factors (mean, std, and fractional cover) were used to capture the land cover variation of urban areas (Scheme 4, S4), the machine learning model performed better than when only the mean of the optical indices was used (Scheme 1). Of the models tested, the random forest (RF) model with S4 performed best, achieving an R² of 0.8196, a mean absolute error (MAE) of 0.0749, and a root mean squared error (RMSE) of 0.1022. Based on the variable importance analysis, the std variables had the greatest impact on model outputs in heterogeneous land covers. The trained RF model with S4 was then applied to the entire Suwon region, consistently delivering robust results with an R² of 0.8702, an MAE of 0.0873, and an RMSE of 0.1335. The FTC estimation method developed in this study is expected to transfer well to other regions, providing fundamental data for a better understanding of carbon dynamics in urban ecosystems.
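
A minimal scikit-learn sketch of the Scheme 4 setup, a random forest regressor fed grid-level mean/std and fractional-cover features; the placeholder arrays stand in for the Sentinel-2-derived inputs, and all hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Placeholder design matrix: per 30 m cell, mean/std of optical indices
# plus land cover fractions; the target is fractional tree canopy in [0, 1].
X = rng.random((1000, 8))
y = rng.random(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("R2:  ", r2_score(y_te, pred))
print("MAE: ", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)

# Variable importances, used in the study to find that std features
# matter most in heterogeneous land covers.
print(rf.feature_importances_)
```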

User Experience Analysis and Management Based on Text Mining: A Smart Speaker Case (텍스트 마이닝 기반 사용자 경험 분석 및 관리: 스마트 스피커 사례)

  • Dine Yeon;Gayeon Park;Hee-Woong Kim
    • Information Systems Review / v.22 no.2 / pp.77-99 / 2020
  • A smart speaker is a device providing an interactive voice-based service that can search and use various information and contents, such as music, calendars, weather, and merchandise, using artificial intelligence. Since AI technology provides more sophisticated and better-optimized services as data accumulate, early smart speaker manufacturers tried to build platforms through aggressive marketing. However, more than one third of owners use their smart speaker less than once a month, and user satisfaction is only 49%. Strengthening the smart speaker user experience has therefore become necessary for acquiring a large user base and enabling continuous use. Accordingly, this study analyzes the user experience of smart speakers and, based on a two-stage analysis, proposes ways to enhance the user experience for each model. Whereas existing research on smart speaker user experience relied mainly on surveys and interviews, this study collected actual review data written by users and interpreted the text mining results through a set of smart speaker user experience dimensions developed for this purpose, which is of academic significance. Based on these results, we suggest strategies that smart speaker manufacturers can use to enhance the user experience.
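
As a flavor of review-based text mining, here is a minimal scikit-learn sketch using count vectors and LDA topic modeling; the study's own two-stage pipeline and user experience dimensions are not reproduced, and the reviews and topic count below are placeholders:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "music playback stops when asking for the weather",       # placeholder reviews
    "voice recognition works well for calendar and alarms",
    "speaker often mishears names when ordering merchandise",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Top words per topic: the raw material for mapping topics onto UX dimensions.
terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = terms[comp.argsort()[-5:][::-1]]
    print(f"topic {i}:", ", ".join(top))
```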