• Title/Summary/Keyword: deep machine learning


Ensemble Learning-Based Prediction of Good Sellers in Overseas Sales of Domestic Books and Keyword Analysis of Reviews of the Good Sellers (앙상블 학습 기반 국내 도서의 해외 판매 굿셀러 예측 및 굿셀러 리뷰 키워드 분석)

  • Do Young Kim;Na Yeon Kim;Hyon Hee Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.4
    • /
    • pp.173-178
    • /
    • 2023
  • As Korean literature spreads around the world, its position in the overseas publishing market has become important. As demand in the overseas publishing market continues to grow, it is essential to predict future book sales and to analyze the characteristics of books that have been highly favored by overseas readers in the past. In this study, we propose an ensemble learning-based prediction model and analyze the characteristics of books published overseas over the past five years whose cumulative sales exceeded 5,000 copies, which we classify as good sellers. We applied five ensemble learning models, i.e., XGBoost, Gradient Boosting, AdaBoost, LightGBM, and Random Forest, and compared them with other machine learning algorithms, i.e., Support Vector Machine, Logistic Regression, and a deep learning model. Our experimental results showed that the ensemble algorithms outperform the other approaches in handling imbalanced data. In particular, the LightGBM model achieved the best prediction performance, with an AUC of 99.86%. Among the features used for prediction, the most important is the number of the author's overseas publications, and the second most important is publication in countries with the largest publishing markets. The number of evaluation participants is also an important feature. In addition, text mining was performed on the reviews of the four best-selling books among the good sellers. Many reviews focused on the story, characters, and writer, and translation support appears to be needed, since the keyword "translation" frequently appears in low-rated reviews.
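
The comparison described above can be reproduced in outline with scikit-learn. The following is a minimal, hypothetical sketch, not the authors' code: it uses scikit-learn ensembles (Gradient Boosting, AdaBoost, Random Forest) plus baseline classifiers and synthetic features in place of the study's book-sales data; the xgboost and lightgbm packages would be used in the same fit/score pattern.

```python
# Sketch: compare ensemble and baseline classifiers by AUC on an imbalanced task.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the book-sales features (the study's real features
# include the author's number of overseas publications and market size).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    score = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, score):.4f}")
```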

Tongue Image Segmentation Using CNN and Various Image Augmentation Techniques (콘볼루션 신경망(CNN)과 다양한 이미지 증강기법을 이용한 혀 영역 분할)

  • Ahn, Ilkoo;Bae, Kwang-Ho;Lee, Siwoo
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.5
    • /
    • pp.201-210
    • /
    • 2021
  • In Korean medicine, tongue diagnosis is one of the important methods for diagnosing abnormalities in the body. Representative features used in tongue diagnosis include color, shape, texture, cracks, and tooth marks. When diagnosing a patient through these features, the diagnostic criteria may differ between Korean medicine doctors, and even the same doctor may reach different conclusions depending on the time and work environment. To overcome this problem, studies to automate and standardize tongue diagnosis using machine learning are ongoing, and the basic step of such a machine learning-based tongue diagnosis system is tongue segmentation. In this paper, image data are augmented based on the main tongue features, and the backbones of several well-known deep learning architectures are used for automatic tongue segmentation. The experimental results show that the proposed augmentation technique improves segmentation accuracy, and that automatic tongue segmentation can be performed with a high accuracy of 99.12%.
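
A minimal sketch of paired augmentation for segmentation, assuming simple geometric and photometric transforms rather than the paper's specific augmentation set: the same spatial transform must be applied to the image and its mask, while intensity changes touch only the image.

```python
# Sketch: paired image/mask augmentation for a segmentation task.
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Return an augmented (image, mask) pair; image is HxWx3, mask is HxW."""
    if rng.random() < 0.5:                       # horizontal flip (both)
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))                  # random 90-degree rotation (both)
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                       # brightness jitter (image only)
        image = np.clip(image * rng.uniform(0.8, 1.2), 0, 255)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3)).astype(np.float32)
msk = (rng.random((128, 128)) > 0.5).astype(np.uint8)
aug_img, aug_msk = augment_pair(img, msk, rng)
```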

Anomaly Data Detection Using Machine Learning in Crowdsensing System (크라우드센싱 시스템에서 머신러닝을 이용한 이상데이터 탐지)

  • Kim, Mihui;Lee, Gihun
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.475-485
    • /
    • 2020
  • Recently, crowdsensing systems, which provide new sensing services from real-time data collected by users' sensor-equipped devices without installing separate sensors, have attracted attention. In a crowdsensing system, meaningless data may be submitted because of user operation errors or communication problems, and false data may be submitted to obtain compensation. Therefore, detecting and removing such abnormal data determines the quality of the crowdsensing service. Previously proposed anomaly detection methods are not efficient in the rapidly changing crowdsensing environment. This paper proposes an anomaly detection method that uses machine learning to extract the characteristics of the continuously and rapidly changing sensing data and models them with appropriate algorithms. We show the performance and feasibility of the proposed system using a supervised deep learning binary classification model and an unsupervised autoencoder model.
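
A minimal sketch of the unsupervised branch, assuming a generic reconstruction-error detector rather than the authors' exact architecture: an autoencoder-style network is trained only on normal sensing records, and samples whose reconstruction error exceeds a threshold are flagged as anomalies.

```python
# Sketch: autoencoder-style anomaly detection via reconstruction error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 8))        # stand-in sensing features
anomalous = rng.normal(4.0, 1.0, size=(20, 8))

scaler = StandardScaler().fit(normal)
X_train = scaler.transform(normal)

# The bottleneck forces the network to learn the structure of normal data only.
autoencoder = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
autoencoder.fit(X_train, X_train)

def reconstruction_error(X):
    return np.mean((autoencoder.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(X_train), 99)
errors = reconstruction_error(scaler.transform(anomalous))
print("flagged anomalies:", int(np.sum(errors > threshold)), "of", len(anomalous))
```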

Collaborative Filtering based Recommender System using Restricted Boltzmann Machines

  • Lee, Soojung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.9
    • /
    • pp.101-108
    • /
    • 2020
  • A recommender system is a must-have feature of e-commerce, since it helps customers select products conveniently. Collaborative filtering is a widely used and representative technique that recommends products preferred by similar users or by the current user in the past. Recently, research on recommender systems using deep learning has been actively conducted to improve performance. This study develops a collaborative filtering-based recommender system that applies restricted Boltzmann machines, a deep learning technique, to user ratings. Moreover, a learning parameter update algorithm is proposed to improve learning efficiency and performance. The proposed system is evaluated through experimental analysis and comparison with conventional collaborative filtering methods, and the proposed algorithm is found to outperform the basic restricted Boltzmann machine.
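
A minimal sketch of rating-based collaborative filtering with a restricted Boltzmann machine, using scikit-learn's BernoulliRBM on a binarized user-item matrix; the data, hyperparameters, and ranking rule are assumptions for illustration, not the paper's algorithm or its parameter update scheme.

```python
# Sketch: RBM-based collaborative filtering on a binarized rating matrix.
import numpy as np
from scipy.special import expit
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(50, 30))      # 50 users x 30 items, 0 = unrated
liked = (ratings >= 4).astype(float)             # binarize: 1 = liked

rbm = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(liked)

# Reconstruct preference probabilities for one user and rank unrated items.
user = liked[0:1]
hidden = rbm.transform(user)                                       # p(h=1 | v)
recon = expit(hidden @ rbm.components_ + rbm.intercept_visible_)   # p(v=1 | h)
unrated = np.where(ratings[0] == 0)[0]
recommended = unrated[np.argsort(-recon[0, unrated])][:5]
print("top-5 recommendations for user 0:", recommended)
```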

Deep Learning Model for Electric Power Demand Prediction Using Special Day Separation and Prediction Elements Extention (특수일 분리와 예측요소 확장을 이용한 전력수요 예측 딥 러닝 모델)

  • Park, Jun-Ho;Shin, Dong-Ha;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.4
    • /
    • pp.365-370
    • /
    • 2017
  • This study analyzes the correlation between weekday data and special-day data, which show different power demand patterns, builds separate data sets, and suggests ways to reduce power demand prediction error by using a deep learning network suited to each data set. In addition, we propose improving prediction accuracy by adding environmental and separation elements to the meteorological elements that are the basic inputs for power demand prediction. Power demand for the entire data set was predicted with an LSTM, which is suitable for learning time-series data, and demand for the special-day data set was predicted with a DNN. The experimental results show that prediction accuracy improves when prediction elements other than meteorological ones are added. On the entire data set, the average RMSE was 0.2597 for the LSTM and 0.5474 for the DNN, indicating that the LSTM predicted better. On the special-day data set, the DNN achieved an average RMSE of 0.2201, better than the LSTM. The MAPE of the LSTM on the entire data set was 2.74%, and the MAPE on the special-day data set was 3.07%.
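
A minimal sketch of the time-series branch, with a hypothetical window size and layer width rather than the paper's configuration: an LSTM predicts the next demand value from a sliding window of past demand, and RMSE is computed as in the study's evaluation.

```python
# Sketch: LSTM next-step forecasting on a synthetic demand series.
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

t = np.arange(2000, dtype=np.float32)
demand = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).normal(size=t.size)
X, y = make_windows(demand)

lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=5, batch_size=64, verbose=0)

rmse = float(np.sqrt(np.mean((lstm.predict(X, verbose=0).ravel() - y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```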

A layered-wise data augmenting algorithm for small sampling data (적은 양의 데이터에 적용 가능한 계층별 데이터 증강 알고리즘)

  • Cho, Hee-chan;Moon, Jong-sub
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.65-72
    • /
    • 2019
  • Data augmentation is a method that increases the amount of data by applying various algorithms to a small sample of data. When machine learning and deep learning techniques are used to solve real-world problems, data sets are often lacking. A lack of data increases the risk of underfitting and overfitting and means that the characteristics of the data set are poorly reflected when a model is learned. Thus, this paper proposes a layer-wise data augmentation method that produces substantially meaningful augmented data at each layer of a deep neural network, and shows experimentally that the method is effective for model learning by measuring whether it improves classification accuracy.
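
A generic illustration of augmenting in a hidden layer's feature space (an assumption-based stand-in, not the authors' layer-wise algorithm): first-layer activations of a small network are jittered to create extra training examples for a classifier head.

```python
# Sketch: feature-space augmentation using a hidden layer of a small network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=0)  # small data
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

# First-layer activations (ReLU) computed from the fitted weights.
H = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Augment in feature space: jitter each activation vector with small Gaussian noise.
rng = np.random.default_rng(0)
H_aug = np.vstack([H, H + rng.normal(0, 0.1, H.shape)])
y_aug = np.concatenate([y, y])

head = LogisticRegression(max_iter=1000).fit(H_aug, y_aug)
print("head training accuracy:", head.score(H, y))
```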

A pilot study of an automated personal identification process: Applying machine learning to panoramic radiographs

  • Ortiz, Adrielly Garcia;Soares, Gustavo Hermes;da Rosa, Gabriela Cauduro;Biazevic, Maria Gabriela Haye;Michel-Crosato, Edgard
    • Imaging Science in Dentistry
    • /
    • v.51 no.2
    • /
    • pp.187-193
    • /
    • 2021
  • Purpose: This study aimed to assess the usefulness of machine learning and automation techniques to match pairs of panoramic radiographs for personal identification. Materials and Methods: Two hundred panoramic radiographs from 100 patients (50 males and 50 females) were randomly selected from a private radiological service database. Initially, 14 linear and angular measurements of the radiographs were made by an expert. Eight ratio indices derived from the original measurements were applied to a statistical algorithm to match radiographs from the same patients, simulating a semi-automated personal identification process. Subsequently, measurements were automatically generated using a deep neural network for image recognition, simulating a fully automated personal identification process. Results: Approximately 85% of the radiographs were correctly matched by the automated personal identification process. In a limited number of cases, the image recognition algorithm identified 2 potential matches for the same individual. No statistically significant differences were found between measurements performed by the expert on panoramic radiographs from the same patients. Conclusion: Personal identification might be performed with the aid of image recognition algorithms and machine learning techniques. This approach will likely facilitate the complex task of personal identification by performing an initial screening of radiographs and matching ante-mortem and post-mortem images from the same individuals.

Comparative Analysis of Machine Learning Techniques for IoT Anomaly Detection Using the NSL-KDD Dataset

  • Good, Zaryn;Farag, Waleed;Wu, Xin-Wen;Ezekiel, Soundararajan;Balega, Maria;May, Franklin;Deak, Alicia
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.1
    • /
    • pp.46-52
    • /
    • 2023
  • With billions of IoT (Internet of Things) devices populating various emerging applications across the world, detecting anomalies on these devices has become incredibly important. Advanced Intrusion Detection Systems (IDS) are trained to detect abnormal network traffic, and Machine Learning (ML) algorithms are used to create detection models. In this paper, the NSL-KDD dataset was adopted to comparatively study the performance and efficiency of IoT anomaly detection models. The dataset was developed for various research purposes and is especially useful for anomaly detection. These data were used with typical machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Deep Convolutional Neural Networks (DCNN), to identify and classify anomalies present within the IoT applications. Our results show that the XGBoost algorithm outperformed both the SVM and DCNN algorithms, achieving the highest accuracy. Each algorithm was assessed based on accuracy, precision, recall, and F1 score. Furthermore, we obtained interesting results on the execution time taken by each algorithm when running anomaly detection. Specifically, the XGBoost algorithm was 425.53% faster than the SVM algorithm and 2,075.49% faster than the DCNN algorithm. According to our experimental testing, XGBoost is the most accurate and efficient method.
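
A minimal sketch of the kind of accuracy/timing comparison reported above, using scikit-learn stand-ins (GradientBoosting for XGBoost, an MLP for the DCNN) and synthetic data in place of NSL-KDD so the example stays self-contained.

```python
# Sketch: score and time several classifiers on a synthetic intrusion-like task.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=40, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("GradientBoosting", GradientBoostingClassifier(random_state=0)),
                  ("SVM", SVC(random_state=0)),
                  ("MLP", MLPClassifier(max_iter=500, random_state=0))]:
    start = time.perf_counter()
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    elapsed = time.perf_counter() - start
    print(f"{name:>16}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} time={elapsed:.2f}s")
```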

A Study on the Applicability of Machine Learning Algorithms for Detecting Hydraulic Outliers in a Borehole (시추공 수리 이상점 탐지를 위한 기계학습 알고리즘의 적용성 연구)

  • Seungbeom Choi; Kyung-Woo Park;Changsoo Lee
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.561-573
    • /
    • 2023
  • The Korea Atomic Energy Research Institute (KAERI) constructed the KURT (KAERI Underground Research Tunnel) to analyze the hydrogeological and geochemical characteristics of deep rock mass, and numerous boreholes have been drilled there to conduct various field tests. The selection of suitable investigation intervals within a borehole is of great importance: when the objectives center on hydraulic flow and groundwater sampling, intervals with sufficient groundwater flow are the most suitable. This study defines such points as hydraulic outliers and aims to detect them using borehole geophysical logging data (temperature and EC) from a 1 km-deep borehole. For systematic and efficient outlier detection, machine learning algorithms such as DBSCAN, OCSVM, kNN, and isolation forest were applied and their applicability was assessed. After data preprocessing and algorithm optimization, the four algorithms detected 55, 12, 52, and 68 outliers, respectively. Although this study confirms the applicability of the machine learning algorithms, further verification and supplementation are desirable, since the input data were relatively limited.
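
A minimal sketch of applying the four named algorithms to scaled temperature/EC logging data; the synthetic logging curves, thresholds, and hyperparameters are assumptions, not KAERI's workflow.

```python
# Sketch: hydraulic-outlier detection on temperature/EC logs with four detectors.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
depth = np.linspace(0, 1000, 2000)
temp = 15 + 0.025 * depth + rng.normal(0, 0.05, depth.size)    # stand-in logging data
ec = 200 + 0.4 * depth + rng.normal(0, 2.0, depth.size)
temp[500:505] -= 1.5                                            # injected anomaly
X = StandardScaler().fit_transform(np.column_stack([temp, ec]))

dbscan_out = DBSCAN(eps=0.3, min_samples=10).fit_predict(X) == -1
ocsvm_out = OneClassSVM(nu=0.01).fit_predict(X) == -1
iforest_out = IsolationForest(contamination=0.01, random_state=0).fit_predict(X) == -1
dist, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
knn_out = dist[:, -1] > np.percentile(dist[:, -1], 99)          # kNN-distance rule

for name, mask in [("DBSCAN", dbscan_out), ("OCSVM", ocsvm_out),
                   ("kNN", knn_out), ("IsolationForest", iforest_out)]:
    print(f"{name}: {int(mask.sum())} outliers")
```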

Dependency of Generator Performance on T1 and T2 weights of the Input MR Images in developing a CycleGan based CT image generator from MR images (CycleGan 딥러닝기반 인공CT영상 생성성능에 대한 입력 MR영상의 T1 및 T2 가중방식의 영향)

  • Samuel Lee;Jonghun Jeong;Jinyoung Kim;Yeon Soo Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.1
    • /
    • pp.37-44
    • /
    • 2024
  • Although MR imaging provides excellent soft-tissue contrast and functional information, CT is also required to obtain the electron density information needed for accurate dose calculation in radiotherapy. To fuse MRI and CT images in the radiotherapy treatment planning workflow, patients are normally scanned with both modalities. Recently, deep-learning-based generation of CT images from MR images has become possible, which can eliminate the CT scanning step. This study implemented CycleGAN-based deep learning generation of CT images from MR images. Three CT generators were trained on T1-weighted, T2-weighted, or both T1- and T2-weighted MR images, respectively. We found that the generator trained on T1-weighted MR images produces better CT images than the other generators when T1-weighted MR images are input, whereas the generator trained on T2-weighted MR images performs best when T2-weighted MR images are input. The results suggest that generating CT images from MR images is just short of practical clinical use, and that a generator trained on a specific MR weighting can generate better CT images than generators trained on other MR sequences.
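
A minimal sketch of the core CycleGAN objective for unpaired MR-to-CT translation in PyTorch, with toy layer sizes and a typical cycle-consistency weight; this illustrates the loss structure only, not the paper's network or training loop.

```python
# Sketch: adversarial + cycle-consistency losses for unpaired MR -> CT translation.
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

G_mr2ct, G_ct2mr = conv_net(1, 1), conv_net(1, 1)   # toy generators
D_ct = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

mse, l1 = nn.MSELoss(), nn.L1Loss()
mr = torch.randn(4, 1, 64, 64)          # stand-in MR batch
ct = torch.randn(4, 1, 64, 64)          # stand-in (unpaired) CT batch

fake_ct = G_mr2ct(mr)
adv_loss = mse(D_ct(fake_ct), torch.ones(4, 1))            # fool the CT discriminator
cycle_loss = l1(G_ct2mr(fake_ct), mr) + l1(G_mr2ct(G_ct2mr(ct)), ct)
gen_loss = adv_loss + 10.0 * cycle_loss                    # typical cycle weight of 10
gen_loss.backward()
print(f"generator loss: {gen_loss.item():.3f}")
```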