• Title/Summary/Keyword: deep machine learning


Development of Machine Learning Model to Predict Hydrogen Maser Holdover Time (수소 메이저 홀드오버 시간예측을 위한 머신러닝 모델 개발)

  • Sang Jun Kim;Young Kyu Lee;Joon Hyo Rhee;Juhyun Lee;Gyeong Won Choi;Ju-Ik Oh;Donghui Yu
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.13 no.1
    • /
    • pp.111-115
    • /
    • 2024
  • This study builds a machine learning model optimized for atomic clocks, chosen from among various artificial intelligence techniques, and applies it to clock stabilization and synchronization technology based on atomic clock noise characteristics. In addition, the possibility of providing stable source clock data is confirmed through the characteristics of the machine learning predictions during atomic clock holdover. The proposed model is evaluated by comparing its performance with the AutoRegressive Integrated Moving Average (ARIMA) model, an existing statistical clock prediction model. The analysis shows that the proposed model (MSE: 9.47476) has a far lower MSE than the ARIMA model (MSE: 221.2622), i.e., it provides more accurate predictions. Prediction accuracy depends on how well the model captures the complex, time-varying nature of the data. Applying a machine learning prediction model can thus be seen as a way to overcome the limitations of the statistics-based ARIMA model in time series prediction and to achieve improved prediction performance.
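The comparison above rests on the mean squared error metric. A minimal, self-contained sketch of how two prediction series might be scored against measured data (the holdover values below are hypothetical, not the paper's data):

```python
def mse(y_true, y_pred):
    """Mean squared error between measured and predicted values."""
    assert len(y_true) == len(y_pred)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical holdover measurements and two sets of predictions.
measured   = [10.0, 10.2, 10.5, 10.9, 11.4]
ml_pred    = [10.1, 10.2, 10.4, 10.8, 11.5]
arima_pred = [10.0, 10.5, 11.1, 11.8, 12.6]

# The model with the lower MSE tracks the measured series more closely.
print(mse(measured, ml_pred) < mse(measured, arima_pred))  # True
```

A lower MSE, as reported for the proposed model, directly reflects smaller squared deviations from the measured series.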

A New Ensemble Machine Learning Technique with Multiple Stacking (다중 스태킹을 가진 새로운 앙상블 학습 기법)

  • Lee, Su-eun;Kim, Han-joon
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.3
    • /
    • pp.1-13
    • /
    • 2020
  • Machine learning refers to a model generation technique that can solve specific problems through a generalization process over given data. To generate a high-performance model, high-quality training data and learning algorithms for the generalization process must be prepared. As one way of improving the performance of the learned model, the ensemble technique generates multiple models rather than a single model; it includes the bagging, boosting, and stacking learning techniques. This paper proposes a new ensemble technique with multiple stacking that outperforms the conventional stacking technique. The learning structure of the multiple stacking ensemble technique is similar to that of deep learning: each layer is composed of a combination of stacking models, and the number of layers is increased so as to minimize the misclassification rate of each layer. Through experiments on four datasets, we show that the proposed method outperforms the existing ones.
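The multiple-stacking idea, in which each layer's model outputs become the next layer's input features, can be sketched with hypothetical stand-in models (the actual technique trains real learners per layer; the fixed linear scorers below are placeholders for illustration only):

```python
def linear_model(weights):
    """A stand-in 'model': a fixed linear scorer thresholded at 0.5."""
    def predict(features):
        score = sum(w * f for w, f in zip(weights, features))
        return 1.0 if score >= 0.5 else 0.0
    return predict

def stack_layer(models, features):
    """One stacking layer: every model predicts from the same feature
    vector, and the predictions form the next layer's feature vector."""
    return [m(features) for m in models]

def multi_stack(layers, features):
    """Chain several stacking layers; the last layer is the meta-model."""
    for models in layers:
        features = stack_layer(models, features)
    return features[0]  # final meta-model's prediction

layers = [
    [linear_model([1.0, 0.0]), linear_model([0.0, 1.0])],  # base layer
    [linear_model([0.5, 0.5])],                            # meta layer
]
print(multi_stack(layers, [1.0, 0.0]))  # 1.0
```

In the paper's scheme, additional layers would be appended as long as they reduce the misclassification rate; here the layer chaining itself is the point.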

Deep Learning Based CCTV Fire Detection System (딥러닝 기반 CCTV 화재 감지 시스템)

  • Yim, Jihyeon;Park, Hyunho;Lee, Wonjae;Kim, Seonghyun;Lee, Yong-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2017.11a
    • /
    • pp.139-141
    • /
    • 2017
  • Because fire spreads faster than other disasters, rapid and accurate detection and continuous monitoring are required. Recently, fire detection systems that apply machine learning to images acquired by CCTV (Closed-Circuit Television) to determine whether a fire has occurred have attracted attention. In this paper, we propose a CCTV fire detection system based on deep learning, the most accurate machine learning technique. In addition to applying deep learning, the proposed system supplements the CCTV image preprocessing step, thereby resolving the overfitting problem, i.e., the low classification accuracy of deep learning on unseen data. Trained on about 80,000 CCTV images, the system achieved a fire image classification accuracy of over 90%.
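One common form of the image preprocessing the abstract refers to is intensity normalization, which puts frames captured under different lighting conditions on a common scale. A minimal sketch with hypothetical pixel values (the paper does not specify its exact preprocessing steps, so this is an illustrative assumption):

```python
def normalize_image(pixels):
    """Min-max normalize pixel intensities to [0, 1] so that CCTV frames
    taken under different lighting conditions share a common scale."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]

frame = [12, 80, 200, 255]  # hypothetical grayscale intensities
print(normalize_image(frame))
```

Consistent input scaling of this kind is one standard way to reduce the gap between training and unseen data.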


Design and Implementation of a Body Fat Classification Model using Human Body Size Data

  • Taejun Lee;Hakseong Kim;Hoekyung Jung
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.110-116
    • /
    • 2023
  • Recently, as various examples of machine learning have been applied in the healthcare field, deep learning technology has been applied to tasks such as electrocardiogram examination and body composition analysis using wearable devices such as smart watches. To utilize deep learning, securing data is the most important step, and it requires human intervention such as data classification. In this study, we propose a model that uses a clustering algorithm, namely K-means, to label body fat according to gender and age based on body size measurements such as chest and waist circumference, and then classifies body fat into five groups from high risk to low risk using a convolutional neural network (CNN). Model validation yielded accuracy, precision, and recall of more than 95%. Thus, rational decisions can be made in the field of healthcare or obesity analysis using the proposed method.
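The labeling step, i.e., clustering body-size measurements before training the CNN, can be illustrated with a toy one-dimensional K-means on hypothetical waist measurements (the study clusters several measurements per gender and age group; this single-feature version only shows the mechanism):

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D K-means: cluster measurements (e.g., waist circumference,
    cm) and return a cluster label per value. Centroids start at evenly
    spaced sorted values."""
    vs = sorted(values)
    centroids = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[j].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: abs(v - centroids[i])) for v in values]

# Hypothetical waist measurements (cm) forming two clear groups.
waists = [68, 70, 71, 95, 98, 101]
print(kmeans_1d(waists))  # [0, 0, 0, 1, 1, 1]
```

The resulting cluster labels would then serve as training targets for the supervised CNN classifier.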

Bi-LSTM model with time distribution for bandwidth prediction in mobile networks

  • Hyeonji Lee;Yoohwa Kang;Minju Gwak;Donghyeok An
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.205-217
    • /
    • 2024
  • We propose a bandwidth prediction approach based on deep learning. The approach is intended to accurately predict the bandwidth of various types of mobile networks. We first use a machine learning technique, namely, the gradient boosting algorithm, to recognize the connected mobile network. Second, we apply a handover detection algorithm based on network recognition to account for vertical handover that causes the bandwidth variance. Third, as the communication performance offered by 3G, 4G, and 5G networks varies, we suggest a bidirectional long short-term memory model with time distribution for bandwidth prediction per network. To increase the prediction accuracy, pretraining and fine-tuning are applied for each type of network. We use a dataset collected at University College Cork for network recognition, handover detection, and bandwidth prediction. The performance evaluation indicates that the handover detection algorithm achieves 88.5% accuracy, and the bandwidth prediction model achieves a high accuracy, with a root-mean-square error of only 2.12%.
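The handover-detection step can be reduced to flagging positions where the recognized network type changes. A hypothetical simplification of that idea (the paper's algorithm builds on a gradient-boosting network recognizer; here the recognizer's outputs are simply given):

```python
def detect_handovers(network_seq):
    """Flag the indices where the recognized network type changes
    (e.g., a 4G -> 5G vertical handover). `network_seq` stands in for
    the per-sample output of a network-recognition classifier."""
    return [i for i in range(1, len(network_seq))
            if network_seq[i] != network_seq[i - 1]]

trace = ["4G", "4G", "5G", "5G", "3G"]
print(detect_handovers(trace))  # [2, 4]
```

Detected handover points are exactly where bandwidth variance is expected, which is why the prediction model is switched per network type.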

Application of Artificial Intelligence in Gastric Cancer (위암에서 인공지능의 응용)

  • Jung In Lee
    • Journal of Digestive Cancer Research
    • /
    • v.11 no.3
    • /
    • pp.130-140
    • /
    • 2023
  • Gastric cancer (GC) is one of the most common malignant tumors worldwide, with a 5-year survival rate of < 40%. The diagnosis and treatment decisions of GC rely on human experts' judgments on medical images; therefore, the accuracy can be hindered by image condition, objective criterion, limited experience, and interobserver discrepancy. In recent years, several applications of artificial intelligence (AI) have emerged in the GC field based on improvement of computational power and deep learning algorithms. AI can support various clinical practices in endoscopic examination, pathologic confirmation, radiologic staging, and prognosis prediction. This review has systematically summarized the current status of AI applications after a comprehensive literature search. Although the current approaches are challenged by data scarcity and poor interpretability, future directions of this field are likely to overcome the risk and enhance their accuracy and applicability in clinical practice.

Improving learning outcome prediction method by applying Markov Chain (Markov Chain을 응용한 학습 성과 예측 방법 개선)

  • Chul-Hyun Hwang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.595-600
    • /
    • 2024
  • As the use of artificial intelligence technologies such as machine learning increases in research that predicts learning outcomes or optimizes learning pathways, the use of artificial intelligence in education is making steady progress and is gradually evolving toward more advanced methods such as deep learning and reinforcement learning. This study aims to improve the prediction of future learning performance based on a learner's past performance-history data. To improve prediction performance, we propose applying conditional probabilities derived from a Markov Chain: the classifier's prediction is supplemented with the learner's learning-history data, improving the prediction performance over machine learning classification alone. To confirm the effectiveness of the proposed method, more than 30 experiments per algorithm and indicator were conducted on empirical data ('teaching-aid-based early childhood education learning performance data'). In all cases, the proposed method yielded higher performance indicators than using the classification algorithm alone.
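The core idea, conditioning the prediction on the learner's previous outcome via Markov transition probabilities, might be sketched as follows. The pass/fail histories and the 50/50 blend with the classifier's probability are assumptions for illustration, not the paper's exact scheme:

```python
from collections import defaultdict

def transition_probs(histories):
    """Estimate P(next_state | current_state) from past outcome sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in histories:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {s: {t: n / sum(nxt.values()) for t, n in nxt.items()}
            for s, nxt in counts.items()}

def adjusted_score(clf_prob, last_state, target, probs, w=0.5):
    """Blend the classifier's probability with the Markov conditional
    probability of moving from the learner's last outcome to `target`."""
    markov = probs.get(last_state, {}).get(target, 0.0)
    return w * clf_prob + (1 - w) * markov

# Hypothetical learning histories: sequences of per-task outcomes.
histories = [["pass", "pass", "fail"], ["pass", "pass", "pass"]]
probs = transition_probs(histories)
print(adjusted_score(0.6, "pass", "pass", probs))
```

Here a learner whose last outcome was "pass" gets a score that mixes the classifier's 0.6 with the estimated P(pass | pass) = 0.75 from the history data.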

An Effective Anomaly Detection Approach based on Hybrid Unsupervised Learning Technologies in NIDS

  • Kangseok Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.494-510
    • /
    • 2024
  • Internet users are exposed to sophisticated cyberattacks that intrusion detection systems have difficulty detecting. Therefore, research is increasing on intrusion detection methods that use artificial intelligence technology to detect novel cyberattacks. Unsupervised learning-based methods are being researched that learn only from normal data and detect abnormal behaviors by finding patterns. This study developed an anomaly detection method based on unsupervised machine learning and deep learning for a network intrusion detection system (NIDS). We present a hybrid anomaly detection approach based on unsupervised learning techniques using the autoencoder (AE), Isolation Forest (IF), and Local Outlier Factor (LOF) algorithms. An oversampling approach that increases the detection rate was also examined. The hybrid approach, which combines deep learning algorithms with traditional machine learning algorithms, was highly effective in setting the thresholds for anomalies without subjective human judgment. Combining two AEs, IF, and LOF achieved precision and recall of 88.2% and 92.8%, respectively. Using an oversampling approach to learn more unknown normal data further improved the detection accuracy, achieving precision and recall of 88.2% and 94.6%, respectively. Therefore, in NIDS the proposed approach provides high reliability for detecting cyberattacks.
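Setting anomaly thresholds without subjective judgment can be done, for example, by taking a percentile of each detector's scores on normal-only data and then majority-voting across detectors. This is a hypothetical simplification of the hybrid idea, not the paper's exact combination rule:

```python
def percentile_threshold(scores, pct=95):
    """Derive an anomaly threshold from normal-only scores with no manual
    tuning: flag anything above the pct-th percentile of normal data."""
    s = sorted(scores)
    idx = min(len(s) - 1, int(len(s) * pct / 100))
    return s[idx]

def hybrid_flags(score_lists, thresholds):
    """Majority vote across detectors (e.g., AE, IF, and LOF scores):
    a sample is anomalous if most detectors score it above threshold."""
    flags = []
    for i in range(len(score_lists[0])):
        votes = sum(1 for scores, t in zip(score_lists, thresholds)
                    if scores[i] > t)
        flags.append(votes > len(score_lists) / 2)
    return flags

# Hypothetical scores: per-detector thresholds from normal data only.
normal_scores = [0.10, 0.12, 0.11, 0.15, 0.13, 0.14, 0.18, 0.17, 0.16, 0.20]
t = percentile_threshold(normal_scores)
flags = hybrid_flags([[0.05, 0.90], [0.10, 0.80], [0.30, 0.95]], [t, t, t])
print(flags)  # [False, True]
```

Because the threshold comes from the distribution of normal scores alone, no per-attack manual calibration is needed.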

A Study on the Optimal Setting of Large Uncharged Hole Boring Machine for Reducing Blast-induced Vibration Using Deep Learning (터널 발파 진동 저감을 위한 대구경 무장약공 천공 장비의 최적 세팅조건 산정을 위한 딥러닝 적용에 관한 연구)

  • Kim, Min-Seong;Lee, Je-Kyum;Choi, Yo-Hyun;Kim, Seon-Hong;Jeong, Keon-Woong;Kim, Ki-Lim;Lee, Sean Seungwon
    • Explosives and Blasting
    • /
    • v.38 no.4
    • /
    • pp.16-25
    • /
    • 2020
  • The multi-setting smart-investigation of the ground and large uncharged hole boring (MSP) method, used to reduce blast-induced vibration in tunnel excavation, involves long-distance horizontal boring of over 50 m and has therefore been accompanied by deviations in boring alignment caused by the heavy rod and its one-directional rotation. The deviation has been adjusted through the boring machine's variable settings, relying on previous construction records and expert experience. However, geological characteristics, machine conditions, and inexperienced workers have caused significant deviations from the target alignment. Excessive deviation from the boring target may delay the construction schedule and cause economic losses. A deep learning-based prediction model has been developed to discover an ideal initial setting of the MSP machine. Dropout, early stopping, and pre-training techniques were employed to prevent overfitting in the training phase and significantly improved the prediction results. These results show a high possibility of developing the model to suggest the boring machine's optimum initial setting. We expect that optimized setting guidelines can be further developed through the continuous addition of data and consideration of other factors.
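Of the regularization techniques named, early stopping is the simplest to sketch: halt training once validation loss has stopped improving for a fixed number of epochs. The loss values below are hypothetical, and a real loop would train a network each epoch rather than read from a list:

```python
def train_with_early_stopping(val_losses, patience=3):
    """Sketch of early stopping: stop when validation loss has not
    improved for `patience` consecutive epochs; return the best epoch
    and its loss. `val_losses` stands in for a real training loop."""
    best, best_epoch, wait = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # no improvement for `patience` epochs: stop
    return best_epoch, best

losses = [1.0, 0.7, 0.5, 0.55, 0.56, 0.57, 0.58]
print(train_with_early_stopping(losses))  # (2, 0.5)
```

Restoring the model weights from the best epoch (epoch 2 here) is what prevents the later, overfit epochs from degrading the final model.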

Predicting antioxidant activity of compounds based on chemical structure using machine learning methods

  • Jinwoo Jung;Jeon-Ok Moon;Song Ih Ahn;Haeseung Lee
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.28 no.6
    • /
    • pp.527-537
    • /
    • 2024
  • Oxidative stress is a well-established risk factor for numerous chronic diseases, emphasizing the need for efficient identification of potent antioxidants. Conventional methods for assessing antioxidant properties are often time-consuming and resource-intensive, typically relying on laborious biochemical assays. In this study, we investigated the applicability of machine learning (ML) algorithms for predicting the antioxidant activity of compounds based solely on their molecular structure. We evaluated the performance of five ML algorithms, Support Vector Machine (SVM), Logistic Regression (LR), XGBoost, Random Forest (RF), and Deep Neural Network (DNN), using a dataset of over 1,900 compounds with experimentally determined antioxidant activity. Both RF and SVM achieved the best overall performance, exhibiting high accuracy (> 0.9) and effectively distinguishing active and inactive compounds with high structural similarity. External validation using natural product data from the BATMAN database confirmed the generalizability of the RF and SVM models. Our results suggest that ML models serve as powerful tools to expedite the discovery of novel antioxidant candidates, potentially streamlining the development of future therapeutic interventions.