• Title/Summary/Keyword: Deep Learning Dataset

A Study on Overfitting and Reliability Analysis According to the Number of Training Data and the Number of Training Iterations (훈련 데이터 개수와 훈련 횟수에 따른 과도학습과 신뢰도 분석에 대한 연구)

  • Kim, Sung Hyeock;Oh, Sang Jin;Yoon, Geun Young;Kim, Wan
    • Korean Journal of Artificial Intelligence / v.5 no.1 / pp.29-37 / 2017
  • The range of problems that can be addressed has expanded rapidly with the growth of big data and the development of hardware, and machine learning methods such as deep learning have become very versatile technologies. In this paper, the MNIST dataset is used as experimental data, and the cross-entropy function is used as the loss model for evaluating the efficiency of machine learning. We applied a gradient descent optimization algorithm to minimize the value of the loss function and updated the weights and biases via backpropagation. In this way, we analyze the optimal reliability value corresponding to the number of training iterations, as well as the optimal reliability value obtained without overfitting. Comparing the onset of overfitting as the amount of data changes, relative to the number of training iterations, we obtained a reliability of 92% at 1,110 training iterations, which is the optimal value without overfitting.
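
As a concrete illustration of the setup this abstract describes (a cross-entropy loss minimized by gradient descent, with weights and biases updated via backpropagation), the following minimal sketch trains a softmax classifier in plain NumPy. It is not the authors' code: the random arrays stand in for MNIST, and the learning rate and iteration count are arbitrary.

```python
# Minimal sketch: softmax regression with a cross-entropy loss minimized
# by gradient descent, weights and biases updated via backpropagation.
# Dummy data stands in for MNIST; hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 784))        # stand-in for flattened MNIST images
y = rng.integers(0, 10, size=256)      # stand-in for digit labels

W = np.zeros((784, 10))
b = np.zeros(10)
lr = 0.1

for step in range(1000):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y)), y]).mean()   # cross entropy

    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0
    grad /= len(y)
    W -= lr * (X.T @ grad)        # gradient descent update of the weights
    b -= lr * grad.sum(axis=0)    # and of the biases
```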

Comparison of machine learning algorithms for regression and classification of ultimate load-carrying capacity of steel frames

  • Kim, Seung-Eock;Vu, Quang-Viet;Papazafeiropoulos, George;Kong, Zhengyi;Truong, Viet-Hung
    • Steel and Composite Structures / v.37 no.2 / pp.193-209 / 2020
  • In this paper, the efficiency of five Machine Learning (ML) methods, namely Deep Learning (DL), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), and Gradient Tree Boosting (GTB), is compared for regression and classification of the Ultimate Load Factor (ULF) of nonlinear inelastic steel frames. For this purpose, a two-story, a six-story, and a twenty-story space frame are considered. An advanced nonlinear inelastic analysis is carried out on the steel frames to generate datasets for training the considered ML methods. In each dataset, the input variables are the geometric features of W-sections and the output variable is the ULF of the frame. The five ML methods are compared in terms of mean squared error (MSE) for the regression models and accuracy for the classification models. Moreover, the ULF distribution curve is calculated for each frame and the strength failure probability is estimated. The GTB method is found to have the best efficiency in both regression and classification of ULF, regardless of the number of training samples and the space frames considered.
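
A hedged sketch of the kind of comparison the abstract reports: several regressors from scikit-learn fit to the same tabular data and ranked by MSE. The synthetic features stand in for the W-section geometry inputs and the target for the ULF; the models and hyperparameters are illustrative, not the paper's.

```python
# Illustrative model comparison by mean squared error on one dataset.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the W-section geometry inputs and ULF target.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "GTB": GradientBoostingRegressor(random_state=0),
    "RF": RandomForestRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
    "SVM": SVR(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))
```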

Hybrid Learning-Based Cell Morphology Profiling Framework for Classifying Cancer Heterogeneity (암의 이질성 분류를 위한 하이브리드 학습 기반 세포 형태 프로파일링 기법)

  • Min, Chanhong;Jeong, Hyuntae;Yang, Sejung;Shin, Jennifer Hyunjong
    • Journal of Biomedical Engineering Research / v.42 no.5 / pp.232-240 / 2021
  • Heterogeneity in cancer is a major obstacle to precision medicine and has become a critical issue in the field of cancer diagnosis. Many attempts have been made to disentangle this complexity through molecular classification. However, the multi-dimensional information contained in the dynamic responses of cancer poses fundamental limitations on conventional approaches based on biomolecular markers. Cell morphology, which reflects the physiological state of the cell, can conveniently be used to track the temporal behavior of cancer cells. Here, we present a hybrid learning-based platform that extracts cell morphology in a time-dependent manner using a deep convolutional neural network to incorporate multivariate data. Feature selection from more than 200 morphological features is conducted, filtering out less significant variables to enhance interpretability. Our platform then performs unsupervised clustering to unveil dynamic behavior patterns hidden in the high-dimensional dataset. As a result, we visualize the morphology state-space by two-dimensional embedding, together with representative morphology clusters and trajectories. This hybrid learning-based cell morphology profiling strategy enables simplification of the heterogeneous population of cancer cells.
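
The profiling pipeline the abstract outlines (feature selection over 200+ morphological features, unsupervised clustering, two-dimensional embedding) could look roughly like the sketch below. Everything here is an assumption for illustration: a random matrix replaces the CNN-derived morphology features, and simple variance-based selection, k-means, and PCA stand in for the paper's actual choices.

```python
# Rough sketch: filter high-dimensional morphology features, cluster
# them without labels, and embed in 2D for a morphology state-space.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 220))   # cells x morphological features

selected = VarianceThreshold(threshold=0.5).fit_transform(features)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(selected)
embedding = PCA(n_components=2).fit_transform(selected)  # 2D state-space
```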

Morpho-GAN: Unsupervised Learning of Data with High Morphology using Generative Adversarial Networks (Morpho-GAN: Generative Adversarial Networks를 사용하여 높은 형태론 데이터에 대한 비지도학습)

  • Abduazimov, Azamat;Jo, GeunSik
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.11-14 / 2020
  • Data are of great importance in the development of deep learning. Data with high morphological complexity are usually found in domains where careful lens calibration by a human is needed to capture them. Synthesizing high-morphology data for such domains can therefore be a great asset for improving the classification accuracy of systems in the field, and unsupervised learning can be employed for this task. Generating photo-realistic objects of interest has been studied extensively since the Generative Adversarial Network (GAN) was introduced. In this paper, we propose Morpho-GAN, a method that unifies several GAN techniques to generate high-quality data of high morphology. Our method introduces a new training objective in the discriminator of the GAN to synthesize images that follow the distribution of the original dataset. The results demonstrate that the proposed method can generate data as plausible as other modern baseline models while being less complex to train.
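
For orientation, a compact GAN training step is sketched below to show where a modified discriminator objective, such as the one Morpho-GAN introduces, would plug in. The tiny fully connected networks and the standard non-saturating GAN loss are placeholders, not the paper's architecture or objective.

```python
# One alternating GAN update in PyTorch; the discriminator loss below
# is the standard one that a method like Morpho-GAN would replace.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 784)    # stand-in for real high-morphology images
z = torch.randn(32, 64)

# Discriminator step: real samples labeled 1, generated samples 0.
fake = G(z).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: non-saturating objective.
g_loss = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```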

Optimizing shallow foundation design: A machine learning approach for bearing capacity estimation over cavities

  • Kumar Shubham;Subhadeep Metya;Abdhesh Kumar Sinha
    • Geomechanics and Engineering / v.37 no.6 / pp.629-641 / 2024
  • The presence of excavations or cavities beneath the foundations of a building can have a significant impact on their stability and cause extensive damage. Traditional methods for calculating the bearing capacity and subsidence of foundations over cavities can be complex and time-consuming, particularly when dealing with varying conditions. In such situations, machine learning (ML) and deep learning (DL) techniques provide effective alternatives. This study concentrates on constructing a prediction model, based on the performance of ML and DL algorithms, that can be applied in real-world settings. The efficacy of eight algorithms, including Regression Analysis, k-Nearest Neighbor, Decision Tree, Random Forest, Multivariate Regression Spline, Artificial Neural Network, and Deep Neural Network, was evaluated. Using a Python-assisted automation technique integrated with the PLAXIS 2D platform, a dataset containing 272 cases with eight input parameters and one target variable was generated. In general, the DL model performed better than the ML models, and all models except the regression models attained outstanding results with an R2 greater than 0.90. These models can also be used as surrogate models in reliability analysis to evaluate failure risks and probabilities.
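
A rough sketch of the surrogate-modelling step under stated assumptions: random tabular data replaces the 272 PLAXIS 2D-generated cases (eight inputs, one target), and two example regressors are scored by R2 as in the study.

```python
# Illustrative surrogate models on a small tabular dataset, scored by R2.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for the 272 bearing-capacity cases (8 inputs, 1 target).
X, y = make_regression(n_samples=272, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestRegressor(random_state=0),
              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, r2_score(y_te, model.predict(X_te)))
```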

Improving the Recognition of Known and Unknown Plant Disease Classes Using Deep Learning

  • Yao Meng;Jaehwan Lee;Alvaro Fuentes;Mun Haeng Lee;Taehyun Kim;Sook Yoon;Dong Sun Park
    • Smart Media Journal / v.13 no.8 / pp.16-25 / 2024
  • Recently, there has been a growing emphasis on identifying both known and unknown diseases in plant disease recognition. In this task, a model trained only on images of known classes is required to classify an input image either into one of the known classes or into an unknown class. Consequently, the capability to recognize unknown diseases is critical for model deployment. To enhance this capability, we consider three factors. First, we propose a new logits-based scoring function for unknown scores. Second, initial experiments indicate that a compact feature space is crucial for the effectiveness of logits-based methods, leading us to employ the AM-Softmax loss instead of the cross-entropy loss during training. Third, drawing inspiration from the efficacy of transfer learning, we use a large plant-relevant dataset, PlantCLEF2022, to pre-train a model. The experimental results suggest that our method outperforms current algorithms. Specifically, our method achieved 97.90 CSA, 91.77 AUROC, and 90.63 OSCR with the ResNet50 model, and 98.28 CSA, 92.05 AUROC, and 91.12 OSCR with the ConvNeXt base model. We believe that our study will contribute to the community.
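
To make the open-set decision concrete, the snippet below shows a generic logits-based rule: the maximum logit serves as the unknown score, and samples scoring below a threshold are routed to the unknown class. The paper proposes its own scoring function; this max-logit baseline and its threshold value are assumptions for illustration.

```python
# Generic max-logit open-set decision rule (a common baseline, not the
# paper's proposed scoring function).
import torch

def classify_with_unknown(logits: torch.Tensor, threshold: float):
    """Return the known-class index per sample, or -1 for 'unknown'."""
    scores, preds = logits.max(dim=1)   # max logit as the unknown score
    preds[scores < threshold] = -1      # low score -> unknown class
    return preds

logits = torch.tensor([[8.2, 1.0, 0.3],   # confidently a known class
                       [1.1, 0.9, 1.0]])  # ambiguous -> likely unknown
print(classify_with_unknown(logits, threshold=3.0))   # tensor([ 0, -1])
```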

Sentiment Analysis of Twitter Data Using Machine Learning and Deep Learning: Nickel Ore Export Restrictions to Europe Under Jokowi's Administration 2022

  • Sophiana Widiastutie;Dairatul Maarif;Adinda Aulia Hafizha
    • Asia pacific journal of information systems / v.34 no.2 / pp.400-420 / 2024
  • Nowadays, social media has evolved into a powerful networked ecosystem in which governments and citizens publicly debate economic and political issues. This holds true for the pros and cons of Indonesia's nickel ore export restriction to Europe, which we investigate further in this paper. Using Twitter as a reliable channel for sentiment analysis, we gathered 7,070 tweets for processing with two sentiment analysis approaches, namely the Support Vector Machine (SVM) and Long Short-Term Memory (LSTM). The model construction stage showed that the Bidirectional LSTM performed better than the LSTM and the SVM kernels, with an accuracy of 91%. The LSTM comes second and the SVM with a Radial Basis Function kernel third, with accuracies of 88% and 83%, respectively. In terms of sentiment, most Indonesians believe that the nickel ore provision will have a positive impact on the mining industry in Indonesia. However, a small number of Indonesian citizens oppose the policy due to fears of a trade dispute that could harm Indonesia's bilateral relations with the EU. This study thus contributes to the advancement of measuring public opinion through big data tools by identifying the Bidirectional LSTM as the optimal model for the dataset.
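
A minimal sketch of a Bidirectional LSTM sentiment classifier of the kind the study found best, written in PyTorch. The vocabulary size, embedding and hidden dimensions, and three-class output are assumptions; the study's preprocessing and training details are not reproduced.

```python
# Bidirectional LSTM text classifier; all dimensions are illustrative.
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)  # forward + backward states

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[0], h[1]], dim=1)  # final hidden state, both directions
        return self.head(h)

model = BiLSTMSentiment()
tweets = torch.randint(0, 20000, (8, 40))   # batch of tokenized tweets
print(model(tweets).shape)                  # torch.Size([8, 3])
```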

Comparative Analysis of Machine Learning Techniques for IoT Anomaly Detection Using the NSL-KDD Dataset

  • Good, Zaryn;Farag, Waleed;Wu, Xin-Wen;Ezekiel, Soundararajan;Balega, Maria;May, Franklin;Deak, Alicia
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.46-52 / 2023
  • With billions of IoT (Internet of Things) devices populating various emerging applications across the world, detecting anomalies on these devices has become incredibly important. Advanced Intrusion Detection Systems (IDS) are trained to detect abnormal network traffic, and Machine Learning (ML) algorithms are used to create detection models. In this paper, the NSL-KDD dataset was adopted to comparatively study the performance and efficiency of IoT anomaly detection models. The dataset was developed for various research purposes and is especially useful for anomaly detection. The data were used with typical machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Deep Convolutional Neural Networks (DCNN), to identify and classify any anomalies present within the IoT applications. Our results show that the XGBoost algorithm outperformed both the SVM and DCNN algorithms, achieving the highest accuracy. Each algorithm was assessed based on accuracy, precision, recall, and F1 score. Furthermore, we obtained notable results on the execution time taken by each algorithm when running the anomaly detection: the XGBoost algorithm was 425.53% faster than the SVM algorithm and 2,075.49% faster than the DCNN algorithm. According to our experimental testing, XGBoost is the most accurate and efficient method.
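
The evaluation protocol described here is straightforward to sketch: train a classifier, then record accuracy, precision, recall, F1 score, and wall-clock time. Below, synthetic data replaces NSL-KDD and the XGBoost hyperparameters are illustrative.

```python
# Train an XGBoost classifier and report the four metrics plus timing.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)
from xgboost import XGBClassifier

# Synthetic stand-in for NSL-KDD network-traffic features and labels.
X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

start = time.perf_counter()
clf = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
pred = clf.predict(X_te)
elapsed = time.perf_counter() - start

print(f"time={elapsed:.2f}s acc={accuracy_score(y_te, pred):.3f} "
      f"prec={precision_score(y_te, pred):.3f} "
      f"rec={recall_score(y_te, pred):.3f} f1={f1_score(y_te, pred):.3f}")
```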

A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association / v.26 no.2 / pp.129-152 / 2024
  • When deep learning object detection is applied to CCTV footage in tunnels, a large number of false positive detections occur due to the poor environmental conditions of tunnels, such as low illumination and severe perspective effects. This problem directly impacts the reliability of tunnel CCTV-based accident detection systems that depend on object detection performance. Hence, it is necessary to reduce the number of false positive detections while also increasing the number of true positive detections. Based on a deep learning object detection model, this paper proposes a false positive data training method that not only reduces false positives but also improves true positive detection performance through retraining on false positive data. The method consists of the following steps: initial training on a training dataset; inference on a validation dataset; correction of false positive data and composition of a dataset; and addition to the training dataset followed by retraining. Experiments were conducted to verify the performance of this method. First, the optimal hyperparameters of the deep learning object detection model were determined through previous experiments. Then, the training image format was determined, and experiments were conducted sequentially to check the long-term performance improvement obtained by retraining on repeatedly collected false detection datasets. The first experiment found that including the background in the inferred image was more advantageous for object detection performance than removing the background around the object. The second experiment found that retraining on false positives accumulated across all retraining rounds was more advantageous for continuous improvement of object detection performance than retraining independently on the false positives of each round. After retraining the false positive data with the method determined in these two experiments, the car object class showed excellent inference performance with an AP value of 0.95 or higher after the first retraining, and by the fifth retraining its inference performance had improved by about 1.06 times compared to the initial inference. The person object class continued to improve as retraining progressed, and by the 18th retraining its inference performance had improved by more than 2.3 times compared to the initial inference.
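
The retraining cycle described in the abstract (initial training, inference on validation data, correction of false positives, accumulation into the training set, retraining) can be expressed as the loop below. The train/infer/correction helpers are hypothetical stubs standing in for a real object detector; only the loop structure mirrors the paper.

```python
# Iterative false-positive retraining loop; helpers are hypothetical
# stubs, not a real detector API.
def train(dataset):               # stub: fit a detector on `dataset`
    return {"trained_on": len(dataset)}

def infer(model, images):         # stub: run detection on each image
    return [{"image": im, "false_positive": i % 3 == 0}
            for i, im in enumerate(images)]

def correct_false_positives(detections):   # stub: relabel FP detections
    return [d["image"] for d in detections if d["false_positive"]]

train_set = list(range(100))      # stand-in training images
val_set = list(range(100, 160))   # stand-in validation images

model = train(train_set)                    # initial training
for _ in range(5):
    detections = infer(model, val_set)      # inference on validation data
    fp_data = correct_false_positives(detections)
    train_set += fp_data    # accumulate FPs across rounds, don't replace
    model = train(train_set)                # retrain on the grown dataset
```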

Performance comparison evaluation of real and complex networks for deep neural network-based speech enhancement in the frequency domain (주파수 영역 심층 신경망 기반 음성 향상을 위한 실수 네트워크와 복소 네트워크 성능 비교 평가)

  • Hwang, Seo-Rim;Park, Sung Wook;Park, Youngcheol
    • The Journal of the Acoustical Society of Korea / v.41 no.1 / pp.30-37 / 2022
  • This paper compares and evaluates the performance of Deep Neural Network (DNN)-based speech enhancement models in the frequency domain from two perspectives: the learning target and the network structure. Spectrum mapping and Time-Frequency (T-F) masking were used as learning targets, and a real-valued network and a complex-valued network were used as network structures. The performance of the speech enhancement models was evaluated with two objective metrics, Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI), as a function of dataset size. Test results show that the appropriate amount of training data differs depending on the type of network and the type of dataset. They also show that, in some cases, using a real-valued network may be the more practical solution when the total number of parameters is considered, because the real network achieves relatively higher performance than the complex network depending on the size of the data and the learning target.
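
As a sketch of the T-F masking target with a real-valued network, the snippet below predicts a per-bin mask from the noisy magnitude spectrogram and reuses the noisy phase for reconstruction; a complex-valued network would instead operate on the complex STFT directly. The shapes, window settings, and tiny mask network are assumptions for illustration.

```python
# T-F masking with a real-valued network: mask the noisy magnitude,
# keep the noisy phase, and reconstruct with the inverse STFT.
import torch
import torch.nn as nn

n_fft, hop = 512, 128
noisy = torch.randn(1, 16000)               # 1 s of 16 kHz audio (stand-in)

spec = torch.stft(noisy, n_fft, hop_length=hop,
                  window=torch.hann_window(n_fft), return_complex=True)
mag, phase = spec.abs(), spec.angle()

mask_net = nn.Sequential(nn.Linear(n_fft // 2 + 1, 256), nn.ReLU(),
                         nn.Linear(256, n_fft // 2 + 1), nn.Sigmoid())
mask = mask_net(mag.transpose(1, 2)).transpose(1, 2)  # per-bin mask in [0, 1]

enhanced_spec = torch.polar(mag * mask, phase)        # reuse noisy phase
enhanced = torch.istft(enhanced_spec, n_fft, hop_length=hop,
                       window=torch.hann_window(n_fft))
```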