• Title/Summary/Keyword: Training dropout

Training dropout Intention and Related Factors of Residents (전공의의 수련포기 의도와 영향요인)

  • Oh, Su-Hyun;Kim, Jin-Suk
    • Journal of Digital Convergence / v.19 no.8 / pp.257-264 / 2021
  • Resident training is the process of developing specialists with excellent competency. This study analyzed residents' intention to drop out of training and its influencing factors, using the results of a survey on residents' training and working environment conducted in April 2017. Of the 1,748 residents analyzed, 340 (19.5%) reported dropout intention ('yes'), 518 (29.6%) were neutral, and 890 (50.9%) reported none ('no'); a multivariate logistic regression analysis was then performed. The factors related to residents' training dropout were whether the hospital was affiliated with an enterprise, the training year, training hours, appropriate educational supervision, and the adequacy of the training process. For desirable resident education and training, it is necessary to normalize residents' working environment, develop substantial training programs, provide fair and systematic evaluation, and offer financial support.
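
As an aside on method, the following is a minimal, hypothetical sketch of the kind of multivariate logistic regression described in this abstract, written with scikit-learn; the file name, column names, and variable coding are all assumptions for illustration, not the study's actual data.

```python
# Hypothetical sketch of a multivariate logistic regression on survey data.
# The file and column names are illustrative, not the study's actual variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("resident_survey_2017.csv")      # hypothetical survey export
predictors = ["enterprise_affiliated", "training_year", "weekly_training_hours",
              "educational_supervision", "curriculum_adequacy"]
X = pd.get_dummies(df[predictors], drop_first=True)
y = df["dropout_intention"]                       # 1 = intends to quit training, 0 = otherwise

model = LogisticRegression(max_iter=1000).fit(X, y)

# Odds ratios show how strongly each factor is associated with dropout intention.
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=X.columns)
print(odds_ratios.round(3))
```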

A Deep Neural Network Model Based on a Mutation Operator (돌연변이 연산 기반 효율적 심층 신경망 모델)

  • Jeon, Seung Ho;Moon, Jong Sub
    • KIPS Transactions on Software and Data Engineering / v.6 no.12 / pp.573-580 / 2017
  • A Deep Neural Network (DNN) is a large layered neural network consisting of multiple layers of non-linear units. Deep learning, represented by DNNs, has been applied very successfully in various applications. However, past research has identified many issues in DNNs, among which generalization is the best-known problem. A recent technique, Dropout, successfully addressed this problem. Dropout also acts as noise, so it helps a DNN learn robust features during training, as in a Denoising AutoEncoder. However, because of the large amount of computation Dropout requires, training takes a long time: since Dropout keeps changing the inter-layer representation during training, the learning rate has to be small, which lengthens training further. In this paper, we use a mutation operation to reduce computation and improve generalization performance compared with Dropout. We also compared the proposed method with Dropout experimentally and showed that our method is superior.
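
For reference, here is a minimal NumPy sketch of standard (inverted) dropout, the baseline this paper compares against; it is not the proposed mutation operator, and the array shapes are arbitrary.

```python
# Standard inverted dropout: randomly zero units during training and rescale
# the survivors so the expected activation is unchanged at test time.
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True):
    if not training or p_drop == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p_drop)
    return x * mask / (1.0 - p_drop)

h = np.random.randn(4, 8)                          # activations of a hidden layer
h_train = dropout_forward(h, 0.5)                  # a different "thinned" network each call
h_test = dropout_forward(h, 0.5, training=False)   # identity at test time
```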

A Machine Learning-Based Vocational Training Dropout Prediction Model Considering Structured and Unstructured Data (정형 데이터와 비정형 데이터를 동시에 고려하는 기계학습 기반의 직업훈련 중도탈락 예측 모형)

  • Ha, Manseok;Ahn, Hyunchul
    • The Journal of the Korea Contents Association / v.19 no.1 / pp.1-15 / 2019
  • One of the biggest difficulties in the vocational training field is the dropout problem. A large number of students drop out during the training process, which wastes the state budget and hampers the improvement of the youth employment rate. Previous studies have mainly analyzed the causes of dropout. The purpose of this study is to propose a machine learning-based model that predicts dropout in advance using various information about learners. In particular, this study aimed to improve the accuracy of the prediction model by considering not only structured data but also unstructured data. The unstructured data were analyzed using Word2vec and a Convolutional Neural Network (CNN), which are among the most popular text analysis technologies. Applying the proposed model to actual data from a domestic vocational training institute improved prediction accuracy by up to 20%. In addition, the support vector machine-based prediction model using both structured and unstructured data showed high prediction accuracy, in the upper 90% range.
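
Below is a rough, hypothetical sketch of the general idea of fusing structured and unstructured learner data; the paper's Word2vec + CNN text encoder is simplified to averaged Word2vec embeddings here, and the toy data and variable names are invented for illustration.

```python
# Hypothetical sketch: fuse structured learner data with text features and
# feed both to an SVM. The paper's CNN text encoder is replaced by simple
# Word2vec averaging for brevity; data and names are invented.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

notes = [["lecture", "difficult", "absent"],       # tokenized counseling notes per learner
         ["employment", "goal", "motivated"]]
structured = np.array([[23, 0, 0.62],              # e.g. age, employed flag, attendance rate
                       [31, 1, 0.95]])
dropped_out = np.array([1, 0])                     # 1 = dropped out of the course

w2v = Word2Vec(notes, vector_size=32, min_count=1, seed=0)
text_feats = np.array([w2v.wv[tokens].mean(axis=0) for tokens in notes])

X = np.hstack([structured, text_feats])            # one fused feature vector per learner
clf = SVC(kernel="rbf").fit(X, dropped_out)
print(clf.predict(X))                              # predicted dropout labels
```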

Implementation of Fish Detection Based on Convolutional Neural Networks (CNN 기반의 물고기 탐지 알고리즘 구현)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.19 no.3 / pp.124-129 / 2020
  • Autonomous underwater vehicles attract many researchers. This paper proposes a convolutional neural network (CNN)-based fish detection method. Since there are not enough data sets for training, an overfitting problem can occur in deep learning. To solve this problem, we apply the dropout algorithm to simplify the model. Experimental results showed that the implemented method is promising, and that the dropout approach considerably improves identification effectiveness.
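
A minimal PyTorch sketch of a small CNN classifier with a dropout layer, in the spirit of the overfitting countermeasure described above; the architecture, input size, and class count are assumptions, not the paper's actual network.

```python
# Illustrative small CNN with dropout for a binary fish / no-fish classifier.
# The architecture is assumed for the sketch, not taken from the paper.
import torch
import torch.nn as nn

class FishNet(nn.Module):
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),          # randomly drops units, only in train mode
            nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FishNet()
model.train()                            # dropout active during training
logits = model(torch.randn(4, 3, 64, 64))
model.eval()                             # dropout disabled for evaluation
```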

Comparative Analysis for Emotion Expression Using Three Methods Based by CNN (CNN기초로 세 가지 방법을 이용한 감정 표정 비교분석)

  • Yang, Chang Hee;Park, Kyu Sub;Kim, Young Seop;Lee, Yong Hwan
    • Journal of the Semiconductor & Display Technology / v.19 no.4 / pp.65-70 / 2020
  • CNN-based techniques for emotion detection include the basic CNN algorithm, batch normalization, and dropout. We present the methods and data of three experiments in this paper; the training database and the test database are set up differently. The first experiment extracts emotions using Batch Normalization, which compensates for shifts in the activation distribution. The second experiment extracts emotions using Dropout, which also allows fast computation. The third experiment uses a plain CNN with convolution and max pooling. All three results show a low detection rate; to address this problem, we will develop a deep learning algorithm using feature extraction methods specialized for the image processing field.
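
The three variants compared above (plain CNN, batch normalization, dropout) can be sketched as interchangeable convolution blocks; the following PyTorch snippet is illustrative only, with assumed layer sizes.

```python
# Illustrative conv block builder covering the three compared variants:
# plain, with batch normalization, and with dropout. Sizes are assumptions.
import torch.nn as nn

def conv_block(in_ch, out_ch, use_bn=False, p_drop=0.0):
    layers = [nn.Conv2d(in_ch, out_ch, 3, padding=1)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))   # normalizes activations per mini-batch
    layers += [nn.ReLU(), nn.MaxPool2d(2)]
    if p_drop > 0:
        layers.append(nn.Dropout2d(p_drop))     # randomly zeroes whole feature maps
    return nn.Sequential(*layers)

plain   = conv_block(1, 32)                     # baseline CNN block
with_bn = conv_block(1, 32, use_bn=True)        # batch-normalized variant
with_do = conv_block(1, 32, p_drop=0.25)        # dropout variant
```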

Improving Accuracy of Instance Segmentation of Teeth

  • Jongjin Park
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.280-286 / 2024
  • In this paper, a layered UNet with warmup and dropout tricks was used to perform instance segmentation of teeth, using data labeled for each individual tooth, and to improve the performance of the result. The previously proposed layered UNet showed very good performance in tooth segmentation without distinguishing tooth numbers. To perform instance segmentation of teeth, we labeled teeth CBCT data according to the tooth numbering system devised in the FDI World Dental Federation notation, with label colors following the AI-Hub teeth dataset. Simulation results show that the layered UNet also segments each tooth very well, distinguishing tooth number by color. The layered UNet model using the warmup trick performed best, with IoU values of 0.80 and 0.77 on the training and validation data, respectively. To further increase the performance of tooth instance segmentation, more labeled data will be needed. The results of this paper can be used to develop medical software that requires tooth recognition, such as for orthodontic treatment, wisdom tooth extraction, and implant surgery.
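
One common reading of the "warmup trick" is a linear learning-rate warmup; the PyTorch sketch below shows that idea under that assumption, with a stand-in model, since the layered UNet itself is not described in the abstract.

```python
# Hedged sketch of a linear learning-rate warmup; the layered UNet is replaced
# by a stand-in module, and the step counts are arbitrary.
import torch

model = torch.nn.Linear(10, 1)                    # stand-in for the segmentation model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
warmup_steps = 500

def warmup_factor(step):
    # Scale the LR from near zero up to its nominal value over warmup_steps.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)

for step in range(3):                             # training-loop sketch
    optimizer.step()
    scheduler.step()
```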

Hybrid dropout (하이브리드 드롭아웃)

  • Park, Chongsun;Lee, MyeongGyu
    • The Korean Journal of Applied Statistics / v.32 no.6 / pp.899-908 / 2019
  • Massive deep neural networks with numerous parameters are powerful machine learning methods, but they suffer from overfitting due to the excessive flexibility of the models. Dropout is one method for overcoming the problems of oversized neural networks; it is an effective method that randomly drops input and hidden nodes from the network during training, so that every sample is fed to one thinned network drawn from an exponential number of different networks. In this study, instead of feeding one sample to each thinned network, two or more samples are used to fit a single thinned network, a scheme called Hybrid Dropout. Simulation results using real data show that the new method improves the stability of the estimates and reduces the minimum error on the validation data.
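
A minimal NumPy sketch of the idea as described in the abstract: one dropout mask (one "thinned network") is shared by a group of k ≥ 2 samples instead of being resampled for every sample. The implementation details below are assumptions, not the authors' exact procedure.

```python
# Hedged sketch of hybrid dropout: each consecutive group of k samples shares
# the same dropout mask, i.e. is fitted by the same thinned network.
import numpy as np

def hybrid_dropout(batch, p_drop=0.5, k=2):
    out = np.empty_like(batch)
    for start in range(0, len(batch), k):
        mask = (np.random.rand(batch.shape[1]) >= p_drop) / (1.0 - p_drop)
        out[start:start + k] = batch[start:start + k] * mask  # shared thinned network
    return out

h = np.random.randn(8, 5)             # 8 samples, 5 hidden units
print(hybrid_dropout(h, 0.5, k=2))    # 4 distinct masks instead of 8
```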

Research on a handwritten character recognition algorithm based on an extended nonlinear kernel residual network

  • Rao, Zheheng;Zeng, Chunyan;Wu, Minghu;Wang, Zhifeng;Zhao, Nan;Liu, Min;Wan, Xiangkui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.413-435 / 2018
  • Although the accuracy of handwritten character recognition based on deep networks has been shown to be superior to that of traditional methods, an overly deep network significantly increases the time consumed in parameter training. For this reason, this paper took both training time and recognition accuracy into consideration and proposed a novel handwritten character recognition algorithm with a newly designed network structure based on an extended nonlinear kernel residual network. The network is not an extremely deep one, and its main design is as follows: (1) design of an unsupervised apriori algorithm for intra-class clustering, making the subsequent network training more pertinent; (2) an intermediate convolution model with a pre-processed width level of 2; (3) a composite residual structure with multi-level shortcut links; and (4) addition of a Dropout layer after the parameter optimization. The algorithm shows superior results on MNIST and SVHN, two benchmark character recognition datasets, and achieves better recognition accuracy and higher recognition efficiency than other deep structures with the same number of layers.
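
Design point (4), appending a Dropout layer to a residual structure, can be sketched roughly as follows in PyTorch; the extended nonlinear kernel, clustering step, and multi-level shortcuts of the paper are not reproduced, and the channel sizes are assumed.

```python
# Hedged sketch of a residual block followed by a Dropout layer; this is only
# an illustration of design point (4), not the paper's full network.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels, p_drop=0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.drop = nn.Dropout2d(p_drop)

    def forward(self, x):
        return self.drop(torch.relu(x + self.body(x)))  # shortcut add, then dropout

block = ResidualBlock(32)
y = block(torch.randn(1, 32, 28, 28))
```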

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible sequences of moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions under which dropout was applied. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values and recognizes local features, but for business data the distance between fields does not matter because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer so that decisions are made from the resulting features. For the model with two LSTM layers, the input of the second layer is fed in reverse order relative to the first layer, in order to reduce the influence of each field's position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs have rarely been applied to binary classification problems, in contrast to the fields where their effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for this binary classification problem because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
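
A minimal PyTorch sketch of the CNN configuration described above, with a 1D filter spanning all input fields at once and dropout with p = 0.5 on the hidden layers; the field count and layer widths are assumptions, not the study's exact setup.

```python
# Hedged sketch: 1D convolution whose kernel covers the whole record, followed
# by dropout (p=0.5) hidden layers and a binary output. Sizes are illustrative.
import torch
import torch.nn as nn

n_fields = 16                                     # e.g. age, occupation, loan status, ...

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=n_fields),       # filter spans the whole record at once
    nn.ReLU(),
    nn.Flatten(),                                  # (batch, 32, 1) -> (batch, 32)
    nn.Dropout(0.5),                               # each hidden unit dropped with p=0.5
    nn.Linear(32, 16), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(16, 1),                              # logit for "opens an account"
)

x = torch.randn(8, 1, n_fields)                    # 8 customers, one channel per record
prob = torch.sigmoid(model(x))                     # predicted response probability
```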

CNN-Based Fake Image Identification with Improved Generalization (일반화 능력이 향상된 CNN 기반 위조 영상 식별)

  • Lee, Jeonghan;Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1624-1631 / 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually discriminate processed (or tampered) images from real images. As the risk of fake images being misused for crime increases, the importance of image forensics for identifying fake images is growing. Various deep learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning relies strongly on the given training data, such identifiers are very vulnerable when evaluated on data they have never seen. Therefore, we try to find ways to improve the generalization ability of deep learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can only classify real and fake images with specific contents and fails on those with other contents. Next, color spaces other than RGB were exploited; that is, fake image identification was attempted in color spaces not considered when creating fake images, such as HSV and YCbCr. Finally, dropout, which is commonly used for the generalization of neural networks, was applied. Experimental results confirmed that the color space conversion to HSV is the best single measure, and that combining it with the enlarged training dataset can greatly improve the accuracy and generalization ability of deep learning-based identifiers on fake images they have never seen before.
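
A small sketch of the HSV preprocessing idea with a dropout-equipped classifier head, using OpenCV and PyTorch; the actual identifier network from the paper is not reproduced, and the toy head and synthetic input image are assumptions for illustration.

```python
# Sketch of identifying in HSV rather than RGB, plus a dropout-equipped head.
# The real identifier network is not reproduced; the head below is a toy.
import cv2
import numpy as np
import torch
import torch.nn as nn

bgr = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)   # stand-in for a suspect image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)                    # convert to the HSV color space
x = torch.from_numpy(hsv.astype(np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)

head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Dropout(0.5),                 # dropout for better generalization
    nn.Linear(3, 2),                 # real vs. fake logits (toy head on the 3 HSV channels)
)
print(head(x))
```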