• Title/Abstract/Keywords: Deep Learning Dataset

Search results: 776

Diagnostic Performance of a New Convolutional Neural Network Algorithm for Detecting Developmental Dysplasia of the Hip on Anteroposterior Radiographs

  • Hyoung Suk Park;Kiwan Jeon;Yeon Jin Cho;Se Woo Kim;Seul Bi Lee;Gayoung Choi;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon;Woo Sun Kim;Young Jin Ryu;Jae-Yeon Hwang
    • Korean Journal of Radiology
    • /
    • v.22 no.4
    • /
    • pp.612-623
    • /
    • 2021
  • Objective: To evaluate the diagnostic performance of a deep learning algorithm for the automated detection of developmental dysplasia of the hip (DDH) on anteroposterior (AP) radiographs. Materials and Methods: Of 2601 hip AP radiographs, 5076 cropped unilateral hip joint images were used to construct a dataset that was further divided into training (80%), validation (10%), and test (10%) sets. Three radiologists were asked to label the hip images as normal or DDH. To investigate the diagnostic performance of the deep learning algorithm, we calculated receiver operating characteristic (ROC) and precision-recall curve (PRC) plots, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and compared them with the performance of radiologists with different levels of experience. Results: The areas under the ROC plot generated by the deep learning algorithm and the radiologists were 0.988 and 0.919-0.988, respectively. The areas under the PRC plot generated by the deep learning algorithm and the radiologists were 0.973 and 0.618-0.958, respectively. The sensitivity, specificity, PPV, and NPV of the proposed deep learning algorithm were 98.0%, 98.1%, 84.5%, and 99.8%, respectively. There was no significant difference between the diagnoses of DDH made by the algorithm and by the radiologist with experience in pediatric radiology (p = 0.180). However, the proposed model showed higher sensitivity, specificity, and PPV than the radiologist without experience in pediatric radiology (p < 0.001). Conclusion: The proposed deep learning algorithm provided an accurate diagnosis of DDH on hip radiographs, comparable to the diagnosis by an experienced radiologist.
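
A minimal sketch of how the reported evaluation metrics can be computed from per-image probabilities, assuming scikit-learn and placeholder labels and scores (not the authors' code or data):

```python
# Illustrative only: binary DDH evaluation metrics from predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

# y_true: 1 = DDH, 0 = normal; y_score: model probability of DDH (placeholder data)
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0])
y_score = np.array([0.1, 0.3, 0.9, 0.2, 0.8, 0.7, 0.4, 0.05])

auc_roc = roc_auc_score(y_true, y_score)            # area under the ROC curve
auc_prc = average_precision_score(y_true, y_score)  # area under the precision-recall curve

y_pred = (y_score >= 0.5).astype(int)               # arbitrary threshold for the sketch
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                        # recall for the DDH class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                                # positive predictive value
npv = tn / (tn + fn)                                # negative predictive value
print(auc_roc, auc_prc, sensitivity, specificity, ppv, npv)
```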

Impacts of label quality on performance of steel fatigue crack recognition using deep learning-based image segmentation

  • Hsu, Shun-Hsiang;Chang, Ting-Wei;Chang, Chia-Ming
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.207-220
    • /
    • 2022
  • Structural health monitoring (SHM) plays a vital role in the maintenance and operation of structures. In recent years, autonomous inspection has received considerable attention because conventional monitoring methods are inefficient and, to some extent, expensive. To develop autonomous inspection, a reliable approach to crack identification is needed to locate defects. Therefore, this study exploits two deep learning-based segmentation models, DeepLabv3+ and Mask R-CNN, for crack segmentation because these two models outperform other similar models on public datasets. Additionally, the impact of label quality on model performance is explored to obtain an empirical guideline on the preparation of image datasets. The influence of image cropping and label refining is also investigated, and different strategies are applied to the dataset, resulting in six variant datasets. In experiments with these datasets, the highest mean Intersection-over-Union (mIoU), 75%, is achieved by Mask R-CNN. Increasing the proportion of annotations through image cropping improves model performance, while label refining has opposite effects on the two models. Because label refining results in fewer erroneous crack annotations, this modification enhances the performance of DeepLabv3+. In contrast, the performance of Mask R-CNN decreases because fragmented annotations may cause one instance to be mistaken for multiple instances. In summary, both DeepLabv3+ and Mask R-CNN are capable of crack identification, and an empirical guideline on data preparation via image cropping and label refining is presented to strengthen identification success.
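
For reference, one common way to compute the mean Intersection-over-Union used above for binary crack masks, sketched with NumPy and placeholder masks (the paper's exact mIoU definition may differ):

```python
# Illustrative sketch only: mIoU over background and crack classes, averaged over images.
import numpy as np

def binary_iou(pred, gt, eps=1e-7):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

def mean_iou(preds, gts):
    """Average per-class IoU (crack and background), then average over images."""
    scores = []
    for p, g in zip(preds, gts):
        crack = binary_iou(p, g)
        background = binary_iou(~p, ~g)
        scores.append((crack + background) / 2.0)
    return float(np.mean(scores))

# placeholder masks: two 4x4 predictions vs. ground truths
preds = [np.array([[0, 1, 1, 0]] * 4, bool), np.array([[1, 1, 0, 0]] * 4, bool)]
gts   = [np.array([[0, 1, 0, 0]] * 4, bool), np.array([[1, 1, 1, 0]] * 4, bool)]
print(mean_iou(preds, gts))
```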

Joint Demosaicing and Super-resolution of Color Filter Array Image based on Deep Image Prior Network

  • Kurniawan, Edwin;Lee, Suk-Ho
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.13-21
    • /
    • 2022
  • In this paper, we propose a learning-based joint demosaicing and super-resolution framework that uses only the mosaicked color filter array (CFA) image as the input. Because the proposed method works only on the mosaicked CFA image itself, there is no need for a large dataset. Based on this framework, we propose two different structures, where the first uses one deep image prior network and the second uses two. Experimental results show that even though we use only the CFA image as the training image, the proposed method can achieve better visual quality than other demosaicing methods combined with bilinear interpolation, and therefore opens up a new research area for joint demosaicing and super-resolution on raw images.
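
A rough sketch of the deep-image-prior idea described in this abstract, assuming PyTorch; the tiny network, RGGB Bayer pattern, average-pooling downsampler, and scale factor are illustrative stand-ins, not the authors' design:

```python
# Sketch: fit a randomly initialized network so that its high-resolution RGB output,
# after downsampling and CFA masking, reproduces the single observed mosaicked image.
import torch
import torch.nn as nn
import torch.nn.functional as F

def bayer_mask(h, w):
    # RGGB Bayer pattern mask, shape (3, h, w): 1 where that channel is sampled
    mask = torch.zeros(3, h, w)
    mask[0, 0::2, 0::2] = 1  # R
    mask[1, 0::2, 1::2] = 1  # G
    mask[1, 1::2, 0::2] = 1  # G
    mask[2, 1::2, 1::2] = 1  # B
    return mask

# hypothetical small network standing in for the deep image prior architecture
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

H, W, scale = 128, 128, 2                                  # target size and SR factor
mask = bayer_mask(H // scale, W // scale)
cfa = torch.rand(1, 3, H // scale, W // scale) * mask      # observed mosaicked image (placeholder)
z = torch.rand(1, 32, H, W) * 0.1                          # fixed noise input

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    out_hr = net(z)                          # restored high-resolution RGB estimate
    out_lr = F.avg_pool2d(out_hr, scale)     # simulated downsampling to sensor resolution
    loss = F.mse_loss(out_lr * mask, cfa)    # compare only at sampled CFA positions
    opt.zero_grad(); loss.backward(); opt.step()
```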

Deep learning system for distinguishing between nasopalatine duct cysts and radicular cysts arising in the midline region of the anterior maxilla on panoramic radiographs

  • Yoshitaka Kise;Chiaki Kuwada;Mizuho Mori;Motoki Fukuda;Yoshiko Ariji;Eiichiro Ariji
    • Imaging Science in Dentistry
    • /
    • v.54 no.1
    • /
    • pp.33-41
    • /
    • 2024
  • Purpose: The aims of this study were to create a deep learning model to distinguish between nasopalatine duct cysts (NDCs), radicular cysts, and no lesions (normal) in the midline region of the anterior maxilla on panoramic radiographs and to compare its performance with that of dental residents. Materials and Methods: One hundred patients with a confirmed diagnosis of NDC (53 men, 47 women; average age, 44.6±16.5 years), 100 with radicular cysts (49 men, 51 women; average age, 47.5±16.4 years), and 100 normal controls (56 men, 44 women; average age, 34.4±14.6 years) were enrolled in this study. Cases were randomly assigned to the training dataset (80%) and the test dataset (20%). Then, 20% of the training data were randomly assigned as validation data. A learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, USA). The performance of the deep learning system was assessed and compared with that of two dental residents. Results: The performance of the deep learning system was superior to that of the dental residents except for the recall of radicular cysts. The areas under the curve (AUCs) for NDCs and radicular cysts in the deep learning system were significantly higher than those of the dental residents. For the dental residents, there was a significant difference in AUC between NDCs and the normal group. Conclusion: The deep learning system showed superior performance in detecting NDCs and radicular cysts and in distinguishing these lesions from the normal group.
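
A small sketch of the described case split (80% training / 20% test, then 20% of the training portion held out for validation), assuming scikit-learn and placeholder case indices (not the authors' code):

```python
# Illustrative nested split for 300 cases (100 NDC + 100 radicular cyst + 100 normal).
from sklearn.model_selection import train_test_split

cases = list(range(300))                 # placeholder case indices
labels = [i // 100 for i in cases]       # 0 = NDC, 1 = radicular cyst, 2 = normal

train_val, test = train_test_split(cases, test_size=0.2, stratify=labels, random_state=0)
train, val = train_test_split(train_val, test_size=0.2,
                              stratify=[labels[i] for i in train_val], random_state=0)
print(len(train), len(val), len(test))   # 192 / 48 / 60
```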

Design and Implementation of Shoes Navigator, a Big Data-based Fashion Recommendation Assistant (빅데이터 기반 패션 추천 도우미 Shoes Navigator 설계 및 구현)

  • 조현우;장지완;최현선;정목동
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.389-390
    • /
    • 2023
  • In this paper, to address the difficulty of fashion matching, we crawl the 'Musinsa' shopping mall, refine the collected data into a dataset, and, focusing on shoes as one of the key elements of fashion style, propose Shoes Navigator, a big data-based, image-driven fashion matching assistant. To this end, computer vision and deep learning techniques are used to automatically detect clothing items in images and extract fashion attributes such as style and color. In addition, because the system suggests optimal matches that take the user's personal style into account, it makes fashion coordination problems easier to solve.

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.389-396
    • /
    • 2022
  • Deep learning is based on the perceptron and is currently being used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the currently used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was applied as the activation function, cross-entropy error (CEE) was applied as the loss function, and MNIST was used as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations would be the most efficient for deep learning training. This study provides implications for algorithms to be developed and a reference value for the number of neurons given new training data in the future.
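
A hedged sketch of the comparison set-up described above, assuming PyTorch/torchvision rather than whatever framework the authors used: a 784-n-n-n-10 MLP with ReLU and cross-entropy loss on MNIST, looped over optimizers and neuron counts (one batch per setting here, just to keep the sketch short):

```python
# Illustrative comparison of optimizers and hidden-layer widths on MNIST.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_mlp(n_neurons):
    # one input layer, three hidden layers, one output layer, ReLU activations
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, 10),
    )

train_ds = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
loader = DataLoader(train_ds, batch_size=100, shuffle=True)

optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.01),
    "Momentum": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "AdaGrad":  lambda p: torch.optim.Adagrad(p, lr=0.01),
    "RMSProp":  lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adam":     lambda p: torch.optim.Adam(p, lr=0.001),
}

loss_fn = nn.CrossEntropyLoss()   # cross-entropy error, as in the study
for n_neurons in (100, 200, 300):
    for name, make_opt in optimizers.items():
        model = make_mlp(n_neurons)
        opt = make_opt(model.parameters())
        x, y = next(iter(loader))            # single batch per setting for brevity
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"{name}, {n_neurons} neurons: first-batch loss {loss.item():.3f}")
```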

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN(Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG(Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then, it creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn using a different color. In step 3, the model converts the images into matrices. Each image is converted into a combination of three matrices in order to express the value of the color using the R(red), G(green), and B(blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets. We used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer. In the pooling layer, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU(Rectified Linear Unit), and the one for the output layer was set to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 for 2,026 days over eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC(rate of change), LW %R(Larry William's %R), A/D oscillator(accumulation/distribution oscillator), OSCP(price oscillator), and CCI(commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT(logistic regression), ANN(artificial neural network), and SVM(support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models using these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
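
One plausible PyTorch reading of the CNN-FG layout given above; the padding and the exact placement of the two pooling operations are assumptions, since the abstract does not state them:

```python
# Sketch of a CNN-FG-like classifier: two 5x5 convolutions with 6 and 9 filters,
# 2x2 max pooling, fully connected layers of 900 and 32 nodes, 2-way softmax output.
import torch
import torch.nn as nn

cnn_fg = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(),     # 40x40x3 -> 36x36x6 (no padding assumed)
    nn.MaxPool2d(2),                               # -> 18x18x6
    nn.Conv2d(6, 9, kernel_size=5), nn.ReLU(),     # -> 14x14x9
    nn.MaxPool2d(2),                               # -> 7x7x9
    nn.Flatten(),
    nn.LazyLinear(900), nn.ReLU(),                 # hidden layer, 900 nodes
    nn.Linear(900, 32), nn.ReLU(),                 # hidden layer, 32 nodes
    nn.Linear(32, 2),                              # upward / downward logits
)

x = torch.rand(8, 3, 40, 40)                       # a batch of 40x40 RGB graph images
probs = torch.softmax(cnn_fg(x), dim=1)            # softmax over the two directions
print(probs.shape)                                 # torch.Size([8, 2])
```

In training, the logits would typically be fed to a cross-entropy loss instead of applying the softmax explicitly.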

Real-Time Earlobe Detection System on the Web

  • Kim, Jaeseung;Choi, Seyun;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • v.10 no.4
    • /
    • pp.110-116
    • /
    • 2021
  • This paper proposed a real-time earlobe detection system using deep learning on the web. Existing deep learning-based detection methods usually find independent objects such as cars, mugs, cats, and people. We proposed a way to receive an image through the camera of the user's device in a web environment and detect the earlobe on the server. First, we took a picture of the user's face with the device camera on the web so that the user's ears were visible. After that, we sent the photographed face to the server to find the earlobe. Based on the detection results, we rendered an earring model on the user's earlobe on the web. We trained an existing YOLO v5 model using a dataset of about 200 images annotated with bounding boxes on the earlobe. We estimated the position of the earlobe with the trained deep learning model. Through this process, we proposed a real-time earlobe detection system on the web. The proposed method demonstrated real-time earlobe detection and real-time loading of 3D models on the web.
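
A sketch of the server-side detection step only, assuming the earlobe detector was trained with the public YOLOv5 code; the weights file and image path below are hypothetical:

```python
# Illustrative inference with custom YOLOv5 weights via torch.hub.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="earlobe_best.pt")
results = model("user_face.jpg")        # image sent from the browser to the server
boxes = results.xyxy[0]                 # (x1, y1, x2, y2, confidence, class) per detection
for x1, y1, x2, y2, conf, cls in boxes.tolist():
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # earlobe centre used to anchor the 3D earring
    print(f"earlobe at ({cx:.0f}, {cy:.0f}) with confidence {conf:.2f}")
```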

Management Software Development of Hyper Spectral Image Data for Deep Learning Training (딥러닝 학습을 위한 초분광 영상 데이터 관리 소프트웨어 개발)

  • Lee, Da-Been;Kim, Hong-Rak;Park, Jin-Ho;Hwang, Seon-Jeong;Shin, Jeong-Seop
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.6
    • /
    • pp.111-116
    • /
    • 2021
  • A hyper-spectral image is data obtained by dividing the infrared region of the electromagnetic spectrum into hundreds of wavelength bands. It is used to find or classify objects in various fields. Recently, deep learning-based classification methods have been attracting attention. In order to use hyper-spectral image data as deep learning training data, additional processing is required compared with conventional visible-light image data. To solve this problem, we developed software that selects images at specific wavelengths from the hyper-spectral data cube and performs the ground-truth annotation task. We also developed software to manage the data, including environmental information. This paper describes the configuration and functions of the software.
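
As a small illustration of the band-selection step that such software automates, a hyper-spectral data cube can be sliced to the single-band image nearest a requested wavelength (the cube shape, band count, and wavelength range below are assumptions):

```python
# Illustrative only: pick one wavelength band from a hyper-spectral cube for labeling.
import numpy as np

cube = np.random.rand(128, 128, 300)        # placeholder cube: height x width x bands
wavelengths = np.linspace(900, 1700, 300)   # assumed infrared band centres in nm

def band_image(cube, wavelengths, target_nm):
    """Return the single-band image closest to the requested wavelength."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return cube[:, :, idx]

img_1200 = band_image(cube, wavelengths, 1200.0)   # image handed to the labeling tool
print(img_1200.shape)                              # (128, 128)
```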

Deep Learning for Weeds' Growth Point Detection based on U-Net

  • Arsa, Dewa Made Sri;Lee, Jonghoon;Won, Okjae;Kim, Hyongsuk
    • Smart Media Journal
    • /
    • v.11 no.7
    • /
    • pp.94-103
    • /
    • 2022
  • Weeds bring disadvantages to crops since they can damage them, and a clean treatment with less pollution and contamination should be developed. Artificial intelligence gives new hope to agriculture to achieve smart farming. This study delivers an automated weeds growth point detection using deep learning. This study proposes a combination of semantic graphics for generating data annotation and U-Net with pre-trained deep learning as a backbone for locating the growth point of the weeds on the given field scene. The dataset was collected from an actual field. We measured the intersection over union, f1-score, precision, and recall to evaluate our method. Moreover, Mobilenet V2 was chosen as the backbone and compared with Resnet 34. The results showed that the proposed method was accurate enough to detect the growth point and handle the brightness variation. The best performance was achieved by Mobilenet V2 as a backbone with IoU 96.81%, precision 97.77%, recall 98.97%, and f1-score 97.30%.