• Title/Summary/Keyword: deep neural net

Search Results: 327

Automatic Wood Species Identification of Korean Softwood Based on Convolutional Neural Networks

  • Kwon, Ohkyung;Lee, Hyung Gu;Lee, Mi-Rim;Jang, Sujin;Yang, Sang-Yun;Park, Se-Yeong;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / v.45 no.6 / pp.797-808 / 2017
  • Automatic wood species identification systems enable fast and accurate identification of wood species outside of specialized laboratories staffed with well-trained experts. Conventional automatic wood species identification systems consist of two major parts: a feature extractor and a classifier. Feature extractors require hand-engineering to obtain optimal features that quantify the content of an image. A Convolutional Neural Network (CNN), one of the deep learning methods, trained on wood species can extract intrinsic feature representations and classify them correctly, and it usually outperforms classifiers built on top of hand-tuned extracted features. We developed an automatic wood species identification system utilizing CNN models such as LeNet, MiniVGGNet, and their variants. A smartphone camera was used to obtain macroscopic images of rough-sawn cross-sectional surfaces of wood. Five Korean softwood species (cedar, cypress, Korean pine, Korean red pine, and larch) were classified by the CNN models. The best-performing and most stable model was LeNet3, which adds two layers to the original LeNet architecture; its identification accuracy for the five Korean softwood species was 99.3%. The results show that the system is fast and accurate, and small enough to be deployed on a mobile device such as a smartphone.
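The abstract does not specify LeNet3's exact configuration; below is a minimal PyTorch sketch of a LeNet-style classifier for the five softwood classes, with two extra convolutional blocks added as an assumption to mirror the described modification.

```python
import torch
import torch.nn as nn

class LeNet3Sketch(nn.Module):
    """LeNet-style CNN for 5 wood species; the extra blocks are assumptions."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 20, 5), nn.ReLU(), nn.MaxPool2d(2),    # original LeNet block
            nn.Conv2d(20, 50, 5), nn.ReLU(), nn.MaxPool2d(2),   # original LeNet block
            nn.Conv2d(50, 100, 3), nn.ReLU(), nn.MaxPool2d(2),  # assumed extra layer
            nn.Conv2d(100, 200, 3), nn.ReLU(), nn.MaxPool2d(2), # assumed extra layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(500), nn.ReLU(),  # infers the flattened size at first call
            nn.Linear(500, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet3Sketch()
logits = model(torch.randn(1, 3, 128, 128))  # e.g. a cropped smartphone image
```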

Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee;Jang, Kyeong Eun;Lee, Seul Ki;Cho, Jung Gun;Song, Sang Jun;Kim, Jin Gook
    • Journal of Bio-Environment Control / v.31 no.4 / pp.270-278 / 2022
  • This study used deep learning to classify 'Mihwang' peach maturity from RGB images and fruit quality attributes during the fruit development and maturation periods. A set of 730 peach images was split into training and validation sets at a ratio of 8:2, and the remaining 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for maturity classification into immature, mature, and overmature fruit. The study used CNN (Convolutional Neural Network) models for image classification: VGG16 and InceptionV3, a successor of GoogLeNet. With the Hue value index, accuracy was 87.1% for VGG16 and 83.6% for InceptionV3. In contrast, with the firmness index, accuracy was 72.2% and 76.9%, and the loss rate was 54.3% and 62.1%, for VGG16 and InceptionV3 respectively. Further improvement is therefore needed before the firmness-based model can be adopted for field use.
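As a rough illustration of the transfer learning setup described above, here is a hedged PyTorch/torchvision sketch that freezes a pre-trained VGG16 backbone and retrains a three-way head for immature/mature/overmature; the training details (frozen layers, learning rate) are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Three maturity classes: immature, mature, overmature (as in the abstract).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False               # freeze the convolutional backbone (assumption)
model.classifier[6] = nn.Linear(4096, 3)  # replace the 1000-way ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
```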

Classification of mandibular molar furcation involvement in periapical radiographs by deep learning

  • Katerina Vilkomir;Cody Phen;Fiondra Baldwin;Jared Cole;Nic Herndon;Wenjian Zhang
    • Imaging Science in Dentistry / v.54 no.3 / pp.257-263 / 2024
  • Purpose: The purpose of this study was to classify mandibular molar furcation involvement (FI) in periapical radiographs using a deep learning algorithm. Materials and Methods: Full mouth series taken at East Carolina University School of Dental Medicine from 2011-2023 were screened. Diagnostic-quality mandibular premolar and molar periapical radiographs with healthy or FI mandibular molars were included. The radiographs were cropped into individual molar images, annotated as "healthy" or "FI," and divided into training, validation, and testing datasets. The images were preprocessed with PyTorch transformations. ResNet-18, a convolutional neural network model, was fine-tuned in the PyTorch deep learning framework for this image classification task. CrossEntropyLoss was used as the loss function, and the AdamW optimizer updated the weights with an adaptive learning rate. The images were loaded with the PyTorch DataLoader for efficiency. The performance of the ResNet-18 algorithm was evaluated with multiple metrics, including training and validation losses, confusion matrix, accuracy, sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the ROC curve. Results: After adequate training, ResNet-18 classified healthy vs. FI molars in the testing set with an accuracy of 96.47%, indicating its suitability for image classification. Conclusion: The deep learning algorithm developed in this study shows promise for classifying mandibular molar FI and could serve as a valuable supplemental tool for detecting and managing periodontal diseases.
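The abstract names its main PyTorch components (ResNet-18, CrossEntropyLoss, AdamW, DataLoader); a minimal sketch wiring them together for the two-class (healthy vs. FI) task follows. The image size, normalization values, and learning rate are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms

# Preprocessing via PyTorch transformations (exact values assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)               # "healthy" vs. "FI"

criterion = nn.CrossEntropyLoss()                           # loss, per the abstract
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # optimizer, per the abstract

def train_epoch(loader: DataLoader):
    model.train()
    for images, labels in loader:                           # batches via DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```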

An Improved PeleeNet Algorithm with Feature Pyramid Networks for Image Detection

  • Yangfan, Bai;Joe, Inwhee
    • Annual Conference of KIPS / 2019.05a / pp.398-400 / 2019
  • Faced with the increasing demand for image recognition on mobile devices, the problem of running convolutional neural network (CNN) models on hardware with limited computing power and storage has encouraged research into efficient model design. In recent years, many effective architectures have been proposed, such as MobileNetV1, MobileNetV2, and PeleeNet. However, during feature selection all of these models neglect some information in shallow features, weakening the capture of shallow-feature location and semantics. In this study, we propose a framework based on Feature Pyramid Networks that fuses deep and shallow features to improve recognition accuracy while preserving the recognition speed of the PeleeNet structure. Compared with PeleeNet, recognition accuracy on the CIFAR-10 dataset increased by 4.0%.
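To make the feature fusion concrete, here is a hedged PyTorch sketch of the core Feature Pyramid Network operation: a deep, low-resolution map is upsampled and merged with a shallow, high-resolution map through 1x1 lateral convolutions. The channel widths are illustrative, not PeleeNet's actual stage sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFusion(nn.Module):
    """Top-down FPN fusion of one deep and one shallow feature map."""
    def __init__(self, shallow_ch=128, deep_ch=512, out_ch=256):
        super().__init__()
        self.lat_shallow = nn.Conv2d(shallow_ch, out_ch, 1)  # 1x1 lateral conv
        self.lat_deep = nn.Conv2d(deep_ch, out_ch, 1)
        self.smooth = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, shallow, deep):
        top = self.lat_deep(deep)
        up = F.interpolate(top, size=shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lat_shallow(shallow) + up)   # fused pyramid level

fuse = FPNFusion()
p = fuse(torch.randn(1, 128, 56, 56), torch.randn(1, 512, 14, 14))
```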

Physics informed neural networks for surrogate modeling of accidental scenarios in nuclear power plants

  • Federico Antonello;Jacopo Buongiorno;Enrico Zio
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3409-3416 / 2023
  • Licensing the next generation of nuclear reactor designs requires extensive use of Modeling and Simulation (M&S) to investigate system response under many operational conditions, identify possible accidental scenarios, and predict their evolution toward undesirable consequences that are to be prevented or mitigated via the deployment of adequate safety barriers. Deep Learning (DL) and Artificial Intelligence (AI) can support M&S computationally by providing surrogates of the complex multi-physics high-fidelity models used for design. However, DL and AI are, generally, low-fidelity 'black-box' models that do not guarantee any structure based on physical laws and constraints, and may thus lack interpretability and accuracy. This poses limitations on their credibility and raises doubts about their adoption for the safety assessment and licensing of novel reactor designs. In this regard, Physics Informed Neural Networks (PINNs) are receiving growing attention for their ability to integrate fundamental physics laws and domain knowledge into the neural network, thus assuring credible generalization capabilities and credible predictions. This paper presents the use of PINNs as surrogate models for the simulation of accidental scenarios in Nuclear Power Plants (NPPs). A case study of a Loss of Heat Sink (LOHS) accidental scenario in a Nuclear Battery (NB), a unique class of transportable, plug-and-play microreactors, is considered. A PINN is developed and compared with a Deep Neural Network (DNN). The results show the advantages of PINNs in providing accurate solutions, avoiding overfitting and underfitting, and intrinsically ensuring physics-consistent results.
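The abstract does not state the governing equations of the LOHS scenario; as a minimal PINN sketch, assume a lumped energy balance C dT/dt = P - h(T - T_inf) as the physics constraint. The loss combines a data-fit term with the squared physics residual evaluated at collocation points via automatic differentiation.

```python
import torch
import torch.nn as nn

# Illustrative constants for the assumed energy balance C dT/dt = P - h (T - T_inf).
C, P, h, T_inf = 1.0, 1.0, 0.1, 0.0

net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pinn_loss(t_data, T_data, t_coll):
    data_loss = ((net(t_data) - T_data) ** 2).mean()       # fit to simulation data
    t = t_coll.clone().requires_grad_(True)
    T = net(t)
    dTdt = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    residual = C * dTdt - (P - h * (T - T_inf))            # physics residual
    return data_loss + (residual ** 2).mean()              # data term + physics term

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
```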

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.977-985 / 2020
  • To address complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. It carries the batch normalization idea from image classification over to action recognition, improving the algorithm by normalizing the network's training samples per mini-batch. For the convolutional network, RGB images are the spatial input and stacked optical flow is the temporal input; the spatial and temporal networks are then fused to produce the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages in action recognition.
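A hedged PyTorch sketch of the two-stream idea follows, using torchvision's GoogLeNet (which already includes batch normalization) for both streams. Widening the temporal stream's first convolution to 20 channels (10 stacked optical-flow pairs) and averaging the class scores are common two-stream conventions assumed here, not details from the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Spatial stream: RGB frames. Temporal stream: stacked optical flow.
spatial = models.googlenet(num_classes=101, aux_logits=False, init_weights=True)
temporal = models.googlenet(num_classes=101, aux_logits=False, init_weights=True)
temporal.conv1.conv = nn.Conv2d(20, 64, kernel_size=7, stride=2,
                                padding=3, bias=False)  # 10 flow pairs = 20 channels

def predict(rgb, flow):
    spatial.eval(); temporal.eval()
    with torch.no_grad():
        # Simple late fusion: average the two streams' class probabilities.
        return (spatial(rgb).softmax(1) + temporal(flow).softmax(1)) / 2
```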

Comparison of CNN Structures for Detection of Surface Defects (표면 결함 검출을 위한 CNN 구조의 비교)

  • Choi, Hakyoung;Seo, Kisung
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.7 / pp.1100-1104 / 2017
  • Detector-based approaches show limited performance on defect inspection tasks such as shallow fine cracks and defects that are hard to distinguish from the background. Deep learning is widely used for object recognition, and its application to defect detection has been gradually attempted. Deep learning requires large-scale training data, but data acquisition can be limited in some industrial applications. The possibility of applying a CNN, one of the deep learning approaches, to surface defect inspection is investigated for industrial parts where detection is challenging and training data are insufficient. VOV is adopted for pre-processing and to obtain a reasonable number of ROIs for data augmentation, and a CNN is then applied for classification. Three CNN networks, AlexNet, VGGNet, and a modified VGGNet, are compared in defect detection experiments.
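A hedged sketch of the comparison setup: each candidate network is trained identically on the augmented ROI crops and judged by validation accuracy. The paper's modified VGGNet is not specified, so a plain VGG11 stands in for it here.

```python
import torch
from torchvision import models

candidates = {
    "AlexNet": models.alexnet(num_classes=2),
    "VGGNet": models.vgg16(num_classes=2),
    "modified VGGNet (VGG11 stand-in)": models.vgg11(num_classes=2),
}

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for rois, labels in loader:   # ROI crops with defect / no-defect labels
            correct += (model(rois).argmax(1) == labels).sum().item()
            total += labels.numel()
    return correct / total
```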

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. An early precursor, the Neocognitron, was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and limited computational power. In 2012, however, Krizhevsky et al. made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, reviving interest in neural networks. This success rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these same factors: gathering a large-scale dataset requires great effort, and even when one is available, training a ConvNet from scratch demands expensive resources and time. Both obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings. The first uses the ConvNet as a fixed feature extractor: a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. The second fine-tunes the ConvNet on a new dataset: the classifier is replaced and retrained, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers remains challenging. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated into a multiple-layer representation, which carries more information about the image; concatenating the three fully connected layer features yields a 9192-dimensional (4096+4096+1000) representation. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, Principal Component Analysis (PCA) is used to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
    To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimensionality reduction. The experiments demonstrated the importance of feature selection for the multiple-layer representation. Moreover, the proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. It also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
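The pipeline is concrete enough to sketch: feed each image through a pre-trained AlexNet, concatenate the FC6/FC7/FC8 activations into a 9192-dimensional vector, then reduce with PCA before training a linear classifier. The PCA component count and the choice of LinearSVC are assumptions, not the paper's settings.

```python
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def multi_layer_features(x):
    """Concatenate FC6 (4096) + FC7 (4096) + FC8 (1000) -> 9192-d per image."""
    with torch.no_grad():
        flat = torch.flatten(alexnet.avgpool(alexnet.features(x)), 1)
        c = alexnet.classifier  # [Dropout, Linear, ReLU, Dropout, Linear, ReLU, Linear]
        fc6 = c[1](c[0](flat))
        fc7 = c[4](c[3](c[2](fc6)))
        fc8 = c[6](c[5](fc7))
    return torch.cat([fc6, fc7, fc8], dim=1).numpy()

# With X (n_samples x 9192) and labels y built from a target dataset:
# pca = PCA(n_components=512).fit(X)           # component count is an assumption
# clf = LinearSVC().fit(pca.transform(X), y)   # linear classifier on salient features
```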

Automatically Diagnosing Skull Fractures Using an Object Detection Method and Deep Learning Algorithm in Plain Radiography Images

  • Tae Seok, Jeong;Gi Taek, Yee;Kwang Gi, Kim;Young Jae, Kim;Sang Gu, Lee;Woo Kyung, Kim
    • Journal of Korean Neurosurgical Society / v.66 no.1 / pp.53-62 / 2023
  • Objective: Deep learning is a machine learning approach based on artificial neural network training, and object detection algorithms using deep learning are among the most powerful tools in image analysis. We analyzed and evaluated the diagnostic performance of a deep learning algorithm for identifying skull fractures in plain radiographic images and investigated its clinical applicability. Methods: A total of 2026 plain radiographic images of the skull (fracture, 991; normal, 1035) were obtained from 741 patients. The RetinaNet architecture was used as the deep learning model. Precision, recall, and average precision were measured to evaluate the algorithm's diagnostic performance. Results: With ResNet-152, the average precision at intersection over union (IOU) thresholds of 0.1, 0.3, and 0.5 was 0.7240, 0.6698, and 0.3687, respectively. When the IOU and confidence thresholds were both 0.1, precision was 0.7292 and recall was 0.7650. When the IOU threshold was 0.1 and the confidence threshold was 0.6, the true and false rates were 82.9% and 17.1%, respectively. There were significant differences in the true/false and false-positive/false-negative ratios between the anterior-posterior, Towne, and both lateral views (p=0.032 and p=0.003). False-positive detections involved vascular grooves and suture lines, while detection performance was poor for diastatic fractures, fractures crossing the suture line, and fractures around the vascular grooves and orbit (false negatives). Conclusion: The object detection algorithm applied with deep learning is expected to be a valuable tool in diagnosing skull fractures.
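A hedged torchvision sketch of the detection side follows. torchvision ships RetinaNet with a ResNet-50-FPN backbone; the paper's ResNet-152 backbone would require a custom backbone and is not reproduced here. The 0.6 confidence cut mirrors the threshold analyzed in the abstract.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(num_classes=2)  # background vs. fracture
model.eval()

with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])[0]  # one radiograph tensor

keep = detections["scores"] > 0.6       # confidence threshold from the abstract
boxes = detections["boxes"][keep]       # predicted fracture bounding boxes
```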

Prediction of aerodynamics using VGG16 and U-Net (VGG16 과 U-Net 구조를 이용한 공력특성 예측)

  • Bo Ra, Kim;Seung Hun, Lee;Seung Hyun, Jang;Gwang Il, Hwang;Min, Yoon
    • Journal of the Korean Society of Visualization / v.20 no.3 / pp.109-116 / 2022
  • The optimized design of airfoils is essential for increasing the performance and efficiency of wind turbines. Near stall, the aerodynamic characteristics of airfoils show large deviations between experiments and numerical simulations, so repeated analysis of various shapes near stall is needed. To overcome this, artificial intelligence is combined with numerical simulation. In this study, three airfoils used in wind turbines are chosen: S809, S822, and SD7062. A convolutional neural network model combining VGG16 and U-Net is proposed. Training data are constructed by extracting pressure fields and aerodynamic characteristics through numerical analysis of the 2D shapes. Based on these data, the pressure field and lift coefficient of untrained airfoils are predicted. Even for untrained airfoils, the pressure field is accurately predicted, with an error within 0.04%.
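As a rough sketch of how a VGG16 encoder can feed a U-Net-style decoder, the following PyTorch module maps an airfoil shape image to a single-channel pressure field. The paper's exact decoder depth, skip connections, and output resolution are not given, so this single-skip design is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGGUNetSketch(nn.Module):
    """VGG16 encoder with one U-Net-style skip; widths are illustrative."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=None).features
        self.enc1 = vgg[:16]    # through conv3_3: 1/4 resolution, 256 channels
        self.enc2 = vgg[16:23]  # through conv4_3: 1/8 resolution, 512 channels
        self.up = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 1, 1),  # single output channel: pressure
        )

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(s1)
        return self.dec(torch.cat([self.up(s2), s1], dim=1))  # U-Net skip

model = VGGUNetSketch()
p_field = model(torch.randn(1, 3, 256, 256))  # 1/4-resolution pressure map
```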