• Title/Summary/Keyword: Artificial Intelligence - Deep Learning (인공지능-딥러닝)

Deep Learning-based Fracture Mode Determination in Composite Laminates (복합 적층판의 딥러닝 기반 파괴 모드 결정)

  • Muhammad Muzammil Azad;Atta Ur Rehman Shah;M.N. Prabhakar;Heung Soo Kim
    • Journal of the Computational Structural Engineering Institute of Korea / v.37 no.4 / pp.225-232 / 2024
  • This study focuses on the determination of the fracture mode in composite laminates using deep learning. With the increasing use of laminated composites in numerous engineering applications, ensuring their integrity and performance is of paramount importance. However, owing to the complex nature of these materials, the identification of fracture modes is often a tedious and time-consuming task that requires critical domain knowledge. Therefore, to alleviate these issues, this study aims to utilize modern artificial intelligence technology to automate the fractographic analysis of laminated composites. To accomplish this goal, scanning electron microscopy (SEM) images of fractured tensile test specimens are obtained from laminated composites to showcase various fracture modes. These SEM images are then categorized by fracture mode, including fiber breakage, fiber pull-out, mixed-mode fracture, matrix brittle fracture, and matrix ductile fracture. Next, the collective data for all classes are divided into training, test, and validation datasets. Two state-of-the-art, deep learning-based pre-trained models, namely DenseNet and GoogleNet, are trained to learn the discriminative features for each fracture mode. The DenseNet model shows training and testing accuracies of 94.01% and 75.49%, respectively, whereas those of the GoogleNet model are 84.55% and 54.48%, respectively. The trained deep learning models are then validated on unseen validation datasets. This validation demonstrates that the DenseNet model, owing to its deeper architecture, can extract high-quality features, resulting in 84.44% validation accuracy, 36.84% higher than that of the GoogleNet model. Hence, these results affirm that the DenseNet model is effective in performing fractographic analyses of laminated composites by predicting fracture modes with high precision.
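
As a rough illustration of the transfer-learning setup the abstract describes (a pre-trained DenseNet fine-tuned on SEM images labelled with five fracture modes), the following is a minimal PyTorch sketch. The folder layout, image size, learning rate, and epoch count are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Five fracture-mode classes named in the abstract.
NUM_CLASSES = 5  # fiber breakage, fiber pull-out, mixed-mode, matrix brittle, matrix ductile

# ImageNet-style preprocessing for the SEM micrographs (assumed settings).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: sem_images/train/<fracture_mode>/*.png
train_set = datasets.ImageFolder("sem_images/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Load a pre-trained DenseNet and replace its classifier head for the five modes.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                      # epoch count is an assumption
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```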

Investigation of the Super-resolution Algorithm for the Prediction of Periodontal Disease in Dental X-ray Radiography (치주질환 예측을 위한 치과 X-선 영상에서의 초해상화 알고리즘 적용 가능성 연구)

  • Kim, Han-Na
    • Journal of the Korean Society of Radiology / v.15 no.2 / pp.153-158 / 2021
  • X-ray image analysis is a very important field for improving the early diagnosis rate and prediction accuracy of periodontal disease. Research on the development and application of artificial intelligence-based algorithms to improve the quality of such dental X-ray images is being conducted widely worldwide. Thus, the aim of this study was to design a super-resolution algorithm for predicting periodontal disease and to evaluate its applicability to dental X-ray images. The super-resolution algorithm was constructed from convolution layers and ReLU activations, and an image obtained by up-sampling a low-resolution image by a factor of 2 was used as the input. A total of 1,500 dental X-ray images were used for deep learning training. The images were evaluated quantitatively using the root mean square error and structural similarity, metrics that measure similarity by comparing two images. In addition, the recently developed no-reference natural image quality evaluator and blind/referenceless image spatial quality evaluator were also analyzed. The results confirmed that the average similarity and no-reference-based evaluation values improved by factors of 1.86 and 2.14, respectively, compared with the existing bicubic-based up-sampling method when the proposed method was used. In conclusion, the super-resolution algorithm for predicting periodontal disease proved useful for dental X-ray images and is expected to be highly applicable in various fields in the future.
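
The abstract describes a convolution-plus-ReLU network that takes a 2x up-sampled low-resolution image as input and is evaluated with RMSE and SSIM. Below is a minimal sketch of such an SRCNN-style pipeline; the layer widths, kernel sizes, and single-channel input are assumptions, since the abstract does not spell out the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSRNet(nn.Module):
    """SRCNN-style network: the input is a low-resolution radiograph already
    up-sampled 2x (e.g., bicubic), which the network refines with conv + ReLU."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def rmse(pred, target):
    """Root mean square error between reconstructed and reference images."""
    return torch.sqrt(F.mse_loss(pred, target))

# Usage: up-sample a low-resolution X-ray 2x, then refine it with the network.
lr_image = torch.rand(1, 1, 128, 128)                       # dummy low-resolution input
upsampled = F.interpolate(lr_image, scale_factor=2, mode="bicubic", align_corners=False)
sr_image = SimpleSRNet()(upsampled)
print(sr_image.shape, rmse(sr_image, torch.rand_like(sr_image)))
```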

Character Detection and Recognition of Steel Materials in Construction Drawings using YOLOv4-based Small Object Detection Techniques (YOLOv4 기반의 소형 물체탐지기법을 이용한 건설도면 내 철강 자재 문자 검출 및 인식기법)

  • Sim, Ji-Woo;Woo, Hee-Jo;Kim, Yoonhwan;Kim, Eung-Tae
    • Journal of Broadcast Engineering / v.27 no.3 / pp.391-401 / 2022
  • As deep learning-based object detection and recognition research has advanced in recent years, its range of applications in industry and everyday life has been expanding. However, deep learning-based systems for construction remain much less studied. Material quantity take-off in construction is still performed manually, which takes considerable time and makes accurate estimation difficult, so transactions based on incorrect quantity calculations still occur. A fast and accurate automatic drawing recognition system is required to solve this problem. Therefore, we propose an AI-based automatic drawing recognition and quantity take-off system that detects and recognizes steel materials in construction drawings. To accurately detect steel materials in construction drawings, we propose data augmentation techniques and spatial attention modules for improving small object detection performance based on YOLOv4. Text in the detected steel material regions is then recognized, and the quantities of steel materials are aggregated based on the predicted characters. Experimental results show that the proposed method improves accuracy and precision by 1.8% and 16%, respectively, compared with the conventional YOLOv4, achieving a precision of 0.938, a recall of 1.0, an AP0.5 of 99.4%, and an AP0.5:0.95 of 67%. Character recognition accuracy reached 99.9% when a suitable dataset containing the fonts used in construction drawings was constructed and used for training, compared with 75.6% using the existing dataset. The average time required per image was 0.013 seconds for detection, 0.65 seconds for character recognition, and 0.16 seconds for quantity aggregation, resulting in a total of 0.84 seconds.
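
The abstract mentions adding spatial attention modules to YOLOv4 to help small-object detection. One common form such a module takes is CBAM-style spatial attention, sketched below as an assumption about the general idea rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise average and max pooling are
    concatenated and passed through a convolution to produce a spatial mask."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)       # (B, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)     # (B, 1, H, W)
        mask = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                                    # re-weight the feature map spatially

# Example: re-weight a backbone feature map before it reaches the detection head.
features = torch.rand(1, 256, 52, 52)
print(SpatialAttention()(features).shape)   # torch.Size([1, 256, 52, 52])
```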

Performance Evaluation of Vision Transformer-based Pneumonia Detection Model using Chest X-ray Images (흉부 X-선 영상을 이용한 Vision transformer 기반 폐렴 진단 모델의 성능 평가)

  • Junyong Chang;Youngeun Choi;Seungwan Lee
    • Journal of the Korean Society of Radiology / v.18 no.5 / pp.541-549 / 2024
  • The various structures of artificial neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been extensively studied and have served as the backbone of numerous models. Among these, the transformer architecture has demonstrated its potential for natural language processing and has become a subject of in-depth research. This architecture can be adapted to image processing by modifying its internal structure, leading to the development of Vision Transformer (ViT) models, which have shown high accuracy and performance on large datasets. This study aims to develop a ViT-based model for detecting pneumonia in chest X-ray images and to quantitatively evaluate its performance. Various architectures of the ViT-based model were constructed by varying the number of encoder blocks, and different patch sizes were applied for network training. The performance of the ViT-based model was also compared with that of CNN-based models such as VGGNet, GoogLeNet, and ResNet. The results showed that the training efficiency and accuracy of the ViT-based model depended on the number of encoder blocks and the patch size, and the F1 scores of the ViT-based model ranged from 0.875 to 0.919. The training efficiency of the ViT-based model with a large patch size was superior to that of the CNN-based models, and the pneumonia detection accuracy of the ViT-based model was higher than that of VGGNet. In conclusion, the ViT-based model can potentially be used for pneumonia detection in chest X-ray images, and this study is expected to improve the clinical applicability of ViT-based models.
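
The study varies the number of encoder blocks and the patch size of a ViT trained on chest X-rays. A minimal sketch of such a sweep using torchvision's VisionTransformer is shown below; the hidden dimension, head count, and the particular depth/patch values are assumptions, and the two-class head stands in for normal vs. pneumonia.

```python
import torch
from torchvision.models.vision_transformer import VisionTransformer

def build_vit(num_layers: int, patch_size: int) -> VisionTransformer:
    """Build a ViT variant for binary pneumonia classification.
    Hidden/MLP widths and head count are assumptions, not values from the paper."""
    return VisionTransformer(
        image_size=224,
        patch_size=patch_size,
        num_layers=num_layers,       # number of encoder blocks varied in the study
        num_heads=8,
        hidden_dim=512,
        mlp_dim=2048,
        num_classes=2,
    )

# Sweep over encoder depths and patch sizes, as the abstract describes.
for depth in (4, 8, 12):
    for patch in (16, 32):
        model = build_vit(depth, patch)
        logits = model(torch.rand(1, 3, 224, 224))
        print(depth, patch, logits.shape)    # torch.Size([1, 2])
```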

An Implementation of Federated Learning based on Blockchain (블록체인 기반의 연합학습 구현)

  • Park, June Beom;Park, Jong Sou
    • The Journal of Bigdata / v.5 no.1 / pp.89-96 / 2020
  • Deep learning using artificial neural networks has recently been researched and developed in various fields such as image recognition, big data, and data analysis. Federated learning has emerged to address data privacy concerns and the growing cost and time required for training. It offers the benefits of a distributed processing system while solving these problems of conventional deep learning, but issues remain with the server-client architecture and with the lack of incentives for providing training data. Therefore, we replaced the role of the server in federated learning with a blockchain system and investigated how to solve the privacy and security problems associated with federated learning. In addition, we implemented a blockchain-based system that motivates users by paying compensation for the data they provide and that requires lower maintenance costs while maintaining the same accuracy as conventional training. In this paper, we present experimental results demonstrating the validity of the blockchain-based system and compare conventional federated learning with blockchain-based federated learning. Finally, as future work, we discuss solutions to remaining security problems and applicable business fields.
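
The abstract replaces the central server of federated learning with a blockchain and pays clients for the data they contribute. The sketch below shows only two generic ingredients, FedAvg-style parameter averaging and a toy proportional reward rule; the blockchain layer itself is not modeled, and the reward rule is an assumption rather than the paper's actual incentive scheme.

```python
from typing import Dict, List
import torch

def federated_average(client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """FedAvg-style aggregation: average the parameters submitted by each client.
    In the paper's setting this aggregation is coordinated by a blockchain
    instead of a central server; the chain itself is not modeled here."""
    averaged = {}
    for name in client_states[0]:
        averaged[name] = torch.stack([state[name] for state in client_states]).mean(dim=0)
    return averaged

def reward_clients(contributions: Dict[str, int], reward_per_sample: float = 0.01) -> Dict[str, float]:
    """Toy incentive rule (an assumption): pay each client in proportion to the
    number of training samples it contributed in the round."""
    return {client: n * reward_per_sample for client, n in contributions.items()}

# Example round with two clients holding identically structured models.
model_a = {"w": torch.tensor([1.0, 2.0]), "b": torch.tensor([0.5])}
model_b = {"w": torch.tensor([3.0, 4.0]), "b": torch.tensor([1.5])}
print(federated_average([model_a, model_b]))           # {'w': tensor([2., 3.]), 'b': tensor([1.])}
print(reward_clients({"client_a": 120, "client_b": 80}))
```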

Implementation of Image based Fire Detection System Using Convolution Neural Network (합성곱 신경망을 이용한 이미지 기반 화재 감지 시스템의 구현)

  • Bang, Sang-Wan
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.2 / pp.331-336 / 2017
  • The need for early fire detection technology is increasing in order to prevent fire disasters. Sensor devices for heat, smoke, and flame are widely used for fire detection, but such systems are limited by the conditions of the sensor environment. To solve these problems, many image-based fire detection systems are being developed. In this paper, we implemented a system that detects fire and smoke from camera input images using a convolutional neural network. The implemented system generates feature maps for smoke and fire images using the convolutional neural network, and classification of smoke and fire is learned on the generated feature maps. Experimental results on various images show that the system classifies smoke and fire effectively.
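
As a rough illustration of an image-based classifier like the one described (a convolutional network whose feature maps are classified into smoke and fire), here is a minimal sketch; the layer sizes, input resolution, and the three-class split (fire, smoke, normal) are assumptions.

```python
import torch
import torch.nn as nn

class FireSmokeCNN(nn.Module):
    """Small CNN mapping an input image to three classes (fire, smoke, normal).
    The layer sizes are illustrative assumptions; the paper only states that a
    convolutional network generates feature maps used for classification."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 28 * 28, num_classes)   # for 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

frame = torch.rand(1, 3, 224, 224)   # a camera frame resized to 224x224
print(FireSmokeCNN()(frame).shape)   # torch.Size([1, 3])
```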

The Study for Type of Mask Wearing Dataset for Deep learning and Detection Model (딥러닝을 위한 마스크 착용 유형별 데이터셋 구축 및 검출 모델에 관한 연구)

  • Hwang, Ho Seong;Kim, Dong heon;Kim, Ho Chul
    • Journal of Biomedical Engineering Research / v.43 no.3 / pp.131-135 / 2022
  • Wearing a mask correctly is important for preventing COVID-19 and other respiratory tract infections, and deep learning technology for image processing has advanced considerably. The purpose of this study is to build a dataset of mask-wearing types for deep learning models and to select a deep learning model that correctly detects how a mask is worn. The image dataset consists of 2,296 images acquired using a web crawler. Deep learning classification models provided by TensorFlow are used to validate the dataset, and YOLO object detection models are compared to select the model that best detects mask-wearing types. Through this process, the paper validates the mask-wearing dataset and identifies YOLOv5 as an effective model for detecting mask-wearing types. The experimental results show that a reliable dataset was acquired and that the YOLOv5 model effectively recognizes mask-wearing types.
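
The study selects YOLOv5 as the detector for mask-wearing types. A minimal inference sketch using the public Ultralytics torch.hub entry point is shown below; the image file name and the commented-out custom-weights path are hypothetical, since the paper trains YOLOv5 on its own dataset rather than using the stock COCO weights.

```python
import torch

# Load the public YOLOv5 small model from the Ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# For a model trained on mask-wearing classes, a custom checkpoint would be loaded instead
# (hypothetical path): torch.hub.load("ultralytics/yolov5", "custom", path="mask_wearing.pt")

results = model("people.jpg")              # hypothetical input image
detections = results.pandas().xyxy[0]      # bounding boxes with class labels and confidences
print(detections[["name", "confidence"]])
```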

Multi Domain Dialog State Tracking using Domain State (도메인 상태를 이용한 다중 도메인 대화 상태 추적)

  • Jeon, Hyunmin;Lee, Geunbae
    • Annual Conference on Human and Language Technology / 2020.10a / pp.421-426 / 2020
  • In multi-domain task-oriented dialog, previous deep learning-based approaches to dialog state tracking have studied models that take as input the user-system dialog accumulated over several turns and extract slot values. However, the computational cost of these models increases as the dialog grows longer. In this paper, we propose a method for extracting slot values in multi-domain dialog without the accumulated dialog history. However, simply discarding the history and taking only the current turn's utterance as input leads to a loss of contextual information. We therefore introduce a domain state and propose a model that tracks it together with the dialog state at every turn. By tracking the domain state, the model identifies which domain the dialog is currently about. In addition, the previous turn's dialog state and domain state, which carry condensed contextual information, are fed to the model together with the current turn's utterance, reducing the loss of information. Experiments on the representative MultiWOZ 2.0 and MultiWOZ 2.1 datasets show that the model achieves good dialog state tracking performance without using the dialog history. Moreover, by removing the dependence on system responses and past utterances, the model is expected to be easier to extend into an end-to-end dialog system.
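
The key idea in the abstract is to carry only the previous turn's dialog state and a domain state forward instead of the full dialog history. The sketch below illustrates that bookkeeping as a plain data structure; the field names and the MultiWOZ-style slot names are assumptions, and the neural model that produces the per-turn predictions is omitted.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TurnState:
    """State carried between turns: instead of the full dialog history, only the
    previous turn's dialog state and domain state are passed to the model together
    with the current utterance. Field names are illustrative assumptions."""
    dialog_state: Dict[str, str] = field(default_factory=dict)   # slot -> value
    domain_state: Dict[str, bool] = field(default_factory=dict)  # domain -> currently active?

def update(prev: TurnState,
           predicted_slots: Dict[str, str],
           predicted_domains: Dict[str, bool]) -> TurnState:
    """Merge the model's per-turn predictions into the carried state."""
    return TurnState(
        dialog_state={**prev.dialog_state, **predicted_slots},
        domain_state={**prev.domain_state, **predicted_domains},
    )

state = TurnState()
state = update(state, {"hotel-area": "centre"}, {"hotel": True})
state = update(state, {"restaurant-food": "korean"}, {"restaurant": True})
print(state.dialog_state, state.domain_state)
```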

The Influence and Impact of syntactic-grammatical knowledge on the Phonetic Outputs of a 'Reading Machine' (통사문법적 지식이 '독서기계'의 음성출력에 미치는 영향과 중요성)

  • Hong, Sungshim
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.225-230 / 2020
  • This paper highlights the influence and importance of syntactic-grammatical knowledge for "the reading machine" that appears in Jackendoff (1999). Because his research lacked detailed testing and implementation, this paper tests an extensive array of data using a component of Google Translate, currently the most freely and widely available such tool on the internet. Although dated, Jackendoff's paper, "Why can't Computers use English?", argues that syntactic-grammatical knowledge plays a key role in the outputs of computers and computer-based reading machines. The current research implements tests of his thought-provoking examples in order to find out whether Google Translate can handle the same problems after two decades or so. As a result, it is argued that in the field of NLP, I-language in the sense of Chomsky (1986, 1995, etc.) is real, and that syntactic, grammatical, and categorial knowledge is essential to the faculty of language. Therefore, this paper reaffirms that when it comes to human language, even the most advanced "machine" is still no match for the human faculty of language and its syntactic-grammatical knowledge.

A Study on Atmospheric Data Anomaly Detection Algorithm based on Unsupervised Learning Using Adversarial Generative Neural Network (적대적 생성 신경망을 활용한 비지도 학습 기반의 대기 자료 이상 탐지 알고리즘 연구)

  • Yang, Ho-Jun;Lee, Seon-Woo;Lee, Mun-Hyung;Kim, Jong-Gu;Choi, Jung-Mu;Shin, Yu-mi;Lee, Seok-Chae;Kwon, Jang-Woo;Park, Ji-Hoon;Jung, Dong-Hee;Shin, Hye-Jung
    • Journal of Convergence for Information Technology / v.12 no.4 / pp.260-269 / 2022
  • In this paper, we propose an anomaly detection model using a deep neural network to automate the identification of outliers in national air pollution measurement network data, a task previously performed by experts. We generated training data by analyzing missing values and outliers in atmospheric data provided by the Institute of Environmental Research, and, based on the unsupervised BeatGAN model, we propose a new model that changes the kernel structure and adds convolutional filter layers and transposed convolutional filter layers to improve anomaly detection performance. In addition, by utilizing the generative capability of the proposed model to implement a retraining algorithm that generates new data and uses it for training, we confirmed that the proposed model achieved the highest performance compared with the original BeatGAN model and other unsupervised learning models such as Isolation Forest and One-Class SVM. This study thus suggests a method to improve the anomaly detection performance of the proposed model while avoiding overfitting, without additional cost, in situations where training data are insufficient due to factors such as sensor abnormalities and inspections at actual industrial sites.
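
The abstract builds on BeatGAN with modified convolutional and transposed-convolutional layers and flags outliers from reconstruction quality. The sketch below shows only the reconstruction (autoencoder) path and a simple error-based score; the channel counts, window length, and thresholding rule are assumptions, and the adversarial discriminator and retraining loop from the paper are omitted.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Encoder-decoder over windows of air-quality time series, built from
    convolutional and transposed-convolutional layers as the abstract describes.
    Channel counts and the 64-sample window length are assumptions."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model: nn.Module, windows: torch.Tensor) -> torch.Tensor:
    """Per-window reconstruction error; high values flag candidate outliers."""
    with torch.no_grad():
        recon = model(windows)
    return ((windows - recon) ** 2).mean(dim=(1, 2))

windows = torch.rand(8, 1, 64)          # 8 windows of 64 measurements each
scores = anomaly_scores(ConvAutoencoder(), windows)
print(scores > scores.mean() + 2 * scores.std())   # simple thresholding rule (assumption)
```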