• Title/Summary/Keyword: Neural Net

Classification of Clothing Using Googlenet Deep Learning and IoT based on Artificial Intelligence (인공지능 기반 구글넷 딥러닝과 IoT를 이용한 의류 분류)

  • Noh, Sun-Kuk
    • Smart Media Journal
    • /
    • v.9 no.3
    • /
    • pp.41-45
    • /
    • 2020
  • Recently, artificial intelligence (AI) and the Internet of Things (IoT), represented among the IT technologies of the Fourth Industrial Revolution by machine learning and deep learning, have been applied to everyday life in a wide range of fields. In this paper, IoT and AI with object recognition technology are applied to classify clothing. For this purpose, an image dataset was captured with a webcam and a Raspberry Pi, and GoogLeNet, a convolutional neural network, was applied to the captured image data via transfer learning. The clothing image dataset was divided into two categories (shirtwaist, trousers) with 900 clean images and 900 loss images, 1,800 images in total. The measurements showed a classification accuracy of about 97.78% on the clean clothing images. In conclusion, the results confirm that, with additional image data in the future, the approach can be extended to other objects using artificial intelligence networks on an IoT-based platform.
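
A minimal sketch of the kind of GoogLeNet transfer learning described above, for a two-class (shirtwaist/trousers) clothing classifier. The directory layout, hyperparameters, and training loop are illustrative assumptions, not details from the paper; a recent torchvision (>= 0.13) is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load an ImageNet-pretrained GoogLeNet and replace the classifier head
# with a two-class layer (shirtwaist vs. trousers).
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.aux_logits = False  # use only the main head in this sketch
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: clothing/train/{shirtwaist,trousers}/*.jpg
train_set = datasets.ImageFolder("clothing/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```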

MSaGAN: Improved SaGAN using Guide Mask and Multitask Learning Approach for Facial Attribute Editing

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.37-46
    • /
    • 2020
  • Recently, studies of facial attribute editing have obtained realistic results using generative adversarial networks (GAN) and encoder-decoder structures. Spatial attention GAN (SaGAN), one of the most recent approaches, can change only the desired attribute in a face image through a spatial attention mechanism. However, it sometimes produces unnatural results due to insufficient information on face areas. In this paper, we propose an improved SaGAN (MSaGAN) that uses a guide mask during training and applies a multitask learning approach to address the limitations of the existing method. Through extensive experiments, we evaluated the facial attribute editing results in terms of the mask loss function and the neural network structure. The results show that the proposed method can efficiently produce more natural results than previous methods.
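
The spatial attention mechanism that SaGAN-style editors rely on can be illustrated in a few lines: the generator's edit is blended into the input image only where an attention mask is active, and MSaGAN additionally supervises that mask with a guide mask. The function interfaces and the L1 form of the mask loss below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def blend(input_img, edited_img, attention_mask):
    """Apply the attribute edit only where the spatial attention mask is active."""
    return attention_mask * edited_img + (1.0 - attention_mask) * input_img

def guide_mask_loss(attention_mask, guide_mask):
    """Encourage the learned attention mask to follow a facial-region guide mask (assumed L1 form)."""
    return F.l1_loss(attention_mask, guide_mask)

# Toy usage with random tensors standing in for network outputs.
x = torch.rand(1, 3, 128, 128)      # input face image
g = torch.rand(1, 3, 128, 128)      # attribute-edited image from the generator
m = torch.rand(1, 1, 128, 128)      # spatial attention mask in [0, 1]
guide = torch.rand(1, 1, 128, 128)  # guide mask, e.g. from facial segmentation

edited = blend(x, g, m)
mask_loss = guide_mask_loss(m, guide)
```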

A Fully Convolutional Network Model for Classifying Liver Fibrosis Stages from Ultrasound B-mode Images (초음파 B-모드 영상에서 FCN(fully convolutional network) 모델을 이용한 간 섬유화 단계 분류 알고리즘)

  • Kang, Sung Ho;You, Sun Kyoung;Lee, Jeong Eun;Ahn, Chi Young
    • Journal of Biomedical Engineering Research
    • /
    • v.41 no.1
    • /
    • pp.48-54
    • /
    • 2020
  • In this paper, we deal with a liver fibrosis classification problem using ultrasound B-mode images. Representative methods for classifying the stages of liver fibrosis include liver biopsy and diagnosis based on ultrasound images. The overall liver shape and the smoothness or roughness of the speckle pattern in the ultrasound image are used to determine the fibrosis stage. Although ultrasound-image-based classification is frequently used as an alternative or complement to invasive biopsy, it has the limitation that the staging decision depends on image quality and the doctor's experience. With the rapid development of deep learning algorithms, several studies have applied deep learning methods to automated liver fibrosis classification and have shown superior performance with high accuracy. The performance of those deep learning methods depends closely on the amount of data. We propose an enhanced U-Net architecture to maximize classification accuracy with a limited amount of image data. U-Net is well known as a neural network for fast and precise segmentation of medical images; here we redesign it for the purpose of classifying liver fibrosis stages. To assess the performance of the proposed architecture, numerical experiments are conducted on a total of 118 ultrasound B-mode images acquired from 78 patients with liver fibrosis symptoms in stages F0-F4. The experimental results show that the proposed architecture performs much better than transfer learning using a pre-trained VGGNet model.
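
One plausible way to repurpose a U-Net-style encoder for staging is sketched below: convolutional encoder blocks with downsampling, global average pooling, and a five-way head for stages F0-F4. This is not the authors' architecture; the channel sizes and input resolution are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in a U-Net encoder stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class FibrosisStageClassifier(nn.Module):
    def __init__(self, num_stages=5):  # F0-F4
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(128, num_stages)

    def forward(self, x):
        return self.head(self.pool(self.encoder(x)).flatten(1))

# B-mode images are single-channel; a batch of 4 images at 256x256 as an example.
logits = FibrosisStageClassifier()(torch.randn(4, 1, 256, 256))
```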

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.3
    • /
    • pp.384-390
    • /
    • 2020
  • The reverberation effect on sound when producing movies or VR content is a very important factor for realism and liveliness. The reverberation time appropriate to a space is specified by the RT60 (Reverberation Time 60 dB) standard. In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification is limited when trained on color information alone because interior structures are similar across scenes, so a deep-learning-based depth estimation technique is used to obtain spatial depth information. Based on RT60, 10 scene classes were constructed, and model training and evaluation were conducted. Finally, the proposed SCR + DNet (Scene Classification for Reverb + Depth Net) classifier achieves higher performance than conventional CNN classifiers, with 92.4% accuracy.
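
A hypothetical sketch of a two-branch classifier in the spirit of SCR + DNet: one CNN branch for the RGB image, one for the predicted depth map, fused before a 10-way scene head matching the RT60-based classes. The backbones, fusion scheme, and layer sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
from torchvision import models

class ColorDepthSceneNet(nn.Module):
    def __init__(self, num_scenes=10):
        super().__init__()
        # Color branch: standard 3-channel ResNet-18 used as a feature extractor.
        self.color_branch = models.resnet18(weights=None)
        self.color_branch.fc = nn.Identity()   # 512-d features
        # Depth branch: same backbone with a single-channel first convolution.
        self.depth_branch = models.resnet18(weights=None)
        self.depth_branch.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        self.depth_branch.fc = nn.Identity()   # 512-d features
        self.head = nn.Linear(512 + 512, num_scenes)

    def forward(self, rgb, depth):
        features = torch.cat([self.color_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(features)

logits = ColorDepthSceneNet()(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
```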

Application of CCTV Image and Semantic Segmentation Model for Water Level Estimation of Irrigation Channel (관개용수로 CCTV 이미지를 이용한 CNN 딥러닝 이미지 모델 적용)

  • Kim, Kwi-Hoon;Kim, Ma-Ga;Yoon, Pu-Reun;Bang, Je-Hong;Myoung, Woo-Ho;Choi, Jin-Yong;Choi, Gyu-Hoon
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.64 no.3
    • /
    • pp.63-73
    • /
    • 2022
  • A more accurate understanding of the irrigation water supply is necessary for efficient agricultural water management. Although water levels in irrigation canals are measured with ultrasonic water level gauges, errors occur due to malfunctions or the surrounding environment. This study applies CNN (Convolutional Neural Network) deep-learning-based image classification and segmentation models to CCTV (Closed-Circuit Television) images of an irrigation canal. The CCTV images were acquired from the irrigation canal of an agricultural reservoir in Cheorwon-gun, Gangwon-do. We used the ResNet-50 model for image classification and the U-Net model for image segmentation. Using the Natural Breaks algorithm, we divided the water level data into 2, 4, and 8 groups for the image classification models. The 2-, 4-, and 8-group classification models showed accuracies of 1.000, 0.987, and 0.634, respectively. The image segmentation model showed a Dice score of 0.998, and the predicted water levels showed an R2 of 0.97 and an MAE (Mean Absolute Error) of 0.02 m. The image classification models can be applied to automatic gate control using four water-level divisions, and the image segmentation results can serve as an alternative measurement to ultrasonic water gauges. We expect that the results of this study can provide a more scientific and efficient approach for agricultural water management.
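
Dividing the measured water levels into groups with the Natural Breaks (Jenks) algorithm might look like the sketch below. The jenkspy package is one common implementation (the paper does not name a library), and the sample level values are made up.

```python
import numpy as np
import jenkspy  # one common Jenks natural-breaks implementation (assumed)

# Made-up water level measurements in metres.
levels = [0.12, 0.15, 0.33, 0.35, 0.60, 0.62, 0.90, 0.95, 1.10]

# Break points for 4 groups: [min, b1, b2, b3, max].
breaks = jenkspy.jenks_breaks(levels, 4)

# Assign each measurement a class index 0..3 using the interior break points.
labels = np.digitize(levels, breaks[1:-1])
print(breaks, labels)
```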

Short-Term Crack in Sewer Forecasting Method Based on CNN-LSTM Hybrid Neural Network Model (CNN-LSTM 합성모델에 의한 하수관거 균열 예측모델)

  • Jang, Seung-Ju;Jang, Seung-Yup
    • Journal of the Korean Geosynthetics Society
    • /
    • v.21 no.2
    • /
    • pp.11-19
    • /
    • 2022
  • In this paper, we propose a method combining GoogLeNet transfer learning with a CNN-LSTM model to improve time-series prediction performance for crack detection, using crack data captured inside sewer pipes. The LSTM compensates for the CNN's inability to model long-term dependencies, so spatial and temporal characteristics can be considered at the same time. Comparing the RMSE (Root Mean Square Error) over time-series sections of the sewer-pipe crack data, the proposed method shows excellent predictive performance for all test variables. In addition, an examination of prediction performance at the time of data generation verified that the proposed method is more effective for crack prediction than the existing CNN-only model. The proposed method and the experimental results of this study can be applied not only to crack data of concrete structures but also to other fields, such as environmental and humanities data, where time series occur frequently.
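
A hypothetical sketch of the kind of CNN-LSTM hybrid described: a pre-trained GoogLeNet extracts per-frame features from a sequence of sewer-pipe crack images, and an LSTM models the temporal dependency before a small prediction head. The sequence length, hidden size, and single-value output are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class CrackCNNLSTM(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
        backbone.aux_logits = False
        backbone.fc = nn.Identity()              # 1024-d feature per frame
        self.cnn = backbone
        self.lstm = nn.LSTM(1024, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # crack score for the next time step

    def forward(self, frames):                   # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])             # predict from the last time step

prediction = CrackCNNLSTM()(torch.randn(2, 8, 3, 224, 224))
```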

Integrated Water Resources Management in the Era of Great Transition

  • Ashkan Noori;Seyed Hossein Mohajeri;Milad Niroumand Jadidi;Amir Samadi
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.34-34
    • /
    • 2023
  • The Chah-Nimeh reservoirs, a group of natural lakes located on the border of Iran and Afghanistan, are the main drinking and agricultural water resources of the arid Sistan region. Considering the occurrence of an intense seasonal wind, locally known as the levar wind, this study explores the possibility of building a TSM (Total Suspended Matter) monitoring model of the Chah-Nimeh reservoirs using multi-temporal satellite images and in-situ wind speed data. The results show a strong correlation between TSM concentration and wind speed. The developed empirical model showed high performance in retrieving the spatiotemporal distribution of TSM concentration, with R2=0.98 and RMSE=0.92 g/m3. Following this observation, we also consider a machine-learning-based model that predicts the average TSM using only wind speed. We connect our in-situ wind speed data to the TSM data generated from the inversion of multi-temporal satellite imagery to train a neural-network-based model (Wind2TSM-Net). Examining the Wind2TSM-Net model indicates that it can retrieve TSM accurately using only wind speed (R2=0.88 and RMSE=1.97 g/m3). Moreover, the results of this study show that TSM concentration can be estimated using only in-situ wind speed data, independent of satellite images. Such a model supplies a temporally persistent means of monitoring TSM that is not limited by the temporal resolution of imagery or by cloud cover in optical remote sensing.
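
A minimal sketch of a Wind2TSM-style regressor: a small fully connected network mapping in-situ wind speed to mean TSM concentration. The layer sizes and the synthetic training data are illustrative only; only the idea of predicting TSM from wind speed comes from the abstract.

```python
import torch
import torch.nn as nn

# Small MLP: wind speed (m/s) -> predicted mean TSM (g/m3).
wind2tsm = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Toy training loop on synthetic (wind speed, TSM) pairs.
wind = torch.rand(256, 1) * 20.0
tsm = 0.8 * wind + torch.randn(256, 1) * 0.5
optimizer = torch.optim.Adam(wind2tsm.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(wind2tsm(wind), tsm)
    loss.backward()
    optimizer.step()
```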

Automatically Diagnosing Skull Fractures Using an Object Detection Method and Deep Learning Algorithm in Plain Radiography Images

  • Tae Seok, Jeong;Gi Taek, Yee;Kwang Gi, Kim;Young Jae, Kim;Sang Gu, Lee;Woo Kyung, Kim
    • Journal of Korean Neurosurgical Society
    • /
    • v.66 no.1
    • /
    • pp.53-62
    • /
    • 2023
  • Objective: Deep learning is a machine learning approach based on artificial neural network training, and object detection algorithms using deep learning are among the most powerful tools in image analysis. We analyzed and evaluated the diagnostic performance of a deep learning algorithm for identifying skull fractures in plain radiographic images and investigated its clinical applicability. Methods: A total of 2026 plain radiographic images of the skull (fracture, 991; normal, 1035) were obtained from 741 patients. The RetinaNet architecture was used as the deep learning model. Precision, recall, and average precision were measured to evaluate the algorithm's diagnostic performance. Results: With ResNet-152, the average precision at intersection over union (IoU) thresholds of 0.1, 0.3, and 0.5 was 0.7240, 0.6698, and 0.3687, respectively. When the IoU and confidence thresholds were both 0.1, the precision was 0.7292 and the recall was 0.7650. When the IoU threshold was 0.1 and the confidence threshold was 0.6, the true and false rates were 82.9% and 17.1%, respectively. There were significant differences in the true/false and false-positive/false-negative ratios between the anterior-posterior, Towne, and both lateral views (p=0.032 and p=0.003). False-positive detections involved vascular grooves and suture lines, while detection performance was poor for diastatic fractures, fractures crossing the suture line, and fractures around the vascular grooves and orbit (false negatives). Conclusion: The object detection algorithm applied with deep learning is expected to be a valuable tool in diagnosing skull fractures.
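
A hypothetical sketch of running a trained RetinaNet detector on a skull radiograph with torchvision. The checkpoint path, input file, class count, and the 0.6 confidence threshold are assumptions, not the authors' released code; torchvision's ResNet-50-backbone RetinaNet is used here for simplicity, whereas the abstract reports results with ResNet-152.

```python
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import retinanet_resnet50_fpn

# Assumed label set: one fracture class plus background.
model = retinanet_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("skull_fracture_retinanet.pth"))  # assumed checkpoint
model.eval()

# Radiographs are often stored as grayscale; force 3 channels for the detector.
image = read_image("skull_ap_view.png", mode=ImageReadMode.RGB).float() / 255.0

with torch.no_grad():
    detections = model([image])[0]

keep = detections["scores"] > 0.6  # confidence threshold, as in the abstract
print(detections["boxes"][keep], detections["scores"][keep])
```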

Development of Methodology for Measuring Water Level in Agricultural Water Reservoir through Deep Learning Analysis of CCTV Images (딥러닝 기법을 이용한 농업용저수지 CCTV 영상 기반의 수위계측 방법 개발)

  • Joo, Donghyuk;Lee, Sang-Hyun;Choi, Gyu-Hoon;Yoo, Seung-Hwan;Na, Ra;Kim, Hayoung;Oh, Chang-Jo;Yoon, Kwang-Sik
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.65 no.1
    • /
    • pp.15-26
    • /
    • 2023
  • This study evaluated the performance of water level classification from CCTV images of agricultural facilities such as reservoirs. CCTV systems are widely used for facility monitoring and disaster detection, and with new technologies such as deep learning they can now automatically detect and identify people and objects in images. Accordingly, we applied the ResNet-50 deep learning model, based on a Convolutional Neural Network, to analyze the water level of agricultural reservoirs from CCTV images obtained from the TOMS (Total Operation Management System) of the Korea Rural Community Corporation. The accuracy of water level detection was improved by measures such as excluding night and rainfall CCTV images; for example, the error rate in the Bakseok reservoir decreased significantly from 24.39% to 1.43%. We believe the utilization of CCTVs can be further expanded for calculating water supply amounts and establishing supply plans under the integrated water management policy.
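
The abstract reports that accuracy improved after excluding night and rainfall frames. A simple mean-brightness filter like the one below is one plausible way to drop night frames automatically; the threshold and the use of mean grayscale intensity are assumptions, not the authors' stated procedure.

```python
import numpy as np
from PIL import Image

def is_night_frame(path, brightness_threshold=40.0):
    """Return True if the frame's mean grayscale intensity falls below the threshold."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return gray.mean() < brightness_threshold

# Assumed file names; real frames would come from the TOMS CCTV archive.
frames = ["cctv_0001.jpg", "cctv_0002.jpg"]
daytime_frames = [f for f in frames if not is_night_frame(f)]
```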

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users. However, it is generated manually by video content providers. In this paper, a method for automatically generating metadata is studied as an alternative to the existing manual metadata input method. In addition to the emotion-tag extraction method of our previous study, we study a method for automatically generating metadata for genre and country of production from movie audio. The genre is extracted from the audio spectrogram using a ResNet34 artificial neural network with transfer learning, and the language of the speakers in the movie is detected through speech recognition. Through this, the possibility of automatically generating metadata with artificial intelligence was confirmed.
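
A hypothetical sketch of the audio pipeline described: compute a mel spectrogram from movie audio with librosa and classify its genre with a ResNet34 whose first layer accepts a single channel. The file name, number of genre classes, and spectrogram parameters are assumptions; replacing the first convolution also discards its pretrained weights.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# Load audio and compute a log-scaled mel spectrogram (assumed parameters).
y, sr = librosa.load("movie_clip.wav", sr=22050)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

# ResNet34 with a single-channel input layer and an assumed 8-genre head.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 8)
model.eval()

x = torch.tensor(mel_db, dtype=torch.float32)[None, None]  # (1, 1, n_mels, frames)
with torch.no_grad():
    genre_logits = model(x)
```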