• Title/Summary/Keyword: deep learning models

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • The quality control of coarse aggregate, a main ingredient of concrete, is currently carried out by statistical process control (SPC) through sampling. We build toward a smart factory for manufacturing innovation by replacing the current sieve analysis with an inspection of coarse aggregates based on images acquired by camera. First, the acquired images are preprocessed, and HED (Holistically-Nested Edge Detection), an edge filter learned by deep learning, segments each object. Each aggregate is then analyzed by image processing of the segmentation result, from which the fineness modulus and the aggregate shape rate are determined. The quality of the aggregate captured on video was thus examined by calculating the fineness modulus and the aggregate shape rate, and the algorithm agreed with sieve analysis to better than 90% accuracy. Furthermore, while the aggregate shape rate cannot be examined by conventional methods, the method in this paper also allows its measurement. The aggregate shape rate was verified against model objects of known length, showing a difference of ±4.5%; for aggregate length measurement, the algorithm result differed from the actual length by ±6%. Estimating inherently three-dimensional quantities from two-dimensional video introduces a deviation from the actual data, which requires further research.
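
As a concrete illustration of the shape-rate step, the sketch below derives per-aggregate length, width, and an elongation-style shape rate from a binary segmentation mask such as the HED output. This is a minimal sketch assuming OpenCV; the function name, pixel-to-millimeter scale, minimum-area threshold, and the elongation definition of the shape rate are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch, assuming OpenCV: contours of the (hypothetical) HED-derived
# binary mask give each aggregate an oriented bounding box, hence a length,
# width, and elongation-style shape rate.
import cv2
import numpy as np

def aggregate_metrics(mask: np.ndarray, mm_per_px: float):
    """Return (length_mm, width_mm, shape_rate) for each detected aggregate."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < 20:                  # illustrative noise floor
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)       # oriented bounding box
        long_px, short_px = max(w, h), min(w, h)
        results.append((long_px * mm_per_px,
                        short_px * mm_per_px,
                        long_px / max(short_px, 1e-6)))  # elongation as shape-rate proxy
    return results
```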

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • Technological development has driven digital innovation throughout the media industry, including production and editing techniques, and the OTT and streaming era has diversified how consumers view content. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but studies that reflect the author's intention and generate contextually smooth stories remain scarce. In this paper, we describe the flow of pictures in a storyboard with image caption generation techniques and automatically generate story-tailored scenarios through language models. Using a CNN with an attention mechanism for image captioning, we generate sentences describing the pictures on the storyboard and feed the generated sentences into the natural language processing model KoGPT-2 to automatically generate scenarios that match the planning intention. In this way, scenarios customized to the author's intention and story can be produced in large quantities, easing the burden of content creation and letting artificial intelligence participate in the overall digital content production process.
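
The caption-to-scenario loop can be sketched as follows: each caption seeds the publicly released KoGPT-2 model (skt/kogpt2-base-v2), and the tail of the accumulated story is fed back with the next caption, mirroring the recursive calls in the title. Prompt construction, sampling settings, and the helper name are assumptions, not the paper's configuration.

```python
# Minimal sketch of recursive story growth with KoGPT-2. The context window
# slicing and generation parameters below are illustrative assumptions.
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="</s>", eos_token="</s>",
    unk_token="<unk>", pad_token="<pad>", mask_token="<mask>")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

def extend_story(story: str, caption: str, max_new: int = 64) -> str:
    prompt = story[-512:] + " " + caption            # "recursive" context feed
    ids = tokenizer.encode(prompt, return_tensors="pt")
    out = model.generate(ids, max_new_tokens=max_new, do_sample=True,
                         top_p=0.9, pad_token_id=tokenizer.pad_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

story = ""
for caption in ["한 소년이 바닷가를 걷는다.", "소년이 낡은 배를 발견한다."]:
    story = extend_story(story, caption)             # one call per storyboard image
```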

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data, with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For validation, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score relative to CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand across the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded ICCs of 0.99 for all vessels combined (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The Bland-Altman limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa for Agatston score categorization was 0.94. The main causes of false positives were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). Conclusion: The atlas-based CAC_auto system, empowered by deep learning, provided calcium scores and risk-category classifications that agreed closely with the manual method, and could potentially streamline CAC imaging workflows.
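
For reference, the Agatston score that both CAC_auto and CAC_hand produce follows a standard rule: voxels at or above 130 HU are grouped into connected lesions, and each lesion contributes its area in mm² times a density weight (1-4) keyed to its peak HU. The sketch below implements that rule per axial slice; it omits the minimum-lesion-area filter and per-vessel assignment, and the function name is illustrative.

```python
# Sketch of the standard Agatston rule for one axial CT slice.
import numpy as np
from scipy import ndimage

def agatston_slice(hu: np.ndarray, px_area_mm2: float) -> float:
    labels, n = ndimage.label(hu >= 130)             # calcium threshold (HU)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        peak = hu[lesion].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += lesion.sum() * px_area_mm2 * weight # area (mm^2) x density weight
    return score
```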

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.21-29 / 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but it significantly degrades image resolution and generates noise that reduces diagnostic effectiveness. Noise removal is therefore an essential step in CT image restoration. To date, methods that separate the noise from the original signal in the spatial domain have had limited success in removing only the noise. In this paper, we aim to remove noise from CT images effectively with a wavelet-transform-based GAN, the WT-GAN model, operating in the frequency domain. The GAN generates denoised images through a U-Net-structured generator and a PatchGAN-structured discriminator. To evaluate the WT-GAN model, experiments were conducted on CT images corrupted by various noise types: Gaussian, Poisson, and speckle. The WT-GAN model outperformed a traditional filter (BM3D) as well as existing deep learning models (DnCNN, CDAE, and U-Net GAN) in both qualitative and quantitative terms, as measured by PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
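
The wavelet front end implied by "WT-GAN" can be sketched as a 2-D DWT that splits each CT image into four subbands, which can be stacked as generator input channels and reassembled after denoising. This is a minimal sketch assuming PyWavelets and a Haar wavelet; the subband layout is an assumption, and the U-Net generator and PatchGAN discriminator themselves are omitted.

```python
# Minimal sketch of wavelet-domain preprocessing for a denoising network.
import numpy as np
import pywt

def to_subbands(img: np.ndarray) -> np.ndarray:
    LL, (LH, HL, HH) = pywt.dwt2(img, "haar")        # 2-D discrete wavelet transform
    return np.stack([LL, LH, HL, HH], axis=0)        # (4, H/2, W/2) generator input

def from_subbands(sb: np.ndarray) -> np.ndarray:
    LL, LH, HL, HH = sb                              # denoised subbands
    return pywt.idwt2((LL, (LH, HL, HH)), "haar")    # reassembled image
```

Evaluation with PSNR and SSIM as in the paper can then use `skimage.metrics.peak_signal_noise_ratio` and `skimage.metrics.structural_similarity` on the reconstructed image.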

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society / v.40 no.4 / pp.69-79 / 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. Current practice, however, involves manual data entry, which is time-consuming, labor-intensive, and prone to errors. This study therefore proposes an automatic data extraction method for geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model combined with a text-searching algorithm classified geotechnical investigation report pages with 100% accuracy. Computer vision algorithms identified valid data regions within report pages, and text analysis matched and extracted the corresponding geotechnical data. The proposed model was validated on a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to support practical application of the extraction model: users upload PDF files of geotechnical investigation reports, the reports are analyzed automatically, and the extracted data can be reviewed and edited. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
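
The page-screening step can be illustrated with a short sketch: open a report PDF, extract each page's text, and flag candidate test-result pages by keyword search. This simple rule stands in for the paper's deep-learning page classifier; the PyMuPDF calls are real, but the keyword list and generator name are illustrative assumptions.

```python
# Minimal sketch of PDF page screening for a geotechnical report pipeline.
import fitz  # PyMuPDF

KEYWORDS = ("표준관입시험", "N치", "시추주상도")   # illustrative geotechnical terms

def candidate_pages(pdf_path: str):
    """Yield (page_index, text) for pages that look like test-result pages."""
    doc = fitz.open(pdf_path)
    for i, page in enumerate(doc):
        text = page.get_text()
        if any(k in text for k in KEYWORDS):
            yield i, text
```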

A Study on Detecting Abnormal Air Quality Data Related to Vehicle Emissions Using a Deep Learning Model (딥러닝 모델 기반의 자동차 배출가스 관련 대기환경 이상 데이터 탐지 연구)

  • Jungmu Choi;Jangwoo Kwon;Junpyo Lee;Sunwoo Lee;Park Jung Min;Shin Hye Jung;An Chan Jung;Kang Soyoung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.5 / pp.261-273 / 2024
  • Automobiles are among the major sources of air pollution, and analyzing air pollutant data for which vehicles are the primary source can help elucidate the correlation between factors such as electric vehicle adoption and traffic volume and actual air pollution. Ensuring the reliability of air pollutant data is crucial for such analyses. This paper proposes a method for detecting sections of data exhibiting 'baseline anomalies' in measurements from air pollutant monitoring stations across the country, combining deep learning models with dynamic time warping and change-point detection algorithms. Previous studies focused on detecting data with unprecedented patterns and defined those as anomalies, an approach not suited to detecting baseline anomalies. In this study, we modify the U-Net model, typically used for image segmentation, to better suit time-series data, and apply dynamic time warping and change-point detection to comparisons with nearby monitoring stations, thereby minimizing false detections.
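
The rule-based stage described above can be sketched as follows: a station's series is compared with a nearby station via dynamic time warping, and change-point detection (PELT from the `ruptures` package) then proposes candidate baseline-shift segments. The DTW threshold, penalty value, and function name are illustrative assumptions; the paper's modified U-Net is not shown.

```python
# Minimal sketch: DTW neighbor comparison plus change-point detection.
import numpy as np
import ruptures as rpt
from fastdtw import fastdtw

def baseline_anomaly_spans(series: np.ndarray, neighbor: np.ndarray,
                           dtw_threshold: float = 50.0):
    dist, _ = fastdtw(series, neighbor)              # dissimilarity to nearby station
    if dist < dtw_threshold:                          # agrees with neighbor: no anomaly
        return []
    breaks = rpt.Pelt(model="rbf").fit(series.reshape(-1, 1)).predict(pen=10)
    return list(zip([0] + breaks[:-1], breaks))       # candidate anomalous segments
```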

Deep Learning based Brachial Plexus Ultrasound Images Segmentation by Leveraging an Object Detection Algorithm (객체 검출 알고리즘을 활용한 딥러닝 기반 상완 신경총 초음파 영상의 분할에 관한 연구)

  • Kukhyun Cho;Hyunseung Ryu;Myeongjin Lee;Suhyung Park
    • Journal of the Korean Society of Radiology / v.18 no.5 / pp.557-566 / 2024
  • Ultrasound-guided regional anesthesia is one of the most common techniques for peripheral nerve blockade, improving pain control and recovery time. However, accurate brachial plexus (BP) nerve detection and identification remains challenging, even for experienced anesthesiologists, because of acquisition difficulties such as speckle and Doppler artifacts. To mitigate this, we introduce a BP nerve small-target segmentation network that incorporates BP object detection and U-Net-based semantic segmentation into a single multi-scale deep learning framework. BP detection and identification proceed in two stages: 1) a RetinaNet model roughly locates the BP nerve region using multi-scale feature representations, and 2) a U-Net then performs segmentation, fed with the BP nerve features at each scale. The experimental results demonstrate that the proposed model produces high-quality BP segmentation, improving BP nerve identification accuracy over competing segmentation-only models by first roughly locating the BP nerve area.
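
The two-stage pipeline can be sketched with off-the-shelf pieces: a RetinaNet detector proposes the BP region, and the crop is passed to a U-Net for per-pixel segmentation. The torchvision detector (COCO-pretrained here) merely stands in for the paper's fine-tuned model, and `unet` is assumed to be any segmentation network returning one logit map.

```python
# Minimal detect-then-segment sketch; assumes the detector has been fine-tuned
# on ultrasound data and returns at least one box.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

detector = retinanet_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def segment_bp(image: torch.Tensor, unet: torch.nn.Module) -> torch.Tensor:
    """image: (3, H, W) float in [0, 1]; returns a BP probability map for the crop."""
    det = detector([image])[0]                        # stage 1: rough BP localization
    x1, y1, x2, y2 = det["boxes"][det["scores"].argmax()].int().tolist()
    crop = image[:, y1:y2, x1:x2].unsqueeze(0)        # assumes crop size suits the U-Net
    return unet(crop).sigmoid().squeeze(0)            # stage 2: per-pixel segmentation
```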

A Comparative Study on Reservoir Level Prediction Performance Using a Deep Neural Network with ASOS, AWS, and Thiessen Network Data

  • Hye-Seung Park;Hyun-Ho Yang;Ho-Jun Lee;Jongwook Yoon
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.67-74 / 2024
  • In this paper, we analyze how different rainfall measurement methods affect the performance of reservoir water level predictions, a timely question given the increasing emphasis on climate change and the sustainable management of water resources. To this end, we trained neural network models for reservoir water level prediction on rainfall data from ASOS, AWS, and Thiessen-network-based measurements provided by the KMA Weather Data Service. Our analysis, which covers 34 reservoirs in Jeollabuk-do Province, examines how each method contributes to prediction accuracy. The results reveal that models using rainfall data based on the Thiessen network's area rainfall ratio yield the highest accuracy. This can be attributed to the method's accounting for the precise distances between observation stations, giving a more faithful reflection of the actual rainfall across different regions. These findings underscore the importance of precise regional rainfall data in predicting reservoir levels, as well as the significance of meticulous rainfall measurement and data analysis; the prediction model's potential applications in agriculture, urban planning, and flood management are also discussed.
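
The Thiessen-network input that performed best can be sketched as nearest-gauge (Voronoi) area weighting: every point of the catchment takes the rainfall of its nearest station, so basin rainfall is a station-weighted average by Voronoi area. The point-sampling discretization below is an illustrative stand-in for exact polygon areas.

```python
# Minimal sketch of Thiessen (Voronoi) area-weighted basin rainfall.
import numpy as np
from scipy.spatial import cKDTree

def thiessen_rainfall(stations_xy: np.ndarray, rain_mm: np.ndarray,
                      basin_xy: np.ndarray) -> float:
    """stations_xy: (S, 2); rain_mm: (S,); basin_xy: (P, 2) sample points in the basin."""
    nearest = cKDTree(stations_xy).query(basin_xy)[1]   # Voronoi assignment per point
    weights = np.bincount(nearest, minlength=len(rain_mm)) / len(basin_xy)
    return float(weights @ rain_mm)                     # area-weighted basin rainfall
```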

Estimation of Duck House Litter Evaporation Rate Using Machine Learning (기계학습을 활용한 오리사 바닥재 수분 발생량 분석)

  • Kim, Dain;Lee, In-bok;Yeo, Uk-hyeon;Lee, Sang-yeon;Park, Sejun;Decano, Cristina;Kim, Jun-gyu;Choi, Young-bae;Cho, Jeong-hwa;Jeong, Hyo-hyeog;Kang, Solmoe
    • Journal of The Korean Society of Agricultural Engineers / v.63 no.6 / pp.77-88 / 2021
  • The duck industry has grown rapidly in recent years. Nevertheless, research on improving the duck house environment is still insufficient. Moisture generation in duck house litter is an important factor because it may cause severe illness and low productivity. Direct measurement is difficult, however, because it is disturbed by animal excrement and other factors, so moisture generation has to be estimated from the environmental data around the litter. To this end, we built several machine learning regression models that forecast litter moisture generation from measured environmental data (air temperature, relative humidity, wind velocity, and water content). Five models were selected for regression: multiple linear regression, k-nearest neighbors, support vector regression, random forest, and a deep neural network. Using R-squared, RMSE, and MAE as evaluation metrics, the most accurate model was identified for each combination of input variables. In addition, to address the small amount of data acquired through lab experiments, bootstrapping, a resampling technique from statistics, was used. The most accurate model was the random forest with 200 estimators, trained after bootstrapping the original data nine times.
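
The winning configuration reported above can be sketched with scikit-learn: bootstrap-augment the small lab dataset nine times, fit a 200-tree random forest, and score with R², RMSE, and MAE. The column layout, helper names, and the presence of a held-out test set are illustrative assumptions.

```python
# Minimal sketch of bootstrap augmentation plus a 200-tree random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)

def bootstrap_augment(X, y, copies=9):
    idx = rng.integers(0, len(X), size=copies * len(X))   # resample with replacement
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

# X: (n, 4) = air temperature, relative humidity, wind velocity, water content
# y: (n,)   = litter moisture generation
def fit_and_score(X, y, X_test, y_test):
    Xb, yb = bootstrap_augment(X, y, copies=9)
    model = RandomForestRegressor(n_estimators=200).fit(Xb, yb)
    pred = model.predict(X_test)
    return (r2_score(y_test, pred),
            mean_squared_error(y_test, pred) ** 0.5,      # RMSE
            mean_absolute_error(y_test, pred))
```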

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • Selecting an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to predict future data efficiently, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulty choosing among them because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated each algorithm by comparing prediction errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for prediction to compare accuracy across the algorithms. The effects of activation functions and various optimizers on model performance were also analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance in terms of error (with the DNN, GRU, and LSTM) and the worst in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
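
The kind of models compared can be sketched in Keras: a GRU or LSTM regressor over windowed time-series input, compiled with the Adam optimizer and scored by RMSE. Window length, layer width, and training settings below are illustrative assumptions, not the paper's setup.

```python
# Minimal Keras sketch of the GRU/LSTM regressors being compared.
import tensorflow as tf

def make_model(cell: str, window: int = 12, features: int = 1) -> tf.keras.Model:
    layer = {"gru": tf.keras.layers.GRU, "lstm": tf.keras.layers.LSTM}[cell]
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, features)),   # windowed time-series input
        layer(64),                                  # recurrent cell under comparison
        tf.keras.layers.Dense(1),                   # scalar prediction
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Example: gru = make_model("gru"); gru.fit(X_train, y_train, epochs=50)
```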