• Title/Summary/Keyword: AI-based image analysis


Application Verification of AI&Thermal Imaging-Based Concrete Crack Depth Evaluation Technique through Mock-up Test (Mock-up Test를 통한 AI 및 열화상 기반 콘크리트 균열 깊이 평가 기법의 적용성 검증)

  • Jeong, Sang-Gi;Jang, Arum;Park, Jinhan;Kang, Chang-hoon;Ju, Young K.
    • Journal of Korean Association for Spatial Structures
    • /
    • v.23 no.3
    • /
    • pp.95-103
    • /
    • 2023
  • With the increasing number of aging buildings across Korea, emerging maintenance technologies have surged. One such technology is the non-contact detection of concrete cracks via thermal images. This study aims to develop a technique that can accurately predict the depth of a crack by analyzing the temperature difference between the cracked part and the normal part in a thermal image of the concrete. The research obtained temperature data through thermal imaging experiments and constructed a big data set including outdoor variables such as air temperature, illumination, and humidity that can influence temperature differences. Based on the collected data, the team designed an algorithm for learning and predicting the crack depth using machine learning. Initially, standardized crack specimens were used in experiments, and the big data set was then updated with specimens similar to actual cracks. Finally, a crack depth prediction technology was implemented using five regression analysis algorithms on approximately 24,000 data points. To confirm the practicality of the developed technique, crack simulators with various shapes were added to the study.
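The regression step this abstract describes can be sketched minimally as follows. The feature (crack-to-normal temperature difference) and the synthetic training pairs are illustrative assumptions, and a single least-squares line stands in for the five regression algorithms evaluated in the study.

```python
# Sketch: predict crack depth from a thermal-image temperature difference.
# Data values below are hypothetical, not the paper's 24,000-point dataset.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical pairs: (temperature difference in deg C, crack depth in mm).
temp_diff = [0.2, 0.5, 0.8, 1.1, 1.4]
depth_mm = [5.0, 12.0, 19.0, 26.0, 33.0]

a, b = fit_linear(temp_diff, depth_mm)

def predict_depth(dt):
    """Predicted crack depth (mm) for a given temperature difference."""
    return a * dt + b
```

In practice each sample would also carry the outdoor variables (air temperature, illumination, humidity) as extra features, which is what motivates multivariate regression models rather than a single line.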

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.63-80
    • /
    • 2022
  • As technologies that analyze and predict issues by reconstructing real space in virtual space have developed, acquiring precise spatial information in complex cities has become increasingly important. In this study, images were acquired using an unmanned aerial vehicle over an urban area with a complex landscape, and land cover classification was performed using object-based image analysis and semantic segmentation, image classification techniques suited to high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of each artificial intelligence (AI) model's land cover classification was examined for areas the models had not learned. When the AI models were trained on the training site, land cover classification accuracy was 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When the models were applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed well in both classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a basic data generation method.
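The replicability check in this abstract reduces to comparing pixel-wise accuracy on the training site against an unseen assessment site. A minimal sketch, with tiny hypothetical label lists standing in for the classified UAV imagery:

```python
# Sketch: per-pixel accuracy on a training site vs. an unseen site,
# and the accuracy drop used as the replicability measure.
# Labels are illustrative (e.g. 0=building, 1=road, 2=vegetation).

def accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

train_truth = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]
train_pred = [0, 0, 1, 1, 2, 2, 2, 0, 1, 0]    # 9/10 correct

assess_truth = [0, 1, 1, 2, 2, 0, 1, 2, 2, 0]
assess_pred = [0, 1, 1, 2, 0, 0, 1, 2, 0, 0]   # 8/10 correct

acc_train = accuracy(train_pred, train_truth)
acc_assess = accuracy(assess_pred, assess_truth)
drop = acc_train - acc_assess  # smaller drop = better replicability
```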

Development of Cloud-Based Medical Image Labeling System and Its Quantitative Analysis of Sarcopenia (클라우드기반 의료영상 라벨링 시스템 개발 및 근감소증 정량 분석)

  • Lee, Chung-Sub;Lim, Dong-Wook;Kim, Ji-Eon;Noh, Si-Hyeong;Yu, Yeong-Ju;Kim, Tae-Hoon;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.7
    • /
    • pp.233-240
    • /
    • 2022
  • Most recent AI research has focused on developing AI models. Recently, however, AI research has gradually shifted from model-centric to data-centric, and with this trend the importance of training data is receiving much attention. Preparing training data takes up a significant part of the entire process and requires considerable time and effort, and the labeling data to be generated also differ depending on the purpose of development. A tool with various labeling functions is therefore needed to address these unmet needs. In this paper, we describe a labeling system for creating precise labeling data of medical images quickly. To implement this, a semi-automatic method using back projection and GrabCut techniques and an automatic method predicted through a machine learning model were implemented. We not only showed the proposed system's advantage in running time for generating labeling data, but also showed its superiority through a comparative evaluation of accuracy. In addition, by analyzing an image data set of about 1,000 patients, meaningful diagnostic indexes for men and women were presented for the diagnosis of sarcopenia.
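The back-projection step named in this abstract can be sketched in one dimension (grayscale intensity): each pixel is scored by the ratio of the object-sample histogram to the whole-image histogram. The tiny "image" and "object sample" arrays are illustrative; the described system applies the same idea to 2-D medical images before GrabCut refinement.

```python
# Sketch: 1-D histogram back projection. Pixels whose intensity falls in
# bins dominated by the object sample receive a score near 1.0.

def histogram(values, bins=4, vmax=255):
    """Coarse intensity histogram with equal-width bins over [0, vmax]."""
    h = [0] * bins
    for v in values:
        h[min(v * bins // (vmax + 1), bins - 1)] += 1
    return h

def back_project(image, obj_hist, img_hist, bins=4, vmax=255):
    """Score each pixel by object-to-image histogram ratio, capped at 1."""
    out = []
    for v in image:
        b = min(v * bins // (vmax + 1), bins - 1)
        ratio = obj_hist[b] / img_hist[b] if img_hist[b] else 0.0
        out.append(min(ratio, 1.0))
    return out

image = [10, 20, 200, 210, 15, 220]   # hypothetical grayscale pixels
obj_sample = [205, 215, 210]          # pixels known to belong to the target

scores = back_project(image, histogram(obj_sample), histogram(image))
```

The score map would then seed a foreground mask for GrabCut-style refinement (in a real pipeline, OpenCV's back projection and GrabCut would replace this toy version).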

Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.1
    • /
    • pp.103-110
    • /
    • 2022
  • Since satellite images generally include clouds in the atmosphere, it is essential to detect or mask clouds before satellite image processing. Previous research detected clouds using their physical characteristics; more recently, cloud detection methods using deep learning techniques for image segmentation, such as CNNs or the modified U-Net, have been studied. Since image segmentation is the process of assigning a label to every pixel in an image, a precise pixel-based dataset is required for cloud detection, and obtaining an accurate training dataset is more important than the network configuration. Existing deep learning techniques used different training datasets, with test datasets extracted from the intra-dataset acquired by the same sensor and procedure as the training dataset; these differing datasets make it difficult to determine which network shows better overall performance. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained using the cloud dataset from KOMPSAT-3 images provided by the AIHUB site and the L8-Cloud dataset from Landsat8 images publicly released by a Cloud-Net author. Test data from the intra-dataset of the KOMPSAT-3 cloud dataset were used for validating the networks. The simulation results show that the network trained with the KOMPSAT-3 cloud dataset performs better than the network trained with the L8-Cloud dataset. Because Landsat8 and KOMPSAT-3 satellite images have different GSDs, it is difficult to achieve good results from cross-sensor validation: a network can be superior on the intra-dataset but inferior on cross-sensor data. Techniques that perform well on cross-sensor validation datasets need to be studied in the future.
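Scoring a cloud mask against a pixel-level reference, as in the validation described above, is commonly done with intersection over union. A minimal sketch with illustrative binary masks (in practice these would come from KOMPSAT-3 or Landsat8 test scenes):

```python
# Sketch: IoU for binary cloud masks, comparing a same-sensor prediction
# with a cross-sensor prediction against one reference mask.

def iou(pred, truth):
    """Intersection over union for binary masks (1 = cloud pixel)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

truth_mask = [1, 1, 0, 0, 1, 0, 1, 1]
intra_pred = [1, 1, 0, 0, 1, 0, 1, 0]  # model trained on the same sensor
cross_pred = [1, 0, 0, 1, 1, 0, 0, 1]  # model trained on another sensor

iou_intra = iou(intra_pred, truth_mask)
iou_cross = iou(cross_pred, truth_mask)
```

A lower cross-sensor score than intra-sensor score is exactly the gap the abstract attributes to the differing GSDs of the two satellites.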

Evolutionary Computation-based Hybrid Clustering Technique for Manufacturing Time Series Data (제조 시계열 데이터를 위한 진화 연산 기반의 하이브리드 클러스터링 기법)

  • Oh, Sanghoun;Ahn, Chang Wook
    • Smart Media Journal
    • /
    • v.10 no.3
    • /
    • pp.23-30
    • /
    • 2021
  • Although manufacturing time series data clustering is an important grouping solution for detecting and improving equipment and process defects based on large manufacturing data, applying existing clustering techniques designed for static data to time series data yields low accuracy. In this paper, an evolutionary computation-based time series cluster analysis approach is presented to improve the coherence of existing clustering techniques. To this end, the image produced by the manufacturing process is first converted into one-dimensional time series data using linear scanning, and optimal sub-clusters for hierarchical cluster analysis and split cluster analysis are derived on the transformed data based on the Pearson distance metric. Finally, using a genetic algorithm, an optimal cluster combination with minimal similarity is derived from the two cluster analysis results. The superiority of the proposed clustering is verified by comparing its performance with existing clustering techniques on actual manufacturing process images.
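The Pearson distance metric underlying the hierarchical step can be sketched as follows. The short series are illustrative, not manufacturing data, and a single greedy merge stands in for the full hybrid (hierarchical + split + genetic-algorithm) pipeline.

```python
# Sketch: Pearson distance between time series and one greedy
# agglomerative merge of the closest pair.

def pearson_distance(a, b):
    """1 - Pearson correlation: 0 for identical shape, 2 for inverse."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return 1.0 - cov / (sa * sb)

series = {
    "s1": [1, 2, 3, 4],
    "s2": [2, 4, 6, 8],   # same shape as s1 (scaled)
    "s3": [4, 3, 2, 1],   # inverse shape
}

# Greedy agglomerative step: find and merge the closest pair.
pairs = [(pearson_distance(series[i], series[j]), i, j)
         for i in series for j in series if i < j]
best = min(pairs)  # (distance, name_a, name_b)
```

Because the distance depends on shape rather than magnitude, s1 and s2 merge first even though their raw values differ, which is the property that makes this metric suitable for time series profiles.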

Research Trend of the Remote Sensing Image Analysis Using Deep Learning (딥러닝을 이용한 원격탐사 영상분석 연구동향)

  • Kim, Hyungwoo;Kim, Minho;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.819-834
    • /
    • 2022
  • Artificial Intelligence (AI) techniques have been used effectively for image classification, object detection, and image segmentation. With the recent advancement of computing power, deep learning models can build deeper and wider networks and achieve better performance through more appropriate feature maps based on effective activation functions and optimizer algorithms. This review paper examined the technical and academic trends of Convolutional Neural Network (CNN) and Transformer models, which are emerging techniques in remote sensing, and suggested utilization strategies and development directions. Future work will require a timely supply of satellite images and real-time deep learning processing to cope with disaster monitoring. In addition, a big data platform dedicated to satellite images should be developed and integrated with drone and Closed-circuit Television (CCTV) images.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education, many efforts have been actively made to identify current technology trends and analyze their development directions. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly as a result; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology is also greatly indebted to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI that have been developed through the online collaboration of many parties. This study searched and collected a list of major AI-related projects generated from 2000 to July 2018 on Github, and confirmed the development trends of major technologies in detail by applying text mining to topic information, which indicates the characteristics and technical fields of the collected projects. The analysis showed that fewer than 100 software development projects were created per year until 2013. The number increased to 229 projects in 2014 and 597 projects in 2015, and the number of AI-related open source projects rose rapidly in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics; after 2016, programming languages other than Python disappeared from the top ten, replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality; otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing only slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015; in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
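The degree-centrality measure used in the topic network analysis above can be sketched as follows. The toy co-occurrence edges between topics are illustrative, not the Github data from the study.

```python
# Sketch: degree centrality on an undirected topic co-occurrence graph.
# A node's centrality is the fraction of other nodes it connects to.

def degree_centrality(edges):
    """Map each node to degree / (n - 1) for n nodes in the edge list."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {k: v / (n - 1) for k, v in deg.items()}

# Hypothetical topic co-occurrences across OSS projects.
edges = [
    ("machine-learning", "deep-learning"),
    ("machine-learning", "tensorflow"),
    ("machine-learning", "computer-vision"),
    ("deep-learning", "tensorflow"),
]

cent = degree_centrality(edges)
top_topic = max(cent, key=cent.get)
```

This is the same quantity networkx's `degree_centrality` would return; the frequency ranking in the study is simply a count of topic appearances, so a topic can rank high on one measure and low on the other, as the abstract notes for computer vision.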

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.4
    • /
    • pp.27-39
    • /
    • 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation, including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study explores an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis, and images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on the RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
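The color-distribution step in this abstract can be sketched minimally: each pixel is assigned to its dominant channel and per-channel ratios are computed. The tiny pixel list is an illustrative stand-in for the smartphone photos of sieved soil that the study processes with OpenCV.

```python
# Sketch: RGB color-distribution ratios over an image's pixels.

def channel_ratios(pixels):
    """Return the fraction of pixels dominated by R, G, and B."""
    counts = {"R": 0, "G": 0, "B": 0}
    for r, g, b in pixels:
        dominant = max((("R", r), ("G", g), ("B", b)), key=lambda c: c[1])
        counts[dominant[0]] += 1
    n = len(pixels)
    return {k: v / n for k, v in counts.items()}

# Hypothetical pixels from a reddish, clay-like soil sample.
pixels = [(120, 80, 60), (130, 90, 70), (110, 85, 60), (90, 100, 70)]
ratios = channel_ratios(pixels)
```

In the described pipeline such summary ratios complement the CNN, which instead learns texture cues directly from the image.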

Development of online drone control management information platform (온라인 드론방제 관리 정보 플랫폼 개발)

  • Lim, Jin-Taek;Lee, Sang-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.22 no.4
    • /
    • pp.193-198
    • /
    • 2021
  • Recently, interest in the Fourth Industrial Revolution has raised the level of demand for pest control among rice farmers, along with interest in and use of agricultural pest control drones. The diversification of agricultural control drones that spray high-concentration pesticides, and the growing number of agricultural exterminators holding national drone certifications, are rapidly developing the agricultural sector of the drone industry. An effective platform is therefore required to construct and process large-scale big data covering pesticide management, exterminator management, precise spraying, pest control workload classification, settlement, soil management, and the prediction and monitoring of pest damage. However, studies in South Korea and other countries on models and programs that integrate and process such big data, including data analysis algorithms, image analysis algorithms, growth management algorithms, and AI algorithms, are insufficient. This paper proposes an online drone pest control management information platform to meet the needs of managers and farmers in the agricultural field and to realize precise AI-based pest control using agricultural drones, and presents a foundation for developing a comprehensive management system through empirical experiments.

A Comparative Study of Deep Learning Models for Pneumonia Detection: CNN, VUNO, LUNIT Models (폐렴 및 정상군 판별을 위한 딥러닝 모델 성능 비교연구: CNN, VUNO, LUNIT 모델 중심으로)

  • Ji-Hyeon Lee;Soo-Young Ye
    • Journal of Radiation Industry
    • /
    • v.18 no.3
    • /
    • pp.177-182
    • /
    • 2024
  • The purpose of this study is to develop a CNN-based deep learning model that can effectively detect pneumonia by analyzing chest X-ray images of adults over the age of 20, and to compare it with VUNO and LUNIT, commercialized AI models. The chest X-ray image data were evaluated based on accuracy, precision, recall, F1 score, and AUC score. The CNN model recorded an accuracy of 82%, precision of 76%, recall of 99%, F1 score of 86%, and AUC score of 0.7937. The VUNO model recorded an accuracy of 84%, precision of 81%, recall of 94%, F1 score of 87%, and AUC score of 0.8233. The LUNIT model recorded an accuracy of 77%, precision of 72%, recall of 96%, F1 score of 83%, and AUC score of 0.7436. In the confusion matrix analysis, the CNN model showed the fewest false negatives (FN = 3) and the highest recall (99%) in the diagnosis of pneumonia. The VUNO model showed excellent overall performance with high accuracy (84%) and AUC score (0.8233), while the LUNIT model showed high recall (96%) but relatively low accuracy and precision. By comprehensively considering the performance of the models needed to effectively discriminate between pneumonia and normal groups, this study can provide basic data useful for the development of a pneumonia diagnosis system.
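The metrics compared in this abstract all derive from the confusion matrix. A minimal sketch; the TP/FP/FN/TN counts below are illustrative, not the study's chest X-ray results.

```python
# Sketch: accuracy, precision, recall, and F1 from confusion-matrix counts
# (positive class = pneumonia).

def metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, F1) for binary classification."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1

# Hypothetical counts for a pneumonia classifier: note how a small FN
# drives recall up even when FP keeps precision lower.
acc, prec, rec, f1 = metrics(tp=90, fp=20, fn=3, tn=87)
```

The trade-off visible here (high recall, lower precision) mirrors the CNN model's profile in the study, where FN = 3 yielded 99% recall at 76% precision.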