• Title/Summary/Keyword: automatic detection


Unmanned Multi-Sensor based Observation System for Frost Detection - Design, Installation and Test Operation (서리 탐지를 위한 '무인 다중센서 기반의 관측 시스템' 고안, 설치 및 시험 운영)

  • Kim, Suhyun;Lee, Seung-Jae;Son, Seungwon;Cho, Sungsik;Jo, Eunsu;Kim, Kyurang
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.2 / pp.95-114 / 2022
  • This study presented the possibility of automatic frost observation and the acquisition of related image data through the design and installation of a Multiple-sensor based Frost Observation System (MFOS). The MFOS consists of an RGB camera, a thermal camera, and a leaf wetness sensor, and the devices play complementary roles. During test operation of the equipment before the occurrence of frost, the voltage value of the leaf wetness sensor increased when high relative humidity was maintained without precipitation. In the case of Gapyeong-gun, high relative humidity was maintained due to the surrounding agricultural waterways, so the voltage value increased significantly. In the RGB camera images, the leaf wetness sensor and the ground surface were not visible before sunrise and after sunset, but were visible for the rest of the time. In the case of precipitation, the voltage value of the leaf wetness sensor rose rapidly during the precipitation period and decreased after the precipitation ended. In the RGB camera images, the leaf wetness sensor and the surface were visible regardless of precipitation, whereas in the thermal camera images, which were still captured during precipitation, the leaf wetness sensor and the surface were not visible. In the case where actual frost occurred, it was confirmed that the voltage value of the leaf wetness sensor was higher than the range corresponding to frost, but frost was observed on the ground and on the equipment surface in the RGB camera images.

A Comparison of Pre-Processing Techniques for Enhanced Identification of Paralichthys olivaceus Disease based on Deep Learning (딥러닝 기반 넙치 질병 식별 향상을 위한 전처리 기법 비교)

  • Kang, Ja Young;Son, Hyun Seung;Choi, Han Suk
    • The Journal of the Korea Contents Association / v.22 no.3 / pp.71-80 / 2022
  • In the past, fish diseases in aqua farms were mainly bacterial, but in recent years the frequency of fish diseases has increased as they have become viral and mixed. Viral diseases spread quickly in the enclosed space of an aqua farm, so they are very likely to lead to mass death. Fast identification of fish diseases is therefore important to prevent mass mortality. However, diagnosing fish diseases requires a high level of expertise, and it is difficult to visually check the condition of the fish at all times. To prevent the spread of disease, an automatic system for identifying fish diseases is needed. In this paper, to improve the performance of a deep learning-based disease identification system for Paralichthys olivaceus, existing pre-processing methods are compared and tested. The target diseases were the three most frequent diseases of Paralichthys olivaceus: Scutica, Vibrio, and Lymphocystis. RGB, HLS, HSV, LAB, LUV, XYZ, and YCrCb were used as image pre-processing color spaces. The experimental results showed that HLS produced better results than plain RGB. It is expected that the fish disease identification system can be advanced by improving the recognition rate of diseases in this simple way.
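The pre-processing comparison above comes down to converting each training image into a different color space before it is fed to the classifier. Below is a minimal OpenCV sketch of that step, assuming BGR input images as loaded by OpenCV; the file name, image size, and helper name are illustrative, not the authors' code.

```python
# A minimal sketch of the color-space pre-processing step compared in the paper,
# assuming OpenCV-style BGR input images; file names and sizes are illustrative only.
import cv2

def preprocess(path, color_space="HLS", size=(224, 224)):
    """Load an image and convert it to the requested color space before training."""
    img = cv2.imread(path)                      # OpenCV loads images as BGR
    img = cv2.resize(img, size)
    conversions = {
        "RGB": cv2.COLOR_BGR2RGB,
        "HLS": cv2.COLOR_BGR2HLS,
        "HSV": cv2.COLOR_BGR2HSV,
        "LAB": cv2.COLOR_BGR2LAB,
        "LUV": cv2.COLOR_BGR2LUV,
        "XYZ": cv2.COLOR_BGR2XYZ,
        "YCrCb": cv2.COLOR_BGR2YCrCb,
    }
    return cv2.cvtColor(img, conversions[color_space])

# Example: build HLS inputs for a classifier instead of plain RGB.
# hls_image = preprocess("flounder_scutica_001.jpg", "HLS")
```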

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management / 2009.05a / pp.30-31 / 2009
  • Early detection of schedule delay in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach for progress monitoring to promptly identify, process, and communicate discrepancies between actual and as-planned performances as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) Current progress monitoring is time-consuming as it requires extensive as-planned and as-built data collection; (2) The excessive amount of work required may cause human errors and reduce the quality of manually collected data, and since only an approximate visual inspection is usually performed, the collected data tend to be subjective; (3) Existing methods of progress monitoring are non-systematic and may create a time-lag between the time progress is reported and the time progress is actually accomplished; (4) Progress reports are visually complex and do not reflect spatial aspects of construction; and (5) Current reporting methods increase the time required to describe and explain progress in coordination meetings and, in turn, could delay the decision-making process. In summary, with current methods, it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research focuses on exploring the application of unsorted daily progress photograph logs - available on any construction site - as well as IFC-based 4D models for progress monitoring. Our approach is based on computing, from the images themselves, the photographer's locations and orientations, along with a sparse 3D geometric representation of the as-built scene using daily progress photographs, and superimposing the reconstructed scene over the as-planned 4D model. Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components and, consequently, a location-based image processing technique to be implemented and progress data to be extracted automatically. The results of the progress comparison between as-planned and as-built performances can subsequently be visualized in the D4AR - 4D Augmented Reality - environment using a traffic light metaphor. In such an environment, project participants would be able to: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performances; and 5) visually represent progress discrepancies through superimposition of 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants. We present our preliminary results on two ongoing construction projects and discuss the implementation, perceived benefits, and potential future enhancement of this new technology in construction on all fronts of automatic data collection, processing, and communication.


Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.306-313 / 2023
  • Realistic, graphics-based virtual reality content is built on 360-degree videos, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree videos and proposes a parallel computing structure for extracting multiple viewports. The viewport extraction process in 360-degree videos is parallelized by assigning pixel-wise threads to the 3D spherical surface coordinate transformation from ERP coordinates and the 2D coordinate transformation of the 3D spherical surface coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 simultaneous viewport extractions in aerial 360-degree video sequences, confirming up to 5,240 times acceleration compared to the CPU-based computation time, which is proportional to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be further accelerated by 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree videos or virtual reality content and to video summarization services for individual users.
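The per-pixel mapping described above (ERP coordinates to 3D spherical surface coordinates to 2D viewport coordinates) is what makes the extraction parallelizable, since every viewport pixel can be computed independently. The following NumPy sketch shows the same mapping in vectorized form, assuming a pinhole viewport with a given field of view and a yaw/pitch viewing direction; the function name, sampling method, and axis conventions are illustrative assumptions, not the paper's implementation.

```python
# A minimal NumPy sketch of the per-pixel mapping used for viewport extraction from an
# equirectangular (ERP) frame; each output pixel is computed independently, which is the
# property exploited by pixel-wise GPU threads.
import numpy as np

def viewport_from_erp(erp, vp_w, vp_h, fov_deg, yaw_deg, pitch_deg):
    """Map every viewport pixel to an ERP pixel (nearest-neighbour sampling)."""
    erp_h, erp_w = erp.shape[:2]
    f = 0.5 * vp_w / np.tan(np.radians(fov_deg) / 2)      # focal length in pixels

    # Rays through each viewport pixel in camera coordinates (x right, y down, z forward).
    x, y = np.meshgrid(np.arange(vp_w) - vp_w / 2 + 0.5,
                       np.arange(vp_h) - vp_h / 2 + 0.5)
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays by the viewing direction (pitch about x, then yaw about y).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = rays @ (ry @ rx).T

    # Spherical surface coordinates -> ERP pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])            # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))           # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (erp_w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (erp_h - 1)).astype(int)
    return erp[v, u]

# Example: a 640x480 viewport looking 30 degrees to the right with a 90-degree FOV.
# viewport = viewport_from_erp(erp_frame, 640, 480, 90, 30, 0)
```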

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.199-206 / 2023
  • This study uses deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared for their performance in classifying construction waste: VGG-16, a convolutional neural network image classification algorithm, and ViT (Vision Transformer), an NLP-derived model that treats an image as a sequence. Image data for construction waste were collected by crawling images from search engines worldwide, and 3,000 images, 1,000 for each category, were obtained after excluding images that were difficult to distinguish with the naked eye or that were duplicates and would interfere with the experiment. In addition, to improve the accuracy of the models, data augmentation was performed during training, yielding a total of 30,000 images. Despite the unstructured nature of the collected image data, the experimental results showed that VGG-16 achieved an accuracy of 91.5% and ViT achieved an accuracy of 92.7%. This suggests the possibility of practical application in actual construction waste data management work. If object detection or semantic segmentation techniques are applied on the basis of this study, more precise classification will be possible even within a single image, resulting in more accurate waste classification.
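As a rough illustration of the setup the paper compares, the sketch below builds augmented inputs and adapts ImageNet-pretrained VGG-16 and ViT-B/16 models to the three waste classes using torchvision. The specific transforms, weights, and heads are assumptions for illustration, not the authors' exact configuration.

```python
# A minimal PyTorch sketch of augmentation plus fine-tuning for three waste classes;
# transforms and model variants are illustrative assumptions, not the paper's setup.
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

NUM_CLASSES = 3  # wood, plastic, concrete waste

# VGG-16 with its classifier head replaced for the three waste classes.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

# ViT-B/16 with its classification head replaced the same way.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)
```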

Drone-mounted fruit recognition algorithm and harvesting mechanism for automatic fruit harvesting (자동 과일 수확을 위한 드론 탑재형 과일 인식 알고리즘 및 수확 메커니즘)

  • Joo, Kiyoung;Hwang, Bohyun;Lee, Sangmin;Kim, Byungkyu;Baek, Joong-Hwan
    • Journal of Aerospace System Engineering / v.16 no.1 / pp.49-55 / 2022
  • The role of drones has expanded to various fields such as agriculture, construction, and logistics. In particular, agricultural drones are emerging as an effective alternative for solving the problem of labor shortage and reducing input costs. In this study, therefore, we propose a fruit recognition algorithm and a harvesting mechanism for a fruit-harvesting drone system that can safely harvest fruits at high positions. For fruit recognition, we employ "You Only Look Once" (YOLO), a deep learning-based object detection algorithm, and verify its feasibility by establishing a virtual simulation environment. In addition, we propose a fruit harvesting mechanism that can be operated by a single driving motor. The rotational motion of the motor is converted into linear motion by a scotch yoke, and the opened gripper moves forward, grips a fruit, and rotates it for harvesting. The feasibility of the proposed mechanism is verified by multi-body dynamics analysis.
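For context, the sketch below shows what deep learning-based fruit detection of this kind looks like in code, using the Ultralytics YOLO library with a generic pretrained model and a hypothetical drone frame; it is not the authors' trained detector or dataset.

```python
# A minimal sketch of YOLO-style fruit detection; the library, weights, and the "apple"
# class filter are assumptions for illustration, not the authors' model or training data.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained general-purpose detector
results = model("orchard_frame.jpg")            # a hypothetical drone camera frame

for box in results[0].boxes:
    label = model.names[int(box.cls)]
    if label == "apple":                        # keep only fruit detections
        x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding box to aim the gripper at
        print(label, float(box.conf), (x1, y1, x2, y2))
```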

Monitoring of some heavy metals in oriental animality medicines (동물성 생약에 함유되어 있는 몇 가지 중금속에 대한 실태 조사)

  • Baek, Sunyoung;Chung, Jaeyoen;Lee, Jihye;Park, Kyungsu;Kang, Inho;Kang, Sinjung;Kim, Yunje
    • Analytical Science and Technology / v.22 no.3 / pp.201-209 / 2009
  • Four heavy metals (Pb, Cd, As, and Hg) in 38 species (325 samples in total) of oriental animality medicines were monitored by inductively coupled plasma-mass spectrometry (ICP-MS) and an automatic mercury analyzer (AMA). The detected concentrations of Pb, Cd, and As ranged from 0.02 µg/kg (detection limit) to 11.29 mg/kg, from 0.01 µg/kg (D.L.) to 2.50 mg/kg, and from 0.12 µg/kg (D.L.) to 5.27 mg/kg, respectively. In the case of Hg, the concentration range was 0.01 to 77.11 mg/kg, except for one sample that did not exceed the detection limit. In all samples of Amydae Carapax and Gallnut, no metal was detected above the maximum residue limits. Pb accounted for the greatest portion of the contamination in 22 species of animality medicines, and in the case of Hg, 54.46% of the total samples exceeded the maximum residue limit. Therefore, continued research on the environmental levels of Pb and Hg and studies tracking the pollution sources are required.

Detection Fastener Defect using Semi Supervised Learning and Transfer Learning (준지도 학습과 전이 학습을 이용한 선로 체결 장치 결함 검출)

  • Sangmin Lee;Seokmin Han
    • Journal of Internet Computing and Services / v.24 no.6 / pp.91-98 / 2023
  • Recently, with the development of artificial intelligence, a wide range of industries are being automated and optimized, and in the domestic rail industry there is research on using supervised learning to detect rail defects. However, there are structures other than the rails on the track; the fastener is the device that binds the rail to these structures, and periodic inspections are required to prevent safety accidents. In this paper, we present a method of reducing labeling cost by using semi-supervised learning and a transfer model trained on rail fastener data. We use ResNet50, pretrained on ImageNet, as the backbone network. We first randomly take training data from the unlabeled data, label them, and train the model. After predicting the unlabeled data with the trained model, we add the data with the highest prediction probability for each class to the training data, a predetermined number at a time. Furthermore, we conducted experiments to investigate the influence of the number of initially labeled data. As a result of the experiments, the model reaches 92% accuracy, a performance difference of around 5% compared to fully supervised learning. The proposed method is expected to improve the performance of the classifier using relatively few labels, without additional labeling processes.
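The pseudo-labeling procedure described above can be summarized in a short loop: fine-tune a pretrained ResNet50 on the small labeled set, score the unlabeled pool, and move the most confident predictions per class into the training set before retraining. The PyTorch sketch below illustrates one such round; the data loader format, per-round sizes, and class count are assumptions for illustration.

```python
# A minimal PyTorch sketch of one pseudo-labeling round with an ImageNet-pretrained
# ResNet50 backbone; loader format, per-round size, and class count are illustrative.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes=2):
    """ResNet50 pretrained on ImageNet with a new classification head."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def pseudo_label_round(model, unlabeled_loader, per_class=50, num_classes=2, device="cpu"):
    """Return (sample index, predicted label) pairs with the highest confidence per class."""
    model.eval()
    scored = []  # (confidence, dataset index, predicted class)
    with torch.no_grad():
        for images, indices in unlabeled_loader:   # loader yields images and their indices
            probs = torch.softmax(model(images.to(device)), dim=1)
            conf, pred = probs.max(dim=1)
            scored += list(zip(conf.cpu().tolist(), indices.tolist(), pred.cpu().tolist()))
    selected = []
    for c in range(num_classes):
        cls_items = sorted((s for s in scored if s[2] == c), reverse=True)[:per_class]
        selected += [(idx, c) for _, idx, c in cls_items]
    return selected  # append these to the labeled set, then retrain and repeat
```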

Generation of Time-Series Data for Multisource Satellite Imagery through Automated Satellite Image Collection (자동 위성영상 수집을 통한 다종 위성영상의 시계열 데이터 생성)

  • Yunji Nam;Sungwoo Jung;Taejung Kim;Sooahm Rhee
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1085-1095 / 2023
  • Time-series data generated from satellite data are crucial resources for change detection and monitoring across various fields. Existing research in time-series data generation primarily relies on single-image analysis to maintain data uniformity, with ongoing efforts to enhance spatial and temporal resolutions by utilizing diverse image sources. Despite the emphasized significance of time-series data, there is a notable absence of automated data collection and preprocessing for research purposes. In this paper, to address this limitation, we propose a system that automates the collection of satellite information in user-specified areas to generate time-series data. This research aims to collect data from various satellite sources in a specific region and convert them into time-series data, developing an automatic satellite image collection system for this purpose. By utilizing this system, users can collect and extract data for their specific regions of interest, making the data immediately usable. Experimental results have shown the feasibility of automatically acquiring freely available Landsat and Sentinel images from the web and incorporating manually inputted high-resolution satellite images. Comparisons between automatically collected and edited images based on high-resolution satellite data demonstrated minimal discrepancies, with no significant errors in the generated output.
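As an illustration of what automated collection of freely available imagery for a user-specified area can look like, the sketch below queries a public STAC catalog for Sentinel-2 scenes over a bounding box and sorts them into a time series. The endpoint, collection name, area, and cloud-cover filter are assumptions for illustration, not the system described in the paper.

```python
# A minimal sketch of automated satellite image collection for a user-specified area via a
# public STAC catalog (Element 84 "Earth Search"); all parameters here are illustrative.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],          # Sentinel-2 surface reflectance
    bbox=[126.7, 37.3, 127.2, 37.7],         # hypothetical area of interest (lon/lat)
    datetime="2023-01-01/2023-12-31",
    query={"eo:cloud_cover": {"lt": 20}},    # keep relatively cloud-free scenes
)

# Sort the matching scenes by acquisition date to form a time series.
items = sorted(search.items(), key=lambda item: item.datetime)
for item in items:
    print(item.datetime.date(), item.id)
```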

Identifying Analog Gauge Needle Objects Based on Image Processing for a Remote Survey of Maritime Autonomous Surface Ships (자율운항선박의 원격검사를 위한 영상처리 기반의 아날로그 게이지 지시바늘 객체의 식별)

  • Hyun-Woo Lee;Jeong-Bin Yim
    • Journal of Navigation and Port Research / v.47 no.6 / pp.410-418 / 2023
  • Recently, advancements and commercialization in the field of maritime autonomous surface ships (MASS) have progressed rapidly. Concurrently, studies are underway to develop methods for automatically surveying the condition of various on-board equipment remotely to ensure the navigational safety of MASS. One key issue that has gained prominence is how to obtain values from analog gauges installed in various equipment through image processing. This approach has the advantage of enabling non-contact reading of gauge values without modifying or changing already installed or planned equipment, eliminating the need for type approval changes from classification societies. The objective of this study was to identify a dynamically changing indicator needle within noisy images of analog gauges. The needle object must be identified because its position significantly affects the accurate reading of gauge values. An analog pressure gauge attached to an emergency fire pump model was used for image capture. The acquired images were pre-processed through Gaussian filtering, thresholding, and morphological operations, and the needle object was then identified through the Hough transform. The experimental results confirmed that the center and the object of the indicator needle could be identified in noisy images of analog gauges. The findings suggest that the image processing method applied in this study can be utilized for shape identification in analog gauges installed on ships and is expected to be applicable to the automatic remote survey of MASS.
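The pre-processing and needle identification steps named in the abstract (Gaussian filtering, thresholding, morphological operations, Hough transform) map directly onto standard OpenCV calls. The sketch below is a minimal version of such a pipeline; the file name, kernel sizes, and thresholds are illustrative assumptions, not the authors' parameters.

```python
# A minimal OpenCV sketch of the needle identification pipeline: Gaussian filtering,
# thresholding, morphological opening, and a probabilistic Hough transform.
import cv2
import numpy as np

img = cv2.imread("pressure_gauge.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Suppress noise, then separate the dark needle from the gauge face.
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 2) Morphological opening removes small speckles left by tick marks and text.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# 3) The Hough transform finds line segments; keep the longest as the needle.
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                        minLineLength=img.shape[0] // 4, maxLineGap=10)
if lines is not None:
    needle = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    x1, y1, x2, y2 = needle
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # needle orientation for value reading
    print("needle segment:", (x1, y1), (x2, y2), "angle:", angle)
```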