• Title/Summary/Keyword: Captured Image


Development of Plant Phenology and Snow Cover Detection Technique in Mountains using Internet Protocol Camera System (무인카메라 기반 산악지역 식물계절 및 적설 탐지 기술 개발)

  • Keunchang, Jang;Jea-Chul, Kim;Junghwa, Chun;Seokil, Jang;Chi Hyeon, Ahn;Bong Cheol, Kim
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.4
    • /
    • pp.318-329
    • /
    • 2022
  • Plant phenology, including flowering, leaf unfolding, and leaf coloring in a forest, is important for understanding the forest ecosystem. Temperature rise due to recent climate change, however, can alter plant phenology as well as snowfall in the winter season. Therefore, accurate monitoring of forest environment changes such as plant phenology and snow cover is essential to understand the effect of climate change on forest management. These changes can be monitored using a digital camera system. This paper introduces detection methods for plant phenology and snow cover in mountain regions using an unmanned camera system as a way to monitor changes in the forest environment. In this study, Automatic Mountain Meteorology Stations (AMOS) operated by the Korea Forest Service (KFS) were selected as testbed sites in order to systematize plant phenology and snow cover detection in complex mountain areas. A multi-directional Internet Protocol (IP) camera system, a kind of unmanned camera, was installed at the AMOS sites in Seoul, Pyeongchang, Geochang, and Uljin. To detect forest plant phenology and snow cover, a Red-Green-Blue (RGB) analysis based on the IP camera imagery was developed. The results produced by analyzing the images captured by the IP cameras showed good agreement with in-situ data. This indicates that the IP camera system can capture the forest environment effectively and can be applied to various forest fields such as safety assurance, forest ecosystem and disaster management, and forestry.
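
The RGB analysis described here belongs to the family of phenocam indices; a minimal sketch of the idea (the specific index and thresholds below are illustrative assumptions, not the paper's method) computes a green chromatic coordinate for phenology and a brightness/chroma rule for snow cover:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean green chromatic coordinate GCC = G / (R + G + B).

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    GCC rises with leaf unfolding and drops at leaf coloring, making it a
    common proxy for plant phenology in fixed-camera imagery.
    """
    total = rgb.sum(axis=2)
    gcc = np.where(total > 0, rgb[..., 1] / np.where(total > 0, total, 1), 0.0)
    return float(gcc.mean())

def snow_fraction(rgb, brightness=0.75, max_spread=0.1):
    """Fraction of pixels that look snow-covered: bright and nearly achromatic.

    The thresholds are placeholders; a real system would tune them per site.
    """
    bright = rgb.mean(axis=2) > brightness          # high overall brightness
    achromatic = rgb.max(axis=2) - rgb.min(axis=2) < max_spread  # low chroma
    return float((bright & achromatic).mean())
```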

Deep Learning Braille Block Recognition Method for Embedded Devices (임베디드 기기를 위한 딥러닝 점자블록 인식 방법)

  • Hee-jin Kim;Jae-hyuk Yoon;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.4
    • /
    • pp.1-9
    • /
    • 2023
  • In this paper, we propose a method to recognize braille blocks on embedded devices in real time through deep learning. First, a deep learning model for braille block recognition is trained on a high-performance computer, and a model-lightweighting tool is then applied so the model can run on an embedded device. To recognize the walking information of a braille block, an algorithm determines the path using the distance from the braille block in the image. After detecting braille blocks, bollards, and crosswalks with the YOLOv8 model in video captured by the embedded device, the walking information is recognized through the braille block path discrimination algorithm. We apply the model-lightweighting tool to YOLOv8 to detect braille blocks in real time: the precision of the YOLOv8 model weights is lowered from 32 bits to 8 bits, and the model is optimized with the TensorRT optimization engine. Comparing the lightweight model obtained by the proposed method with the existing model, the path recognition accuracy is 99.05%, almost the same as the existing model, while the recognition time is reduced by 59%, processing about 15 frames per second.
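
The path discrimination step can be illustrated with a toy rule; the box-center format, steering thresholds, and labels below are hypothetical stand-ins, not the paper's algorithm:

```python
def walking_direction(block_centers, frame_width, tolerance=0.1):
    """Hypothetical path rule: steer toward the nearest detected braille block.

    block_centers: list of (x, y) detection-box centers in pixels, where a
    larger y means the block sits lower in the frame, i.e. closer to the
    walker. The horizontal offset of the nearest block from the frame center
    decides the direction.
    """
    if not block_centers:
        return "no_path"
    nearest = max(block_centers, key=lambda c: c[1])  # lowest box = closest
    offset = (nearest[0] - frame_width / 2) / frame_width
    if offset < -tolerance:
        return "left"
    if offset > tolerance:
        return "right"
    return "straight"
```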

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo;Ju-Hyeong Lee;Jong-Won Lim;Jae-Hee Lee;Leen-Seok Kang
    • Land and Housing Review
    • /
    • v.14 no.3
    • /
    • pp.145-155
    • /
    • 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify errors between 3D models and actual construction elements. Since the characteristics of objects vary with the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Additionally, object detection requires initial object images, which can be acquired by various means such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Various object detection methodologies, including the YOLO, SSD, and R-CNN algorithms, are applied to detect actual objects from the captured images. The Faster R-CNN algorithm had a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the difference between its minimum and maximum recognition rates was small, showing consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.
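
The recognition-rate and mAP comparisons above all rest on Intersection-over-Union (IoU) matching between predicted and ground-truth boxes; the standard computation, independent of which detector produced the boxes, looks like this:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2).

    A prediction is usually counted as a true positive when its IoU with a
    ground-truth box exceeds a threshold (0.5 is the classic choice), and
    mAP aggregates precision/recall over such matches.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```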

Physio-mechanical and X-ray CT characterization of bentonite as sealing material in geological radioactive waste disposal

  • Melvin B. Diaz;Sang Seob Kim;Gyung Won Lee;Kwang Yeom Kim;Changsoo Lee;Jin-Seop Kim;Minseop Kim
    • Geomechanics and Engineering
    • /
    • v.34 no.4
    • /
    • pp.449-459
    • /
    • 2023
  • The design and development of underground nuclear waste repositories should cover the performance evaluation of the different components, such as the construction materials, because long-term stability will depend on their response to the surrounding conditions. In South Korea, Gyeongju bentonite has been proposed as a candidate buffer and backfilling material, especially in the form of blocks to speed up the construction process. In this study, cylindrical samples were prepared with different dry densities and water contents, and their physical and mechanical properties were analyzed and correlated with X-ray CT observations. The main objective was to characterize the samples and establish correlations for non-destructive estimation of physical and mechanical properties from X-ray CT images. The results showed that the uniaxial compression strength and the P-wave velocity increase with dry density. A higher water content also increased the values of the measured parameters, especially the P-wave velocity. The X-ray CT analysis indicated a clear relation between the mean CT value and the dry density, uniaxial compression strength, and P-wave velocity; the effect of higher water content was likewise captured by the mean CT value. The relationship between the mean CT value and the dry density was also used to map CT-derived dry densities from CT images alone. Moreover, the histograms provided information about the samples' heterogeneity through their full width at half maximum values. Finally, particle size and heterogeneity were analyzed using the madogram function, which identified small particles in uniform samples and large particles in some samples resulting from poor mixing during preparation. The μmax value correlated with the heterogeneity, with higher values representing samples with larger ranges of CT values or particle densities. These image-based tools have been shown to be useful for the non-destructive characterization of bentonite samples and for establishing correlations to obtain physical and mechanical parameters solely from CT images.
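
The reported relation between mean CT value and dry density can be exploited with a simple linear calibration; the numbers below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration pairs: mean CT value vs. measured dry density
# (g/cm^3). In practice these come from samples that were both scanned and
# weighed destructively.
mean_ct = np.array([900.0, 1000.0, 1100.0, 1200.0])
dry_density = np.array([1.40, 1.50, 1.60, 1.70])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(mean_ct, dry_density, 1)

def ct_dry_density(ct_value):
    """Non-destructive dry-density estimate from a mean CT value."""
    return slope * ct_value + intercept
```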

Utilization of Weather, Satellite and Drone Data to Detect Rice Blast Disease and Track its Propagation (벼 도열병 발생 탐지 및 확산 모니터링을 위한 기상자료, 위성영상, 드론영상의 공동 활용)

  • Jae-Hyun Ryu;Hoyong Ahn;Kyung-Do Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.245-257
    • /
    • 2023
  • The representative crop in the Republic of Korea, rice, is cultivated over extensive areas every year, resulting in reduced resistance to pests and diseases. One of the major rice diseases, rice blast, can lead to a significant decrease in yields when it occurs on a large scale, necessitating early detection and effective control. Drone-based crop monitoring techniques are valuable for detecting abnormal growth, but frequently capturing images for potential rice blast occurrences can consume significant labor and resources. The purpose of this study is to detect rice blast disease early using remote sensing data, such as drone and satellite images, along with weather data. Satellite images were helpful in identifying rice cultivation fields, and effective detection of paddy fields was achieved by utilizing vegetation and water indices. Subsequently, air temperature, relative humidity, and the number of rainy days were used to calculate the risk of rice blast occurrence. An increase in this risk implies a higher likelihood of disease development, and drone measurements are performed at that time. Spectral reflectance changes in the red and near-infrared wavelength regions were observed at the locations where rice blast occurred. Clusters with low vegetation index values were observed at those locations, and the time series of drone images allowed the spread of the disease to be tracked from these points. Finally, drone images captured before harvesting were used to generate spatial information on the incidence of rice blast in each field.
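
The weather-based risk calculation can be sketched as a threshold score over the three variables named in the abstract; the thresholds and scoring here are illustrative assumptions, not the paper's calibrated model:

```python
def blast_risk(air_temp_c, rel_humidity_pct, rainy_days):
    """Illustrative rice blast risk score in the range 0-3.

    Conditions broadly favorable to rice blast are mild temperatures, very
    high humidity, and frequent rain; the cutoffs below are assumptions
    chosen only to show the shape of such a rule. A high score would trigger
    a drone measurement flight.
    """
    score = 0
    if 20 <= air_temp_c <= 28:      # mild temperature window
        score += 1
    if rel_humidity_pct >= 90:      # prolonged leaf wetness likely
        score += 1
    if rainy_days >= 3:             # frequent rain in the period
        score += 1
    return score
```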

A Study on the Creative Process of Creative Ballet <Youth> through Motion Capture Technology (모션캡처 활용을 통한 창작발레<청춘>창작과정연구)

  • Chang, So-Jung; Park, Arum
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.809-814
    • /
    • 2023
  • Currently, there is a lack of research that directly applies and integrates science and technology in the field of dance and translates it into creative work. In this study, the researcher applied motion capture to the creative dance performance 'Youth' and described the process of incorporating motion capture into scenes of the performance. The research method was practice-based research, which derives new knowledge and meaning from creative outcomes through the analysis of phenomena and experiences generated on site. The creative ballet performance <Youth> consists of four scenes, and the motion-captured video in these scenes serves as the highlight moments, visually representing the image of a past ballerina while embodying a scene that is both the 'past me' and the 'dream of the present.' The use of motion capture enhances the visual representation of the scenes and increases the audience's immersion. The dance field needs to become familiar with collaborating with scientific and technological advancements like motion capture in order to digitize intangible assets, and it is essential to engage in experimental endeavors and continued training for such collaborations. Furthermore, through collaboration, ongoing research should extend the scope of movement through digitized processes, performances, and performance records, continually conferring value and meaning on the field of dance.

Improvement of Multiple-sensor based Frost Observation System (MFOS v2) (다중센서 기반 서리관측 시스템의 개선: MFOS v2)

  • Suhyun Kim;Seung-Jae Lee;Kyu Rang Kim
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.3
    • /
    • pp.226-235
    • /
    • 2023
  • This study aimed to supplement the shortcomings of the Multiple-sensor-based Frost Observation System (MFOS). The developed frost observation system improves on the existing one: based on the leaf wetness sensor (LWS), it not only detects frost but also predicts surface temperature, a major factor in frost occurrence. With the existing system, 1) it is difficult to observe ice (frost) formation on the surface when capturing an image of the LWS with an RGB camera, because the sensor surface reflects most visible light; 2) images captured with the RGB camera before and after sunrise are dark; and 3) the thermal infrared camera shows only relative temperature highs and lows. To identify ice (frost) on the LWS surface, an LWS painted black and three sheets of glass at the same height were installed as auxiliary tools for checking ice (frost) occurrence. For RGB camera shooting before and after sunrise, synchronized LED lighting was installed that turns on and off according to the camera shooting schedule. The existing thermal infrared camera, which could only assess relative temperature, was improved to extract a temperature value per pixel, and a comparison with the surface temperature sensor installed by the National Institute of Meteorological Sciences (NIMS) was performed to verify its accuracy. Installing and operating MFOS v2, which reflects these improvements, demonstrated improved accuracy and efficiency of automatic frost observation and enhanced the usefulness of the data as input for a frost prediction model.
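
Turning a camera that reports only relative values into one that yields per-pixel temperatures amounts to calibrating raw readings against a reference sensor; a two-point linear calibration (the linear model is an assumption for illustration, not necessarily the study's procedure) looks like this:

```python
def calibrate_thermal(raw, raw_ref1, temp_ref1, raw_ref2, temp_ref2):
    """Map a raw thermal-camera pixel value to degrees Celsius.

    Two reference points pair raw camera values with temperatures measured
    by a trusted surface-temperature sensor (e.g. the NIMS sensor mentioned
    above); intermediate raw values are interpolated linearly.
    """
    gain = (temp_ref2 - temp_ref1) / (raw_ref2 - raw_ref1)
    return temp_ref1 + gain * (raw - raw_ref1)
```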

Methodology for Generating UAV's Effective Flight Area that Satisfies the Required Spatial Resolution (요구 공간해상도를 만족하는 무인기의 유효 비행 영역 생성 방법)

  • Ji Won Woo;Yang Gon Kim;Jung Woo An;Sang Yun Park;Gyeong Rae Nam
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.4
    • /
    • pp.400-407
    • /
    • 2024
  • The role of unmanned aerial vehicles (UAVs) in modern warfare is increasingly significant, making their capacity for autonomous missions essential. Accordingly, autonomous target detection/identification based on captured images is crucial, yet the effectiveness of AI models depends on image sharpness. Therefore, this study describes how to determine the field of view (FOV) of the camera and the flight position of the UAV in light of the required spatial resolution. First, the size of the acquisition area is calculated from the relative position of the UAV and the FOV of the camera. From this, the paper derives the area that can satisfy the required spatial resolution, and then the relative position of the UAV and the camera FOV that satisfy it. Furthermore, this paper proposes a method for calculating the effective range of UAV positions that satisfies the required spatial resolution, centered on the coordinate to be photographed. The result is processed into a tabular format that can be used for mission planning.
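
For the simplest case of a nadir-pointing camera over flat ground, the geometry relating altitude, FOV, and spatial resolution can be sketched as follows (a simplification of the relative-position formulation the paper describes):

```python
import math

def ground_width(altitude_m, fov_deg):
    """Swath width on flat ground for a nadir-pointing camera."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def gsd(altitude_m, fov_deg, image_width_px):
    """Ground sample distance in meters per pixel (spatial resolution)."""
    return ground_width(altitude_m, fov_deg) / image_width_px

def max_altitude(required_gsd_m, fov_deg, image_width_px):
    """Highest altitude that still satisfies the required spatial resolution.

    Inverts the GSD formula: any altitude at or below this value keeps the
    per-pixel footprint at or under required_gsd_m, defining one bound of
    the effective flight area.
    """
    return required_gsd_m * image_width_px / (
        2.0 * math.tan(math.radians(fov_deg) / 2.0))
```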

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. Then, we combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor given to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, generating feature vectors that distinguish different colors, and utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation functionality of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but our algorithm can be extended to input images of general objects.
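
The hue-saturation histogram with saturation weighting can be sketched as below; the bin counts and the exact weighting scheme are assumptions for illustration, not the paper's parameters:

```python
import colorsys
import numpy as np

def hs_histogram(rgb, hue_bins=8, sat_bins=4):
    """Saturation-weighted hue-saturation histogram as a color feature vector.

    rgb: (H, W, 3) array with values in [0, 1]. Weighting each pixel's
    contribution by its saturation de-emphasizes near-achromatic pixels,
    whose hue is unreliable, mirroring the idea of a saturation weight
    factor described above.
    """
    feats = np.zeros((hue_bins, sat_bins))
    for r, g, b in rgb.reshape(-1, 3):
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)
        hi = min(int(h * hue_bins), hue_bins - 1)
        si = min(int(s * sat_bins), sat_bins - 1)
        feats[hi, si] += s  # saturation acts as the weight
    total = feats.sum()
    return (feats / total).ravel() if total else feats.ravel()
```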

Development of Robotic Inspection System over Bridge Superstructure (교량 상판 하부 안전점검 로봇개발)

  • Nam Soon-Sung;Jang Jung-Whan;Yang Kyung-Taek
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.180-185
    • /
    • 2003
  • The increase of traffic over bridges has emerged as one of the most severe problems in bridge maintenance, since the load effect of vehicle passage causes long-term damage to the bridge structure, and it is nearly impossible to maintain the operational serviceability of a bridge at a level satisfactory to users without attention to maintenance after completion. Moreover, bridge maintenance should include regular inspection to prevent structural malfunction or unexpected accidents from breaking out, by monitoring cracks and deformations during service. Therefore, a technical breakthrough in this neglected field of bridge maintenance, one that brings the public to a turning point in recognition, is desperately needed. This study aims to develop an automated inspection system for the lower surface of bridge superstructures to replace the conventional naked-eye inspection, in which monitoring staff board refractive or other types of maintenance vehicles. With the proposed system, it is expected that we can resolve the essential problem that inspection results vary with the subjective judgment of the monitoring staff, improve safety during inspection, and contribute to building a database of objective and quantitative data by applying image processing to data captured by cameras. The system is also expected to enable objective estimation of the right time for maintenance and reinforcement work, leading to an enormous decrease in maintenance cost.
