• Title/Summary/Keyword: Vision sensing


A Study on the Possibility of Using UAV Stereo Image for Measuring Tree Height in Urban Area (도심지역 수목 높이값 측정을 위한 무인항공기에서 취득된 스테레오 영상의 활용 가능성 고찰)

  • Rhee, Sooahm; Kim, Soohyeon; Kim, Taejung
    • Korean Journal of Remote Sensing, v.33 no.6_2, pp.1151-1157, 2017
  • Street trees are important objects for urban environmental improvement. In particular, tree height greatly influences the removal of air pollutants in urban street canyons and therefore needs to be measured precisely. In this study, we extracted tree heights from stereo images using precisely adjusted UAV images of the target area. The UAV images were adjusted by photogrammetric SfM (Structure from Motion) based on the collinearity condition. We measured the heights of trees in the street canyon with stereoscopic viewing on a stereo plotting system. We also acquired the heights of buildings adjacent to the street trees, and the average height of the road surface was calculated for accurate height measurement of each object. Through visual analysis on the plotting system, tree heights and the relative height differences with buildings could be measured quickly. This means that building and tree heights can be calculated without generating a 3D point cloud from the UAV images, which has the advantage that non-experts can use the method. In the future, further studies on the semi-automation/automation of this technique should be performed. The development of these technologies is expected to help assess the current status of environmental policies and roadside trees in urban areas.
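The relative-height computation described above (each object's top elevation minus the mean road-surface elevation) can be sketched as follows; the elevation values and function are illustrative, not the authors' plotting software:

```python
import statistics

def object_height(top_elevation_m, road_surface_samples_m):
    """Height of a tree or building above the mean road-surface elevation."""
    mean_road = statistics.mean(road_surface_samples_m)
    return top_elevation_m - mean_road

# Hypothetical stereo-plotted elevations (meters above the datum)
road_samples = [52.1, 52.3, 52.0, 52.2]
tree_top = 63.4
building_top = 78.9

tree_height = object_height(tree_top, road_samples)          # 11.25 m
building_height = object_height(building_top, road_samples)  # 26.75 m
relative_difference = building_height - tree_height          # 15.5 m
```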

The Development of Image Processing System Using Area Camera for Feeding Lumber (영역카메라를 이용한 이송중인 제재목의 화상처리시스템 개발)

  • Kim, Byung Nam; Lee, Hyoung Woo; Kim, Kwang Mo
    • Journal of the Korean Wood Science and Technology, v.37 no.1, pp.37-47, 2009
  • For the inspection of wood, machine vision is currently the most common automated method. It is required to sort wood products by grade and to locate surface defects prior to cut-up. Many sensing methods, including optical, ultrasonic, and X-ray sensing, have been applied to wood inspection in the wood industry. Scanning systems nowadays mainly employ CCD line-scan cameras to meet the needs of accurate lumber-defect detection and real-time image processing, but such systems require an exact feeding mechanism and low deviation in lumber thickness. In this study, a low-cost CCD area sensor was used to develop an image processing system for lumber being fed. When domestic red pine was fed on the conveyor belt, the captured images covered irregular intervals because the belt slipped between belt and roller. To overcome incorrect image merging caused by the unstable feeding speed, a template matching algorithm was applied that measures the similarity between the pattern of the current image and that of the next one. When lumber was fed at over 13.8 m/min, the general area sensor generated unreadable image patterns due to motion blur. The red channel of the RGB filter performed well for removing the background of the green conveyor belt from the merged image. A threshold-value reduction method, an image-based thresholding algorithm, performed well for knot detection.
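The template-matching step described above, which finds where the trailing rows of one frame reappear in the next so frames can be merged despite belt slip, might look like the following normalized cross-correlation sketch (the function and synthetic frames are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def match_offset(prev_strip, next_frame):
    """Find the row in next_frame where prev_strip (the trailing rows of the
    previous frame) reappears, using normalized cross-correlation (NCC).
    The offset tells how far the lumber actually advanced despite belt slip."""
    h = prev_strip.shape[0]
    t = prev_strip - prev_strip.mean()
    best_row, best_score = 0, -2.0
    for r in range(next_frame.shape[0] - h + 1):
        w = next_frame[r:r + h] - next_frame[r:r + h].mean()
        denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
        score = (t * w).sum() / denom if denom > 0 else 0.0
        if score > best_score:
            best_row, best_score = r, score
    return best_row, best_score

# Synthetic example: frame2 repeats the last 4 rows of frame1 starting at row 6
rng = np.random.default_rng(0)
frame1 = rng.random((20, 16))
frame2 = rng.random((20, 16))
frame2[6:10] = frame1[16:20]            # overlap region shared by both frames
row, score = match_offset(frame1[16:20], frame2)
```

The matched row gives the merge offset; everything below it in the new frame is appended to the mosaic.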

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu; Kim, Geunah; Jeong, Yemin; Kim, Seoyeon; Youn, Youjeong; Cho, Soobin; Lee, Yangwon
    • Korean Journal of Remote Sensing, v.37 no.5_1, pp.1149-1161, 2021
  • With the trend of applying computer vision to satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we built a U-Net cloud detection model using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) dataset with image data augmentation, and carried out 10-fold cross-validation for an objective assessment of the model. The blind test on 1,800 patches of 512 by 512 pixels showed relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
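The reported scores (accuracy, precision, recall, F1, IoU) are standard pixel-level metrics for binary masks; a minimal sketch of how they are commonly computed (not the authors' evaluation code, and with toy masks) is:

```python
import numpy as np

def binary_seg_metrics(pred, truth):
    """Pixel-level accuracy, precision, recall, F1, and IoU for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # cloud predicted, cloud in label
    fp = np.sum(pred & ~truth)      # cloud predicted, clear in label
    fn = np.sum(~pred & truth)      # clear predicted, cloud in label
    tn = np.sum(~pred & ~truth)     # clear predicted, clear in label
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "iou": tp / (tp + fp + fn),
    }

# Toy 4x4 masks: 3 true positives, 2 false positives, 1 false negative
truth = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
pred  = np.array([[1,1,1,0],[1,0,0,0],[0,0,1,0],[0,0,0,0]])
m = binary_seg_metrics(pred, truth)
```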

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook; Sim, Woodam; Kim, Kyoungmin; Lim, Joongbin; Lee, Jung-Soo
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1407-1422, 2022
  • This study classified tree species and assessed the classification accuracy using SE-Inception, a classification-based deep learning model. The dataset used Worldview-3 and GeoEye-1 images as input, and the input image size was divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate species-classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was then performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used as training data and about 15% as verification data. Classification with the deep learning model achieved an overall accuracy of up to 78% with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed a high F1-score of more than 85% regardless of the input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Therefore, there may be limitations in extracting features from the spectral information of satellite images alone, and classification accuracy may be improved by using images containing various pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
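The GLCM texture features suggested in the conclusion can be illustrated with a small sketch; the patch data and 4-level gray quantization here are hypothetical, not taken from the study:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy),
    normalized to a joint probability table of gray-level pairs."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    """Contrast = sum_ij (i - j)^2 * p(i, j); larger for coarser texture."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

# Hypothetical 4-level patches: a uniform patch vs. a checkerboard patch
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 3   # columns alternate 0 and 3
contrast_flat = glcm_contrast(glcm(flat, levels=4))        # 0.0
contrast_checker = glcm_contrast(glcm(checker, levels=4))  # 9.0
```

Such contrast (and other GLCM statistics like homogeneity or entropy) can be stacked with spectral bands as extra input channels.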

Detection of Marine Oil Spills from PlanetScope Images Using DeepLabV3+ Model (DeepLabV3+ 모델을 이용한 PlanetScope 영상의 해상 유출유 탐지)

  • Kang, Jonggu; Youn, Youjeong; Kim, Geunah; Park, Ganghyun; Choi, Soyeon; Yang, Chan-Su; Yi, Jonghyuk; Lee, Yangwon
    • Korean Journal of Remote Sensing, v.38 no.6_2, pp.1623-1631, 2022
  • Since oil spills can be a significant threat to the marine ecosystem, it is necessary to obtain information on the current contamination status quickly to minimize the damage. Satellite-based detection of marine oil spills has the advantage of spatiotemporal coverage because it can monitor a wider area than aircraft. Due to recent developments in computer vision and deep learning, marine oil spill detection can also be facilitated by deep learning. Unlike existing studies based on Synthetic Aperture Radar (SAR) images, we conducted deep learning modeling using PlanetScope optical satellite images. The blind test of the DeepLabV3+ model for oil spill detection showed an accuracy of 0.885, a precision of 0.888, a recall of 0.886, an F1-score of 0.883, and a Mean Intersection over Union (mIOU) of 0.793.
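The mIOU figure quoted above is the class-averaged Intersection over Union; a minimal sketch of the usual computation (the labels below are toy data, not the paper's) is:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean Intersection over Union across classes, from flat label arrays."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with classes {0: sea, 1: oil} over 8 pixels
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1])
pred  = np.array([0, 0, 0, 1, 1, 1, 1, 0])
miou = mean_iou(pred, truth, num_classes=2)   # IoU is 0.6 for each class
```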

Effect of Visual and Somatosensory Information Inputs on Postural Sway in Patients With Stroke Using Tri-Axial Accelerometer Measurement

  • Chung, Jae-yeop
    • Physical Therapy Korea, v.23 no.1, pp.87-93, 2016
  • Background: Postural balance control is the ability to keep the body's center of gravity in a state of minimal postural sway on a supporting surface. This ability is obtained through a complicated process of sensing body movements through the sensory organs, integrating the information in the central nervous system, and reacting through the musculoskeletal system. Motor function, including coordination, and sensory function, including vision, vestibular sense, and proprioception, should act in an integrated way. However, more than half of stroke patients have long-lasting motor, sensory, cognitive, and emotional disorders, and motor and sensory disorders cause the greatest difficulty in postural control among stroke patients. Objects: The purpose of this study is to determine the effect of visual and somatosensory information on postural sway in stroke patients through a kinematic analysis and quantitative assessment using a tri-axial accelerometer. Methods: Thirty-four subjects assumed four stance conditions in which different combinations of sensory information were available for balance. The experiment referred to computerized dynamic posturography assessments and was redesigned into four conditions blocking visual and somatosensory information. To measure the postural sway of the subjects' trunks, a wireless tri-axial accelerometer was used, processed by the signal vector magnitude value. One-way analysis of variance was performed among the four conditions. Results: There were significant differences when somatosensory information input was blocked (p<.05). Conclusion: The sense most significantly affecting the balance ability of stroke patients is the somatosensory sense, and the actual amount of trunk movement could be objectively compared and analyzed as quantitative figures using a tri-axial accelerometer.
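The signal vector magnitude (SVM) used to summarize trunk acceleration is the Euclidean norm of the three axis readings; a minimal sketch with hypothetical samples (the sample values are illustrative, not the study's data) is:

```python
import math

def signal_vector_magnitude(ax, ay, az):
    """SVM of one tri-axial accelerometer sample: the Euclidean norm of the
    three axis readings, a single scalar summarizing trunk motion."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def mean_svm(samples):
    """Average SVM over a trial; larger values suggest more postural sway."""
    return sum(signal_vector_magnitude(*s) for s in samples) / len(samples)

# Hypothetical trunk samples (in g) from two stance conditions
eyes_open   = [(0.01, 0.02, 0.98), (0.02, 0.01, 1.01)]
eyes_closed = [(0.08, 0.09, 1.05), (0.10, 0.07, 0.95)]
sway_open, sway_closed = mean_svm(eyes_open), mean_svm(eyes_closed)
```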

A Study on the Productivity Improvement of Thermal Infrared Camera an Optical Lens (열적외선 카메라용 광학계 생산성 향상에 관한 연구)

  • Kim, Sung-Yong; Hyun, Dong-Hun
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.18 no.3, pp.285-293, 2009
  • Thermal infrared cameras have been actively applied in various areas such as the military, medical services, industry, and automobiles. They sense the radiant heat emitted from subjects in the long-wavelength range ($3{\sim}5{\mu}m$ or $8{\sim}12{\mu}m$) to materialize a vision system; general optical materials do not react to light in this long-wavelength range and cannot perform their optical functions there. Therefore, materials with a higher refractive index that respond in the long-wavelength range must be used. The kinds of materials with a higher refractive index are limited, and their properties are close to those of metals. Because of these metallic features, the existing production methods for such optical systems were direct machining by grinding or CAD/CAM, which limited productivity and made it difficult to cope with increasing market demand. In this study, GASIR, a material that can be molded easily, was selected among infrared optical materials, and the optical system was designed with two aspheric lenses. Because the lenses are molded under high temperature and high pressure, they require a special mold, which was produced from ultra-hard materials that can withstand these conditions. For lens molding, GMP (Glass Molding Press) of the linear transfer method was used in order to improve the productivity of optical systems for thermal infrared cameras, which was the goal of this paper.


Realistic Building Modeling from Sequences of Digital Images

  • Song, Jeong-Heon; Kim, Min-Suk; Han, Dong-Yeob; Kim, Yong-Il
    • Proceedings of the KSRS Conference, 2002.10a, pp.516-516, 2002
  • With the wide use of LiDAR data and high-resolution satellite images, 3D modeling of buildings in urban areas has been an important research topic in photogrammetry and computer vision for many years. However, previous modeling has the limitation of merely texturing images onto the DSM surface of the study area and does not represent the relief of building surfaces. This study presents a system for realistic 3D building modeling from consecutive stereo image sequences acquired with a digital camera. Generally, when acquiring images with a camera, various parameters such as zoom, focus, and attitude are necessary to extract accurate results, and in certain cases some parameters have to be rectified. It is, however, not always possible or practical to precisely estimate or rectify the camera positions or attitudes. In this research, we constructed the collinearity condition of stereo images by extracting distinctive points from the stereo image sequence. In addition, we executed image matching with the Graph Cut method, which has very high accuracy. The system performed realistic building modeling with good visual quality, and we conclude that 3D building models of city areas can be acquired more realistically.


Implementation of Ubiquitous Robot in a Networked Environment (네트워크 환경에서 유비쿼터스 로봇의 구현)

  • Kim Jong-Hwan; Lee Ju-Jang; Yang Hyun-Seng; Oh Yung-Hwan; Yoo Chang-Dong; Lee Jang-Myung; Lee Min-Cheol; Kim Myung-Seok; Lee Kang-Hee
    • Journal of Institute of Control, Robotics and Systems, v.11 no.12, pp.1051-1061, 2005
  • This paper proposes a ubiquitous robot, Ubibot, as an integration of three forms of robots: the software robot (Sobot), the embedded robot (Embot), and the mobile robot (Mobot). A Sobot is a virtual robot that can move to any place or connect to any device through a network in order to overcome spatial limitations; it has the capacity to interpret context and thus interact with the user. An Embot is embedded within the environment or within physical robots. It can recognize the locations of, and authenticate, the user or robot, and synthesize sensing information. It can also deliver essential information to the user or other components of Ubibot through various output devices. A Mobot provides integrated mobile services. In addition, middleware mediates the different protocols among Sobot, Embot, and Mobot in order to incorporate them reliably. The services provided by Ubibot are seamless, calm, and context-aware, based on the combination of these components. This paper presents the basic concepts and structure of Ubibot. A Sobot called Rity is introduced in order to investigate the usability of the proposed concepts. Rity is a 3D synthetic character that exists in the virtual world, has a unique IP address, and interacts with human beings through the Vision Embot, Sound Embot, Position Embot, and Voice Embot. Rity is capable of moving into a Mobot and controlling its mobility; in doing so, Rity can express its behavior, for example wandering or moving about, in the real world as well as in the virtual one. The experimental results demonstrate the feasibility of implementing a Ubibot in a networked environment.

Development of Fire Detection Algorithm using Intelligent context-aware sensor (상황인지 센서를 활용한 지능형 화재감지 알고리즘 설계 및 구현)

  • Kim, Hyeng-jun; Shin, Gyu-young; Oh, Young-jun; Lee, Kang-whan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2015.05a, pp.93-96, 2015
  • In this paper, we introduce a fire detection system using context-aware sensors. In existing vision-sensor-based fire detection systems, the image acquired through the camera sensor is converted to the HSI (Hue, Saturation, Intensity) color space, which is robust to illumination changes, and features of the fire region are extracted. In such systems, however, it is difficult for a single camera sensor to detect the occurrence of a fire across a wide sensing range before the fire has spread. In addition, fire detection in complex situations is difficult, and setting the required detection area is hard where continuous boundaries cannot be separated. In this paper, we propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and determines a fire by weighting the conditions accordingly. Furthermore, zones can be managed differentially by dividing them according to fire state, so that zones requiring intensive fire detection can be handled separately.
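The weighted multi-condition decision described above might be sketched as follows; all thresholds, weights, and alarm levels here are illustrative assumptions, not the values used in the paper:

```python
def fire_score(temp_c, humidity_pct, co2_ppm, flame_detected,
               weights=(0.3, 0.1, 0.3, 0.3)):
    """Weighted fire-likelihood score from four sensor readings.
    Thresholds and weights are hypothetical, for illustration only."""
    conditions = (
        temp_c > 60,            # abnormal heat
        humidity_pct < 20,      # unusually dry air
        co2_ppm > 1000,         # combustion gases accumulating
        bool(flame_detected),   # IR flame sensor tripped
    )
    return sum(w for w, met in zip(weights, conditions) if met)

def alarm_level(score):
    """Map the weighted score to a zone-management level."""
    if score >= 0.6:
        return "fire"
    if score >= 0.3:
        return "watch"
    return "normal"

# Example readings for one zone
level = alarm_level(fire_score(temp_c=80, humidity_pct=15,
                               co2_ppm=1500, flame_detected=True))
```

Zones that repeatedly reach the "watch" level could then be flagged for the intensive monitoring the paper mentions.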
