• Title/Summary/Keyword: Vision sensor


Network vision of disaster prevention management for seashore reclaimed u-City (해안매립 신도시의 재해 예방관리 네트워크 비젼)

  • Ahn, Sang-Ro
    • Proceedings of the Korean Geotechnical Society Conference
    • /
    • 2009.09a
    • /
    • pp.117-129
    • /
    • 2009
  • This paper studied a safety management network system for infrastructure built from smart sensors, closed-circuit television (CCTV), and a monitoring system. The safety management approach was applied to bridges, cut slopes, tunnels, embankments, and similar structures. The system drew on standardization guidelines, data acquisition technologies, data analysis and judgment technologies, system integration technology, and IT technologies, and a safety management network covering various infrastructure was constructed to improve the efficiency of management and operation. The integrated safety management network consists of a real-time structural health monitoring system for each structure, an integrated control center, internet-based transmission of measured data, data collection on a server, and an early alarm system triggered when a dangerous event occurs in the infrastructure. The integrated control center consists of a conference room, a control room to manage and analyze the data, and a server room to present the measured data and collect the raw data. The early alarm system aims to realize warning and response within 5 minutes or less through a sensor-based reporting and propagation automation system using media such as MMS, VMS, EMS, FMS, SMS, and web services. Based on this, an effective u-Infrastructure Safety Management System is expected to be established stably and at lower cost, making people's lives more comfortable. Information obtained from such systems could be useful for maintenance or structural safety evaluation of existing structures, rapid evaluation of the condition of damaged structures after an earthquake, estimation of the residual life of structures, repair and retrofitting of structures, and the maintenance, management, or rehabilitation of historical structures.
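
A minimal sketch (not the authors' implementation) of the threshold-based early alarm loop the abstract describes: sensor readings are checked against per-structure limits and, on exceedance, an alert is fanned out over several message channels. Names such as SENSOR_LIMITS and send_alert are illustrative assumptions.

```python
from dataclasses import dataclass
import time

@dataclass
class Reading:
    structure_id: str   # e.g. "bridge-01", "tunnel-03"
    sensor_type: str    # e.g. "tilt", "strain"
    value: float

# hypothetical warning limits per sensor type (not from the paper)
SENSOR_LIMITS = {"tilt": 0.5, "strain": 1200.0}

def send_alert(channel: str, message: str) -> None:
    # placeholder for the SMS / MMS / web-service gateways mentioned in the abstract
    print(f"[{channel}] {message}")

def check_and_alarm(reading: Reading) -> bool:
    """Return True and dispatch alerts if the reading exceeds its warning limit."""
    limit = SENSOR_LIMITS.get(reading.sensor_type)
    if limit is not None and reading.value > limit:
        msg = (f"{time.strftime('%H:%M:%S')} {reading.structure_id}: "
               f"{reading.sensor_type}={reading.value:.2f} exceeds {limit}")
        for channel in ("SMS", "MMS", "WEB"):
            send_alert(channel, msg)
        return True
    return False

if __name__ == "__main__":
    check_and_alarm(Reading("bridge-01", "tilt", 0.73))
```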


A Method for Eliminating Aiming Error of Unguided Anti-Tank Rocket Using Improved Target Tracking (향상된 표적 추적 기법을 이용한 무유도 대전차 로켓의 조준 오차 제거 방법)

  • Song, Jin-Mo;Kim, Tae-Wan;Park, Tai-Sun;Do, Joo-Cheol;Bae, Jong-sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.21 no.1
    • /
    • pp.47-60
    • /
    • 2018
  • In this paper, we propose a method for eliminating the aiming error of an unguided anti-tank rocket using improved target tracking. Since predicted fire is necessary to hit moving targets with unguided rockets, earlier work estimated the position and velocity of the target using the fire control system. However, that approach suffers from a reduced hit rate due to the shooter's aiming error. To solve this problem, we use an image-based target tracking method to correct the error introduced by the shooter, and we propose a robust tracking method based on TLD (Tracking-Learning-Detection) that accounts for the characteristics of the FCS (Fire Control System) devices. To verify the performance of the proposed algorithm, we measured the target velocity using GPS and compared it with our estimate. The results show that our method is robust to the shooter's aiming error.
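
A hedged sketch of the "predicted fire" idea the abstract relies on: given an estimate of target position and velocity, aim at the point the target will occupy after the rocket's time of flight. A constant target velocity and a fixed average rocket speed are simplifying assumptions here, not the paper's model.

```python
import math

def lead_point(target_pos, target_vel, rocket_speed):
    """Return the 2D aim point (m) for a constant-velocity target; speeds in m/s."""
    px, py = target_pos
    vx, vy = target_vel
    # solve |target_pos + target_vel * t| = rocket_speed * t for time of flight t
    a = vx**2 + vy**2 - rocket_speed**2
    b = 2.0 * (px * vx + py * vy)
    c = px**2 + py**2
    if abs(a) < 1e-9:
        t = -c / b
    else:
        disc = b * b - 4 * a * c          # assumes the target is reachable (disc >= 0)
        t = (-b - math.sqrt(disc)) / (2 * a)
    return (px + vx * t, py + vy * t)

# target 300 m ahead, crossing at 10 m/s, rocket averaging 120 m/s
print(lead_point((300.0, 0.0), (0.0, 10.0), 120.0))
```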

Implementation of saccadic eye movement system with saliency map model (Saliency map 모델을 갖는 도약 안구 시각 시스템의 구현)

  • Cho, Jun-Ki;Lee, Min-Ho;Shin, Jang-Kyoo;Koh, Kwang-Sik
    • Journal of Sensor Science and Technology
    • /
    • v.10 no.1
    • /
    • pp.52-61
    • /
    • 2001
  • We propose a new saccadic eye movement system with visual selective attention. A saliency map model generates the scan path over a natural scene, and its output designates the attended location. The saccadic eye movement model produces the target trajectories needed to shift gaze to the attended locations very rapidly. To reproduce human saccadic eye movement, the model was divided into three parts, each of which was modeled with a different neural network to reflect the principal function of the brain structures involved in saccades. Based on the proposed saliency map model and saccadic eye movement model, an active vision system using a CCD camera and a BLDC motor was developed and demonstrated with experimental results.
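
A rough sketch of the selective-attention step only: a much simplified saliency map (local intensity contrast) whose maximum gives the next attended location that a saccade controller would move the camera toward. This is an illustrative stand-in, not the neural-network model described in the paper; it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency(image: np.ndarray, k: int = 7) -> np.ndarray:
    """Center-surround contrast: |pixel - local k x k mean|."""
    img = image.astype(float)
    local_mean = uniform_filter(img, size=k, mode="nearest")
    return np.abs(img - local_mean)

def next_fixation(image: np.ndarray):
    """Return (row, col) of the most salient location."""
    s = saliency(image)
    return np.unravel_index(np.argmax(s), s.shape)

img = np.zeros((64, 64))
img[20:26, 40:46] = 1.0          # a bright patch in an otherwise uniform scene
print(next_fixation(img))        # fixation lands on or near the patch
```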


A Fundamental Study on Detection of Weeds in Paddy Field using Spectrophotometric Analysis (분광특성 분석에 의한 논 잡초 검출의 기초연구)

  • 서규현;서상룡;성제훈
    • Journal of Biosystems Engineering
    • /
    • v.27 no.2
    • /
    • pp.133-142
    • /
    • 2002
  • This is a fundamental study toward a machine-vision sensor that detects weeds in paddy fields by spectrophotometric analysis, so that the sensor can be used to spray herbicide selectively. A set of spectral reflectance data was collected from dry and wet soil and from the leaves of rice and six kinds of weed in order to select wavelengths suitable for classifying soil, rice, and weeds. Stepwise variable selection in discriminant analysis was applied to the data set, and wavelengths of 680 and 802 nm were selected to distinguish plants (rice and weeds) from dry and wet soil, respectively. Wavelengths of 580 and 680 nm were selected by the same method to separate rice from weeds. The validity of the wavelengths for distinguishing plants from soil was checked by cross-validation of the fitted discriminant function, which classified all soil and plant samples correctly. The same test for the rice/weed wavelengths classified 98% of rice and 83% of weeds correctly. The feasibility of a CCD color camera for detecting weeds in paddy fields was then tested on the spectral reflectance data with the same statistical method, using the central wavelengths of the camera's RGB channels as the effective wavelengths for distinguishing plants from soil and weeds from plants. With the central wavelength of the R channel alone, 100% of plants in dry soil and 94% in wet soil were classified correctly, and with the central wavelengths of the RGB channels, 95% of rice and 85% of weeds were classified correctly. It was therefore concluded that a CCD color camera has good potential for detecting weeds in paddy fields.
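
An illustrative sketch of the classification step: a linear discriminant on two reflectance bands (680 nm and 802 nm) separating plants from soil, in the spirit of the discriminant analysis the abstract describes. The reflectance values below are made up for demonstration, not the paper's data; scikit-learn is assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# columns: reflectance at 680 nm, reflectance at 802 nm (illustrative values)
X = np.array([
    [0.25, 0.30], [0.28, 0.33], [0.22, 0.27],   # soil: relatively flat spectrum
    [0.05, 0.45], [0.06, 0.50], [0.04, 0.48],   # plants: red absorption, NIR rise
])
y = np.array(["soil", "soil", "soil", "plant", "plant", "plant"])

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.07, 0.47], [0.26, 0.31]]))   # -> ['plant' 'soil']
```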

Deep learning based symbol recognition for the visually impaired (시각장애인을 위한 딥러닝기반 심볼인식)

  • Park, Sangheon;Jeon, Taejae;Kim, Sanghyuk;Lee, Sangyoun;Kim, Juwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.3
    • /
    • pp.249-256
    • /
    • 2016
  • Recently, a number of techniques to ensure the free walking for the visually impaired and transportation vulnerable have been studied. As a device for free walking, there are such as a smart cane and smart glasses to use the computer vision, ultrasonic sensor, acceleration sensor technology. In a typical technique, such as techniques for finds object and detect obstacles and walking area and recognizes the symbol information for notice environment information. In this paper, we studied recognization algorithm of the selected symbols that are required to visually impaired, with the deep learning algorithm. As a results, Use CNN(Convolutional Nueral Network) technique used in the field of deep-learning image processing, and analyzed by comparing through experimentation with various deep learning architectures.
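
A minimal CNN sketch (PyTorch) of the kind of classifier the abstract compares: a few convolution and pooling layers followed by a fully connected head that maps a small symbol image to one of N symbol classes. The layer sizes and the class count of 10 are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SymbolCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 RGB input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SymbolCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 10])
```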

Development of a Backpack-Based Wearable Proximity Detection System

  • Shin, Hyungsub;Chang, Seokhee;Yu, Namgyenong;Jeong, Chaeeun;Xi, Wen;Bae, Jihyun
    • Fashion & Textile Research Journal
    • /
    • v.24 no.5
    • /
    • pp.647-654
    • /
    • 2022
  • Wearable devices come in a variety of shapes and sizes across numerous fields and are available in various forms. They can be integrated into clothing, gloves, hats, glasses, and bags and used in healthcare, the medical field, and machine interfaces. These devices track individuals' biological and behavioral data to support health communication and are often used for injury prevention. People with hearing loss or impaired vision find it more difficult to recognize an approaching person or object, so sensing devices are particularly useful for them, helping prevent injuries by alerting them to people or objects in their immediate vicinity. Despite the obvious preventive benefits of developing Internet of Things based devices for the disabled, development has so far been sluggish. In particular, compared with people without disabilities, people with hearing impairment have a much higher chance of averting danger when they are able to notice it in advance, yet research and development remain severely underfunded. In this study, we incorporated a wearable detection system based on an infrared proximity sensor into a backpack. The system helps its users recognize, through visual and tactile notification, when someone is approaching from behind, even if they have difficulty hearing or seeing objects in their surroundings. Furthermore, this backpack could help prevent accidents for all users, particularly those with visual or hearing impairments.
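
A conceptual sketch of the backpack's alert logic as described in the abstract: when the infrared proximity sensor reports something closer than a threshold behind the wearer, both a visual and a tactile (vibration) cue are triggered. read_distance_cm, set_led, and set_vibration are hypothetical hardware hooks, not interfaces from the paper.

```python
import random
import time

APPROACH_THRESHOLD_CM = 80   # illustrative trigger distance

def read_distance_cm() -> float:
    return random.uniform(20, 200)        # stand-in for the IR proximity sensor

def set_led(on: bool) -> None:
    print("LED", "on" if on else "off")   # visual notification

def set_vibration(on: bool) -> None:
    print("vibration", "on" if on else "off")  # tactile notification

def monitor(iterations: int = 5) -> None:
    """Poll the sensor and raise or clear both cues on each cycle."""
    for _ in range(iterations):
        near = read_distance_cm() < APPROACH_THRESHOLD_CM
        set_led(near)
        set_vibration(near)
        time.sleep(0.1)

if __name__ == "__main__":
    monitor()
```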

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2021
  • In this paper, we propose a motion recognition method required for industrial work in mixed reality. Industrial tasks (grasping, lifting, and carrying) involve the whole upper body, from trunk movements to arm movements. Instead of heavy motion capture equipment or vision-based devices such as Kinect, we use body-worn sensors and wearable devices: two IMU sensors for trunk and shoulder movement and Myo arm bands for arm movement. Real-time data from a total of four sensors are fused to enable motion recognition over the entire upper body. In the experiments, the sensors were attached to actual clothing and objects were manipulated through synchronized tracking. The synchronization-based method showed no errors in either large or small movements. In the performance evaluation, the HoloLens averaged 50 frames for one-handed operation and 60 frames for two-handed operation.
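
A simplified sketch of the synchronization step: readings from the IMUs and Myo arm bands arrive on separate streams, and the latest sample of each is combined into one upper-body pose record per frame. The four-stream layout is my reading of the abstract; field names and the sample values are illustrative only.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Sample:
    t: float                                   # timestamp in seconds
    quat: Tuple[float, float, float, float]    # orientation quaternion (w, x, y, z)

def fuse_frame(latest: Dict[str, Sample]) -> Dict[str, Tuple[float, ...]]:
    """Build one upper-body pose from the most recent sample of each sensor stream."""
    required = ("trunk_imu", "shoulder_imu", "left_myo", "right_myo")
    return {name: latest[name].quat for name in required}

latest = {
    "trunk_imu":    Sample(0.016, (1.00, 0.00, 0.00, 0.0)),
    "shoulder_imu": Sample(0.015, (0.98, 0.00, 0.20, 0.0)),
    "left_myo":     Sample(0.016, (0.92, 0.38, 0.00, 0.0)),
    "right_myo":    Sample(0.014, (0.92, -0.38, 0.00, 0.0)),
}
print(fuse_frame(latest))
```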

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.344-352
    • /
    • 2022
  • Accurate indoor localization of construction workers and mobile assets is essential in safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are error-prone or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This paper proposes a state-of-the-art positioning methodology that addresses them by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of moving assets or workers from the initial starting point, and this relative position is transformed into an absolute position when an AprilTag placed at one of the entry points is decoded. The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be utilized in various use cases to increase productivity and improve safety at construction sites, contributing to 1) indoor monitoring of man-machinery co-activity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
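
A minimal 2D sketch of anchoring a relative odometry trajectory to a map, the core idea in the abstract: when a tag whose global pose is known is decoded, the offset between the tag's global pose and the current odometry pose gives the transform that maps later relative poses into absolute site coordinates. Poses are (x, y, heading) triples, the camera-to-tag offset is ignored for brevity, and the numbers are illustrative, not from the paper.

```python
import math

def compose(a, b):
    """Apply SE(2) transform a to pose b; both are (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def inverse(p):
    """Inverse of an SE(2) pose."""
    x, y, th = p
    return (-x * math.cos(th) - y * math.sin(th),
             x * math.sin(th) - y * math.cos(th),
            -th)

# odometry pose (relative to the start) at the moment the tag is decoded,
# and the tag's surveyed pose in site coordinates
odom_at_tag = (4.0, 1.0, 0.0)
tag_global  = (104.0, 51.0, math.pi / 2)

# map_from_odom converts any later odometry pose into absolute site coordinates
map_from_odom = compose(tag_global, inverse(odom_at_tag))
print(compose(map_from_odom, (6.0, 1.0, 0.0)))   # a later relative pose, now absolute
```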


Development of compound eye image quality improvement based on ESRGAN (ESRGAN 기반의 복안영상 품질 향상 알고리즘 개발)

  • Taeyoon Lim;Yongjin Jo;Seokhaeng Heo;Jaekwan Ryu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • Demand is growing for small biomimetic robots that can carry out reconnaissance missions in underground spaces and narrow passages without being exposed to the enemy, in order to increase the fighting power and survivability of soldiers in wartime. A small compound eye image sensor for environmental recognition offers advantages such as small size, low aberration, wide angle of view, depth estimation, and HDR, which can be exploited in many vision applications. However, because of the small lens size the resolution is low, and the image fused from the actual compound eye image also suffers from low resolution. This paper proposes a compound eye image quality enhancement algorithm based on image enhancement and ESRGAN to overcome this problem. Applying the proposed algorithm to fused compound eye images improves their resolution and quality, so performance gains are expected in a variety of studies that use compound eye cameras.
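
An outline of the two-stage pipeline the abstract describes, as a hedged sketch: classical image enhancement (CLAHE contrast stretching here) followed by a learned super-resolution step. super_resolve is a placeholder where a trained ESRGAN generator would be called; bicubic upscaling stands in for it so the snippet runs without model weights. OpenCV and NumPy are assumed.

```python
import cv2
import numpy as np

def enhance(gray: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on a fused compound eye image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def super_resolve(img: np.ndarray, scale: int = 4) -> np.ndarray:
    """Placeholder for the ESRGAN generator; bicubic interpolation is used here."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

fused = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in fused image
result = super_resolve(enhance(fused))
print(result.shape)   # (256, 256)
```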

A Study on Gap Sensing using Fuzzy Filter and ART2 (퍼지필터와 ART2를 이용한 선박용 용접기술개발)

  • 김관형;이재현;이상배
    • Journal of Korean Port Research
    • /
    • v.14 no.3
    • /
    • pp.321-329
    • /
    • 2000
  • Welding is essential for manufacturing a range of engineering components, from very large structures such as ships and bridges to very complex structures such as aircraft engines and miniature components for microelectronic applications. In domestic practice, welding automation still depends on arc sensing rather than vision sensing. This study proposes gap detection of the workpiece using a conventional arc sensor, based on the principle that the welding current varies with the size of the welding gap. A fuzzy membership filter is introduced to cancel high-frequency noise in the welding current, and ART2, a competitive learning network, classifies the patterns of the filtered welding signal. The welding current exhibits a specific pattern according to the existence and size of a welding gap, and these patterns are classified differently from the no-gap case. The patterns for 1 mm, 2 mm, and 3 mm gaps and for no gap are identified by the artificial neural network.
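
A hedged sketch of the two stages the abstract describes: a smoothing filter on the noisy welding-current signal (a simple weighted window stands in for the fuzzy membership filter) and a pattern classifier that assigns the filtered waveform to one of the gap classes (nearest-prototype matching stands in for ART2). The prototype waveforms and signal values are illustrative only.

```python
import numpy as np

def smooth(signal: np.ndarray, weights=(0.1, 0.2, 0.4, 0.2, 0.1)) -> np.ndarray:
    """Weighted moving window as a stand-in for the fuzzy membership filter."""
    return np.convolve(signal, np.array(weights), mode="same")

# illustrative current prototypes for no gap and 1/2/3 mm gaps (arbitrary units)
PROTOTYPES = {
    "no gap": np.full(50, 200.0),
    "1 mm":   np.full(50, 185.0),
    "2 mm":   np.full(50, 170.0),
    "3 mm":   np.full(50, 155.0),
}

def classify(filtered: np.ndarray) -> str:
    """Assign the filtered waveform to the closest stored pattern."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(filtered - PROTOTYPES[k]))

raw = np.full(50, 171.0) + np.random.normal(0, 5, 50)   # noisy current over a gap
print(classify(smooth(raw)))   # typically prints "2 mm"
```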
