• Title/Summary/Keyword: Image Board


BATHYMETRIC MODULATION ON WAVE SPECTRA

  • Liu, Cho-Teng;Doong, Dong-Jiing
    • Proceedings of the KSRS Conference / 2008.10a / pp.344-347 / 2008
  • Ocean surface waves may be modified by ocean currents, and their observation may be severely distorted if the observer is on a moving platform with changing speed. Tidal current near a sill varies inversely with the water depth and results in spatially inhomogeneous modulation of the surface waves near the sill. Waves propagating upstream encounter stronger current before reaching the sill; their wavelength therefore shortens while the frequency is unchanged, their amplitude increases, and they may break if the wave height exceeds 1/7 of the wavelength. These small-scale (~1 km) changes are not suitable for satellite radar observation. The spatial distribution of wave-height spectra S(x, y) cannot be acquired from wave gauges, which are designed to collect 2-D wave spectra at fixed locations, nor from satellite radar images, which are better suited to observing long swells. Optical images collected from cameras on board a ship, over high ground, or on board an unmanned auto-piloting vehicle (UAV) may have a pixel size small enough to resolve decimeter-scale short gravity waves. If diffuse skylight is the only source of lighting and it is uniform across camera-viewing directions, then the image intensity is proportional to the surface reflectance R(x, y) of diffuse light, and R is directly related to the surface slope. The slope spectrum and wave-height spectra S(x, y) may then be derived from R(x, y). The results are compared with in situ measurements of wave spectra over Keelung Sill from a research vessel. This method is intended for the analysis and interpretation of satellite images in studies of current-wave interaction, which often require fine-scale information on wave-height spectra S(x, y) that change dynamically in time and space.
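The final step described above, deriving a slope spectrum and then a wave-height spectrum from the reflectance map R(x, y), can be sketched with a 2-D FFT. The function below is a minimal illustration assuming, as the abstract states, that image intensity is linearly proportional to surface slope; the function name and grid sizes are hypothetical:

```python
import numpy as np

def height_spectrum_from_reflectance(R, dx, dy):
    """Estimate a wave-height power spectrum from a diffuse-light
    reflectance map R(x, y), treating R (mean removed) as a proxy
    for the surface slope."""
    R = R - R.mean()                       # remove mean reflectance
    F = np.fft.fftshift(np.fft.fft2(R))    # 2-D spectrum of the slope proxy
    kx = np.fft.fftshift(np.fft.fftfreq(R.shape[1], d=dx)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(R.shape[0], d=dy)) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    slope_spec = np.abs(F) ** 2            # slope power spectrum
    # height ~ slope / k, so the height spectrum is slope_spec / k^2
    return np.where(k2 > 0, slope_spec / np.maximum(k2, 1e-12), 0.0)

# usage: a synthetic reflectance patch with 1 m pixels
spec = height_spectrum_from_reflectance(np.random.rand(64, 64), dx=1.0, dy=1.0)
```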


Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, to implementing a 3D image and spatial information for sensing a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projection-space images are converted into 3D images, resulting in spatial information on the basis of the depth and RGB information. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
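As a sketch of the pipeline described above, back-projecting a depth pixel through the intrinsic parameters and then applying the extrinsic transform between the two Kinects might look like the following; the intrinsic values are hypothetical placeholders, not the paper's calibrated results:

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) with depth in metres into a
    3-D point in the camera frame using the pinhole model."""
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

def to_second_camera(p, R, t):
    """Map a point into the second camera's frame using the extrinsic
    rotation matrix R and translation vector t."""
    return R @ p + t

# hypothetical Kinect-like intrinsics: a pixel at the principal point
# back-projects straight along the optical axis
p = pixel_to_point(320.0, 240.0, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
q = to_second_camera(p, np.eye(3), np.array([0.1, 0.0, 0.0]))
```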

System Design and Performance Analysis of 3D Imaging Laser Radar for the Mapping Purpose (맵핑용 3차원 영상 레이저 레이다의 시스템 설계 및 성능 분석)

  • La, Jongpil;Ko, Jinsin;Lee, Changjae
    • Journal of the Korea Institute of Military Science and Technology / v.17 no.1 / pp.90-95 / 2014
  • The system design and performance analysis of a 3D imaging laser radar system for mapping purposes are addressed in this article. For mapping, a push-broom scanning method is utilized. A pulsed fiber laser with high pulse energy and a high pulse repetition rate is used as the light source, and a highly sensitive linear-mode InGaAs avalanche photodiode is used for the laser receiver module. The time of flight of the laser pulse from the laser to the receiver is calculated using a high-speed FPGA-based signal processing board. To reduce the walk error caused by intensity differences between pulses, the time of flight is measured from peak to peak of the laser pulses. To obtain a 3D image with a single-pixel detector, a Risley scanner, which steers the laser beam in an elliptical pattern, is used. The system's laser energy budget is modeled using the LADAR equation, from which system performances such as the pulse detection probability and false alarm rate are analyzed and predicted. Test results for the system performance are acquired and compared with the predictions; according to the test results, all the system requirements are satisfied. A 3D image acquired with the laser radar system is also presented in this article.
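The time-of-flight range computation that the FPGA board performs can be illustrated in a few lines. This is a generic sketch, not the system's actual firmware, and the peak-detection helper only gestures at the peak-to-peak walk-error correction described above:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Convert a round-trip time of flight in seconds to a one-way
    range in metres."""
    return C * round_trip_s / 2.0

def peak_index(samples):
    """Index of the pulse peak; timing from peak to peak, rather than
    from a fixed threshold crossing, reduces the intensity-dependent
    walk error mentioned in the abstract."""
    return max(range(len(samples)), key=lambda i: samples[i])

# a 1 microsecond round trip corresponds to roughly 150 m of range
r = tof_range(1e-6)
```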

Design of FPGA-based Wearable System for Checking Patients (환자 체크를 위한 FPGA 기반 웨어러블 시스템 설계)

  • Kang, Sungwoo;Ryoo, Kwangki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.477-479 / 2017
  • With recent advances in medical technology and health care, the prevention and treatment of diseases have improved and aging has progressed rapidly. In this aging society of extended life spans, demand for diagnosis-centered medical care is increasing rapidly. In this paper, we propose an FPGA-based wearable patient check system that can be controlled by sensors. In existing hospitals, a doctor or nurse visits the patient every hour to check their condition; with the proposed system, patients, doctors, and nurses can check the patient's condition at any desired time. In addition, a tilt sensor is used so that patients with limited mobility can control the system easily. The proposed FPGA-based hardware architecture consists of an algorithm for enlarged image processing, a TFT-LCD controller, a CIS controller, and a memory controller to output the patient's status image. It was implemented and validated using a DE2-115 test board with a Cyclone IV EP4CE115F29C7 FPGA device, and its operating frequency is 50 MHz.


Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae;Lee, Jae-Moon;Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.115-121 / 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system using deep learning-based face recognition and hand gesture recognition. In general, an application system using face recognition consists of a part that captures an image in real time from a camera, a part that recognizes a face in the image, and a part that utilizes the recognized result. A Raspberry Pi, a single-board computer that can be mounted anywhere, was used to capture images in real time; face recognition software was developed for the server computer using TensorFlow's FaceNet model, along with hand gesture recognition software using OpenCV. We classified users into three groups, Known, Danger, and Unknown, and designed and implemented an application that opens an automatic door lock only for Known users who have passed both face recognition and hand gesture checks.
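The grouping into Known, Danger, and Unknown users can be sketched as a nearest-embedding comparison. The distance threshold, function names, and toy embeddings below are illustrative assumptions, since the abstract does not give the system's actual values:

```python
import numpy as np

def classify_user(embedding, known, danger, threshold=0.9):
    """Compare a FaceNet-style face embedding against enrolled
    embeddings by Euclidean distance and assign one of the three
    user groups from the abstract."""
    def nearest(db):
        return min((np.linalg.norm(embedding - e) for e in db),
                   default=float("inf"))
    if nearest(known) < threshold:
        return "Known"      # door unlock also requires a hand gesture
    if nearest(danger) < threshold:
        return "Danger"
    return "Unknown"

known = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
danger = [np.array([-1.0, 0.0])]
label = classify_user(np.array([0.05, 0.95]), known, danger)
```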

Study on Practical Use of Air Vehicle Test Equipment(AVTE) for UAV Operation Support (무인항공기 운용 지원을 위한 비행체 점검장비 활용에 관한 연구)

  • Song, Yong-Ha;Go, Eun-kyoung;Kwon, Sang-Eun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.2 / pp.320-326 / 2021
  • AVTE (Air Vehicle Test Equipment) is equipment that inspects and checks the status of on-board aircraft LRUs (Line Replaceable Units) before and after flight so that UAV (Unmanned Aerial Vehicle) missions can be performed successfully. This paper suggests utilizing the AVTE as operation support equipment by implementing several functions critical to UAV operation on the AVTE. The AVTE easily sets initialization (default) data and compensates for the installation and position errors of the LRUs that provide critical mission data and situation imagery to pilots, without requiring additional dedicated operation support equipment. Major fault lists and situation image data can also be downloaded after flight using the AVTE in the event of a UAV emergency or an unusual occurrence on duty. We anticipate that the suggested operational approach could dramatically reduce the cost and manpower needed to design and manufacture additional operation support equipment and effectively diminish the operator's workload.

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.813-826 / 2022
  • AI-based image processing technologies have been widely studied in various fields. However, the more lightweight the board, the harder it is to run image processing algorithms, which require a large amount of computation. In this paper, we propose a deep learning method for object recognition on lightweight embedded boards. We first determine candidate areas using a deep neural network that performs semantic segmentation with a relatively small amount of computation. After masking those areas, a more accurate deep learning algorithm performs object detection with improved accuracy, combining an efficient neural network (ENet) with You Only Look Once (YOLO) to execute object recognition in real time on lightweight embedded boards. This research is expected to be useful for autonomous driving applications, which must be much lighter and cheaper than existing object recognition approaches.
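The mask-then-detect idea, running a cheap segmentation first and handing only the masked pixels to the heavier detector, can be sketched with plain NumPy; the segmentation mask and detector here are toy stand-ins, not ENet or YOLO:

```python
import numpy as np

def apply_segmentation_mask(image, mask):
    """Zero out pixels outside the candidate regions found by the
    lightweight segmentation stage, so the more accurate detector
    only sees the regions of interest."""
    return image * mask[..., None].astype(image.dtype)

# toy example: a 4x4 RGB image in which the top-left 2x2 block is the
# segmented candidate region
image = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
masked = apply_segmentation_mask(image, mask)
```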

Implementation of Prevention and Eradication System for Harmful Wild Animals Based on YOLO (YOLO에 기반한 유해 야생동물 피해방지 및 퇴치 시스템 구현)

  • Min-Uk Chae;Choong-Ho Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.3 / pp.137-142 / 2022
  • Every year, the number of wild animals appearing in human settlements increases, resulting in growing damage to property and human life. The damage is especially severe when wild animals appear on highways or at farms. To address this problem, ecological pathways and guide fences are being installed along highways, while farms use sensor-triggered horns, nets, and excrement-scent repellents. However, these methods are expensive and not very effective. In this paper, we used YOLO (You Only Look Once), an AI-based image analysis method, to analyze harmful animals in real time and reduce malfunctions, with high-brightness LEDs and ultrasonic speakers as the repelling devices. The speaker outputs a frequency that only animals can hear, which improves efficiency by repelling only the wild animals. The proposed system is designed around a general-purpose board so that it can be installed economically, and its detection performance is higher than that of existing sensor-based devices.

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.290-293 / 2021
  • In this paper, we implement a learning efficiency checking system that inspires motivation and helps improve concentration by detecting the user's studying behavior. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face and body through a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After the CNN detects the feature parts of the object, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages whenever one of the monitored actions is interrupted. In addition, each function can be executed from the main GUI screen, including a statistical graph computed from the collected data, a to-do list, and white noise. Through the learning efficiency checking system, various functions including data collection and analysis were provided to users.
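The motion-detection step that follows feature extraction can be approximated by simple frame differencing. This is a toy stand-in for the paper's CNN-based pipeline, with both thresholds chosen arbitrarily:

```python
import numpy as np

def motion_detected(prev_gray, curr_gray, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when the fraction of pixels whose grey level changed
    by more than pixel_thresh exceeds area_thresh."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float((diff > pixel_thresh).mean()) > area_thresh

prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[:3, :3] = 255           # a moving region appears in the new frame
moved = motion_detected(prev, curr)
still = motion_detected(prev, prev)
```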


Consideration of the Effect according to Variation of Material and Respiration in Cone-Beam CT (Cone-Beam CT에서 물질 및 호흡 변화가 영상에 미치는 영향에 대한 고찰)

  • Na, Jun-Young;Kim, Jung-Mi;Kim, Dae-Sup;Kang, Tae-Young;Baek, Geum-Mun;Kwon, Gyeong-Tae
    • The Journal of Korean Society for Radiation Therapy / v.24 no.1 / pp.15-21 / 2012
  • Purpose: Image-guided radiation therapy (IGRT) has been carried out using the On-Board Imager (OBI) system at Asan Medical Center. This study analyzed and evaluated the impact of variations in material and respiration on Cone-Beam CT (CBCT). Materials and Methods: CBCT scans were acquired three times for each of two materials, an acrylic cylinder (lung-equivalent material, 3 cm diameter) and a clinical fiducial marker, mounted on a motion phantom whose respiration pattern could be adjusted arbitrarily; the period, amplitude, and baseline were varied relative to a reference respiration pattern. Results: Under the same phantom motion, the measured image size was 100% for the acrylic and 120% for the fiducial marker. For the acrylic, the ratio was 1.13 at a baseline shift of 1.8 mm and 1.27 at 3.3 mm; 1.01 at a period of 1 s and 1.045 at 2.5 s; and 0.86 at 0.7 times the standard amplitude and 1.43 at 1.7 times. For the fiducial marker, the ratio was 1.18 at a baseline shift of 1.8 mm and 1.34 at 3.3 mm; 1.0 at periods of both 1 s and 2.5 s; and 0.99 at 0.7 times the standard amplitude and 1.66 at 1.7 times. Conclusion: The effect on CBCT image size was 20% in the case of the fiducial marker, and changes in breathing pattern altered the image by 13-43% for the acrylic and 18-66% for the fiducial marker. These differences introduce serious uncertainty, so the patient's breathing must be stabilized before acquiring CBCT and monitored during acquisition. If a considerable change in breathing is observed while acquiring CBCT, the treatment site must be checked using fluoroscopy after image guidance; if the change is too large, the CBCT should be re-acquired.
