• Title/Summary/Keyword: close-range imagery (근거리 영상)

Search Results: 125

Shape Extraction of Near Target Using Opening Operator with Adaptive Structure Element in Infrared Images (적응적 구조요소를 이용한 열림 연산자에 의한 적외선 영상표적 추출)

  • Kwon, Hyuk-Ju;Bae, Tae-Wuk;Kim, Byoung-Ik;Lee, Sung-Hak;Kim, Young-Choon;Ahn, Sang-Ho;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.9C / pp.546-554 / 2011
  • Near targets in infrared (IR) images show a steady feature in the inner region and a transient feature in the boundary region. Based on these features, this paper proposes a new method to extract the fine shape of near targets in IR images. First, we detect the boundary region of candidate targets using the local variance-weighted information entropy (WIE) of the original image. A coarse target region is then estimated by labeling the boundary region. For the coarse target region, we apply an opening filter with an adaptive structure element to extract the fine target shape. The size of the adaptive structure element is optimized for the width of the target boundary by calculating the average WIE in enlarged windows. Experimental results show that the proposed method has better extraction performance than previous threshold algorithms.
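The pipeline described above (WIE-based boundary detection, labeling into a coarse region, then morphological opening) can be sketched as follows. This is a minimal illustration assuming an 8×8 scanning window, a fixed WIE threshold, and a fixed 3×3 structure element in place of the paper's adaptively sized one:

```python
import numpy as np
from scipy import ndimage

def weighted_entropy(window):
    """Variance-weighted information entropy of a grey-level window,
    as commonly used in IR small-target work (a generic sketch)."""
    hist, _ = np.histogram(window, bins=256, range=(0, 256))
    p = hist / hist.sum()
    s = np.arange(256)
    nz = p > 0
    return np.sum((s[nz] - window.mean()) ** 2 * p[nz] * -np.log(p[nz]))

def extract_target(img, wie_thresh=50.0, win=8, struct_size=3):
    """Boundary detection -> labeling -> opening; thresholds are illustrative."""
    h, w = img.shape
    boundary = np.zeros((h, w), dtype=bool)
    # mark windows whose WIE exceeds the threshold (candidate boundary blocks)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            if weighted_entropy(img[y:y + win, x:x + win]) > wie_thresh:
                boundary[y:y + win, x:x + win] = True
    # coarse target region: fill the bounding box of each labeled boundary cluster
    labels, n = ndimage.label(boundary)
    coarse = np.zeros_like(boundary)
    for sl in ndimage.find_objects(labels):
        coarse[sl] = True
    # fine shape: opening with a fixed structure element (the paper adapts its size)
    return ndimage.binary_opening(
        coarse, structure=np.ones((struct_size, struct_size)))
```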

Design of Ultra Wide Band Radar Transceiver for Foliage Penetration (수풀투과를 위한 초 광대역 레이더의 송수신기 설계)

  • Park, Gyu-Churl;Sun, Sun-Gu;Cho, Byung-Lae;Lee, Jung-Soo;Ha, Jong-Soo
    • Journal of Satellite, Information and Communications / v.7 no.1 / pp.75-81 / 2012
  • This study designs the transmitter and receiver of a short-range UWB (Ultra Wide Band) imaging radar able to display high-resolution radar images of the area in front of a UGV (Unmanned Ground Vehicle). The radar helps a UGV navigate autonomously by detecting and avoiding obstacles through foliage. Two transmitters are needed to improve the azimuth resolution, and multi-channel receivers are required to synthesize the radar image. The transmitter consists of a high-power amplifier, a channel selection switch, and a waveform generator. The receiver is composed of sixteen channel receivers, a receiver channel converter, and a frequency down-converter. Before manufacturing, the proposed transceiver architecture was verified by modeling and simulation over several parameters. It was then manufactured using industrial RF (Radio Frequency) components, and all measured parameters satisfied the specification.

A Study on the Development of YOLO-Based Maritime Object Detection System through Geometric Interpretation of Camera Images (카메라 영상의 기하학적 해석을 통한 YOLO 알고리즘 기반 해상물체탐지시스템 개발에 관한 연구)

  • Kang, Byung-Sun;Jung, Chang-Hyun
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.4 / pp.499-506 / 2022
  • For autonomous ships to be commercialized and able to navigate coastal waters, they must be able to detect maritime obstacles. One of the most common obstacles in coastal areas is the farm buoy. In this study, a maritime object detection system was developed that detects buoys using the YOLO algorithm and visualizes the distance and bearing between buoys and the ship through geometric interpretation of camera images. After training the maritime object detection model with 1,224 pictures of buoys, the precision of the model was 89.0%, the recall 95.0%, and the F1-score 92.0%. Camera calibration was conducted to calculate the distance and bearing of an object from the camera using the obtained image coordinates, and Experiments A and B were designed to verify the performance of the maritime object detection system. The verification results show that the system is superior to radar in short-distance detection capability, so it can be used as a navigational aid along with radar.
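The distance-and-bearing step, recovering range and relative bearing of a detected buoy from its image coordinates, can be sketched with a flat-sea pinhole-camera model. The focal length, camera height, and tilt below are illustrative assumptions, not the paper's calibration results:

```python
import math

def distance_bearing(u, v, img_w=1920, img_h=1080,
                     f_px=1000.0, cam_height=10.0, tilt_deg=5.0):
    """Range (m) and relative bearing (deg) of a sea-surface point from its
    pixel (u, v), assuming a flat sea and a pinhole camera of focal length
    f_px pixels mounted cam_height metres up, tilted tilt_deg below level."""
    # angle below the horizontal to the ray through pixel row v
    pitch = math.radians(tilt_deg) + math.atan2(v - img_h / 2, f_px)
    if pitch <= 0:
        raise ValueError("pixel above the horizon; no sea-surface intersection")
    dist = cam_height / math.tan(pitch)                     # ground range
    bearing = math.degrees(math.atan2(u - img_w / 2, f_px))  # relative bearing
    return dist, bearing
```

A pixel on the vertical centerline yields zero relative bearing; rows nearer the bottom of the frame map to shorter ranges.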

A Study on Extraction of Skin Region and Lip Using Skin Color of Eye Zone (눈 주위의 피부색을 이용한 피부영역검출과 입술검출에 관한 연구)

  • Park, Young-Jae;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.19-30 / 2009
  • In this paper, we propose a method to detect a face and its components in an input image, using eye and mouth maps to locate the eyes and mouth. First, we find the eye zone, and then determine the color-value distribution of the skin region using the color around the eye zone. The skin region has a characteristic distribution in the YCbCr color space; using it, we separate the skin region from the background. We then find the color-value distribution of the extracted skin region, extract the surrounding region, and detect the mouth by applying a mouth map to the extracted skin region. The proposed method outperforms the traditional method because it yields an accurate mouth region.
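The YCbCr skin-segmentation idea can be sketched as below. The fixed Cb/Cr ranges are common literature values used only for illustration; the paper instead estimates the distribution adaptively from the color around the detected eye zone:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =       0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean skin mask by thresholding chrominance; the ranges here are
    generic literature values, not the eye-zone-adapted distribution."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```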

Comparison of Stereoscopic Fusional Area between People with Good and Poor Stereo Acuity (입체 시력이 양호한 사람과 불량인 사람간의 입체시 융합 가능 영역 비교)

  • Kang, Hyungoo;Hong, Hyungki
    • Journal of Korean Ophthalmic Optics Society / v.21 no.1 / pp.61-68 / 2016
  • Purpose: This study investigated differences in the stereoscopic fusional area between people with good and poor stereo acuity when viewing stereoscopic displays. Methods: The stereo acuity of 39 participants (18 males and 21 females, 23.6 ± 3.15 years) was measured with the random-dot stereo butterfly method. Participants with stereo-blindness were excluded. The stereoscopic fusional area was measured by varying the amount of horizontal disparity of a stereoscopic stimulus on a stereoscopic 3D TV. Participants were divided into groups of good and poor stereo acuity, with good stereo acuity defined as less than 60 arc seconds. The measurements were statistically analyzed. Results: 26 participants were measured to have good stereo acuity and 13 poor stereo acuity. For stimuli farther than the fixation point, the threshold of horizontal disparity for those with poor stereo acuity was smaller than that for those with good stereo acuity, a statistically significant difference. For stimuli nearer than the fixation point, there was no statistically significant difference between the two groups. Conclusions: When viewing stereoscopic displays, the boundary of the stereoscopic fusional area for the poor stereo acuity group was smaller than that of the good stereo acuity group only for the range behind the display. Hence, participants with poor stereo acuity would have more difficulty perceiving a fused image at farther distances than participants with good stereo acuity.

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible because of differences in camera lens distortion modeling and camera coordinate systems. In general, drone images are processed by bundle block adjustment in computer-vision-based software, and the images are then plotted in photogrammetry-based software for mapping. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formula, lens distortions were first added to distortion-free virtual coordinates using computer-vision-based lens distortion models. The distortion coefficients were then determined using photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root-mean-square distance was within 0.5 pixels. In addition, epipolar images were generated to assess accuracy by applying the photogrammetric lens distortion coefficients; the calculated root-mean-square error of y-parallax was within 0.3 pixels.
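The two distortion conventions can be illustrated with a toy radial-only model: computer-vision software typically applies distortion to ideal normalized coordinates, while photogrammetric correction removes it from measured coordinates. The fixed-point inversion below is a generic sketch, not the paper's closed-form coefficient conversion:

```python
def cv_distort(xn, yn, k1, k2):
    """Computer-vision-style radial distortion: ideal normalized coordinates
    are mapped to distorted ones (radial terms k1, k2 only)."""
    r2 = xn ** 2 + yn ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2
    return xn * factor, yn * factor

def photogrammetric_undistort(xd, yd, k1, k2, iters=10):
    """Remove distortion from measured coordinates, as photogrammetric
    correction equations do, via simple fixed-point iteration."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn ** 2 + yn ** 2
        factor = 1 + k1 * r2 + k2 * r2 ** 2
        xn, yn = xd / factor, yd / factor
    return xn, yn
```

Round-tripping a point through both functions should recover the original coordinates to sub-pixel accuracy, mirroring the verification strategy the abstract describes.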

The Functional Change of Accommodation and Convergence in the Mid-Forties by Using Smartphone (스마트폰 사용에 의한 40대 중년층의 조절 및 폭주기능 변화)

  • Kwon, Ki-il;Kim, Hyun Jin;Park, Mijung;Kim, So Ra
    • Journal of Korean Ophthalmic Optics Society / v.21 no.2 / pp.127-135 / 2016
  • Purpose: The present study investigated the effect of excessive near work using a smartphone on subjective symptoms and on accommodative and convergent function in people in their 40s. Methods: A total of 40 subjects (10 male, 30 female; age 43 ± 7.2 years) in their 40s, with monocular and binocular visual acuities of 0.8 and 1.0 respectively, were divided into a presbyopia group and a non-presbyopia group. The subjects watched a movie on a smartphone screen for 30 minutes. Their accommodative amplitude, accommodative facility, and relative accommodation were measured and compared before and after smartphone use. Changes in fusional vergence and near heterophoria were also evaluated, and changes in subjective symptoms were surveyed with a questionnaire. Results: The presbyopes in their mid-40s reported discomfort in the order of asthenopia, blur, and dryness after smartphone use. Accommodative function and non-strabismic binocular function generally decreased. Accommodative functions such as monocular accommodative amplitude and relative accommodation decreased significantly after smartphone use, and a change of phoria was observed as a result of decreased convergence and divergence. Negative fusional vergence was also significantly reduced. The non-presbyopes in their mid-40s reported discomfort in the order of asthenopia, dryness, and blur; among the accommodative functions, only accommodative amplitude was significantly reduced, and a significant reduction of negative fusional vergence was also observed. Conclusion: These results confirm that the subjective discomfort of people in their mid-40s after smartphone use may be related to the presence of presbyopia, owing not only to the reduction of accommodative function but also to the overall deterioration of visual function, including heterophoria and fusional vergence. Therefore, when people in their mid-40s experience discomfort from near work, accurate determination of the cause may require overall visual-function tests such as heterophoria and fusional vergence, as well as assessment of the age-related decrease of accommodation.

The design of 4S-Van for implementation of ground-laser mapping system (지상 레이져 매핑시스템 구현을 위한 4S-Van 시스템 설계)

  • 김성백;이승용;김민수
    • Spatial Information Research / v.10 no.3 / pp.407-419 / 2002
  • In this study, the design of the 4S-Van system is discussed for the implementation of a laser mapping system. The laser device is a fast and accurate sensor that acquires 3D road and surface data. The orientation of the laser sensor is determined by loosely coupled (D)GPS/INS integration. Considering the current system architecture, (D)GPS/INS integration is performed for performance analysis of direct georeferencing, and self-calibration is performed for interior and exterior orientation and displacement. Three laser sensors are utilized for compensation and performance improvement. The 3D surface data from the laser scanner and texture images from the CCD camera can be used to implement 3D visualization.
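The loosely coupled (D)GPS/INS integration mentioned above can be illustrated with a single position-update step, in which a GPS fix corrects the INS-propagated position through a Kalman gain. This is a generic textbook sketch with illustrative covariances, not the 4S-Van implementation:

```python
import numpy as np

def loosely_coupled_update(ins_pos, ins_cov, gps_pos, gps_cov):
    """One loosely coupled GPS/INS fusion step: the GPS position fix corrects
    the INS-propagated position via a Kalman update (generic sketch)."""
    # Kalman gain weighs INS uncertainty against GPS measurement noise
    K = ins_cov @ np.linalg.inv(ins_cov + gps_cov)
    fused_pos = ins_pos + K @ (gps_pos - ins_pos)
    fused_cov = (np.eye(len(ins_pos)) - K) @ ins_cov
    return fused_pos, fused_cov
```

When the INS covariance is large relative to the GPS noise, the fused position moves most of the way toward the GPS fix, which is what bounds the INS drift between fixes.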


Stereo Matching the Orientation Point Using the Method of Color Channel Separation (색상분리기법을 이용한 표정점의 스테레오 매칭)

  • 이재기;이현직;박경식
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.15 no.1 / pp.41-50 / 1997
  • This study suggests a method, color channel separation, that can match common points automatically in real time. Image coordinates calculated from images acquired with a CCD camera were checked in two ways: the accuracy of the image coordinates, and common-point matching through correct sorting. The RMSE of the object coordinates, calculated by a photogrammetry program from the image coordinates, fell within the expected RMSE of close-range photogrammetry, and matching of the common points was also performed correctly by sorting. For these reasons, the color channel separation method is adequate for acquiring accurate image coordinates and matching common points. This method should be useful for industrial fields that need fast, correct processing of information acquired in real time.
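The color-separation idea, encoding the two views in different color channels so that corresponding orientation points can be recovered directly, can be sketched with a toy single-marker example; this is an illustration of the principle, not the paper's full matching procedure:

```python
import numpy as np

def match_by_channel(combined):
    """Given a frame whose red channel carries the left view's marker and whose
    blue channel carries the right view's, return the centroid of the bright
    marker in each channel as a matched point pair (toy sketch)."""
    red, blue = combined[..., 0], combined[..., 2]

    def centroid(channel, thresh=128):
        ys, xs = np.nonzero(channel > thresh)
        return xs.mean(), ys.mean()

    return centroid(red), centroid(blue)
```

The horizontal offset between the two centroids is the disparity of the orientation point, from which close-range photogrammetry recovers depth.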


Width Estimation of Stationary Objects using Radar Image for Autonomous Driving of Unmanned Ground Vehicles (무인차량 자율주행을 위한 레이다 영상의 정지물체 너비추정 기법)

  • Kim, Seongjoon;Yang, Dongwon;Kim, Sujin;Jung, Younghun
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.6 / pp.711-720 / 2015
  • Recently, many studies of radar systems mounted on ground vehicles for autonomous driving, SLAM (simultaneous localization and mapping), and collision avoidance have been reported. Since several pixels per object may be generated in a close-range radar application, the width of an object can be estimated automatically by various signal-processing techniques. In this paper, we attempted to develop an algorithm to estimate obstacle width using radar images. The proposed method consists of five steps: 1) background clutter reduction, 2) local peak pixel detection, 3) region growing, 4) contour extraction, and 5) width calculation. To validate the method, we estimated the widths of two cars using real data acquired by a commercial radar system, the I200 manufactured by Navtech. As a result, we verified that the proposed method can estimate the widths of targets.
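The five steps listed above can be sketched as follows; the thresholds and the 0.25 m/pixel cross-range resolution are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def estimate_widths(radar_img, bg_thresh=0.2, peak_thresh=0.5, cross_res=0.25):
    """Width per detected object, in metres, following the five named steps."""
    # 1) background clutter reduction: zero out pixels below a noise floor
    img = np.where(radar_img > bg_thresh, radar_img, 0.0)
    # 2) local peak pixel detection: pixels equal to their 3x3 neighborhood max
    peaks = (img == ndimage.maximum_filter(img, size=3)) & (img > peak_thresh)
    # 3) region growing: dilate peaks, constrained to the above-floor support
    grown = ndimage.binary_dilation(peaks, iterations=3) & (img > 0)
    # 4) contour extraction via connected-component labelling
    labels, n = ndimage.label(grown)
    # 5) width calculation: cross-range pixel extent of each component, scaled
    widths = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        widths.append((xs.max() - xs.min() + 1) * cross_res)
    return widths
```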