• Title/Summary/Keyword: View Finder


Making of View Finder for Drone Photography (드론 촬영을 위한 뷰파인더 제작)

  • Park, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.12
    • /
    • pp.1645-1652
    • /
    • 2018
  • Drones, first developed for military purposes, have expanded into various civil areas as the technology has matured. Among the drones developed for these diverse purposes, a drone for photography carries a camera and is actively used to produce a wide range of image content beyond filming and broadcasting. A photography drone makes it possible to shoot vivid, dynamic images that were hard to capture with conventional photography techniques. This study built a view finder that helps a drone operator control the drone while directly viewing the object to be shot with the drone camera. The view finder is a glasses-type device: the body was modelled in 3D MAX, printed on a 3D printer, and fitted with an ultra-small LCD monitor. The view finder makes it possible to fly the drone safely and to achieve accurate framing of the object to be shot.

A Study on 3D Reconstruction of Urban Area

  • Park Y. M.;Kwon K. R.;Lee K. W.
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.470-473
    • /
    • 2005
  • This paper proposes a method for reconstructing the shape and color information of 3-dimensional buildings. Range data are acquired by scanning with a laser range finder, and color is obtained by mapping image coordinates from a CCD camera fixed on the range finder onto the laser coordinates. In addition, a 'Far-View' is built from a high-resolution satellite image: building contours are extracted and building heights are derived from a DEM. When the user selects a region of the 'Far-View', a detailed 3D reconstruction of the buildings in that region is displayed. The results can be applied to city planning, 3D environment games, movie backgrounds, etc.


A study on the theoretical minimum resolution of the laser range finder (레이저 거리계의 이론적 최소 분해능에 관한 연구)

  • Cha, Yeong-Yeop;Kwon, Dae-Gab
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.644-647
    • /
    • 1996
  • In this study, a theoretical minimum-resolution analysis of an active vision system using a laser range finder is performed for surrounding recognition and 3D data acquisition in unknown environments. The laser range finder consists of a slit laser beam generator, a scanning mechanism, a CCD camera, and a signal processing unit. The beam from the laser source is spread into a slit by a set of cylindrical lenses, and the slit beam is swept up and down and rotated by the scanning mechanism. The image of the laser beam reflected from the surface of an object is recorded on the CCD array. As a result, the resolution of the range data depends on the distance between the lens center of the CCD camera and the light emitter, on the view and beam angles, and on the parameters of the CCD camera.


A study on the resolution of the laser range finder (레이저 거리계의 분해능에 관한 연구)

  • Cha, Yeong-Yeop;Yu, Chang-Mok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.4 no.1
    • /
    • pp.82-87
    • /
    • 1998
  • In this study, a theoretical resolution analysis of an active vision system using a laser range finder is performed for surrounding recognition and 3D data acquisition in unknown environments. The analysis shows that the resolution of the range data depends on the distance between the lens center of the CCD camera and the light emitter, on the view angle, on the beam angle, and on the parameters of the CCD camera. The theoretical resolutions of various types of laser range finders, based on the parameters that affect resolution, are calculated, and experimental results are obtained on a real system.
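The dependence stated in the abstract (baseline, angles, CCD parameters) can be illustrated with a minimal single-spot triangulation model. This is a textbook small-angle sketch, not the paper's own derivation, and every numeric parameter below is hypothetical:

```python
def triangulation_range(baseline, focal_len, spot_offset):
    """Range from a single-spot triangulation range finder:
    z = b * f / x, where b is the emitter-to-lens baseline (m),
    f the lens focal length (m), and x the imaged spot offset
    on the CCD (m)."""
    return baseline * focal_len / spot_offset

def range_resolution(z, baseline, focal_len, pixel_pitch):
    """Smallest resolvable range step at distance z when the spot
    position is quantised to one CCD pixel: dz ~ z**2 * dp / (b * f).
    Resolution degrades quadratically with range and improves with
    a longer baseline, longer focal length, or finer pixels."""
    return z ** 2 * pixel_pitch / (baseline * focal_len)

# Hypothetical setup: 10 cm baseline, 8 mm lens, 10 micron pixels.
b, f, pitch = 0.10, 0.008, 10e-6
for z in (0.5, 1.0, 2.0):
    print(f"z = {z:.1f} m -> resolution = "
          f"{range_resolution(z, b, f, pitch) * 1000:.2f} mm")
```

The quadratic growth of the range error with distance is why both papers emphasise the baseline and the camera parameters as the dominant design variables.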


Advanced Seam Finding Algorithm for Stitching of 360 VR Images (개선된 Seam Finder를 이용한 360 VR 이미지 스티칭 기술)

  • Son, Hui-Jeong;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.656-668
    • /
    • 2018
  • VR (Virtual Reality) is one of the important research topics in the field of multimedia application systems. The quality of visual data composed from multiple pictures depends on the performance of the stitching technique. A stitching module consists of feature extraction, feature matching, warping, seam finding, and blending. In this paper, we propose a preprocessing scheme that provides an efficient mask for the seam finder. Incorporating the proposed mask removes distortions, such as ghosting and blurring, from the stitched image. Simulation results show that the proposed algorithm outperforms conventional techniques in both subjective quality and computational complexity.
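The seam-finding step in such a pipeline is commonly posed as a cheapest-path search over the overlap region. The following is a generic dynamic-programming sketch (as in seam carving), not the paper's specific algorithm:

```python
import numpy as np

def find_seam(diff):
    """Cheapest vertical seam through an overlap cost map `diff`
    (H x W, e.g. absolute difference between two warped images),
    found by dynamic programming. Returns one column index per
    row, tracing the path from top to bottom."""
    h, w = diff.shape
    cost = diff.astype(float)
    for y in range(1, h):                      # accumulate path costs
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())          # cheapest endpoint
    for y in range(h - 2, -1, -1):             # backtrack upward
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(cost[y, lo:hi].argmin())
    return seam
```

A mask like the one the paper proposes could be folded in by adding a large penalty to `diff` wherever the mask flags problem regions (e.g. moving objects), steering the seam away from areas that cause ghosting.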

A NEW AUTO-GUIDING SYSTEM FOR CQUEAN

  • CHOI, NAHYUN;PARK, WON-KEE;LEE, HYE-IN;JI, TAE-GEUN;JEON, YISEUL;IM, MYUNGSHI;PAK, SOOJONG
    • Journal of The Korean Astronomical Society
    • /
    • v.48 no.3
    • /
    • pp.177-185
    • /
    • 2015
  • We develop a new auto-guiding system for the Camera for QUasars in the EArly uNiverse (CQUEAN). CQUEAN is an optical CCD camera system attached to the 2.1-m Otto-Struve Telescope (OST) at McDonald Observatory, USA. The new auto-guiding system differs from the original one in the following ways: it is attached to the finder scope instead of the cassegrain focus of the OST; it has its own filter system for observing bright targets; and it is controlled with the CQUEAN Auto-guiding Package, a newly developed auto-guiding program. The finder scope commands a very wide field of view at the expense of poorer light-gathering power than that of the OST. Based on star count data and the limiting magnitude of the system, we estimate that there are more than 5.9 observable stars within a single field of view of the new auto-guiding CCD camera. An adapter was made to attach the system to the finder scope. The new auto-guiding system successfully guided the OST to obtain science data with CQUEAN during the test run in 2014 February. The FWHM and ellipticity distributions of stellar profiles in CQUEAN images guided with the new system indicate guiding capabilities similar to those of the original auto-guiding system, with slightly poorer guiding performance at longer exposures, as indicated by the position angle distribution. We conclude that the new auto-guiding system has overall guiding performance similar to that of the original system. It will be used for the second-generation CQUEAN, but it can also be used for other cassegrain instruments of the OST.

Verification of Camera-Image-Based Target-Tracking Algorithm for Mobile Surveillance Robot Using Virtual Simulation (가상 시뮬레이션을 이용한 기동형 경계 로봇의 영상 기반 목표추적 알고리즘 검증)

  • Lee, Dong-Youm;Seo, Bong-Cheol;Kim, Sung-Soo;Park, Sung-Ho
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.36 no.11
    • /
    • pp.1463-1471
    • /
    • 2012
  • In this study, a 3-axis camera system design is proposed for application to an existing 2-axis surveillance robot, together with a camera-image-based target-tracking algorithm for this robot, which is validated in a virtual simulation. In the algorithm, the heading direction vector of the camera system on the mobile surveillance robot is obtained from the position error between the center of the view finder and the center of the object in the camera image. From this heading direction vector, the desired pan and tilt angles for target tracking and the desired roll angle for stabilizing the camera image are obtained through inverse kinematics. The algorithm was validated with a virtual simulation model based on MATLAB and ADAMS by checking the robot's response to the target motion and the virtual image error of the view finder.
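The mapping from image-center error to gimbal angles can be sketched with a pinhole-camera model. This is a simplified stand-in for the paper's inverse-kinematics step (the full 3-axis solution with roll stabilization is more involved), and the camera parameters below are hypothetical:

```python
import math

def pan_tilt_from_pixel_error(u, v, cx, cy, focal_px):
    """Desired pan/tilt increments (radians) that re-centre a target
    imaged at pixel (u, v), for a pinhole camera with principal
    point (cx, cy) and focal length `focal_px` in pixels."""
    pan = math.atan2(u - cx, focal_px)    # horizontal error -> pan
    tilt = math.atan2(cy - v, focal_px)   # vertical error -> tilt
                                          # (image y grows downward)
    return pan, tilt

# Hypothetical 640x480 camera with f = 500 px: a target already at
# the view finder centre requires no gimbal motion.
print(pan_tilt_from_pixel_error(320, 240, 320, 240, 500))
```

Feeding these angle increments to the pan/tilt axes each frame drives the position error in the view finder toward zero, which is the closed loop the simulation checks.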

Realization on the Integrated System of Navigation Communication and Fish Finder for Safety Operation of Fishing Vessel (어선의 안전조업을 위한 항해통신 및 어탐기의 통합시스템 구현)

  • In-suk Kang;In-ung Ju;Jeong-yeon Kim;Jo-cheon Choi
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.433-440
    • /
    • 2021
  • Maritime accidents caused by inattention on fishing vessels are aggravated by the aging of vessel operators. Navigation, communication, and fish finder equipment are installed inside the narrow bridge of a fishing vessel, each with its own monitor, and the many terminals obscure the view of the sea ahead from the bridge. A further problem is that the operator's ability to absorb information is reduced by the confusion of having to check the display of each separate piece of equipment. There has therefore been demand to integrate this equipment simply, combining the displays required on a fishing vessel, such as the electronic chart, communication equipment, and fish detection, into one case. In this paper, an integrated system is implemented that houses the GPS plotter, AIS, VHF-DSC, V-pass, fish finder, and power supply in a single case in the narrow wheelhouse of a fishing vessel, operated through a multi-function display (MFD). The MFD consolidates the several separate terminals and provides the necessary information on a single screen. This integrated fishery support system improves safe operation at sea and the working environment of fishing vessels.

A 3D Map Building Algorithm for a Mobile Robot Moving on the Slanted Surface (모바일 로봇의 경사 주행 시 3차원 지도작성 알고리즘)

  • Hwang, Yo-Seop;Han, Jong-Ho;Kim, Hyun-Woo;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.3
    • /
    • pp.243-250
    • /
    • 2012
  • This paper proposes a 3D map-building algorithm using a single LRF (Laser Range Finder) while a mobile robot navigates on a slanted surface. There have been several studies on 3D map building with an LRF, but most of them perform map building only on flat surfaces. While a mobile robot moves on a slanted surface, the view angle of the LRF changes dynamically, which makes it very difficult to build a 3D map from encoder data alone. To cope with this dynamic change of the view angle, an IMU and a balance filter are fused in this research to correct the unstable encoder data. Real navigation experiments verify that the multi-sensor fusion properly corrects for the slope angle of the slanted surface. The effectiveness of the balance filter is also confirmed through hill-climbing navigation experiments.
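The abstract does not give the filter equations; a common form of such a "balance" (complementary) filter fuses the integrated gyro rate at high frequency with the accelerometer tilt at low frequency. The sketch below assumes that form, with an assumed gain `alpha`:

```python
def balance_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One step of a complementary ('balance') filter for the pitch
    angle: integrate the gyro rate for short-term accuracy, and pull
    toward the accelerometer tilt for long-term, drift-free stability.
    `alpha` close to 1 weights the gyro path."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Hypothetical run: the robot sits still on a 0.1 rad slope; starting
# from a zero estimate, the filter converges to the accelerometer tilt.
pitch = 0.0
for _ in range(500):
    pitch = balance_filter(pitch, gyro_rate=0.0, accel_pitch=0.1, dt=0.01)
```

With the slope angle estimated this way, each LRF scan can be rotated into the world frame before being stacked into the 3D map, which is the correction the paper applies to the unstable encoder data.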