Title/Summary/Keyword: Camera angles

239 search results

FABRICATION AND TEST OF AN OPTICAL GRISM (가시광선용 그리즘의 제작과 성능시험)

  • Lee, D.H.;Song, J.W.;Yoon, T.S.
    • Publications of The Korean Astronomical Society / v.28 no.3 / pp.75-82 / 2013
  • An optical grism for education was fabricated and tested. It is composed of a transmission grating as the dispersion element and a prism as the diffraction-angle compensation device. The transmission grating is an Edmund Optics #49-584 (spatial frequency 600 lines/mm, dimensions 50 mm × 50 mm). The prism is of the fused-silica type with angles of 41.3°, 48.7°, and 90°. The grism is fabricated by bonding the transmission grating to the prism with an optical adhesive. The jig for assembling the grism, telescope, and camera consists of an aluminum tube, an aluminum disk ring, and a T-ring camera adaptor. The fabricated grism spectrograph was tested in the laboratory with a DSLR camera using halogen and neon lamps, and the grism assembled with a reflector telescope was tested in the field using stellar light. The results show good agreement with the design parameters. The wavelength coverage of the grism is 250 nm at an un-deviated wavelength of 506 nm, and the wavelength resolution is 0.11 nm/pixel.
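As a cross-check on the reported figures, the standard grating equation m·λ = d·sin θ reproduces the dispersion arithmetic; the sketch below assumes normal incidence and first order, and all names are illustrative, not from the paper.

```python
import math

def diffraction_angle(wavelength_nm, lines_per_mm=600, order=1):
    """Diffraction angle (degrees) at normal incidence: m*lambda = d*sin(theta)."""
    d_nm = 1e6 / lines_per_mm              # groove spacing in nm (1 mm = 1e6 nm)
    return math.degrees(math.asin(order * wavelength_nm / d_nm))

# Pixel span implied by the reported coverage and dispersion.
dispersion_nm_per_px = 0.11                # nm/pixel, from the abstract
coverage_nm = 250.0                        # nm, from the abstract
pixels_spanned = coverage_nm / dispersion_nm_per_px

theta_undeviated = diffraction_angle(506.0)  # grating angle at the 506 nm un-deviated wavelength
```

With 600 lines/mm the first-order angle at 506 nm comes out near 17.7°, and the 250 nm coverage at 0.11 nm/pixel corresponds to roughly 2270 detector pixels.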

Geometric Correction of Vehicle Fish-eye Lens Images (차량용 어안렌즈영상의 기하학적 왜곡 보정)

  • Kim, Sung-Hee;Cho, Young-Ju;Son, Jin-Woo;Lee, Joong-Ryoul;Kim, Myoung-Hee
    • 한국HCI학회:학술대회논문집 (Proceedings of the HCI Society of Korea Conference) / 2009.02a / pp.601-605 / 2009
  • Because a fish-eye lens provides a super-wide field of view (over 180 degrees) with a minimum number of cameras, many vehicles now mount such camera systems. Camera calibration must be performed first, and geometric correction of the radial distortion is needed before the images can support driver assistance. However, vehicle fish-eye cameras produce diagonal (full-frame) output images rather than circular ones and exhibit asymmetric distortion beyond the horizontal angle. In this paper, we introduce a camera model and a metric calibration method for vehicle cameras that uses feature points of the image, and we undistort the input image through a perspective projection in which straight lines appear straight. The method fits vehicle fish-eye lenses with different fields of view.
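The paper's exact camera model is not given in the abstract; the sketch below assumes the common equidistant fish-eye model (r = f·θ) and maps it to a perspective projection (r' = f·tan θ), which is the sense in which straight lines are made to appear straight. Function names are hypothetical.

```python
import math

def undistort_radius(r_fisheye, f):
    """Map an equidistant fish-eye radius (r = f*theta) to the perspective
    radius (r' = f*tan(theta)) so that straight lines appear straight."""
    theta = r_fisheye / f                  # incidence angle under the fish-eye model
    if not 0.0 <= theta < math.pi / 2:
        raise ValueError("ray falls outside the perspective half-space")
    return f * math.tan(theta)

def undistort_point(x, y, cx, cy, f):
    """Undistort one pixel about the principal point (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y
    scale = undistort_radius(r, f) / r
    return cx + dx * scale, cy + dy * scale
```

Note the guard on θ: rays beyond 90° from the optical axis (possible with a 180°+ lens) have no perspective image, which is why such corrections only rectify a crop of the full fish-eye frame.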


Viewpoint Invariant Person Re-Identification for Global Multi-Object Tracking with Non-Overlapping Cameras

  • Gwak, Jeonghwan;Park, Geunpyo;Jeon, Moongu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2075-2092 / 2017
  • Person re-identification matches pedestrians observed from non-overlapping camera views. It has important applications in video surveillance, such as person retrieval, person tracking, and activity analysis. However, it is a very challenging problem due to illumination, pose, and viewpoint variations between non-overlapping camera views. In this work, we propose a viewpoint-invariant method for matching pedestrian images using the orientation of the pedestrian. First, the proposed method divides a pedestrian image into patches and assigns an angle to each patch using the orientation of the pedestrian, under the assumption that the human body is cylindrical. The differences between angles are then used to compute the similarity between patches. We applied the proposed method to real-time global multi-object tracking across multiple disjoint cameras with non-overlapping fields of view, where the re-identification algorithm builds global trajectories by connecting local trajectories obtained by different local trackers. The effectiveness of the viewpoint-invariant method for person re-identification was validated on the VIPeR dataset. In addition, we demonstrated the effectiveness of the proposed approach for inter-camera multiple-object tracking on the MCT dataset with ground-truth data for local tracking.
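A minimal sketch of the patch-angle idea under the stated cylindrical-body assumption; the exact patch layout and similarity measure are not given in the abstract, so the formulas below are illustrative only.

```python
def patch_angles(num_patches, orientation_deg):
    """Assign an angle to each horizontal patch of a pedestrian image, assuming
    the body is a cylinder viewed from orientation_deg; patches sweep the
    visible half of the cylinder (orientation -90 to +90 degrees)."""
    step = 180.0 / num_patches
    return [orientation_deg - 90.0 + step * (i + 0.5) for i in range(num_patches)]

def angle_similarity(a_deg, b_deg):
    """Similarity in [0, 1] from the wrapped angular difference between patches."""
    diff = abs((a_deg - b_deg + 180.0) % 360.0 - 180.0)
    return 1.0 - diff / 180.0
```

Matching patches by angle rather than by image position is what makes the comparison viewpoint-invariant: two cameras seeing the same person from different sides compare patches that face the same way on the cylinder.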

Optical Principles of Beam Splitters

  • Lee, Chang-Kyung
    • Korean Journal of Geomatics / v.1 no.1 / pp.69-74 / 2001
  • In conventional photogrammetry, three-dimensional coordinates are obtained from two consecutive images of a stationary object photographed from two exposure stations separated by a certain distance. However, it is impossible to photograph moving objects from two stations with one camera at the same time. Various methods were devised to overcome this obstacle, e.g., taking the left and right scenes simultaneously with one camera using a beam splitter attached to the front, thus creating a stereo scene in one image. A beam splitter consists of two outer mirrors and two inner mirrors. This paper evaluates the optical principles of the beam splitter based on the light paths between the outer and inner mirrors. A mathematical model of the geometric configuration is derived for the beam splitter, which allows us to design and control a beam splitter to obtain maximum scale and maximum base-height ratio by stepwise application of the model. The results show that the beam splitter is a very useful tool for stereophotography with one camera. The optimum geometric configurations ensuring maximum scale and base-height ratio are closely related to the inner and outer reflector sizes, their inclination angles, and the offsets between the outer mirrors.
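The abstract does not reproduce the paper's mathematical model; as a simplified illustration of why the base-height ratio matters, the outer-mirror separation acts as a virtual stereo baseline B for an object at distance H (names and the single-baseline simplification are assumptions, not the paper's derivation).

```python
def base_height_ratio(baseline_m, object_distance_m):
    """B/H ratio of the stereo pair: the outer-mirror separation plays the
    role of the virtual stereo baseline B for an object at distance H."""
    return baseline_m / object_distance_m

def parallax_px(baseline_m, object_distance_m, focal_px):
    """Stereo parallax (disparity) in pixels for a point at distance H:
    larger B/H means larger parallax and better depth resolution."""
    return focal_px * baseline_m / object_distance_m
```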


Objects Tracking of the Mobile Robot Using the Hybrid Visual Servoing (혼합 비주얼 서보잉을 통한 모바일 로봇의 물체 추종)

  • Park, Kang-IL;Woo, Chang-Jun;Lee, Jangmyung
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.781-787 / 2015
  • This paper proposes a hybrid visual servoing algorithm for object tracking by a mobile robot with a stereo camera. The robot performs object recognition and tracking using the SIFT and CAMSHIFT algorithms for the hybrid visual servoing. The CAMSHIFT algorithm applied to the stereo camera images yields the three-dimensional position and orientation of the mobile robot. With the hybrid visual servoing, stable balance control is realized by a control system that calculates a desired angle for the center of gravity, whose location depends on variations of the link rotation angles of the manipulator. A PID control algorithm was adopted for the manipulator because it is simple to design and does not require unnecessarily complex dynamics. To demonstrate the control performance of the hybrid visual servoing, real experiments were performed using the mobile manipulator system developed for this research.
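The balance-control loop is described as a PID controller; a textbook implementation looks like the following (the gains shown are placeholders, not values from the paper).

```python
class PID:
    """Textbook PID controller, as adopted in the paper for manipulator control."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """One control step: returns the actuation command for this sample."""
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Pure proportional response to a unit error in the balance angle.
output = PID(kp=2.0, ki=0.0, kd=0.0).update(setpoint=1.0, measured=0.0, dt=0.1)
```

The appeal the abstract cites is exactly this: the controller needs only three gains and an error signal, with no dynamic model of the manipulator.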

Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.18 no.8 / pp.744-749 / 2012
  • In this paper, we show how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine-vision camera, an encoder, and an IMU sensor. The heading value of the IMU is measured by a terrestrial-magnetism (magnetic-field) sensor, which is constantly affected by its surroundings. To increase the sensor's accuracy, we isolate a template of the ceiling with the vision camera, measure angles with a pattern-matching algorithm, and calibrate the IMU by comparing the obtained values with the IMU readings and the offset value. The values used to estimate the robot's position (from the encoder, the IMU, and the angle sensor of the vision camera) are transferred to the host PC over a wireless network, and the host PC estimates the robot's location from all of them. As a result, we obtained more accurate position estimates than when using IMU sensor calibration alone.
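A sketch of the calibration step, assuming the correction is simply the wrapped difference between the vision-derived ceiling-pattern angle and the IMU heading; the paper's exact formulation is not given in the abstract, and these function names are illustrative.

```python
def wrap_deg(angle):
    """Wrap an angle to [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def heading_offset(vision_deg, imu_deg):
    """Offset that calibrates the drifting IMU heading against the
    ceiling-pattern angle measured by the vision camera."""
    return wrap_deg(vision_deg - imu_deg)

def calibrated_heading(imu_deg, offset_deg):
    """IMU heading after applying the vision-derived offset."""
    return wrap_deg(imu_deg + offset_deg)
```

The wrapping matters: a naive subtraction of 350° and 10° gives 340° instead of the true 20° discrepancy, which would corrupt the fused estimate.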

Self-Organization of Visuo-Motor Map Considering an Obstacle

  • Maruki, Yuji
    • 제어로봇시스템학회:학술대회논문집 (Proceedings of the Institute of Control, Robotics and Systems Conference) / 2003.10a / pp.1168-1171 / 2003
  • The visuo-motor map is based on Kohonen's self-organizing map. The map learns the relation between the end-effector coordinates and the joint angles. In this paper, a 3-DOF manipulator moving in 2D space is targeted. A CCD camera is set beside the manipulator, and the end-effector coordinates are obtained from the image of the manipulator. As a result of learning, the end effector can be moved to the destination without exact teaching.
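A minimal Kohonen-style self-organizing map over 2-D inputs, as a sketch of the learning rule involved; in the visuo-motor map each unit would also store joint angles, which this sketch omits, and all parameters are illustrative.

```python
import math
import random

def train_som(samples, n_units=10, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """1-D Kohonen self-organizing map over 2-D inputs: each sample pulls its
    best-matching unit (and, more weakly, that unit's neighbors) toward it."""
    rng = random.Random(seed)
    w = [[rng.random(), rng.random()] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighborhood width
        for x in samples:
            bmu = min(range(n_units),              # best-matching unit index
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i][0] += lr * h * (x[0] - x[0] * 0.0 - w[i][0]) + 0.0 if False else lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
    return w

# Train on end-effector positions along the workspace diagonal.
weights = train_som([(i / 9.0, i / 9.0) for i in range(10)])
```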


Flow Field Analysis around Multi-Cylinders Using Particle Image Velocimetry (PIV를 이용한 다수원주 주위 유동장 해석)

  • 전완수;박준수;권순홍;하동대;최장운;이만형
    • Journal of Ocean Engineering and Technology / v.10 no.3 / pp.89-95 / 1996
  • The flow field around four cylinders at various angles was investigated using the particle image velocimetry (PIV) technique. The flow field was first recorded with a video camera, and the PIV technique was then applied to the recorded images. The results proved useful for analyzing the complex flow field around multiple cylinders.
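The core PIV step is cross-correlating interrogation windows between two frames to estimate the local particle displacement; a brute-force integer-shift sketch (not the paper's implementation) is:

```python
def best_shift(win_a, win_b, max_shift=2):
    """Integer (dy, dx) shift of window B relative to A that maximizes the
    cross-correlation sum: the core displacement estimate in PIV."""
    h, w = len(win_a), len(win_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = sum(win_a[y][x] * win_b[y + dy][x + dx]
                        for y in range(h) for x in range(w)
                        if 0 <= y + dy < h and 0 <= x + dx < w)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# A single tracer particle moves one pixel downward between frames.
frame1 = [[0.0] * 5 for _ in range(5)]
frame1[2][2] = 1.0
frame2 = [[0.0] * 5 for _ in range(5)]
frame2[3][2] = 1.0
shift = best_shift(frame1, frame2)
```

Dividing the recorded frames into many such windows yields the velocity field; practical PIV uses FFT-based correlation and sub-pixel peak fitting instead of this exhaustive search.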


Uncertainty Analysis of Observation Matrix for 3D Reconstruction (3차원 복원을 위한 관측행렬의 불확실성 분석)

  • Koh, Sung-shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.3 / pp.527-535 / 2016
  • Statistical optimization algorithms have been developed in various forms to estimate 3D shape and motion. However, statistical approaches are limited in analyzing how sensitive SfM (Shape from Motion) is to the camera's geometric position, viewing angles, and so on. This paper proposes a quantitative method for estimating the uncertainty of an observation matrix from camera imaging-configuration factors, predicting the reconstruction ambiguities in SfM. This is a very efficient way to predict the final reconstruction performance of an SfM algorithm. More importantly, our method shows how to derive practical guidelines for setting camera imaging configurations that can be expected to lead to reasonable reconstruction results. The experimental results verify the quantitative estimates of the observation matrix under various camera imaging configurations and confirm the effectiveness of our algorithm.
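The abstract does not give the paper's formulas; the classical background such analyses build on is that, under an affine camera, the stacked 2F × P observation matrix of P points tracked over F frames has rank at most 3 (the Tomasi-Kanade factorization), and how close the matrix is to losing that rank reflects the reconstruction ambiguity. A pure-Python rank check illustrates the rank-3 property; the example cameras and points are made up.

```python
def matrix_rank(rows, tol=1e-9):
    """Numerical rank by Gaussian elimination with partial pivoting."""
    a = [list(r) for r in rows]
    rank, m, n = 0, len(a), len(a[0])
    for col in range(n):
        if rank == m:
            break
        pivot = max(range(rank, m), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < tol:
            continue
        a[rank], a[pivot] = a[pivot], a[rank]
        for r in range(rank + 1, m):
            f = a[r][col] / a[rank][col]
            for c in range(col, n):
                a[r][c] -= f * a[rank][c]
        rank += 1
    return rank

# Made-up affine cameras (two projection rows each) observing five 3-D points.
points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
          (1.0, 1.0, 0.0), (1.0, 2.0, 3.0)]
camera_rows = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0),
               (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (2.0, 1.0, 0.0)]
W = [[sum(c * p for c, p in zip(cam, pt)) for pt in points] for cam in camera_rows]
```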

Implementation of a Helmet Azimuth Tracking System in the Vehicle (이동체 내의 헬멧 방위각 추적 시스템 구현)

  • Lee, Ji-Hoon;Chung, Hae
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.529-535 / 2020
  • It is important to secure the driver's external view in armored vehicles, which are enclosed in iron armor as protection against enemy fire. For this purpose, a 360-degree rotatable surveillance camera is mounted on the vehicle, and the key problem is to track the head of the driver wearing a helmet so that the external camera rotates in exactly the same direction. In this paper, we introduce a method that uses a MEMS-based AHRS sensor and an illuminance sensor to compensate for the disadvantages of the existing optical method, and implements it at low cost. The key idea is to set the camera's direction using the difference between the Euler angles detected by the two sensors mounted on the camera and the helmet, and to adjust the direction with the illuminance sensor from time to time to remove the drift error of the sensors. The implemented prototype shows that the camera's direction matches the driver's exactly.
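A sketch of the Euler-angle difference and the illuminance-based drift re-zeroing described above; the reference direction, class, and function names are assumptions for illustration, not from the paper.

```python
def wrap360(angle):
    """Wrap an angle to [0, 360)."""
    return angle % 360.0

def camera_azimuth(helmet_yaw, vehicle_yaw):
    """Azimuth command for the external camera: the helmet's yaw relative to
    the vehicle body, from the two AHRS Euler angles."""
    return wrap360(helmet_yaw - vehicle_yaw)

class DriftCorrector:
    """Re-zeros the AHRS yaw whenever the illuminance sensor fires at a known
    reference direction (a simplified stand-in for the paper's scheme)."""

    def __init__(self):
        self.bias = 0.0

    def at_reference(self, measured_yaw, reference_yaw=0.0):
        """Record the bias seen when the helmet passes the reference direction."""
        self.bias = wrap360(measured_yaw - reference_yaw)

    def corrected(self, measured_yaw):
        """Yaw reading with the last recorded drift bias removed."""
        return wrap360(measured_yaw - self.bias)

corrector = DriftCorrector()
corrector.at_reference(measured_yaw=5.0)   # 5 degrees of accumulated drift observed
```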