Fish-eye lens

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional multi-robot formation methods often enlarge the camera's viewing angle by using a stereo vision system or a vision system with a reflector instead of a general-purpose camera, whose angle of view is small. In addition, to compensate for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots by applying SURF to fisheye images, which carry 360° of image information without merging images. The whole system controls the robot formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy through both simulation and experiment.
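
As a rough illustration of the tracking core this abstract describes — feature detection inside the estimated robot region followed by Lucas-Kanade optical flow — here is a minimal sketch assuming OpenCV. ORB stands in for the patented SURF detector, and the mask, window size, and thresholds are illustrative assumptions rather than the paper's parameters.

```python
import cv2
import numpy as np

# ORB replaces SURF here (SURF needs a non-free OpenCV build); all
# parameters are illustrative, not the paper's.
orb = cv2.ORB_create(nfeatures=500)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def robot_motion(prev_gray, next_gray, robot_mask):
    """Estimate a robot's image-plane motion between two fisheye frames."""
    # Detect features only inside the estimated robot region.
    kps = orb.detect(prev_gray, mask=robot_mask)
    if not kps:
        return None
    p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    # Track the features into the next frame with pyramidal Lucas-Kanade.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None, **lk_params)
    good = status.ravel() == 1
    if not good.any():
        return None
    # The mean flow vector approximates the robot's moving direction and speed.
    flow = (p1[good] - p0[good]).reshape(-1, 2)
    return flow.mean(axis=0)  # (dx, dy) in pixels per frame
```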

IMPLEMENTATION OF WHOLE SHAPE MEASUREMENT SYSTEM USING A CYLINDRICAL MIRROR

  • Uranishi, Yuki;Manabe, Yoshitsugu;Sasaki, Hiroshi;Chihara, Kunihiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.601-605 / 2009
  • We propose a system for easily measuring the whole shape of an object. The system consists of a camera and a cylinder whose inside is coated with a mirror layer. A target object is placed inside the cylinder, and an image is captured by the camera from directly above. The captured image contains sets of points observed from multiple viewpoints: one observed directly and others observed via the mirror. Therefore, the whole shape of the object can be measured by stereo vision in a single shot. This paper shows that a prototype of the proposed system was implemented and an actual object was measured with it. A pattern-matching method based on SSD (Sum of Squared Differences) and a method based on DP (Dynamic Programming) are employed to identify the sets of corresponding points in the warped captured images.
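
For reference, the SSD-based correspondence search can be sketched as a simple block-matching loop. This is a minimal illustration assuming a horizontal search line, whereas the paper matches along the warped curves of the mirrored views; the window and search-range values are assumptions.

```python
import numpy as np

def ssd_match(ref, target, y, x, win=7, search=40):
    """Find the column in row y of `target` whose window best matches the
    window centred at (y, x) in `ref`, by minimising the SSD."""
    h = win // 2
    if y - h < 0 or y + h + 1 > ref.shape[0] or x - h < 0 or x + h + 1 > ref.shape[1]:
        return None  # reference window must fit inside the image
    patch = ref[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    best_x, best_ssd = x, np.inf
    for dx in range(-search, search + 1):
        cx = x + dx
        if cx - h < 0 or cx + h + 1 > target.shape[1]:
            continue  # candidate window falls outside the image
        cand = target[y - h:y + h + 1, cx - h:cx + h + 1].astype(np.float64)
        ssd = np.sum((patch - cand) ** 2)  # Sum of Squared Differences
        if ssd < best_ssd:
            best_ssd, best_x = ssd, cx
    return best_x, best_ssd
```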

Development of A Prototype Device to Capture Day/Night Cloud Images based on Whole-Sky Camera Using the Illumination Data (정밀조도정보를 이용한 전천카메라 기반의 주·야간 구름영상촬영용 원형장치 개발)

  • Lee, Jaewon;Park, Inchun;Cho, Jungho;Ki, GyunDo;Kim, Young Chul
    • Atmosphere / v.28 no.3 / pp.317-324 / 2018
  • In this study, we review a ground-based whole-sky camera (WSC) developed to continuously capture day and night cloud images using illumination data from a precision Lightmeter with high temporal resolution. The WSC combines a precision Lightmeter, developed during the IYA (International Year of Astronomy) for analyzing artificial light pollution at night, with a DSLR camera equipped with a fish-eye lens of the kind widely used in observational astronomy. The WSC adjusts the shutter speed and ISO of the camera according to the illumination data in order to capture cloud images stably. A Raspberry Pi automatically controls the process of taking cloud and sky images every minute, 24 hours a day, under the various conditions indicated by the Lightmeter's illumination data. It is also used to post-process and store the cloud images and to upload the data to a web page in real time. Finally, we verify the technical feasibility of observing the cloud distribution (cover, type, height) quantitatively and objectively with this optical system, through analysis of the cloud images captured by the developed device.
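
The exposure-selection idea — mapping the measured illuminance to camera settings before each capture — can be sketched as below. The lux thresholds, function names, and capture interface are hypothetical stand-ins for illustration, not the authors' actual lookup table or camera API.

```python
def choose_exposure(lux):
    """Return an assumed (shutter_seconds, iso) pair for a given illuminance."""
    if lux > 1000:      # bright daylight
        return 1 / 1000, 100
    elif lux > 10:      # twilight or heavy overcast
        return 1 / 60, 400
    elif lux > 0.1:     # moonlit night
        return 5, 1600
    else:               # dark, moonless night
        return 30, 3200

def capture_sky_image(read_lux, shoot):
    # read_lux() and shoot(shutter, iso) are hypothetical stand-ins for the
    # Lightmeter interface and the DSLR trigger on the Raspberry Pi.
    shutter, iso = choose_exposure(read_lux())
    return shoot(shutter, iso)
```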

Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee Ju-Sang;Lim Young-Cheol;Ryoo Young-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.560-566 / 2005
  • In this paper, we present a practical technique for correcting distorted images for vision-based localization of a ubiquitous mobile robot. Localization is essential for a mobile robot and is realized using a camera vision system. To widen the camera's viewing angle, the vision system includes a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing must be fast enough for localization. Thus, we propose a practical correction technique for the distorted image and verify its performance by experimental tests.
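
A minimal sketch of such a fish-eye correction, assuming OpenCV's fisheye camera model: the intrinsic matrix and distortion coefficients below are placeholders that would come from a prior calibration, and precomputing the remap tables keeps the per-frame cost low, which matters for a fast-moving robot.

```python
import cv2
import numpy as np

# Placeholder calibration results; real values come from cv2.fisheye.calibrate.
K = np.array([[300.0,   0.0, 320.0],
              [  0.0, 300.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics
D = np.array([-0.05, 0.01, 0.0, 0.0])   # fisheye distortion coefficients

def build_maps(size):
    # Compute the undistortion remap tables once for a given image size.
    return cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, size, cv2.CV_16SC2)

def undistort(img, maps):
    # Per-frame correction is then a single fast remap.
    return cv2.remap(img, maps[0], maps[1], interpolation=cv2.INTER_LINEAR)

# Usage: maps = build_maps((640, 480)); corrected = undistort(frame, maps)
```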

A Study on 360° Image Production Method for VR Image Contents (VR 영상 콘텐츠 제작에 유용한 360도 이미지 제작 방법에 관한 연구)

  • Guo, Dawei;Chung, Jeanhun
    • Journal of Digital Convergence / v.15 no.12 / pp.543-548 / 2017
  • A 360° panoramic image can give people an unprecedented visual experience, and there are many different ways to make one. In this paper, we introduce two easy and effective methods: the first makes a 360° panoramic image from 48 photos, and the second from 6 photos. We compare the two methods and indicate which one suits which users. These simple production methods show that VR content design has become easy and popular: ordinary people can also make 360° panoramic images, which promotes the VR image contents industry.
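
As a scriptable analogue of the multi-photo workflow (the paper itself works in dedicated stitching tools), a set of overlapping shots can be merged with OpenCV's high-level stitcher; the file names below are examples for the 6-photo method.

```python
import cv2

# Example input: six overlapping shots covering the full sphere.
paths = [f"shot_{i:02d}.jpg" for i in range(6)]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("stitching failed with status", status)
```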

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies using a conventional or wide-angle lens suffer from blind spots when detecting vehicles to the rear, and the image is vulnerable to noise and various external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and so on. First, using a fish-eye lens minimizes blind spots compared to a wide-angle lens. Because nonlinear radial distortion increases as the angle of the lens grows, calibration was applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, along with the calibration, the original image was processed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes a considerable amount of time to compute, so the well-known Dark Channel Prior algorithm was used to reduce the calculation time. Gamma correction was used to correct brightness; a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for the correction. The evaluation used only a part of the image instead of the whole in order to reduce calculation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and to correct the entire image. The brightness correction and fog removal were processed in parallel, and their results were registered as a single image to minimize the total calculation time. The HOG feature extraction method was then used to detect vehicles in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle detection method.
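
The brightness-correction step — evaluating only a crop of the frame to pick a gamma value, then applying it to the whole image — can be sketched as follows, assuming OpenCV. The gamma-selection rule and the crop fraction are assumed stand-ins for the paper's evaluation, not its published procedure.

```python
import cv2
import numpy as np

def gamma_for(mean_brightness, target=128.0):
    # Solve (mean/255)^g = target/255 for g: frames brighter than the target
    # get g > 1 (darkening), dim frames get g < 1 (brightening).
    m = np.clip(mean_brightness, 1.0, 254.0)
    return float(np.log(target / 255.0) / np.log(m / 255.0))

def correct_brightness(img, crop=0.25):
    h, w = img.shape[:2]
    # Evaluate brightness on a small crop only, to save computation time.
    patch = cv2.cvtColor(img[:int(h * crop), :int(w * crop)], cv2.COLOR_BGR2GRAY)
    g = gamma_for(patch.mean())
    # Apply the chosen gamma to the entire image via a lookup table.
    lut = np.clip(((np.arange(256) / 255.0) ** g) * 255.0, 0, 255).astype(np.uint8)
    return cv2.LUT(img, lut)
```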

Study on the Visual Cells in the Retina of Macropodus ocellatus (Pisces, Osphronemidae) Freshwater Fish from Korea (한국산 담수어류 버들붕어, Macropodus ocellatus (Pisces, Osphronemidae) 망막의 시각세포에 관한 연구)

  • Kim, Jae Goo;Park, Jong Yong
    • Korean Journal of Ichthyology / v.29 no.3 / pp.218-223 / 2017
  • The visual cells and eyes of Macropodus ocellatus (Pisces, Osphronemidae) were investigated using both light and scanning electron microscopy. This species has a circular lens and a yellowish cornea. The eye diameter was 3.5 ± 0.2 mm, which is 31.1 ± 3.0% of the head length. The retina (158.2 ± 10.6 μm) was built of several layers, including the visual cell layer, which consists of three types of cells: single cones (27.8 ± 1.6 μm), equal double cones (33.9 ± 3.7 μm), and large rods (57.3 ± 1.3 μm). The visual cells in this layer were arranged in a regular pattern. All visual cells were clearly divided into two parts, an inner and an outer segment. The elongated rod cells extend to the bottom of the retinal pigment epithelium. Under scanning electron microscopy, the outer segment is linked to the inner segment by so-called calyceal processes. The single and double cones of M. ocellatus form a flower-petal arrangement, a regular mosaic pattern of quadrilateral units in which four double cones surround a single cone.