• Title/Summary/Keyword: Wide view angle


Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.10
    • /
    • pp.11-19
    • /
    • 2016
  • Displays are now large and varied in form, so previous gaze-tracking methods do not apply to them. Mounting the gaze-tracking camera above the display solves the problem of display size and height, but this setup cannot exploit the corneal reflection of infrared illumination used by previous methods. This paper proposes a pupil-detection method that is robust to eye occlusion, together with a simple method for computing the gaze position on the display from the inner eye corner, the pupil center, and the face-pose information. In the proposed method, the camera switches between wide-angle and narrow-angle modes according to the person's position when capturing frames for gaze tracking: if a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode based on the computed face position. Frames captured in narrow-angle mode contain the gaze-direction information of a person at long distance. Computing the gaze direction consists of a face-pose estimation step and a gaze-direction calculation step. The face pose is estimated by mapping feature points of the detected face onto a 3D model. To compute the gaze direction, an ellipse is first fitted using edge information split from the iris boundary of the pupil; when the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then computed from the pupil center, the inner eye corner, and the face-pose information. Experiments at various distances demonstrate that the proposed gaze-tracking algorithm removes the constraints imposed by the display's form and effectively computes the gaze direction of a person at long distance using a single camera.
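The gaze-position computation the abstract describes (pupil offset from the inner eye corner, rotated by the estimated face pose, intersected with the display plane) can be sketched roughly as follows. All names, the pixel-to-angle gain, and the planar-display model are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def gaze_point_on_display(pupil_center, eye_corner, head_rotation,
                          eye_position, screen_distance, gain=1.0):
    """Estimate the on-screen gaze point (hypothetical sketch).

    pupil_center, eye_corner: 2D image coordinates (pixels).
    head_rotation: 3x3 rotation matrix from face-pose estimation.
    eye_position: 3D eye location in camera coordinates (mm).
    screen_distance: distance from camera to the display plane (mm).
    gain: calibration factor mapping pixel offset to gaze angle.
    """
    # The pupil offset relative to the inner eye corner encodes eye-in-head rotation.
    offset = np.asarray(pupil_center, float) - np.asarray(eye_corner, float)
    # Convert the pixel offset into a gaze direction in head coordinates.
    direction_head = np.array([offset[0] * gain, offset[1] * gain, 1.0])
    direction_head /= np.linalg.norm(direction_head)
    # Rotate into camera coordinates using the estimated face pose.
    direction_cam = head_rotation @ direction_head
    # Intersect the gaze ray with the display plane z = screen_distance.
    t = (screen_distance - eye_position[2]) / direction_cam[2]
    hit = np.asarray(eye_position, float) + t * direction_cam
    return hit[:2]  # (x, y) on the display plane
```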

Study on Distortion Compensation of Underwater Archaeological Images Acquired through a Fisheye Lens and Practical Suggestions for Underwater Photography - A Case of Taean Mado Shipwreck No. 1 and No. 2 -

  • Jung, Young-Hwa;Kim, Gyuho;Yoo, Woo Sik
    • Journal of Conservation Science
    • /
    • v.37 no.4
    • /
    • pp.312-321
    • /
    • 2021
  • Underwater archaeology relies heavily on photography and video recording during surveys and excavations, just as ordinary archaeological studies on land do. All underwater images suffer from poor quality and distortion due to poor visibility, low contrast, and blur, caused by the differing refractive indices of water and air, the properties of the selected lenses, and the shapes of the viewports. In the Yellow Sea (between mainland China and the Korean peninsula), underwater visibility is far less than 1 m, typically in the range of 30 cm to 50 cm even on a clear day, due to very high turbidity. For photographing 1 m x 1 m grids underwater, a fisheye lens with a very wide view angle (180°) and an 8 mm focal length is intentionally used despite severe, unwanted barrel-shaped image distortion, even with a dome-port camera housing. Since it is very difficult to map wide underwater archaeological excavation sites by combining severely distorted images, practical compensation methods for distorted underwater images acquired through the fisheye lens are strongly desired. In this study, the source of image distortion in underwater photography is investigated; we identify it as the mismatch, in optical axis and focal points, between the dome-port housing and the fisheye lens. A practical image-distortion compensation method using customized image-processing software was explored and verified on archived underwater excavation images for effectiveness in underwater archaeological applications. To minimize the unusable area left by severe distortion after compensation, practical underwater photography guidelines are suggested.
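Barrel-distortion compensation of the kind described above is commonly implemented by inverse radial mapping: for each pixel of the corrected output, sample the distorted input at a radially displaced location. The sketch below uses a generic polynomial radial model with hypothetical coefficients, not the authors' customized software.

```python
import numpy as np

def undistort_barrel(image, k1, k2=0.0):
    """Compensate barrel distortion with a simple radial model (illustrative,
    not the paper's software). For each undistorted pixel, the source image
    is sampled at r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4), nearest-neighbor.
    """
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Normalized coordinates relative to the image center.
    xn, yn = (xx - cx) / cx, (yy - cy) / cy
    r2 = xn**2 + yn**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    # Source coordinates in the distorted input image.
    xs = np.clip(np.round(xn * factor * cx + cx), 0, w - 1).astype(int)
    ys = np.clip(np.round(yn * factor * cy + cy), 0, h - 1).astype(int)
    return image[ys, xs]
```

With `k1 = k2 = 0` the mapping is the identity; positive coefficients pull edge pixels inward, straightening the barrel-curved grid lines.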

Stereoscopic Camera with a CCD and Two Zoom Lenses (단일 CCD와 두개의 줌렌즈로 구성한 입체 카메라)

  • Lee, Sang-Eun;Jo, Jae-Heung;Jung, Eui-Min;Lee, Kag-Hyeon
    • Korean Journal of Optics and Photonics
    • /
    • v.17 no.1
    • /
    • pp.38-46
    • /
    • 2006
  • A stereoscopic camera based on the principle of image formation in the human eyes and brain is designed and fabricated using a single CCD and two zoom lenses. Because the two zoom lenses are separated by 65 mm, the typical human interocular distance, with a wide angle of view of 50° and a variable convergence angle from 0° to 16°, the camera can operate with binocular parallax similar to that of human eyes. To capture dynamic stereoscopic pictures, a shutter blade that selects the left and right images in turn, an X-cube image combiner that composes the two images passed through the blade, and a CCD running at 60 frames per second are used.
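The convergence angle such a rig must form to fixate an object follows from simple triangulation over the 65 mm baseline. A minimal sketch (the function name and the planar geometry are assumptions, not the paper's formulation):

```python
import math

def convergence_angle_deg(baseline_mm=65.0, distance_mm=1000.0):
    """Convergence angle the two zoom lenses form when both fixate an object
    at the given distance, assuming symmetric toe-in over the baseline.
    The paper's rig varies this angle between 0 and 16 degrees.
    """
    return math.degrees(2.0 * math.atan((baseline_mm / 2.0) / distance_mm))
```

The angle shrinks toward 0° as the object recedes (parallel optical axes) and reaches the rig's 16° limit for objects roughly 23 cm away.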

Multi-screen Content Creation using Rig and Monitoring System (다면 콘텐츠 현장 촬영 시스템)

  • Lee, Sangwoo;Kim, Younghui;Cha, Seunghoon;Kwon, Jaehwan;Koh, Haejeong;Park, Kisu;Song, Isaac;Yoon, Hyungjin;Jang, Kyungyoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.5
    • /
    • pp.9-17
    • /
    • 2017
  • Filming with multiple cameras is required for the production of multi-screen content, which can fill the viewer's field of view (FOV) entirely to provide an increased sense of immersion. In such a filming scenario, it is very important to monitor how the images captured by the multiple cameras are displayed as a single piece of content, or how the content will appear in an actual theatre. Most recent studies on creating content of special formats have focused on their own purposes, such as stereoscopic and panoramic images; there is no research on content creation optimized for the three-screen theatres that have recently been spreading. In this paper, we propose a novel content-production system comprising a rig that controls three cameras and monitoring software specialized for multi-screen content. The proposed rig can precisely control the angles between the cameras and capture a wide angle of view with three cameras, and it works with the monitoring software via remote communication. The monitoring software automatically aligns the content in real time, and the alignment is updated according to the angle of the camera rig. Further, production efficiency is greatly improved by making the alignment information available for post-production.

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.713-723
    • /
    • 2017
  • Since a panoramic image can overcome the limited viewing angle of a single camera and provide a wide field of view, it has been actively studied in the fields of computer vision and stereo cameras. To generate a panoramic image, stitching images taken by several ordinary cameras is widely used instead of a wide-angle camera, whose images are distorted, because stitching can reduce image distortion. The image-stitching technique creates descriptors of feature points extracted from multiple images, compares the similarities of the feature points, and links the images together into one. Each feature point carries several hundred dimensions of information, and the data-processing time increases as more images are stitched. In particular, when a panorama is generated from images of an object photographed by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step that makes stitching efficient for images obtained from many unspecified cameras of one object or environment: by pre-grouping the images based on camera sensor information, the number of images to be stitched at one time is reduced, which reduces the data-processing time. Stitching is then performed hierarchically to create one large panorama. Experimental results confirm that the grouping preprocessing proposed in this paper greatly reduces the stitching time for a large number of images.
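The pre-grouping step can be sketched as binning images by a sensor reading such as compass heading, so that only images likely to overlap are stitched together before the per-group panoramas are merged hierarchically. The field name and bin width below are hypothetical, not the paper's scheme.

```python
def group_by_heading(images, bin_width_deg=45.0):
    """Pre-group images by camera heading taken from sensor metadata
    (illustrative sketch; 'heading' is an assumed metadata field in
    degrees [0, 360)). Images in the same angular bin are stitched
    first; the resulting group panoramas are then stitched together.
    """
    groups = {}
    for img in images:
        bin_id = int(img["heading"] % 360.0 // bin_width_deg)
        groups.setdefault(bin_id, []).append(img)
    return groups
```

Because feature matching is roughly quadratic in the number of images compared at once, stitching g groups of n/g images each and then merging g panoramas touches far fewer descriptor pairs than stitching all n images in one pass.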

Case study of SAMSUNG TFT-LCD Technology Innovation using TRIZ method (트리즈 기법을 활용한 삼성전자의 TFT-LCD 기술혁신 사례연구)

  • Ban, Byeong-Seob
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.6
    • /
    • pp.3429-3434
    • /
    • 2014
  • In the TFT-LCD (Thin Film Transistor-Liquid Crystal Display) panel manufacturing industry, SAMSUNG, a late entrant, caught up to the leading first mover, SHARP. The changes in the notebook, monitor, TV, and mobile markets in the TFT-LCD industry were studied using a system analysis method. In addition, the fast-response-time technology in SAMSUNG TFT-LCD panels was developed using the TRIZ method. For example, a new liquid crystal mixture with a wide view angle and fast response time was developed by doping a new high-birefringence liquid crystal material into a base mixture, using the contradiction and su-field methods of TRIZ. The response time of the new liquid crystal mixture was improved to approximately 60% of the base value (16.2 ms with the base LC mixture vs. 9.8 ms with the new LC mixture).

A NEW AUTO-GUIDING SYSTEM FOR CQUEAN

  • CHOI, NAHYUN;PARK, WON-KEE;LEE, HYE-IN;JI, TAE-GEUN;JEON, YISEUL;IM, MYUNGSHI;PAK, SOOJONG
    • Journal of The Korean Astronomical Society
    • /
    • v.48 no.3
    • /
    • pp.177-185
    • /
    • 2015
  • We develop a new auto-guiding system for the Camera for QUasars in the EArly uNiverse (CQUEAN). CQUEAN is an optical CCD camera system attached to the 2.1-m Otto-Struve Telescope (OST) at McDonald Observatory, USA. The new auto-guiding system differs from the original one in the following ways: instead of the cassegrain focus of the OST, it is attached to the finder scope; it has its own filter system for observing bright targets; and it is controlled with the CQUEAN Auto-guiding Package, a newly developed auto-guiding program. The finder scope commands a very wide field of view at the expense of poorer light-gathering power than that of the OST. Based on star count data and the limiting magnitude of the system, we estimate there are more than 5.9 observable stars within a single FOV of the new auto-guiding CCD camera. An adapter was made to attach the system to the finder scope. The new auto-guiding system successfully guided the OST to obtain science data with CQUEAN during the test run in 2014 February. The FWHM and ellipticity distributions of stellar profiles on CQUEAN images guided with the new system indicate guiding capabilities similar to those of the original auto-guiding system, but with slightly poorer performance at longer exposures, as indicated by the position-angle distribution. We conclude that the new auto-guiding system has overall guiding performance similar to the original system's. The new auto-guiding system will be used for the second-generation CQUEAN, but it can also be used for other cassegrain instruments of the OST.

Performance Improvement of Pedestrian Detection using a GM-PHD Filter (GM-PHD 필터를 이용한 보행자 탐지 성능 향상 방법)

  • Lee, Yeon-Jun;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.12
    • /
    • pp.150-157
    • /
    • 2015
  • Pedestrian detection has been widely researched as one of the important technologies for autonomous vehicles and accident prevention. Pedestrian detection methods fall into two categories, camera-based and LIDAR-based. LIDAR-based methods have the advantages of a wide angle of view and insensitivity to illumination change, which camera-based methods lack. However, 3D LIDAR has several problems, such as insufficient resolution to detect distant pedestrians and a decreased detection rate in complex situations due to segmentation error and occlusion. In this paper, two methods using the GM-PHD filter are proposed to improve the poor detection rates of pedestrian detection algorithms based on 3D LIDAR. The first improves detection performance and object resolution by automatically accumulating points from previous frames onto current objects. The second additionally enhances the detection results by applying a GM-PHD filter modified to handle situations where multiple targets are poorly classified. A quantitative evaluation with autonomously acquired road-environment data shows that the proposed methods greatly increase the performance of existing pedestrian detection algorithms.
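For context, one predict/update cycle of the general GM-PHD filter (which the paper modifies) can be sketched in one dimension. Each target-intensity component is a weighted Gaussian; prediction thins by survival probability, and the update keeps a missed-detection copy plus one corrected copy per component-measurement pair. This is a textbook simplification with assumed parameter names, not the authors' modified filter.

```python
import math

def gmphd_step(components, measurements, p_survive=0.99, p_detect=0.9,
               q=0.5, r=1.0, clutter=1e-3):
    """One predict/update cycle of a 1-D GM-PHD filter (simplified sketch).
    components: list of (weight, mean, variance) under a random-walk motion
    model with process noise q; r is measurement noise, clutter the clutter
    intensity at each measurement.
    """
    # Predict: survival thinning plus added process noise.
    pred = [(w * p_survive, m, v + q) for (w, m, v) in components]
    # Missed-detection term: a down-weighted copy of each component survives.
    updated = [(w * (1.0 - p_detect), m, v) for (w, m, v) in pred]
    # Measurement update: Kalman-correct each component against each measurement.
    for z in measurements:
        terms = []
        for (w, m, v) in pred:
            s = v + r                      # innovation variance
            k = v / s                      # Kalman gain
            lik = math.exp(-0.5 * (z - m) ** 2 / s) / math.sqrt(2 * math.pi * s)
            terms.append((w * p_detect * lik, m + k * (z - m), (1 - k) * v))
        norm = clutter + sum(t[0] for t in terms)
        updated.extend((tw / norm, tm, tv) for (tw, tm, tv) in terms)
    return updated
```

In practice the component list is pruned and merged after each cycle, and the expected number of targets is read off as the sum of the weights.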

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views in advance. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by applying the epipolar constraint to the matched feature points. After choosing the interest points adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are estimated by minimizing the angular errors between the epipolar planes of the contour endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
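The angular-error cost mentioned above can be sketched for a single ray pair: for unit viewing vectors obtained by inversely projecting omnidirectional image points, the error is the angle between the second ray and the epipolar plane induced by the first. The notation and essential-matrix parameterization are assumptions for illustration, not the paper's exact formulation.

```python
import math
import numpy as np

def epipolar_angular_error(E, x1, x2):
    """Angle between unit viewing vector x2 and the epipolar plane whose
    normal is E @ x1, for essential matrix E (illustrative sketch). A
    perfectly consistent ray pair gives zero error; calibration minimizes
    the sum of these angles over all contour endpoints.
    """
    n = E @ np.asarray(x1, float)   # normal of the epipolar plane for x1
    n = n / np.linalg.norm(n)
    # Angle between x2 and the plane equals asin(|n . x2|) for unit vectors.
    s = abs(float(np.dot(n, np.asarray(x2, float))))
    return math.asin(min(1.0, s))
```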

Background Treatment Technique of Various Time-Lapse Images (다양한 미속촬영 영상의 배경처리 기법)

  • Kim, Jong-Seong;Kim, Jong-Chan;Seo, Young-Sang;Song, Seung-Heon;Kim, Eung-Kon
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2008.05a
    • /
    • pp.639-643
    • /
    • 2008
  • Since phenomena that unfold with the passage of time, such as seasonal changes or the blooming of a flower, transpire over an extended duration, time-lapse photography is used to compress time so that they can be watched quickly. Because time-lapse recording captures the place, angle of view, and time of a natural state at precise intervals and under the same conditions, it is also referred to as interval recording. The time-lapse technique is widely used in fields such as education, science, documentary, and the media. Unlike existing methods of acquiring time-lapse images, this study proposes a background-treatment technique that deletes and adjusts the unnecessary background around the main photographic subject (such as a flower), and allows the recorded real-life image to be reused extensively as a library or as 2D or 3D image data, so that anyone can easily and conveniently bring creativity to image production.
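A common way to separate a time-lapse subject from its static background, which the abstract's background-deletion step resembles, is a per-pixel temporal median: because the camera and framing are fixed across the interval recording, the median over time estimates the background, and pixels deviating from it belong to the subject. This is an illustrative sketch under that assumption, not the paper's exact method.

```python
import numpy as np

def remove_static_background(frames, threshold=25):
    """Keep only the moving subject in a fixed-camera time-lapse sequence.
    The per-pixel temporal median estimates the static background; pixels
    differing from it by more than `threshold` are kept, the rest zeroed.
    """
    stack = np.stack([f.astype(np.int16) for f in frames])
    background = np.median(stack, axis=0)
    masks = np.abs(stack - background) > threshold
    # Zero out background pixels in each frame, keeping only the subject.
    return [np.where(m, f, 0).astype(f.dtype) for m, f in zip(masks, frames)]
```

The isolated subject can then be composited onto new backgrounds or stored as a reusable 2D/3D image asset, as the study envisions.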
