• Title/Summary/Keyword: 원거리 입체영상 (long-distance stereoscopic imaging)


Studies on the acquisition of CONV and IOD according to the distance for long-distance 3D stereoscopic video shooting (원거리 3D 입체영상촬영을 위한 거리에 따른 IOD와 CONV의 획득에 관한 연구)

  • Kim, Hyun-jo; Kim, Min; Son, Kyung-Min; Kim, Kwan hyung; Byun, Gi-sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.919-921 / 2013
  • With the growth of the video market and advances in digital technology, interest in and demand for next-generation 3D stereoscopic imaging are increasing. Depth information can broadly be divided into monoscopic depth cues and stereoscopic depth cues. Monoscopic depth cues describe depth perceived from experience through seven factors: occlusion, relative size, relative density, height in the visual field, aerial perspective, motion perspective, and accommodation. Stereoscopic depth cues, which allow depth to be perceived only when viewing with both eyes, are classified into three functions: simultaneous perception, sensory fusion, and stereoscopic vision. 3D shooting exploits this principle of binocular vision, combining the left and right images from two cameras to produce video with a sense of depth. Shooting configurations are broadly divided into parallel, orthogonal, and converged (crossed-axis) methods; among these, the converged method is advantageous for long-distance shooting. In this paper, we build a side-by-side rig (a mount that holds the two cameras horizontally) adapted for long-distance shooting, with an inter-axial distance more than twice that of a conventional rig, so that distinct left and right images can be acquired from a greater distance. By shooting a subject at regular distance intervals, we determine the most suitable IOD (Interocular Distance) and CONV (Convergence) of the two cameras for each distance, and we propose a method for effectively acquiring long-distance stereoscopic video by correcting keystone distortion, the characteristic artifact of converged shooting (a geometric sketch of the IOD/convergence relationship follows below).

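As a rough illustration of the distance/IOD/convergence relationship discussed in the entry above, the sketch below computes the toe-in angle at which the two optical axes cross on a subject at a given distance. The variable names and the simple symmetric crossed-axis geometry are illustrative assumptions, not the paper's measurement procedure.

```python
import math

def convergence_angle_deg(iod_m: float, subject_distance_m: float) -> float:
    """Toe-in angle (per camera, in degrees) so that both optical axes
    cross at the subject; simple symmetric crossed-axis geometry."""
    half_iod = iod_m / 2.0
    return math.degrees(math.atan(half_iod / subject_distance_m))

# Example: a rig with a 130 mm inter-axial distance aimed at a subject 30 m away.
print(convergence_angle_deg(0.13, 30.0))  # ~0.124 degrees per camera
```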

Design of Fuzzy Inference System for Cameras Inter-Axial Distance Control of Remote Stereoscopic Photographs (원거리 입체촬영용 카메라 축간거리 조절을 위한 퍼지추론 시스템)

  • Byun, Gi-Sig; Oh, Sei-Woong; Kim, Gwan-Hyung; Kim, Min; Kim, Hyun-Jo
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.1 / pp.41-49 / 2015
  • The common way to obtain a stereoscopic image of a subject at a distance is to place the two cameras on parallel axes rather than crossed axes. To find the IAD (inter-axial distance) and the maximum focal length, left and right images are obtained while varying the IAD of the cameras and the focal length of the camera lens, and the depth budget of the obtained images is analyzed in post-production. From these results, a database of IAD and lens focal-length values whose depth range causes neither visual fatigue nor visual discomfort is built. These data are used to design a fuzzy controller that deduces the IAD and the focal length of the camera lens for shooting a subject at a distance, and the function of the fuzzy controller is confirmed through actual shooting within the deduced range of IAD and focal length.
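
A minimal sketch of the kind of fuzzy inference described above, assuming triangular membership functions and a weighted-average (zero-order Sugeno) defuzzification over hypothetical distance-to-IAD rules; the membership breakpoints and rule outputs are illustrative only, not values from the paper's database.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_iad_mm(distance_m):
    """Weighted-average inference: each rule maps a fuzzy distance class
    to a crisp IAD output (millimetres)."""
    rules = [
        (tri(distance_m, 0, 10, 30), 65.0),     # NEAR -> roughly eye separation
        (tri(distance_m, 10, 30, 60), 120.0),   # MID  -> wider base
        (tri(distance_m, 30, 60, 120), 200.0),  # FAR  -> widest base
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 65.0

print(infer_iad_mm(40.0))  # blends the MID and FAR rules
```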

3D Panoramic Mosaicking to Suppress the Ghost Effect at Long Distance Scene for Urban Area Visualization (도심영상 입체 가시화 중 발생하는 원거리 환영현상 해소를 위한 3차원 파노라믹 모자이크)

  • Chon, Jae-Choon; Kim, Hyong-Suk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.87-94 / 2005
  • 3D image mosaicking is useful for 3D visualization of roadside scenes in urban areas, projecting 2D images onto 3D planes. When a sequence of images is filmed from a side-looking video camera passing long-distance areas, a ghost effect occurs in which the same objects appear repeatedly. To suppress this ghost effect, the long-distance areas are detected using the distance between the image frame and the 3D coordinates of tracked optical flows. The ghost effect is then suppressed by projecting parts of the image frames onto multiple 3D planes along vectors that pass through the focal point of each frame and a virtual focal point, which is calculated from the first and last frames of the long-distance area. We demonstrate an algorithm that creates efficient 3D panoramic mosaics without the ghost effect in long-distance areas.
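
A small sketch of the core geometric step used above, projecting an image ray onto a 3D mosaic plane; the ray-plane intersection below is standard geometry, and the variable names and example plane are assumptions rather than the paper's notation.

```python
import numpy as np

def project_ray_to_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t > 0) with the plane
    defined by a point and a normal; returns the 3D hit point or None."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:          # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t > 0 else None

# Example: a ray from a (virtual) focal point toward a pixel direction,
# projected onto a vertical mosaic plane at z = 50.
focal_point = np.array([0.0, 0.0, 0.0])
pixel_dir   = np.array([0.1, 0.0, 1.0])
hit = project_ray_to_plane(focal_point, pixel_dir,
                           np.array([0.0, 0.0, 50.0]),
                           np.array([0.0, 0.0, 1.0]))
print(hit)  # [ 5.  0. 50.]
```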

Comparison of Stereoscopic Fusional Area between People with Good and Poor Stereo Acuity (입체 시력이 양호한 사람과 불량인 사람간의 입체시 융합 가능 영역 비교)

  • Kang, Hyungoo; Hong, Hyungki
    • Journal of Korean Ophthalmic Optics Society / v.21 no.1 / pp.61-68 / 2016
  • Purpose: This study investigated differences in the stereoscopic fusional area between people with good and poor stereo acuity when viewing stereoscopic displays. Methods: The stereo acuity of 39 participants (18 males and 21 females, 23.6 ± 3.15 years) was measured with the random-dot stereo butterfly method; participants with stereo-blindness were excluded. The stereoscopic fusional area was measured using a stereoscopic stimulus on a stereoscopic 3D TV while varying the amount of horizontal disparity. Participants were divided into groups of good and poor stereo acuity, with the criterion for good stereo acuity set at less than 60 arc seconds, and the measurements were statistically analyzed. Results: 26 participants were measured to have good stereo acuity and 13 poor stereo acuity. For stereoscopic stimuli farther than the fixation point, the threshold of horizontal disparity for those with poor stereo acuity was measured to be smaller than that for those with good stereo acuity, with a statistically significant difference. There was no statistically significant difference between the two groups for stimuli nearer than the fixation point. Conclusions: When viewing stereoscopic displays, the boundary of the stereoscopic fusional area for the poor stereo acuity group was smaller than that for the good stereo acuity group only for the range behind the display. Hence, participants with poor stereo acuity would have more difficulty perceiving the fused image at farther distances than participants with good stereo acuity.
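
To put disparity thresholds such as 60 arc seconds in context, the small-angle sketch below converts a horizontal-disparity value into the approximate depth offset behind a display; the interpupillary distance and viewing distance are assumed example values, not parameters from the study.

```python
import math

def depth_offset_from_disparity(disparity_arcsec, viewing_distance_m=2.5,
                                ipd_m=0.063):
    """Approximate depth offset behind the screen that produces a given
    binocular disparity, using the small-angle relation
    disparity ≈ ipd * depth_offset / viewing_distance**2."""
    eta_rad = math.radians(disparity_arcsec / 3600.0)
    return eta_rad * viewing_distance_m ** 2 / ipd_m

# A 60 arcsec disparity at 2.5 m with a 63 mm IPD:
print(depth_offset_from_disparity(60.0))  # ~0.029 m behind the screen
```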

A Real-Time Stereoscopic Image Conversion Method Based on A Single Frame (단일 프레임 기반의 실시간 입체 영상 변환 방법)

  • Jung, Jae-Sung; Cho, Hwa-Hyun; Choi, Myung-Ryul
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.45-52 / 2006
  • In this paper, a real-time stereoscopic image conversion method using a single frame of a 2-D image is proposed. The stereoscopic image is generated by creating a depth map from vertical position information and by parallax processing. For real-time stereoscopic conversion and reduced hardware complexity, the method uses image sampling, object segmentation by luminance standardization, and depth map generation by boundary scan. It offers a realistic 3-D effect regardless of the direction, velocity, and scene changes of the 2-D image. Because it creates a varying sense of depth from the vertical position information of a single frame, it performs effective stereoscopic conversion for images that satisfy the conditions assumed in this paper, such as scenes recorded at long distance, landscapes, and panorama photos. The proposed method can also be applied to still images because it uses only a single frame of a 2-D image. The method has been evaluated with a visual test and the APD (Absolute Parallax Difference), comparing its stereoscopic images with those of the MTD method, and it is confirmed that stereoscopic images converted by the proposed method offer a 3-D effect regardless of the direction and velocity of the 2-D image.
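
A minimal sketch of the idea of assigning depth from vertical position and shifting pixels by the resulting parallax, as described above; the linear depth ramp and shift amounts are assumptions for illustration, not the paper's exact mapping.

```python
import numpy as np

def depth_from_vertical_position(height, width, max_depth=8):
    """Assign larger depth (farther) to upper rows and smaller depth
    (nearer) to lower rows: a simple linear ramp over the frame height."""
    rows = np.linspace(max_depth, 0, height).astype(int)
    return np.repeat(rows[:, None], width, axis=1)

def synthesize_right_view(left, depth):
    """Shift each pixel horizontally by its depth value to approximate the
    right-eye view; pixels not overwritten keep the original left value."""
    h, w = left.shape[:2]
    right = left.copy()
    for y in range(h):
        for x in range(w):
            nx = x - int(depth[y, x])
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right

left = np.random.randint(0, 256, (120, 160), dtype=np.uint8)   # toy grayscale frame
depth = depth_from_vertical_position(*left.shape)
right = synthesize_right_view(left, depth)
```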

Implementation of Real-time Stereoscopic Image Conversion Algorithm Using Luminance and Vertical Position (휘도와 수직 위치 정보를 이용한 입체 변환 알고리즘 구현)

  • Yun, Jong-Ho; Choi, Myul-Rul
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.5 / pp.1225-1233 / 2008
  • In this paper, a 2D/3D conversion algorithm is proposed. A single frame of the 2D image is used for real-time processing. The proposed algorithm creates a 3D image with a depth map built from the vertical position information of objects in a single frame. For real-time processing and lower hardware complexity, it generates the depth map using image sampling, object segmentation with luminance standardization, and boundary scan. It is suitable for both still and moving images, and it can provide a good 3D effect on images such as long-distance shots, landscapes, or panorama photos because it relies on vertical position information. The proposed algorithm can apply a 3D effect to an image without restrictions on the direction, velocity, or scene changes of an object. It has been evaluated with a visual test and a comparison against the MTD (Modified Time Difference) method using the APD (Absolute Parallax Difference).
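
A rough sketch of the segmentation step named above, assuming "luminance standardization" means quantizing luminance into a few levels and "boundary scan" means marking pixels where the quantized level changes; the level count and the scan pattern are illustrative assumptions.

```python
import numpy as np

def standardize_luminance(gray, levels=4):
    """Quantize 8-bit luminance into a small number of levels."""
    step = 256 // levels
    return (gray // step).astype(np.uint8)

def boundary_scan(quantized):
    """Mark horizontal and vertical transitions between luminance levels
    as object boundaries (True where the level changes)."""
    boundaries = np.zeros_like(quantized, dtype=bool)
    boundaries[:, 1:] |= quantized[:, 1:] != quantized[:, :-1]
    boundaries[1:, :] |= quantized[1:, :] != quantized[:-1, :]
    return boundaries

gray = np.random.randint(0, 256, (120, 160), dtype=np.uint8)   # toy frame
edges = boundary_scan(standardize_luminance(gray))
```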

Changes in Visual Function After Viewing an Anaglyph 3D Image (Anaglyph 3D입체 영상 시청 후의 시기능 변화)

  • Lee, Wook-Jin; Kwak, Ho-Won; Son, Jeong-Sik; Kim, In-Su; Yu, Dong-Sik
    • Journal of Korean Ophthalmic Optics Society / v.16 no.2 / pp.179-186 / 2011
  • Purpose: This study aimed to compare and assess changes in visual functions after viewing an anaglyph 3D image. Methods: Visual functions were examined before and after viewing a 2D image and an anaglyph 3D image with red-green glasses in seventy college students (mean age 22.29 ± 2.19 years). The tests comprised the von Graefe phoria test, accommodative amplitude by (-) lens addition, negative relative accommodation (NRA) and positive relative accommodation (PRA), negative relative convergence (NRC) and positive relative convergence (PRC), accommodative facility, and vergence facility. Results: Near exophoria and accommodative amplitude were reduced after viewing the 3D image and, although the changes were small, NRC and PRC at near showed tendencies to increase and decrease, respectively. There were no significant changes in NRA and PRA, while accommodative and vergence facility improved. Conclusions: Changes in visual functions were greater for the 3D image than for the 2D image, especially at near rather than at distance. In particular, the improvement in accommodative and vergence facility could be related to the repeated accommodation and vergence shifts required to achieve stereopsis in the 3D image. These results indicate that viewing an anaglyph 3D image may, to some extent, act as a form of vision training with anaglyphs.
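
For reference, a minimal sketch of how a red-green anaglyph such as the stimulus mentioned above can be composed from a stereo pair; the channel assignment below is one common convention and an assumption, not necessarily the one used in the study.

```python
import numpy as np

def make_red_green_anaglyph(left_rgb, right_rgb):
    """Take the red channel from the left image and the green channel
    from the right image; blue is zeroed for a red-green anaglyph."""
    anaglyph = np.zeros_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red   <- left eye
    anaglyph[..., 1] = right_rgb[..., 1]   # green <- right eye
    return anaglyph

left  = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # toy stereo pair
right = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
anaglyph = make_red_green_anaglyph(left, right)
```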

Stereo Video Delivery System for Enhanced Immersion (실감성 증진을 위한 스테레오 비디오 전송 시스템)

  • 장혜영; 오세찬; 김종원; 우운택; 변옥환
    • Journal of KIISE: Computing Practices and Letters / v.9 no.6 / pp.602-609 / 2003
  • The emerging high-speed next-generation Internet is enabling immersive media communication systems and applications that realize geographically distributed team collaboration while overcoming the limits of distance and time. Focusing on the reliable real-time delivery of 3D (i.e., stereo) video among corresponding parties, this paper designs and implements key schemes for stereo video processing/display and for the reliable transport of stereo video packets over the high-speed Internet. The performance of the proposed stereo video delivery system is evaluated both by emulating various network situations for quantitative comparison and by transmitting over the real-world Internet at speeds up to around 100 Mbps. The results demonstrate the feasibility of the proposed system in supporting the desired immersive communication.
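
As a rough illustration of keeping the two views paired during transport, the sketch below tags left/right frame payloads with a shared frame number and an eye flag before sending; this is a generic framing scheme for illustration only, not the transport protocol used in the paper.

```python
import struct

def pack_stereo_chunk(frame_no: int, is_right: bool, payload: bytes) -> bytes:
    """Prefix a frame payload with (frame number, eye flag, length) so the
    receiver can re-pair left/right views and detect missing chunks."""
    header = struct.pack("!IBI", frame_no, 1 if is_right else 0, len(payload))
    return header + payload

def unpack_stereo_chunk(chunk: bytes):
    """Split a chunk back into (frame number, is_right, payload)."""
    frame_no, eye, length = struct.unpack("!IBI", chunk[:9])
    return frame_no, bool(eye), chunk[9:9 + length]

chunk = pack_stereo_chunk(42, True, b"...encoded right view...")
print(unpack_stereo_chunk(chunk))  # (42, True, b'...encoded right view...')
```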

A Study on 3D Panoramic Generation using Depth-map (깊이지도를 이용한 3D 파노라마 생성에 관한 연구)

  • Cho, Seung-Il; Kim, Jong-Chan; Ban, Kyeong-Jin; Kim, Eung-Kon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.6 no.6 / pp.831-838 / 2011
  • Computer vision, a research area related to computer graphics applications that create realistic visualizations on a computer, is actively pursuing research on building realistic 3D models and virtual environments. As the popularization and development of 3D displays make it easy for ordinary users to experience 3D virtual reality, the demand for virtual reality content is increasing. This paper proposes a 3D panorama system using a depth-point-location-based depth map generation method. A 3D panorama built with a depth map gives users the feeling of standing in a real place and looking around their surroundings. It also provides a free viewpoint for both nearby and distant objects and delivers solid 3D video.
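
The depth-to-disparity conversion underlying depth-map-based stereo rendering of nearby and distant objects can be sketched as below; the baseline and focal length values are assumed examples, not parameters of the paper's depth-point-location method.

```python
def disparity_px(depth_m: float, baseline_m: float = 0.065,
                 focal_length_px: float = 1000.0) -> float:
    """Standard pinhole relation: disparity = focal_length * baseline / depth.
    Nearby points get large disparity; distant points approach zero."""
    return focal_length_px * baseline_m / depth_m

for z in (1.0, 5.0, 50.0):
    print(z, disparity_px(z))   # 65.0, 13.0, 1.3 pixels
```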

Laser Pointer Interaction System Based on Image Processing (영상처리 기반의 레이저 포인터 인터랙션 시스템)

  • Kim, Nam-Woo; Lee, Seung-Jae; Lee, Joon-Jae; Lee, Byung-Gook
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.373-385 / 2008
  • The evolution of computer input devices slowed considerably after the introduction of the mouse as a pointing device. Although the stylus and the touch screen later provided some alternatives, all of these methods were designed for close-range interaction with the computer. There are few options for interacting with a computer from a distance, which is especially needed during presentations. In this paper, we try to fill this gap by proposing a laser pointer interaction system that allows the user to issue pointing commands to the computer from some distance away using only a laser pointer, which is cheap and readily available. Combined with image-processing-based software, the system provides mouse-like pointing interaction with the computer. It works well not only on a flat screen but also on flexible screens, because a non-linear coordinate mapping algorithm is incorporated so that non-planar environments such as curved or flexible walls are supported (see the sketch below).

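A minimal sketch of the two image-processing steps the entry above describes, detecting the bright laser dot and mapping its camera coordinates to screen coordinates. A simple brightness threshold and a planar homography stand in here for the paper's detection and non-linear coordinate mapping; both are assumptions for illustration.

```python
import numpy as np

def find_laser_dot(gray):
    """Return (x, y) of the brightest pixel if it exceeds a threshold, else None."""
    y, x = np.unravel_index(np.argmax(gray), gray.shape)
    return (x, y) if gray[y, x] > 220 else None

def camera_to_screen(point_xy, homography):
    """Map a camera-space point to screen space with a 3x3 homography."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = homography @ p
    return q[0] / q[2], q[1] / q[2]

# Identity homography as a placeholder; in practice it would be estimated
# from four or more known screen/camera point correspondences.
H = np.eye(3)
frame = np.zeros((480, 640), dtype=np.uint8)
frame[240, 320] = 255                      # simulated laser dot
dot = find_laser_dot(frame)
if dot is not None:
    print(camera_to_screen(dot, H))        # (320.0, 240.0)
```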