• Title/Summary/Keyword: Visual Scene


Virtual Control of Optical Axis of the 3DTV Camera for Reducing Visual Fatigue in Stereoscopic 3DTV

  • Park, Jong-Il;Um, Gi-Mun;Ahn, Chung-Hyun;Ahn, Chie-Teuk
    • ETRI Journal / v.26 no.6 / pp.597-604 / 2004
  • In stereoscopic television, there is a trade-off between visual comfort and three-dimensional (3D) impact with respect to the baseline-stretch of a 3DTV camera. The baseline-stretch must be adjusted to an appropriate distance, depending on the contents of a scene, in order to obtain a subjectively optimal image quality. However, it is very hard to obtain a small baseline-stretch using commercially available broadcast-quality cameras, whose lens and CCD modules are large. To overcome this limitation, we attempt to freely control the baseline-stretch of a stereoscopic camera by synthesizing virtual views at a desired position between the two cameras. The proposed technique is based on stereo matching and view synthesis. We first obtain a dense disparity map using hierarchical stereo matching with edge-adaptive multiple shifted windows, and then synthesize the virtual views using the disparity map. Simulation results with various stereoscopic images demonstrate the effectiveness of the proposed technique.
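
The last step of this pipeline, warping one view of the stereo pair to a virtual camera placed at a fraction of the baseline using the dense disparity map, can be illustrated with a minimal sketch. The nearest-neighbor forward warping and the lack of occlusion and hole handling below are simplifying assumptions, not the paper's edge-adaptive hierarchical method.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left image to a virtual camera located at
    alpha * baseline (0 = left camera, 1 = right camera).

    left      : (H, W, 3) uint8 image from the left camera
    disparity : (H, W) float array, per-pixel disparity of the stereo pair
    alpha     : fraction of the baseline at which to place the virtual view
    """
    h, w = disparity.shape
    virtual = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # Each left-image pixel moves by a fraction of its disparity.
        x_new = np.clip((xs - alpha * disparity[y]).round().astype(int), 0, w - 1)
        virtual[y, x_new] = left[y, xs]
    return virtual

# Usage: a view between the two physical cameras, e.g. at a quarter baseline.
# virtual = synthesize_view(left_img, disp_map, alpha=0.25)
```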


An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closing detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closing detection and localization, because vision sensors are inexpensive and allow various approaches to this problem. In scenes that consist of repeated structures, such as corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method to recognize locations in scenes that have similar structures. We extract salient regions from images using a visual attention model and calculate weights from the distinctive features in each salient region. This makes it possible to emphasize the unique features of a scene and to distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved performance, with an accuracy of 78.2% for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
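
The weighting idea, in which features falling inside visually salient regions count more toward a place-matching score, can be sketched as below. The descriptor matching and the saliency map are generic stand-ins, not the specific visual attention model used in the paper.

```python
import numpy as np

def saliency_weighted_score(desc_q, pts_q, desc_db, saliency_q, sim_thresh=0.7):
    """Score a query image against a database image, weighting each matched
    feature by the saliency at the query keypoint's location.

    desc_q     : (N, D) L2-normalized descriptors of the query image
    pts_q      : (N, 2) integer (x, y) keypoint locations in the query image
    desc_db    : (M, D) L2-normalized descriptors of the database image
    saliency_q : (H, W) saliency map of the query image, values in [0, 1]
    """
    score = 0.0
    for d, (x, y) in zip(desc_q, pts_q):
        sims = desc_db @ d                    # cosine similarity to all db features
        best = sims.max() if sims.size else 0.0
        if best > sim_thresh:                 # keep only confident matches
            score += best * saliency_q[y, x]  # emphasize distinctive regions
    return score

# The database location with the highest score would be taken as the match;
# a low score everywhere suggests a previously unvisited place.
```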

Visual Sensing of Fires Using Color and Dynamic Features (컬러와 동적 특징을 이용한 화재의 시각적 감지)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.21 no.3 / pp.211-216 / 2012
  • Fires are the most common disaster, and early fire detection is of great importance for minimizing the consequent damage. Simple sensors, including smoke detectors, are widely used for this purpose, but they can sense fires only in close proximity. Recently, owing to rapid advances in the relevant technologies, vision-based fire sensing has attracted growing attention. In this paper, a novel visual sensing technique to automatically detect fire is presented. The proposed technique consists of multiple steps of image processing: pixel-level, block-level, and frame-level. In the first step, flame pixel candidates are selected based on their color values in YIQ space from the image of a camera installed as a vision sensor at the scene. In the second step, the dynamic parts of the flames are extracted by comparing two consecutive images. These parts are then represented as regularly divided image blocks to reduce pixel-level detection errors and simplify subsequent processing. Finally, the temporal change of the detected blocks is analyzed to confirm the spread of the fire. The proposed technique was tested using real fire images and worked quite reliably.
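
The pixel-level and block-level stages described above can be sketched roughly as follows. The RGB-to-YIQ conversion uses the standard NTSC matrix; the thresholds, block size, and fill ratio are illustrative placeholders rather than the values chosen in the paper.

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def flame_candidate_mask(rgb, y_min=120, i_min=20):
    """Pixel-level step: mark pixels whose YIQ values look flame-colored.
    The thresholds are illustrative placeholders."""
    yiq = rgb.astype(float) @ RGB2YIQ.T
    return (yiq[..., 0] > y_min) & (yiq[..., 1] > i_min)

def dynamic_blocks(prev, curr, mask, block=16, diff_thresh=15, fill=0.3):
    """Block-level step: keep blocks that are both flame-colored and moving."""
    moving = np.abs(curr.astype(int) - prev.astype(int)).mean(axis=-1) > diff_thresh
    cand = mask & moving
    h, w = cand.shape
    out = np.zeros((h // block, w // block), dtype=bool)
    for by in range(out.shape[0]):
        for bx in range(out.shape[1]):
            patch = cand[by*block:(by+1)*block, bx*block:(bx+1)*block]
            out[by, bx] = patch.mean() > fill   # enough candidate pixels in block
    return out
```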

Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon;Kim, Nam-Gyu;Kim, Ki-Hong
    • ETRI Journal / v.38 no.1 / pp.149-158 / 2016
  • In the field of augmented reality technologies, commercial optical see-through-type wearable displays have difficulty providing immersive visual experiences because users perceive different depths between virtual views on the display surface and see-through views of the real world. Many augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through-type displays for minimizing virtual- and real-scene mismatch errors. In this paper, we introduce innovative optical see-through-type wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are intended for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis were performed. The analysis results and EGD-related application examples show that the EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

Propriety Analysis of Depth-Map Production Methods for Depth-Map Based 2D to 3D Conversion - 'The Lost Bladesman' (2D to 3D Conversion에서 Depth-Map 기반 제작 사례연구 - '명장 관우' 제작 중심으로 -)

  • Kim, Hyo In;Kim, Hyung Woo
    • Smart Media Journal / v.3 no.1 / pp.52-62 / 2014
  • As three-dimensional displays become widespread, the demand for 3D content keeps increasing. Since 2010, 2D-to-3D conversion has been offered as an alternative when newly produced 3D content cannot keep up with this demand. However, converted content that emphasizes only the stereoscopic effect has been criticized for causing visual fatigue and degrading quality. In this study, 13 scenes from the film 'The Lost Bladesman' (2011) were converted into 3D content, and expert group interviews and surveys were conducted to assess the visual fatigue and the adequacy of the quality obtained with the depth-map based conversion. The relationship between the stages of the cascaded depth-map construction and the conversion techniques applied to shots with large motion was analyzed before and after conversion. The experiments showed that the proposed depth-map based conversion can lower visual fatigue and improve quality, and more than half of the expert group responded positively. The results indicate that applying the cascaded depth-map construction when converting 2D footage with rapid motion into 3D reduces visual fatigue, increases efficiency, and improves the perceived depth.
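
Depth-map based conversion of this kind is commonly realized with depth-image-based rendering: each pixel of the 2D frame is shifted horizontally in proportion to its depth to form a left/right pair. The sketch below is a generic illustration under that assumption, not the cascaded depth-map workflow evaluated in the study.

```python
import numpy as np

def render_stereo_pair(frame, depth, max_disp=16):
    """Generate a left/right pair from one frame and its depth map.

    frame    : (H, W, 3) uint8 source frame
    depth    : (H, W) float depth map normalized to [0, 1] (1 = nearest)
    max_disp : maximum pixel shift assigned to the nearest depth
    """
    h, w, _ = frame.shape
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    xs = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_disp / 2).round().astype(int)
        left[y, np.clip(xs - shift, 0, w - 1)] = frame[y, xs]
        right[y, np.clip(xs + shift, 0, w - 1)] = frame[y, xs]
    return left, right
```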

Real-Time Shadow Generation Using Image-Based Rendering Technique (영상기반 렌더링 기법을 이용한 실시간 그림자 생성)

  • Lee, Jung-Yeon;Im, In-Seong
    • Journal of the Korea Computer Graphics Society / v.7 no.1 / pp.27-35 / 2001
  • Shadows are important elements in producing a realistic image. In rendering, generating the exact shape and position of shadows is crucial for providing the user with visual cues about the scene. While the shadow-map technique quickly generates shadows for a scene in which objects and light sources are fixed, it slows down once they start to move. In this paper, we apply an image-based rendering technique to generate shadows in real time using graphics hardware. Because a shadow-map repository requires heavy storage, we use a wavelet-based scheme for effective compression. Our method can be used efficiently to generate realistic scenes in many real-time applications such as 3D games and virtual reality systems.
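
The shadow-map test that the precomputed (and wavelet-compressed) maps ultimately serve can be sketched as follows. The `light_view` projection is left as a caller-supplied stand-in, since the paper's hardware pipeline and repository lookup are not reproduced here.

```python
import numpy as np

def in_shadow(points_world, shadow_map, light_view, bias=1e-3):
    """Classic shadow-map test: a point is lit only if it is not farther from
    the light than the depth stored in the shadow map at its projection.

    points_world : (N, 3) world-space points to test
    shadow_map   : (H, W) depth buffer rendered from the light's viewpoint
    light_view   : function mapping world points to (u, v, depth) in light
                   space, with (u, v) already in shadow-map pixel coordinates
    """
    u, v, depth = light_view(points_world)
    h, w = shadow_map.shape
    u = np.clip(u.astype(int), 0, w - 1)
    v = np.clip(v.astype(int), 0, h - 1)
    stored = shadow_map[v, u]
    return depth > stored + bias   # True where the point is occluded
```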


The Development of Device and the Algorithm for the Haptic Rendering (가상현실 역감구현을 위한 알고리즘과 장치개발)

  • 김영호;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.106-109 / 2000
  • The virtual-reality haptic device is developed for tasks that humans cannot approach directly and that require elaborate motions. To render haptic feedback, the overall system is composed of a master, the haptic device, and a slave, the remote manipulator. The human operates the remote manipulator relying on the haptic device and a stereo graphic display, and the force and scene at the remote manipulator are fed back through the haptic and visual devices. The feedback information determines the system gain exactly, and the system gain delivers accurate haptic sensation and scenery to the human through the location tracking, graphic rendering, and haptic rendering algorithms in real time. In this research, a general-purpose 3D haptic device is developed that lets the human feel contact forces when touching a virtual object rendered by computer graphics. The haptic device is well suited to position tracing and to fabrication because of its structure. OpenGL and Visual Basic are used to implement the haptic rendering algorithms. The haptic device of this research can interface not only with virtual reality but also with a real remote manipulator.
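
A common way to render a contact force against a virtual object is a penalty (spring) model: when the probe penetrates the surface, push back along the surface normal in proportion to the penetration depth. The sphere and stiffness below are illustrative assumptions; the paper's OpenGL/Visual Basic implementation is not reproduced.

```python
import numpy as np

def contact_force_sphere(probe_pos, center, radius, stiffness=500.0):
    """Penalty-based force for a probe touching a virtual sphere.

    Returns a 3D force vector: zero outside the sphere, and a spring force
    along the outward surface normal proportional to penetration depth inside.
    """
    offset = np.asarray(probe_pos, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)          # no contact (or degenerate center hit)
    normal = offset / dist          # outward surface normal
    return stiffness * penetration * normal

# In a haptic loop (typically around 1 kHz), the device position would be
# read, the force computed, and the result sent back to the device each cycle.
```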


Mobility Improvement of an Internet-based Robot System Using the Position Prediction Simulator

  • Lee Kang Hee;Kim Soo Hyun;Kwak Yoon Keun
    • International Journal of Precision Engineering and Manufacturing / v.6 no.3 / pp.29-36 / 2005
  • With the rapid growth of the Internet, Internet-based robots have been realized by connecting off-line robots to the Internet. However, because the Internet is often irregular and unreliable, the varying time delay in data transmission is a significant problem in constructing an Internet-based robot system. Thus, this paper is concerned with the development of an Internet-based robot system that is insensitive to the Internet time delay. For this purpose, the PPS (Position Prediction Simulator) is suggested and implemented in the system. The PPS consists of two parts: the robot position prediction part and the projective virtual scene part. In the robot position prediction part, the robot position is predicted, based on the time at which the user's command reaches the robot system, for more accurate operation of the mobile robot. The projective virtual scene part shows the 3D visual information of the remote site, obtained through image processing and position prediction. To verify the proposed PPS, the robot was moved to follow a planned path under various network traffic conditions. The simulation and experimental results showed that the path error of the robot motion could be reduced using the developed PPS.
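
The position-prediction part can be pictured as dead reckoning over the measured transmission delay: the simulator extrapolates where the robot will be when the command actually arrives. The constant-velocity (and constant-turn-rate) assumption below is for illustration only; the paper's PPS may model the motion differently.

```python
import numpy as np

def predict_pose(x, y, theta, v, omega, delay):
    """Extrapolate a differential-drive robot pose over the network delay.

    (x, y, theta) : current pose reported by the robot
    v, omega      : current linear and angular velocities
    delay         : estimated one-way transmission delay in seconds
    """
    if abs(omega) < 1e-6:                      # straight-line motion
        x_p = x + v * delay * np.cos(theta)
        y_p = y + v * delay * np.sin(theta)
    else:                                      # arc motion at constant v, omega
        r = v / omega
        x_p = x + r * (np.sin(theta + omega * delay) - np.sin(theta))
        y_p = y - r * (np.cos(theta + omega * delay) - np.cos(theta))
    return x_p, y_p, theta + omega * delay
```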

The Design and Development of MPEG-4 Contents Authoring System (MPEG-4 컨텐츠 저작 시스템 설계 및 개발)

  • Cha, Kyung-Ae;Kim, Hee-Sun;Kim, Sang-Wook
    • Journal of KIISE: Computing Practices and Letters / v.7 no.4 / pp.309-316 / 2001
  • MPEG-4 describes audiovisual scenes that are composed of several media objects organized in a hierarchical fashion. For end users, it brings higher levels of interaction with content, within the limits set by the author. The spatio-temporal arrangement of the objects in a scene is specified using a parametric methodology, BIFS (BInary Format for Scenes). This paper proposes an MPEG-4 contents authoring system that provides visual configuration of an MPEG-4 scene and its event information. The developed system automatically generates streaming MPEG-4 content, such as the BIFS stream and the OD (Object Descriptor) stream.
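
The hierarchical, spatio-temporal composition that BIFS expresses can be pictured with a tiny scene tree. This sketch deliberately uses neutral Python classes rather than BIFS node names or its binary encoding, so it only illustrates the idea of objects placed relative to a parent in space and time.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MediaNode:
    """One object in a hierarchical audiovisual scene."""
    name: str
    position: tuple = (0, 0)              # placement relative to the parent
    start: float = 0.0                    # seconds after the parent activates
    duration: Optional[float] = None      # None = lives as long as the parent
    children: List["MediaNode"] = field(default_factory=list)

# A root scene containing a video object and a caption that appears later.
scene = MediaNode("root", children=[
    MediaNode("video_clip", position=(0, 0), start=0.0),
    MediaNode("caption", position=(10, 200), start=5.0, duration=10.0),
])

def walk(node, depth=0):
    """Traverse the tree the way a player would resolve the composition."""
    print("  " * depth + f"{node.name} @ {node.position}, t={node.start}")
    for child in node.children:
        walk(child, depth + 1)

walk(scene)
```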


Interactive Multimedia Authoring Tool using MPEG-4 BIFS and Wireless Network (MPEG-4 BIFS와 무선데이터통신망을 이용한 인터렉티브 멀티미디어 저작 도구)

  • Ryu, Sung-Pil;Kwak, Nae-Jung;Kwon, Dong-Jin
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.458-460 / 2006
  • MPEG-4 BIFS (Binary Format for Scenes) is a format that describes the spatio-temporal location of each visual object in a scene. It is being test-broadcast through T-DMB in Korea and can be converted into various multimedia formats. Current DMB receivers are mostly built into mobile devices, and their supply is increasing steadily. This enables various services that converge DMB and wireless data networks. Therefore, this paper proposes a new interactive multimedia authoring tool that uses a wireless network and MPEG-4 BIFS. The proposed method corrects and complements MPEG-4 BIFS and makes DMB broadcasting and the wireless network cooperate. Using this method, users can create content themselves and retransmit it through DMB broadcasting. The proposed method produces interactive multimedia content that is reconstructed at the user's request.
