• Title/Summary/Keyword: automatic camera placement


Agent-based Automatic Camera Placement for Video Surveillance Systems (영상 감시 시스템을 위한 에이전트 기반의 자동화된 카메라 배치)

  • Burn, U-In;Nam, Yun-Young;Cho, We-Duke
    • Journal of Internet Computing and Services, v.11 no.1, pp.103-116, 2010
  • In this paper, we propose an optimal camera placement method using agent-based simulation. To derive the importance of each region and to cover the space efficiently, we perform an agent-based simulation based on a classification of the space and a pattern analysis of moving people. We develop an agent-based camera placement method that considers camera performance as well as the space priorities extracted from path-finding algorithms. We demonstrate that the method not only determines the optimal number of cameras but also coordinates the positions and orientations of the cameras while taking installation costs into account. To validate the method, we compare the simulation results with real video footage and present experimental results for a specific space.
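The coverage step described above can be sketched as a greedy selection over importance-weighted cells; this is a minimal illustration, not the paper's exact algorithm, and the grid, candidate cameras, and costs below are invented placeholders:

```python
# Greedy camera placement over an importance-weighted grid.
# Hypothetical data: cell importance (e.g. derived from agent path analysis)
# and candidate cameras, each covering a fixed set of cells at some cost.
importance = {            # cell -> importance weight
    (0, 0): 5, (0, 1): 1, (1, 0): 3, (1, 1): 4, (2, 0): 1, (2, 1): 2,
}
candidates = {            # camera id -> (covered cells, installation cost)
    "cam_A": ({(0, 0), (0, 1), (1, 0)}, 2.0),
    "cam_B": ({(1, 1), (2, 1)}, 1.0),
    "cam_C": ({(1, 0), (1, 1), (2, 0)}, 1.5),
}

def place_cameras(importance, candidates, target=0.9):
    """Greedily pick cameras until `target` of total importance is covered."""
    total = sum(importance.values())
    covered, chosen = set(), []
    while sum(importance[c] for c in covered) < target * total:
        # Pick the camera with the most newly covered importance per unit cost.
        cam, (cells, cost) = max(
            ((k, v) for k, v in candidates.items() if k not in chosen),
            key=lambda kv: sum(importance[c] for c in kv[1][0] - covered) / kv[1][1],
        )
        chosen.append(cam)
        covered |= cells
    return chosen, covered

chosen, covered = place_cameras(importance, candidates)
print(chosen)  # cameras selected, in greedy order
```

The cost-aware key makes the greedy step prefer cheap cameras that cover high-importance cells, which mirrors the trade-off between coverage and installation cost in the abstract.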

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal, v.31 no.4, pp.429-437, 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The technique is based on a deep-framebuffer framework for fast relighting computation, which adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep-framebuffer pixel and then computes the illumination at each pixel from the cached values. All relighting computations except the deep-framebuffer precomputation are carried out at interactive rates on the GPU.
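The core deep-framebuffer idea, caching per-pixel shading inputs once so that only the lighting needs re-evaluation, can be sketched with a simple Lambertian shader. The pixel data and light direction below are invented, and the real system caches many more parameters and runs on the GPU:

```python
import math

# Hypothetical cached deep framebuffer: per-pixel shading inputs captured
# once in a preprocessing pass (only normals and diffuse albedo, here).
deep_fb = [
    {"normal": (0.0, 0.0, 1.0), "albedo": 0.8},
    {"normal": (0.0, 1.0, 0.0), "albedo": 0.5},
    {"normal": (0.7071, 0.0, 0.7071), "albedo": 1.0},
]

def relight(deep_fb, light_dir):
    """Re-evaluate Lambertian shading per pixel from cached inputs only."""
    # Normalize the light direction once.
    n = math.sqrt(sum(c * c for c in light_dir))
    l = tuple(c / n for c in light_dir)
    out = []
    for px in deep_fb:
        ndotl = max(0.0, sum(a * b for a, b in zip(px["normal"], l)))
        out.append(px["albedo"] * ndotl)  # diffuse term; no geometry pass
    return out

# Changing the light requires only this cheap per-pixel loop,
# never a re-render of the scene geometry.
print(relight(deep_fb, (0.0, 0.0, 1.0)))
```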

An Automated Projection Welding System using Vision Processing Technique (영상인식 기술을 이용한 프로젝션용접 자동화시스템)

  • Park, Ki-Jung;Song, Ha-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.6 no.4, pp.517-522, 2011
  • Conventional projection welding systems suffer from a high rate of defective products caused by manual handling. In this paper, we introduce a projection welding system that performs automatic identification, welding, and counting of components and products. The proposed system checks for the presence of the components to be welded and verifies their placement using a vision camera. After the components are welded, it automatically updates the product counts and dressing items. We show through experimental comparison with an existing system that the proposed welding system reduces the defect rate and improves productivity.
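The camera-based presence and placement check can be illustrated with a toy template-matching pass; the frame, template, and threshold below are invented, and a production system would use a vision library on real camera images:

```python
# Toy component-presence check: slide a template over a grayscale frame
# and report the best-matching position (sum of absolute differences, SAD).
frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9],
            [9, 9]]   # bright blob = component seated in the fixture

def find_component(frame, template, max_sad=4):
    """Return (row, col) of the best template match, or None if absent."""
    th, tw = len(template), len(template[0])
    best = None
    for y in range(len(frame) - th + 1):
        for x in range(len(frame[0]) - tw + 1):
            sad = sum(abs(frame[y + i][x + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best[0]:
                best = (sad, (y, x))
    sad, pos = best
    return pos if sad <= max_sad else None  # None = component missing

print(find_component(frame, template))  # best match at row 1, col 1
```

Returning `None` when the best score is still above threshold lets the welder reject the part before the weld cycle, which is where the defect-rate reduction comes from.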

Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications, v.12 no.1, pp.101-106, 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes to mark escape routes and help people evacuate easily during emergencies. Under emergency conditions such as smoke, fire, poor lighting, or a crowded stampede, it is difficult for people to recognize the exit signs and emergency doors needed to leave the building. This paper proposes automatic emergency exit sign recognition that finds the exit direction using a smart device. The proposed approach is a computer-vision-based smartphone application that detects emergency exit signs with the device camera and presents the escape direction in both visible and audible form. A CAMShift object-tracking approach is used to detect the emergency exit sign, and the direction information is extracted with a template matching method. The direction information is stored as text and then synthesized into an audible acoustic signal using text-to-speech; the synthesized signal is played on the smart device speaker as escape guidance for the user. The results are analyzed from the viewpoints of visual element selection, EXIT sign appearance design, and EXIT sign placement in the building, and can serve as a common reference for wayfinder systems.
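The direction-extraction step, template matching on the detected sign region followed by text for speech synthesis, can be sketched as follows. The arrow templates and the region of interest are invented, and the CAMShift detection and the actual text-to-speech call are omitted:

```python
# Toy direction extraction: compare the sign region of interest (ROI),
# e.g. as tracked by CAMShift, against per-direction arrow templates.
templates = {                  # hypothetical 3x3 binary arrow masks
    "left":  [[0, 0, 1],
              [1, 1, 1],
              [0, 0, 1]],
    "right": [[1, 0, 0],
              [1, 1, 1],
              [1, 0, 0]],
}
roi = [[1, 0, 0],              # sign region from the tracker (invented)
       [1, 1, 1],
       [1, 0, 0]]

def exit_direction(roi, templates):
    """Return the direction whose template best matches the ROI."""
    def sad(a, b):  # sum of absolute pixel differences
        return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return min(templates, key=lambda d: sad(roi, templates[d]))

direction = exit_direction(roi, templates)
guide_text = f"Exit is to the {direction}."  # would be passed to text-to-speech
print(guide_text)
```

The text string is the hand-off point between the vision side and the audio side: the same `guide_text` drives both the on-screen guidance and the synthesized speech.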