• Title/Summary/Keyword: Optimal Number of Cameras


An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers / v.13 no.2 / pp.94-103 / 2004
  • The vision system model used in this study involves six parameters and permits a degree of adaptability, in that the relationship between the camera-space location of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. This vision control method requires a certain number of cameras to map the 3-D physical space onto 2-D camera planes, and it can be applied irrespective of camera location as long as the visual cues appear in the same camera plane. This study therefore investigates the optimal number of cameras for the developed vision control system by varying the number of cameras used. The study proceeds in two stages: a) verifying the effectiveness of the vision system model and b) determining the optimal number of cameras. The results provide evidence of the adaptability of the developed vision control method when the optimal number of cameras is used.
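
The six-parameter model is not spelled out in the abstract; the sketch below assumes the orthographic camera-space form common in the camera-space manipulation literature and estimates the parameters for one camera by nonlinear least squares. The parameterization and function names are assumptions, not the paper's code.

```python
# Minimal sketch (not the paper's code): fitting a six-parameter camera-space model
#   xc = (c1^2 + c2^2 - c3^2 - c4^2) X + 2(c2 c3 + c1 c4) Y + 2(c2 c4 - c1 c3) Z + c5
#   yc = 2(c2 c3 - c1 c4) X + (c1^2 - c2^2 + c3^2 - c4^2) Y + 2(c3 c4 + c1 c2) Z + c6
# from observed cue positions; the paper's exact parameterization may differ.
import numpy as np
from scipy.optimize import least_squares

def project(c, xyz):
    c1, c2, c3, c4, c5, c6 = c
    X, Y, Z = xyz.T
    xc = (c1**2 + c2**2 - c3**2 - c4**2) * X + 2*(c2*c3 + c1*c4) * Y + 2*(c2*c4 - c1*c3) * Z + c5
    yc = 2*(c2*c3 - c1*c4) * X + (c1**2 - c2**2 + c3**2 - c4**2) * Y + 2*(c3*c4 + c1*c2) * Z + c6
    return np.column_stack([xc, yc])

def residuals(c, xyz, uv):
    # difference between predicted and observed camera-space cue locations
    return (project(c, xyz) - uv).ravel()

def estimate_view_parameters(xyz, uv, c0=None):
    """Least-squares estimate of the six view parameters for one camera."""
    if c0 is None:
        c0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    return least_squares(residuals, c0, args=(xyz, uv)).x

# Each additional camera contributes its own parameter set; comparing the residual
# over 2, 3, 4, ... cameras is one way to frame the "optimal number" question.
```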

A Virtual Environment for Optimal use of Video Analytic of IP Cameras and Feasibility Study (IP 카메라의 VIDEO ANALYTIC 최적 활용을 위한 가상환경 구축 및 유용성 분석 연구)

  • Ryu, Hong-Nam;Kim, Jong-Hun;Yoo, Gyeong-Mo;Hong, Ju-Yeong;Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.29 no.11 / pp.96-101 / 2015
  • In recent years, research on the optimal placement of CCTV (closed-circuit television) cameras via architectural modeling has been conducted. However, the application of the VA (video analytics) function of IP (Internet Protocol) cameras to analyze surveillance coverage from actual human movement has not been studied. This paper compares two methods, one using data captured from real-world cameras and one using data acquired from a virtual environment. For the real cameras, we developed a GUI (graphical user interface) that stores the VA output as hourly and daily log files, which can also be used commercially, for example for the placement of products inside a shop. The virtual environment was constructed to emulate the real world, including the building structure and the cameras with their specifications. Suitable camera placement is then determined by recognizing obstacles and counting the number of people within each camera's field of view. This research aims to relieve the time and economic constraints of installing surveillance cameras in real-world environments and to assess the feasibility of the virtual environment.

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics / v.8 no.3 / pp.246-258 / 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods such as algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer graphics rendering, converts 2D projections into light rays for uniform sampling and uses a fully connected neural network to represent the 3D temperature field. Although effective in capturing internal details, it demands multiple cameras for projections from varied angles, increasing cost and computational needs. We assess the impact of the number of cameras on reconstruction accuracy and efficiency, conducting butane-flame simulations with different camera setups (6 to 18 cameras). The results show improved accuracy with more cameras, with 12 cameras achieving the best computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, using thermocouples, confirm that the 12-camera setup is the best, balancing efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications such as engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.
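
As a rough illustration of the kind of network described (a coordinate MLP representing the 3D temperature field, supervised through ray samples drawn from the 2D projections), here is a minimal PyTorch sketch. The layer sizes, the simple ray-averaging forward model, and the loss are assumptions, not the authors' implementation.

```python
# Minimal sketch: a coordinate MLP T(x, y, z) trained so that temperature averaged
# along each camera ray matches the corresponding 2D projection value.
import torch
import torch.nn as nn

class TemperatureField(nn.Module):
    def __init__(self, hidden=128, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        mods = []
        for i in range(len(dims) - 1):
            mods.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                mods.append(nn.ReLU())
        self.net = nn.Sequential(*mods)

    def forward(self, xyz):                 # xyz: (N, 3) sample points
        return self.net(xyz).squeeze(-1)    # (N,) predicted temperatures

def projection_loss(model, ray_points, measured):
    # ray_points: (R, S, 3) -- S uniform samples along each of R camera rays
    # measured:   (R,)      -- projection value recorded by the corresponding pixel
    pred = model(ray_points.reshape(-1, 3)).reshape(ray_points.shape[:2])
    return ((pred.mean(dim=1) - measured) ** 2).mean()
```

More cameras contribute more rays, hence more constraints on the field; the study's question is where the accuracy gain stops justifying the extra cost.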

Agent-based Automatic Camera Placement for Video Surveillance Systems (영상 감시 시스템을 위한 에이전트 기반의 자동화된 카메라 배치)

  • Burn, U-In;Nam, Yun-Young;Cho, We-Duke
    • Journal of Internet Computing and Services / v.11 no.1 / pp.103-116 / 2010
  • In this paper, we propose an optimal camera placement method using agent-based simulation. To derive the importance of each part of the space and to cover the space efficiently, we conducted an agent-based simulation based on a classification of the space and an analysis of the movement patterns of people. We developed an agent-based camera placement method that considers camera performance as well as space priorities extracted from path-finding algorithms. We demonstrate that the method not only determines the optimal number of cameras but also coordinates the position and orientation of the cameras while taking installation costs into account. To validate the method, we compare the simulation results with real footage and present experimental results simulated in a specific space.
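
The placement step can be pictured as a weighted coverage problem. The hypothetical sketch below greedily adds camera poses while the importance-weighted coverage gained exceeds a per-camera cost, which also yields a camera count; it is not the authors' simulator, and the importance map and candidate coverage sets are assumed inputs (in the paper they would come from the agent-based movement simulation).

```python
# Minimal sketch: greedy selection of camera poses against an importance map.
from typing import Dict, FrozenSet, List, Tuple

def greedy_placement(importance: Dict[Tuple[int, int], float],
                     candidates: Dict[str, FrozenSet[Tuple[int, int]]],
                     camera_cost: float) -> List[str]:
    """Pick camera poses while the weighted coverage gained exceeds the cost."""
    covered, chosen = set(), []
    while True:
        best, best_gain = None, 0.0
        for pose, cells in candidates.items():
            if pose in chosen:
                continue
            gain = sum(importance.get(c, 0.0) for c in cells - covered)
            if gain > best_gain:
                best, best_gain = pose, gain
        if best is None or best_gain <= camera_cost:
            break          # marginal gain no longer justifies another camera
        chosen.append(best)
        covered |= candidates[best]
    return chosen          # the length of this list is the selected number of cameras
```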

Setting of the Operating Conditions of Stereo CCTV Cameras by Weather Condition

  • Moon, Kwang;Pyeon, Mu Wook;Lee, Soo Bong;Lee, Do Rim
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.6 / pp.591-597 / 2014
  • A wide variety of image-based methods, such as aerial images, terrestrial images, terrestrial laser scanning, and stereo image points, are currently under investigation for building three-dimensional (3D) geospatial information. In this study, the matching points needed to build a 3D model were examined under diverse weather conditions by analyzing stereo images recorded by closed-circuit television (CCTV) cameras installed in the U-City. Tests under different illuminance and precipitation conditions showed that the number of matching points was very sensitive to changes in illuminance level. Based on the performance of the CCTV cameras used in the test, this study identified the optimal values of shutter speed and iris. As a result, compared with an automatic control mode, more matching points can be obtained for images filmed with the settings derived from this test under different weather and illuminance conditions.
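
For context, counting matching points between a stereo pair can be done with off-the-shelf feature matching; the sketch below uses OpenCV ORB with a ratio test purely as a stand-in for whatever matcher the study used, e.g. to compare frames captured at different illuminance or shutter/iris settings.

```python
# Minimal sketch: count feature matches between a stereo CCTV pair with OpenCV.
import cv2

def count_matching_points(left_path: str, right_path: str, ratio: float = 0.75) -> int:
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```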

Optimal selection of fish assemblage survey method through comparing the result (어류군집 조사 결과 비교를 통한 최적의 방법 선택)

  • Jae-Young KIM;Sang-Min EOM;Byeong-Mo GIM;Tae Seob CHOI
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.60 no.2 / pp.128-141 / 2024
  • Fish resource surveys were conducted near Jeju Island in June, August, and October 2021 using an underwater camera monitoring system, fish pots, and SCUBA diving. The efficiency of the survey methods was compared using the number of individuals per unit volume per unit time (inds/m3/h) and the number of species per unit volume per unit time (spp./m3/h). For individuals per unit volume per unit time, the order was underwater camera 214.69, SCUBA diving 124.62, and fish pots 0.57 inds/m3/h. For species per unit volume per unit time, the order was SCUBA diving 0.85, underwater camera 0.38, and fish pots 0.01 spp./m3/h. The underwater camera method was therefore more efficient for individual counts, and SCUBA diving was more efficient for species counts. Considering both cost and survey efficiency, the fish resource survey method using underwater cameras was judged to be more effective. The results of this study are expected to be widely used in estimating the population density of fish, which is at the core of future fisheries resource surveys.
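
The efficiency metrics themselves are simple ratios; the snippet below only shows how they would be computed, with placeholder figures rather than the study's raw data.

```python
# Minimal sketch of the comparison metrics: counts normalized by surveyed
# volume (m^3) and survey time (h). Input numbers are illustrative placeholders.
def survey_efficiency(individuals: int, species: int, volume_m3: float, hours: float):
    return individuals / (volume_m3 * hours), species / (volume_m3 * hours)

inds_rate, spp_rate = survey_efficiency(individuals=430, species=12,
                                        volume_m3=1.0, hours=2.0)
print(f"{inds_rate:.2f} inds/m3/h, {spp_rate:.2f} spp./m3/h")
```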

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart or intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT) and includes a large number of networked video cameras. These networked cameras support many U-City services as one of the main input sources together with sensors, and they continuously generate a huge amount of video information, real big data for the U-City. The U-City usually has to manipulate this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a person, which demands a lot of computational power and usually takes a long time. Existing research tries to reduce the processing time of big video data, and cloud computing is a good candidate solution. Among the many cloud computing methodologies, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve rapidly and their resolution improves sharply, which leads to exponential growth of the data produced by the networked cameras; handling the output of high-quality cameras means dealing with real big data. Large-scale video surveillance systems were not practical until cloud computing provided suitable methodologies, and they are now being widely deployed in U-Cities. Because video data are unstructured, good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives the video data from the networked cameras and delivers them to the storage client; it also manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component, stores them in the storage, and helps other components access the storage. The video monitor component streams the video data smoothly and manages the protocols. The video translator sub-component lets users manage the resolution, codec, and frame rate of the video, and the protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and the proposed system worked well. The performance evaluation results are presented in this paper with analysis. With our cluster system, we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as the video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
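
As an illustration of the MapReduce programming model used here (not the authors' pipeline), the Hadoop Streaming style sketch below counts detected events per camera from pre-extracted frame records; the record format, event labels, and script name are assumptions.

```python
# Minimal Hadoop Streaming style sketch: the mapper reads one record per decoded
# frame, e.g. "camera_id<TAB>frame_no<TAB>event_label", and the reducer totals
# events per camera (reducer input is assumed to be sorted by key, as in Hadoop).
import sys

def mapper(stream=sys.stdin):
    for line in stream:
        camera_id, frame_no, event = line.rstrip("\n").split("\t")
        if event != "none":
            print(f"{camera_id}\t1")

def reducer(stream=sys.stdin):
    current, count = None, 0
    for line in stream:
        camera_id, value = line.rstrip("\n").split("\t")
        if camera_id != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = camera_id, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)()
```

Locally this runs as `python job.py map < records.txt | sort | python job.py reduce`; under Hadoop Streaming the same two commands would be supplied as the mapper and reducer, with the input and output living in HDFS.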

Low Complexity Motion Estimation Search Method for Multi-view Video Coding (다시점 비디오 부호화를 위한 저 복잡도 움직임 추정 탐색 기법)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • Journal of Korea Multimedia Society / v.16 no.5 / pp.539-548 / 2013
  • Although motion estimation (ME) plays an important role in digital video compression, it requires a complicated search procedure to find the optimal motion vector. Multi-view video is obtained by capturing one three-dimensional scene with many cameras at different positions, and the computational complexity of motion estimation for multi-view video coding increases in proportion to the number of cameras. To reduce computational complexity while maintaining image quality, a low-complexity motion estimation search method is proposed in this paper. The proposed search method consists of a four-grid diamond search pattern, a two-grid diamond search pattern, and a TZ 2-point search pattern. These search patterns exploit the characteristics of the motion vector distribution to place the search points. Experimental results show that the proposed method is about 1.8 to 4.5 times faster than the TZ search method (JMVC) thanks to the reduced computational complexity, while the image quality degradation is only about 0.01 to 0.24 dB.
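
To make the search style concrete, the sketch below implements a generic diamond-pattern block search with a SAD cost; the paper's four-grid/two-grid and TZ 2-point patterns differ in the exact probe layout, so this is only an illustration, not the proposed method.

```python
# Minimal sketch: block-matching motion estimation that refines a motion vector
# by repeatedly probing a diamond of candidate offsets, shrinking the step size.
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs):
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf                      # candidate falls outside the reference frame
    return np.abs(cur[by:by+bs, bx:bx+bs].astype(int)
                  - ref[y:y+bs, x:x+bs].astype(int)).sum()

def diamond_search(cur, ref, bx, by, bs=16, step=2):
    """Return the motion vector (dx, dy) for the block at column bx, row by."""
    best = (0, 0)
    best_cost = sad(cur, ref, bx, by, 0, 0, bs)
    while step >= 1:
        moved = True
        while moved:
            moved = False
            for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
                cand = (best[0] + dx, best[1] + dy)
                cost = sad(cur, ref, bx, by, cand[0], cand[1], bs)
                if cost < best_cost:
                    best, best_cost, moved = cand, cost, True
        step //= 2          # shrink the diamond, mirroring a coarse-to-fine search
    return best
```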

A development of Intelligent Parking Control System Using Sensor-based on Arduino

  • LIM, Myung-Jae;JUNG, Dong-Kun;KWON, Young-Man
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.29-34 / 2021
  • In this paper, for efficient parking control, an intelligent parking control prototype was implemented in an Arduino environment to provide parking control and parking guidance information using the HC-SR04 ultrasonic sensor and the RC522 RFID reader. The main elements of intelligent parking control are vehicle detection sensors, parking control facilities, and integrated operating software. Whether a vehicle is parked on a parking surface can be confirmed through sensors or intelligent camera image analysis. Parking control equipment includes parking guidance and availability display devices, license-plate recognition cameras, and intelligent parking assistance systems. This paper applies and implements ultrasonic sensing and RFID concepts on Arduino to recognize registered vehicles and display empty spaces. When a vehicle enters the parking lot, the automatic parking management system distinguishes registered vehicles from external vehicles through the RC522 reader. In addition, after checking whether each parking slot is empty with the HC-SR04 sensor, the occupancy state is shown on an LED so that the driver can check it visually. The RFID data is also used to check the parking status on the server in real time and to provide the driver with an optimal route to an available parking slot.
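
The prototype itself runs as Arduino firmware; the Python sketch below only mirrors the decision logic described (RFID UID check against a registered list, ultrasonic occupancy threshold, guidance to an empty slot). The UIDs, the threshold, and the message strings are illustrative assumptions.

```python
# Minimal sketch of the decision logic only, not the Arduino firmware.
REGISTERED_UIDS = {"04A1B2C3", "049F8E7D"}      # hypothetical RFID card UIDs
OCCUPIED_BELOW_CM = 20.0                        # echo distance meaning "car present"

def is_registered(uid: str) -> bool:
    return uid.upper() in REGISTERED_UIDS

def slot_states(distances_cm):
    """Map each slot's ultrasonic reading to True (occupied) / False (empty)."""
    return [d < OCCUPIED_BELOW_CM for d in distances_cm]

def handle_entry(uid: str, distances_cm):
    states = slot_states(distances_cm)
    empty = [i for i, occupied in enumerate(states) if not occupied]
    if not is_registered(uid):
        return "external vehicle: redirect to visitor parking"
    if not empty:
        return "registered vehicle: lot full"
    return f"registered vehicle: guide to slot {empty[0]}"   # first empty slot

print(handle_entry("04a1b2c3", [120.0, 15.5, 98.0]))  # -> guide to slot 0
```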

User-friendly 3D Object Reconstruction Method based on Structured Light in Ubiquitous Environments (유비쿼터스 환경에서 구조광 기반 사용자 친화적 3차원 객체 재구성 기법)

  • Jung, Sei-Hwa;Lee, Jeongjin
    • The Journal of the Korea Contents Association / v.13 no.11 / pp.523-532 / 2013
  • Since conventional methods for the reconstruction of 3D objects use a number of cameras or pictures, they require specific hardware or are sensitive to the photography environment and involve long processing times. In this paper, we propose a 3D object reconstruction method that uses a single photograph, based on structured light, for ubiquitous environments. We use the color pattern of a conventional structured light method. We propose a novel pipeline consisting of various image processing techniques for line pattern extraction and matching, which are critical for the quality of the reconstruction, and we propose an optimal cost function for the pattern matching. Using our method, a 3D object can be reconstructed with efficient computation and easy setup in ubiquitous or mobile environments, for example on a smartphone with a subminiature projector such as the Galaxy Beam.
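
Once each pixel has been matched to a projected pattern line, depth recovery reduces to camera-projector triangulation. The sketch below shows only that final step under an assumed rectified geometry with illustrative focal length and baseline; it is not the paper's pipeline, whose contribution lies in the pattern extraction and matching stages.

```python
# Minimal sketch: depth from a structured-light match between camera pixel columns
# and projector pattern columns, assuming a rectified camera-projector pair with
# baseline b (m) and focal length f (pixels). Values below are illustrative.
import numpy as np

def depth_from_pattern(pixel_cols, pattern_cols, f=1200.0, b=0.08):
    """pixel_cols, pattern_cols: arrays of matched camera / projector columns."""
    disparity = np.asarray(pixel_cols, float) - np.asarray(pattern_cols, float)
    disparity[np.abs(disparity) < 1e-6] = np.nan    # unmatched or degenerate pixels
    return f * b / disparity                        # depth in meters

# Example: a larger column shift (disparity) means the surface is closer.
print(depth_from_pattern([640, 700, 655], [600, 600, 600]))
```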