• Title/Summary/Keyword: Camera data


Assessment of a smartphone-based monitoring system and its application

  • Ahn, Hoyong;Choi, Chuluong;Yu, Yeon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.3
    • /
    • pp.383-397
    • /
    • 2014
  • Information technology advances are allowing conventional surveillance systems to be combined with mobile communication technologies, creating ubiquitous monitoring systems. This paper proposes a monitoring system that uses smart camera technology. We discuss the dependence of interior orientation parameters on calibration target sheets and compare the accuracy of a three-dimensional monitoring system whose camera location is calculated by space resection using a Digital Surface Model (DSM) generated from stereo images. A monitoring housing is designed to protect the camera from various weather conditions and to supply it with power generated by a solar panel. A smart camera is installed in the housing and is operated and controlled through an Android application. Finally, the accuracy of the three-dimensional monitoring system is evaluated using a DSM: the proposed system was tested against a DSM created from ground control points determined by the Global Positioning System (GPS) and light detection and ranging (LiDAR) data. The standard deviation of the differences between the DSMs is less than 0.12 m, so the monitoring system is suitable for extracting object position and deformation information as well as for monitoring. Through the incorporation of components such as the camera housing, a solar power supply, and the smart camera, the system can be used as a ubiquitous monitoring system.
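The DSM accuracy check described above comes down to differencing two co-registered elevation grids and examining the spread of the residuals. A minimal sketch, assuming both DSMs share the same raster grid and mark nodata cells as NaN (the function name and the synthetic example values are illustrative, not from the paper):

```python
import numpy as np

def dsm_difference_stats(dsm_a, dsm_b):
    """Mean and standard deviation of cell-wise differences between two
    co-registered Digital Surface Models (2-D elevation grids in metres)."""
    diff = np.asarray(dsm_a, dtype=float) - np.asarray(dsm_b, dtype=float)
    valid = diff[np.isfinite(diff)]  # skip nodata cells encoded as NaN
    return float(valid.mean()), float(valid.std())

# Example: two synthetic 2 x 2 DSMs whose cells differ by +/-0.1 m
mean_d, std_d = dsm_difference_stats([[10.1, 9.9], [10.1, 9.9]],
                                     [[10.0, 10.0], [10.0, 10.0]])
```

A standard deviation below a threshold such as the paper's 0.12 m would then indicate agreement between the monitoring-system DSM and the reference DSM.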

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system. A camera was mounted on the subject's head and captured the front view containing eight three-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points were captured by an image processing board and used to calculate the three-dimensional position and orientation of the camera. A camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. First, the image center is calibrated using the method of varying focal length. Second, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. Third, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the three-dimensional camera positions is about 0.53 cm, the angular errors of the camera orientations are less than 0.55°, and the data acquisition rate is about 10 Hz. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual environments.
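The core of the DLT step above is solving a homogeneous linear system from 3-D/2-D point correspondences and then reading the camera position out of the resulting projection matrix. A generic sketch under idealized assumptions (at least six non-coplanar reference points, no lens distortion; the function name is illustrative, not the authors' implementation):

```python
import numpy as np

def dlt_pose(world_pts, image_pts):
    """Estimate the 3x4 DLT projection matrix P and the camera position from
    n >= 6 non-coplanar 3-D points and their 2-D image projections."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Solve A p = 0: p is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # The camera centre C is the null space of P, since P @ [C; 1] = 0.
    _, _, Vt_p = np.linalg.svd(P)
    C = Vt_p[-1]
    return P, C[:3] / C[3]
```

The orientation can likewise be recovered by factoring the left 3x3 block of P into intrinsic and rotation parts (e.g. via RQ decomposition), which mirrors the paper's use of calibrated intrinsics in its third step.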


Signal Level Analysis of a Camera System for Satellite Application

  • Kong, Jong-Pil;Kim, Bo-Gwan
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.220-223
    • /
    • 2008
  • A camera system for satellite applications performs its observation mission by measuring the light energy radiated from targets on the earth. During development, a signal level analysis estimating the number of electrons collected in a pixel of the applied CCD is a basic tool for performance analysis, such as SNR, as well as for the data path design of the focal plane electronics. In this paper, two methods are presented for calculating the number of electrons for signal level analysis. The first is a quantitative assessment based on the CCD characteristics and the design parameters of the system's optical module, which concentrates the light energy onto the focal plane where the CCD converts it into an electrical signal. The second compares design parameters of the system, such as quantum efficiency, focal length, and the aperture size of the optics, with those of an existing camera system in orbit; in this way, the electron count relative to the existing system is estimated. The number of electrons calculated by these methods is used to design the input circuits of the A/D converter that interfaces the image signal coming from the CCD module in the focal plane electronics. It is also used to analyze the signal level of the CCD output, a critical parameter for designing the data path between the CCD and the A/D converter. From this analysis, the FPE (Focal Plane Electronics) designer decides whether a dividing circuit is necessary between them and, if so, implements the optimized dividing factor. Chapter 2 describes the radiometry of a satellite camera system and gives the equations for electron counting, Chapter 3 briefly describes a camera system to explain the flow of imagery data from the CCD, Chapter 4 explains the two methods for analyzing the number of electrons and the signal level, and Chapter 5 concludes.
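The first method can be sketched with the standard camera equation: focal-plane irradiance πLτ/(4·F#²), multiplied by pixel area and integration time to get energy, divided by the photon energy hc/λ, and scaled by quantum efficiency. The function below is a monochromatic band-center approximation with illustrative parameter names, not the paper's exact formulation:

```python
import math

H = 6.626e-34   # Planck constant [J*s]
C = 2.998e8     # speed of light [m/s]

def electron_count(radiance, wavelength, f_number, pixel_pitch,
                   integration_time, quantum_efficiency,
                   optics_transmission=1.0):
    """Rough estimate of electrons collected in one CCD pixel.

    radiance: in-band radiance at the aperture [W / (m^2 sr)]
    wavelength: band-center wavelength [m] (monochromatic approximation)
    pixel_pitch: pixel side length [m]; integration_time in [s]
    """
    # Focal-plane irradiance for a circular aperture (camera equation).
    irradiance = math.pi * radiance * optics_transmission / (4 * f_number**2)
    # Optical energy deposited on one pixel during the integration time.
    energy = irradiance * pixel_pitch**2 * integration_time
    # Photons at band center, then electrons via quantum efficiency.
    photons = energy / (H * C / wavelength)
    return photons * quantum_efficiency
```

The second, comparative method then reduces to taking the ratio of such estimates for the new system and the reference system in orbit, since common factors cancel.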


Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. The single photon avalanche diode (SPAD) is suggested as a photodetector for 3D sensing due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, lower than depth sensors such as the Kinect V2 and Cube-Eye. This may be a weak point of our system, but we exploit the gap with a change of approach: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. The upsampled depth data from the CNN and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose this simplified fusion method for embedded systems.
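The abstract does not spell out the final fusion rule, so the per-pixel confidence-weighted blend below is purely illustrative of the general idea of combining an upsampled ToF depth map with stereo depth; the names, weights, and zero-as-invalid convention are all assumptions:

```python
import numpy as np

def fuse_depth_maps(tof_depth, stereo_depth, tof_conf, stereo_conf):
    """Per-pixel weighted average of two depth maps; a pixel with zero total
    confidence is returned as 0.0 (invalid)."""
    tof_depth = np.asarray(tof_depth, dtype=float)
    stereo_depth = np.asarray(stereo_depth, dtype=float)
    w_t = np.asarray(tof_conf, dtype=float)
    w_s = np.asarray(stereo_conf, dtype=float)
    total = w_t + w_s
    safe = np.where(total > 0, total, 1.0)  # avoid division by zero
    fused = (w_t * tof_depth + w_s * stereo_depth) / safe
    return np.where(total > 0, fused, 0.0)
```

In the paper's pipeline the heavy lifting is done earlier (CNN upsampling, SGM matching); a lightweight per-pixel combination like this is the kind of final step that suits an embedded target.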

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons, improving their lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. With our proposed method being robust against the various factors of variability in action execution, the major purpose of this paper is to perform efficient and simple recognition from egocentric camera data only, using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric and several stationary cameras' data, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.

Traffic Safety Recommendation Using Combined Accident and Speeding Data

  • Onuean, Athita;Lee, Daesung;Jung, Hanmin
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.1
    • /
    • pp.49-54
    • /
    • 2020
  • Speed enforcement is one of the major challenges in traffic safety. The increasing number of accidents and fatalities has led governments to respond by implementing intelligent control systems. For example, the Korean government implemented a speed camera system for maintaining road safety. However, many drivers still speed in blackspot areas where speed cameras are not installed. Therefore, we propose a methodology that analyzes combined accident and speeding data to offer recommendations for maintaining traffic safety. We investigate three factors: "section," "existing speed camera location," and "over-speeding data." To interpret the results, we use the QGIS tool to visualize the spatial distribution of the incidents. Finally, we provide four recommendations based on the three factors: "investigate with experts," "no action," "install fixed speed cameras," and "deploy mobile speed cameras."
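The abstract names the three factors and the four recommendations but not the mapping between them; one plausible decision table, entirely hypothetical, might look like this:

```python
def recommend(has_accidents, has_speeding, has_camera):
    """Hypothetical per-section mapping from accident/speeding/camera factors
    to the paper's four recommendations; the actual rules are not given in
    the abstract."""
    if not has_accidents and not has_speeding:
        return "no action"
    if has_accidents and has_speeding:
        # A section already covered by a fixed camera may instead need
        # mobile enforcement at uncovered spots.
        if has_camera:
            return "deploy mobile speed cameras"
        return "install fixed speed cameras"
    # Accidents without speeding (or vice versa) warrant expert review.
    return "investigate with experts"
```

Whatever the real rules are, expressing them as a small pure function over the three factors keeps the recommendation step auditable alongside the QGIS visualization.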

South/Jeju Coast Beach Erosion Analysis Using Camera Monitoring Data (카메라 모니터링 자료를 활용한 남해안/제주 해빈 침식 분석)

  • Kim, Taerim
    • Journal of The Geomorphological Association of Korea
    • /
    • v.23 no.1
    • /
    • pp.129-140
    • /
    • 2016
  • Camera monitoring data covering five years, from January 2009 to January 2014, are analyzed to investigate beach erosion on Sangju, Gujora, and Haeundae beaches on the South Sea and Jungmun beach on the south shore of Jeju Island. The data comprise time series of beach area changes obtained from digital orthoimages rectified from oblique images taken near the beaches by cameras. The beaches have different sediment sizes and shapes, but all face south and are eroded mainly during typhoons. However, beaches often respond differently to the same typhoon, and some beaches outside a typhoon's direct influence are also eroded. This study shows that high-frequency beach area data obtained from cameras can effectively reveal seasonal changes in beach area.

A Design of Stand-Alone Linescan Camera Framegrabber Based on FPGA (FPGA 기반의 독립형 라인스캔 카메라 프레임그래버 설계)

  • Jeong, Heon;Choi, Han-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.12
    • /
    • pp.1036-1040
    • /
    • 2002
  • To process the data of a digital linescan camera, a frame grabber that stably handles the low-level data at high speeds above 30 MHz is essential. Traditional approaches to developing special-purpose vision hardware are mainly PC-based, expensive, and bulky, so they are difficult to apply in the field. In this paper we therefore investigate an FPGA implementation for real-time processing of a digital linescan camera. The system is based not on a PC but on electronic devices such as a microprocessor, so the use of FPGAs for low-level processing is expected to yield a fast, stable, and inexpensive system. Experiments are carried out on a web guiding system to show the efficiency of the new image processor.

Localization System for Mobile Robot Using Electric Compass and Tracking IR Light Source (전자 나침반과 적외선 광원 추적을 이용한 이동로봇용 위치 인식 시스템)

  • Son, Chang-Woo;Lee, Seung-Heui;Lee, Min-Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.8
    • /
    • pp.767-773
    • /
    • 2008
  • This paper presents a localization system based on an electric compass and tracking of an IR light source. The digital RGB (red, green, blue) signal of a digital CMOS camera is sent to a CPLD, which converts the color image to a binary image at 30 frames per second. The CMOS camera has an IR filter and a UV filter in front of the CMOS cell; the filters cut off light sources above 720 nm. The binary output of the CPLD is sent to a DSP that rapidly tracks the IR light source by moving the camera tilt DC motor. With the robot oriented toward north, the electric compass signal and the IR light source angles are used to calculate the data for the location system. Because the geomagnetic field is linear within a local area, this location system is feasible. Finally, it is shown that the position error of the system is within ±1.3 cm.

Development of an Algorithm to Measure the Road Traffic Data Using Video Camera

  • Kim, Hie-Sik;Kim, Jin-Man
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.95.2-95
    • /
    • 2002
  • 1. Introduction of the Camera Detection System. A camera detection system is equipment that detects real-time traffic information by image processing techniques. This information can be used to analyze and control road traffic flow, and it also serves as a method of detecting and controlling traffic flow for ITS (Intelligent Transportation Systems). Traffic information includes speed, headway, traffic flow, occupation time, and queue length. There are many detection systems for traffic data, but a video detection system can monitor multiple lanes with only one camera and collect various traffic information, so it is thought to be the most efficient of all detection systems. Though the...
