• Title/Summary/Keyword: single camera


THE ANALYSIS OF PSM (POWER SUPPLY MODULE) FOR MULTI-SPECTRAL CAMERA IN KOMPSAT

  • Park Jong-Euk;Kong Jong-Pil;Heo Haeng-Pal;Kim Young Sun;Chang Young Jun
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.493-496
    • /
    • 2005
  • The PMU (Payload Management Unit) in the MSC (Multi-Spectral Camera) is the main subsystem for the management, control, and power supply of the MSC payload operation. The PMU handles communication with the BUS (spacecraft) OBC (On Board Computer) for commands and telemetry, as well as communication with the various MSC units. The PMU distributes power to the various MSC units, collects telemetry reports from them, performs thermal control of the EOS (Electro-Optical Subsystem), performs the NUC (Non-Uniformity Correction) function on the raw imagery data, and rearranges the pixel data and outputs it to the DCSU (Data Compression and Storage Unit). The BUS provides high voltage to the MSC. The PMU is connected to the primary and redundant BUS power and distributes the high unregulated primary voltages to all MSC sub-units. The PSM (Power Supply Module) is an assembly in the PMU that implements the interface between the several channels on the input. Bus switches are used to prevent a single-point system failure. Such a failure could arise from the PSS (Power Supply System) requirement to combine the two PSM boards' bus outputs in a wired-OR configuration: if one board's output gets shorted to ground, the entire bus could fail, thereby causing the entire MSC to fail. To prevent such a short from pulling down the system, the switch can be opened to disconnect the short from the bus. This switch operation is controlled by the BUS.
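The failure-isolation idea behind the wired-OR bus and its switches can be illustrated with a toy model (hypothetical function names and nominal voltage; a sketch of the concept, not the actual PMU design):

```python
def bus_voltage(boards, switches, v_nominal=28.0):
    """Toy model of two PSM board outputs combined wired-OR onto one bus.

    boards:   per-board state, "ok" or "shorted"
    switches: per-board bus switch, True = closed (board connected)
    A connected shorted output pulls the whole wired-OR bus to ground;
    otherwise any connected healthy board holds the nominal voltage.
    """
    if any(b == "shorted" and s for b, s in zip(boards, switches)):
        return 0.0
    if any(b == "ok" and s for b, s in zip(boards, switches)):
        return v_nominal
    return 0.0

def isolate_shorts(boards, switches):
    """Open the bus switch of any shorted board (the action commanded by the BUS)."""
    return [s and b != "shorted" for b, s in zip(boards, switches)]
```

With one board shorted and both switches closed the bus is pulled to ground; opening the faulty board's switch lets the healthy board restore the bus.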


Automatic Detecting and Tracking Algorithm of Joint of Human Body using Human Ratio (인체 비율을 이용한 인체의 조인트 자동 검출 및 객체 추적 알고리즘)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.4
    • /
    • pp.215-224
    • /
    • 2011
  • With increasing interest in human-computer interaction, there has been much research on detecting and tracking the human body. In this paper, we propose an algorithm that automatically extracts joints, the linking points of the human body, using the ratio of the human body under a single camera, and then tracks the object. The proposed method obtains the difference images of the grayscale images and of the hue images between the input image and the background image, combines the results, separates background from foreground, and extracts objects. We also standardize the ratio of the human body using the face length and body measurements, and automatically extract the joints of the object using this ratio and the corner points of the object's silhouette. The joints' movement is then tracked using a block-matching algorithm. The proposed method was applied to test video acquired through a camera, and the results show that it automatically extracts joints and effectively tracks the detected joints.
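The final step, block-matching tracking, can be sketched as a minimal sum-of-absolute-differences search (block size and search range here are illustrative, not values from the paper):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def crop(img, top, left, size):
    """Extract a size x size block from a 2D pixel array."""
    return [row[left:left + size] for row in img[top:top + size]]

def block_match(prev_img, curr_img, top, left, size=4, search=3):
    """Find the displacement (dy, dx) of the block at (top, left) in
    prev_img by searching a +/-search window in curr_img for the
    candidate block with the lowest SAD."""
    ref = crop(prev_img, top, left, size)
    h, w = len(curr_img), len(curr_img[0])
    best_cost, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue
            cost = sad(ref, crop(curr_img, y, x, size))
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

Applied to a block centred on a detected joint in frame t, the returned displacement gives the joint's position in frame t+1.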

Monitoring the Vegetation Coverage Rate of Small Artificial Wetland Using Radio Controlled Helicopter (무선조종 헬리콥터를 이용한 소규모 인공 습지의 식생피복율 변화 모니터링)

  • Lee, Chun-Seok
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.9 no.2
    • /
    • pp.81-89
    • /
    • 2006
  • The purpose of this study was to evaluate the applicability of a small RC (radio controlled) helicopter and a single-lens reflex camera as an SFAP (Small Format Aerial Photography) acquisition system for monitoring the vegetation coverage of a wetland. The system used to take pictures of the small artificial wetland was a common optical camera (Nikon F80 with a manual 28mm lens) attached to the bottom of an RC helicopter with a 50-cubic-inch glow engine. Three hundred pictures were taken at an altitude of 50m above the ground, from 23rd June to 7th September 2005. Four of the images were selected and scanned to digital images of 2048×1357 pixels. Those images were processed and rectified with GCPs (Ground Control Points) and a digital map, and then classified by the supervised-classification module of the image processing program PG-Steamer Version 2.2. The major findings were as follows: 1. The final processed images had very high spatial resolution, so that objects bigger than 30mm, such as lotus (Nelumbo nucifera), rocks, and the deck, were easily identified. 2. The dominant plants of the monitoring site were Monochoria vaginalis, Typha latifolia, Beckmannia syzigachne, etc. Because those species have narrow, long leaves and form irregular biomass, individuals were hardly identifiable, but the distribution of populations was easily identifiable by their color differences. 3. The area covered by vegetation increased rapidly during the first month of monitoring. At the beginning of monitoring, on 23rd June 2005, the vegetation coverage rate was only 34%, but after 27 and 60 days it had increased to 74% and 86%, respectively.
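The supervised classification and coverage computation can be sketched with a minimal nearest-mean classifier (the class signatures and band values below are made up for illustration; PG-Steamer's actual classifier is more sophisticated):

```python
def nearest_mean_classify(pixel, class_means):
    """Assign a pixel (tuple of band values) to the class whose mean
    spectral signature is closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))

def vegetation_coverage(image, class_means, veg_classes):
    """Percentage of pixels classified into one of the vegetation classes."""
    pixels = [p for row in image for p in row]
    veg = sum(1 for p in pixels
              if nearest_mean_classify(p, class_means) in veg_classes)
    return 100.0 * veg / len(pixels)
```

The coverage rate reported in the paper is this fraction computed over the classified, rectified image.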

Design of a Smart Phone Panoramic Photograph Support System Using Sensor and Camera Technology (센서 및 카메라 기술을 적용한 스마트폰 파노라마 사진 지원 시스템 설계)

  • Kim, Bong-Hyun;Oh, Sang-Young
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.12
    • /
    • pp.7187-7192
    • /
    • 2014
  • Recently, location-based services have expanded into a variety of business areas and generated significant revenue. In particular, map services provide a variety of information in conjunction with features such as public transport directions. Therefore, this study designed a panoramic photograph support service module for smartphones, drawing on StreetView and RoadView, which are among the key technologies of map services. For this purpose, sensors were employed so that smartphone users can easily produce panoramic photographs. With the designed technology, unnecessary parts can be removed from several photographs, and the naturalness of the connections can be maintained by applying an algorithm that handles them as a single photograph. Finally, a system for smartphone panoramic photographs was configured, and the panoramic photograph smartphone application was operated for 6 months.
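The core of joining several shots into one panorama, finding the overlap between neighbouring photographs and merging them into a single image, can be sketched on grayscale pixel arrays (a simplified horizontal-shift search; the actual system also exploits the phone's sensors and more robust matching):

```python
def column(img, i):
    """Column i of a 2D pixel array."""
    return [row[i] for row in img]

def find_overlap(left, right, max_overlap):
    """Find the overlap width w minimizing the per-column mismatch
    between the last w columns of `left` and the first w columns of
    `right` (cost is normalized by w so wider overlaps are not penalized)."""
    wl = len(left[0])
    best_w, best_cost = 1, None
    for w in range(1, max_overlap + 1):
        cost = sum(abs(a - b)
                   for k in range(w)
                   for a, b in zip(column(left, wl - w + k), column(right, k)))
        cost /= w
        if best_cost is None or cost < best_cost:
            best_cost, best_w = cost, w
    return best_w

def stitch(left, right, max_overlap=8):
    """Join two horizontally overlapping images, keeping the overlapping
    region from the left image (a crude stand-in for seam blending)."""
    w = find_overlap(left, right, max_overlap)
    return [lrow + rrow[w:] for lrow, rrow in zip(left, right)]
```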

An analysis of Electro-Optical Camera (EOC) on KOMPSAT-1 during mission life of 3 years

  • Baek Hyun-Chul;Yong Sang-Soon;Kim Eun-Kyou;Youn Heong-Sik;Choi Hae-Jin
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.512-514
    • /
    • 2004
  • The Electro-Optical Camera (EOC) is a high-spatial-resolution, visible imaging sensor which collects visible image data of the earth's sunlit surface and is the primary payload on KOMPSAT-1. The purpose of the EOC payload is to provide high-resolution visible imagery data to support cartography of the Korean Peninsula. The EOC is a push-broom-scanned sensor which incorporates a single nadir-looking telescope. At the nominal altitude of 685 km with the spacecraft in a nadir-pointing attitude, the EOC collects data with a ground sample distance of approximately 6.6 meters and a swath width of around 17 km. The EOC is designed to operate with a duty cycle of up to 2 minutes (contiguous) per orbit over the mission lifetime of 3 years, with programmable gain/offset functions. The EOC has no pointing mechanism of its own; EOC pointing is accomplished by right and left rolling of the spacecraft, as needed. Under nominal operating conditions, the spacecraft can be rolled to an angle in the range of +/- 15 to 30 degrees to support the collection of stereo data. In this paper, the status of the EOC, such as temperature, dark calibration, cover operation, and thermal control, is checked and analyzed using continuously monitored state of health (SOH) data and image data over the mission life of 3 years. The aliveness of the EOC and the feasibility of continued operation beyond the mission life are confirmed by the results of the analysis.
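As a rough illustration of how the roll angles quoted above affect imaging geometry, a flat-Earth approximation (an assumption of this sketch, not a model from the paper) of the across-track ground sample distance off nadir:

```python
import math

def off_nadir_gsd(gsd_nadir_m, roll_deg):
    """Approximate across-track ground sample distance at a given roll
    angle: the slant range grows as 1/cos(roll) and the pixel's ground
    projection stretches by another 1/cos(roll) (flat-Earth model)."""
    return gsd_nadir_m / math.cos(math.radians(roll_deg)) ** 2
```

Under this approximation, the 6.6 m nadir GSD degrades to roughly 8.8 m at the 30-degree end of the roll range.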


People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam;Lee, YongSub;Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.69-75
    • /
    • 2014
  • Existing people counting systems using a single overhead-mounted camera have limitations in object recognition and counting in various environments. These limitations are attributable to overlapping, occlusion, and external factors such as over-sized belongings and dramatic light changes. Thus, this paper proposes a new people counting system by facial age group using two depth cameras, at overhead and frontal viewpoints, to improve object recognition accuracy and make counting robust to external factors. The proposed system counts pedestrians through five processes: overhead image processing, frontal image processing, identical-object recognition, facial age group classification, and in-coming/out-going counting. The system was developed with C++, OpenCV, and the Kinect SDK, and a target group of 40 people (10 people in each age group) was set up to evaluate people counting and facial age group classification performance. The experimental results indicated approximately 98% accuracy in people counting and 74.23% accuracy in facial age group classification.
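The in-coming/out-going counting step can be sketched as a virtual-line crossing test on tracked centroids (a common approach; the abstract does not specify the paper's exact counting logic):

```python
def count_crossings(tracks, line_y):
    """Count in-coming/out-going pedestrians from centroid tracks.

    tracks: dict mapping a track id to a list of successive centroid
    y-coordinates in the overhead view. Crossing line_y downward is
    counted as "in", crossing it upward as "out"."""
    incoming = outgoing = 0
    for ys in tracks.values():
        for prev_y, y in zip(ys, ys[1:]):
            if prev_y < line_y <= y:
                incoming += 1
            elif prev_y >= line_y > y:
                outgoing += 1
    return incoming, outgoing
```

Pairing each overhead track with the frontal-view face classification then yields the per-age-group counts.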

Development of Fire Detection Algorithm using Intelligent context-aware sensor (상황인지 센서를 활용한 지능형 화재감지 알고리즘 설계 및 구현)

  • Kim, Hyeng-jun;Shin, Gyu-young;Oh, Young-jun;Lee, Kang-whan
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.93-96
    • /
    • 2015
  • In this paper, we introduce a fire detection system using context-aware sensors. Existing vision-based fire detection systems convert images acquired by a camera sensor into the HSI (Hue, Saturation, Intensity) color space, which is robust to illumination changes, and extract features of the fire region. However, it is difficult for a single camera sensor to detect the occurrence of a fire over a wide sensing range, and fire detection in complex situations, where a continuous boundary for the required area is hard to delimit, is also difficult. In this paper, we propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and determines a fire by weighting the results accordingly. In addition, by dividing zones according to their fire state, zones requiring intensive fire detection can be managed differentially.
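The weighted multi-condition decision can be sketched as follows (the thresholds and weights here are illustrative placeholders, not the values used in the paper):

```python
def fire_score(temp_c, humidity_pct, co2_ppm, flame, weights=None):
    """Combine sensor readings into a weighted fire-likelihood score.
    Each condition that is met contributes its weight to the score."""
    w = weights or {"temp": 0.3, "humidity": 0.1, "co2": 0.3, "flame": 0.3}
    conditions = {
        "temp": temp_c > 60,            # abnormal heat
        "humidity": humidity_pct < 20,  # unusually dry air
        "co2": co2_ppm > 1000,          # combustion gas present
        "flame": flame,                 # flame sensor tripped
    }
    return sum(w[k] for k, hit in conditions.items() if hit)

def classify_zone(score, intensive=False):
    """Zones flagged for intensive monitoring use a lower alarm threshold."""
    threshold = 0.4 if intensive else 0.6
    return "fire" if score >= threshold else "normal"
```

The same score can thus trigger an alarm in an intensively monitored zone while remaining below threshold elsewhere, which is the differential zone management described above.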


Real-Time Image-Based Relighting for Tangible Video Teleconference (실감화상통신을 위한 실시간 재조명 기술)

  • Ryu, Sae-Woon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.14 no.6
    • /
    • pp.807-810
    • /
    • 2009
  • This paper deals with a real-time image-based relighting system for tangible video teleconferencing. The proposed image-based relighting system renders the extracted human object using virtual environmental images. The proposed system can virtually homogenize the lighting environments of remote users in a video teleconference, or render the humans as if they were in virtual places. To realize the video teleconference, we obtain 3D object models of the users in real time using a controlled lighting system. In this paper, we use a single color camera and two synchronized directional flash lights. The proposed system generates pure shading images by subtracting the flash-off images from the flash-on images. Each pure shading reflectance map is multiplied by a basic normal vector map to generate a directional normal map. Each directional basic normal map is generated by the inner product of the incident light vector and the camera viewing vector, where the basic normal vector is a basis component of the real surface normal vector. The proposed system enables users to feel immersed in the video teleconference just as if they were in the virtual environments.
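The final relighting step, shading the recovered surface under a virtual light, can be sketched with the Lambertian model (a simplified sketch; the paper's full pipeline derives the normal map from the flash image pairs):

```python
def relight(normals, albedo, light_dir):
    """Render a Lambertian shading image from a per-pixel normal map:
    intensity = albedo * max(0, n . l), with l the unit light direction."""
    norm = sum(c * c for c in light_dir) ** 0.5
    l = [c / norm for c in light_dir]
    out = []
    for nrow, arow in zip(normals, albedo):
        out.append([a * max(0.0, sum(nc * lc for nc, lc in zip(n, l)))
                    for n, a in zip(nrow, arow)])
    return out
```

Evaluating this per frame with a light direction sampled from the virtual environment image is what lets the remote users' lighting be homogenized.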

Development and Evaluation of Maximum-Likelihood Position Estimation with Poisson and Gaussian Noise Models in a Small Gamma Camera

  • Chung, Yong-Hyun;Park, Yong;Song, Tae-Yong;Jung, Jin-Ho;Cho, Gyuseong
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.331-334
    • /
    • 2002
  • It has been reported that maximum-likelihood position-estimation (MLPE) algorithms offer advantages of improved spatial resolution and linearity over the conventional Anger algorithm in gamma cameras. The purpose of this study is to evaluate the performance of two noise models, Poisson and Gaussian, in MLPE for the localization of photons in a small gamma camera (SGC) using a NaI(Tl) plate and a PSPMT. The SGC consists of a single NaI(Tl) crystal, 10 cm in diameter and 6 mm thick, optically coupled to a PSPMT (Hamamatsu R3292-07). The PSPMT was read out using a resistive charge divider, which multiplexes 28(X) by 28(Y) cross-wire anodes into four channels. Poisson- and Gaussian-based MLPE methods were implemented using experimentally measured light response functions. The system resolutions estimated by Poisson- and Gaussian-based MLPE were 4.3 mm and 4.0 mm, respectively. Integral uniformities were 29.7% and 30.6%, linearities were 1.5 mm and 1.0 mm, and count rates were 1463 cps and 1388 cps for Poisson- and Gaussian-based MLPE, respectively. The results indicate that Gaussian-based MLPE, which is convenient to implement, performs better and is more robust to statistical noise than Poisson-based MLPE.
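The two estimators can be sketched as grid searches over candidate positions, assuming measured light response functions stored as `response[x][i]`, the expected count of channel i at position x (an illustrative data structure, not the authors' implementation):

```python
import math

def mlpe_poisson(counts, response, positions):
    """Pick the position maximizing the Poisson log-likelihood
    sum_i (n_i * log mu_i(x) - mu_i(x))."""
    def loglik(x):
        return sum(n * math.log(mu) - mu
                   for n, mu in zip(counts, response[x]))
    return max(positions, key=loglik)

def mlpe_gaussian(counts, response, positions):
    """Pick the position minimizing the squared residual, i.e. the ML
    estimate under a fixed-variance Gaussian noise model."""
    def sse(x):
        return sum((n - mu) ** 2 for n, mu in zip(counts, response[x]))
    return min(positions, key=sse)
```

With four multiplexed channels, `counts` has four entries per event; the Gaussian variant avoids the per-channel logarithms, which is part of why it is convenient to implement.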


Lane Information Fusion Scheme using Multiple Lane Sensors (다중센서 기반 차선정보 시공간 융합기법)

  • Lee, Soomok;Park, Gikwang;Seo, Seung-woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.12
    • /
    • pp.142-149
    • /
    • 2015
  • Most mono-camera-based lane detection systems are fragile under poor illumination conditions. To compensate for the limitations of single-sensor use, a lane information fusion system using multiple lane sensors is an alternative that stabilizes performance and guarantees high precision. However, conventional fusion schemes, which concern only object detection, are inappropriate for lane information fusion. Even the few studies that consider lane information fusion have dealt only with limited aid from a back-up sensor, or have omitted the cases of asynchronous multi-rate operation and differing coverage. In this paper, we propose a lane information fusion scheme utilizing multiple lane sensors with different coverage and cycles. Precise lane information fusion is achieved by the proposed fusion framework, which considers the individual ranging capability and processing time of diverse types of lane sensors. In addition, a novel lane estimation model is proposed to synchronize multi-rate sensors precisely by up-sampling sparse lane information signals. Through quantitative vehicle-level experiments with an around-view monitoring system and a frontal camera system, we demonstrate the robustness of the proposed lane fusion scheme.
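The synchronization and fusion steps can be sketched as linear up-sampling of each sensor's lane signal to a common timestamp, followed by a confidence-weighted average (a minimal sketch under Gaussian-style weighting assumptions; the paper's estimation model is more elaborate):

```python
def interpolate(samples, t):
    """Linearly interpolate a lane signal, given as (time, value) pairs
    sorted by time, at time t -- the up-sampling step that aligns a
    slow sensor to a faster sensor's timestamp."""
    if t <= samples[0][0]:
        return samples[0][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return samples[-1][1]

def fuse(estimates):
    """Weighted average of per-sensor lane estimates, with weights
    reflecting each sensor's confidence (e.g. its ranging capability
    and coverage at the queried range)."""
    total = sum(w for _, w in estimates)
    return sum(v * w for v, w in estimates) / total
```

For example, a 10 Hz around-view estimate interpolated to a 30 Hz frontal-camera timestamp can then be fused with the frontal estimate at that instant.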