• Title/Summary/Keyword: Camera sensor


Characteristics of the Electro-Optical Camera(EOC) (다목적실용위성탑재 전자광학카메라(EOC)의 성능 특성)

  • Seunghoon Lee;Hyung-Sik Shim;Hong-Yul Paik
    • Korean Journal of Remote Sensing / v.14 no.3 / pp.213-222 / 1998
  • The Electro-Optical Camera (EOC) is the main payload of the KOrea Multi-Purpose SATellite (KOMPSAT), with a cartography mission to build a digital map of Korean territory including a Digital Terrain Elevation Map (DTEM). This instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in the visible wavelength range of 510~730 nm. The high-resolution panchromatic imagery is collected for 2 minutes during the 98-minute orbit cycle, covering about 800 km along the ground track, over a mission lifetime of 3 years, with programmable gain/offset and on-board image data storage. The 8-bit digitized imagery, collected by a fully reflective F8.3 triplet without obscuration, is transmitted to the Ground Station at a rate of less than 25 Mbps. EOC was built to meet or surpass its design-phase requirements. The spectral response, the modulation transfer function, and the uniformity of all 2592 pixels of the EOC CCD are presented as measured, for the convenience of end users. The spectral response was measured for each gain setup of EOC, which is expected to enable users of EOC data to generate more accurate panchromatic images. The modulation transfer function of EOC was measured as greater than 16% at the Nyquist frequency over the entire field of view, exceeding its requirement of larger than 10%. The uniformity, which expresses the relative response of each pixel of the CCD, was measured at every pixel of the EOC Focal Plane Array and is presented for use in data processing.
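
A per-pixel uniformity table like the one described above is, in general, derived by normalizing a flat-field measurement. The following minimal Python sketch illustrates that idea under assumed inputs; the synthetic `flat_field` array for a 2592-pixel line sensor is a hypothetical placeholder, not EOC calibration data.

```python
import numpy as np

# Hypothetical flat-field measurement for a 2592-pixel linear CCD:
# each pixel's mean dark-subtracted signal under uniform illumination.
rng = np.random.default_rng(0)
flat_field = 1000.0 + rng.normal(0.0, 15.0, size=2592)

# Relative response of each pixel, normalized to the array mean
# (this is what a per-pixel uniformity table expresses).
relative_response = flat_field / flat_field.mean()

# A raw push-broom line can then be flat-field corrected by dividing
# by the relative response (gain correction only; offsets ignored here).
raw_line = 800.0 * relative_response + rng.normal(0.0, 5.0, size=2592)
corrected_line = raw_line / relative_response

print("response spread: %.3f .. %.3f" %
      (relative_response.min(), relative_response.max()))
print("corrected line std: %.2f DN" % corrected_line.std())
```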

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.355-361 / 2006
  • In large-scale environments such as airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will report surveyed information about such environments and communicate with a human operator using that data, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel as if they are at the robot's location so that they can interact with the robot more naturally in a remote setting, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple and easy-to-use method to obtain a 3D textured model. For realism, the 3D model must be integrated with real scenes. Most other 3D modeling methods use two data acquisition devices: one for building the 3D model and another for obtaining realistic textures, typically a 2D laser range-finder and a common camera, respectively. Our algorithm consists of building a measurement-based 2D metric map acquired by a laser range-finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls, whose geometry is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with a wide field of view. Image stitching and image cutting are used to generate textured images corresponding to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space of a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
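
As a rough illustration of the "wall planes from a 2D metric map" step, the sketch below extrudes hypothetical 2D wall segments into vertical quads with a fixed wall height; the segment coordinates, wall height, and plain-text output are assumptions for illustration, not the paper's actual map data or VRML pipeline.

```python
from typing import List, Tuple

Point2D = Tuple[float, float]

def extrude_walls(segments: List[Tuple[Point2D, Point2D]],
                  height: float) -> List[List[Tuple[float, float, float]]]:
    """Turn 2D wall segments (from a metric map) into vertical 3D quads."""
    quads = []
    for (x0, y0), (x1, y1) in segments:
        quads.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return quads

# Hypothetical wall segments of a small corridor, in meters.
walls = [((0.0, 0.0), (8.0, 0.0)),
         ((8.0, 0.0), (8.0, 2.0)),
         ((8.0, 2.0), (0.0, 2.0)),
         ((0.0, 2.0), (0.0, 0.0))]

for i, quad in enumerate(extrude_walls(walls, height=2.5)):
    print("wall", i, quad)  # each quad could later receive a stitched texture
```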

Image Contrast and Sunlight Readability Enhancement for Small-sized Mobile Display (소형 모바일 디스플레이의 영상 컨트라스트 및 야외시인성 개선 기법)

  • Chung, Jin-Young;Hossen, Monir;Choi, Woo-Young;Kim, Ki-Doo
    • Journal of IKEEE / v.13 no.4 / pp.116-124 / 2009
  • Recently, the CPU performance of modern chipsets and multimedia processors in mobile phones has become as high as that of notebook PCs, which is why the mobile phone has emerged as a leading icon of consumer-electronics convergence. Various mobile phone applications such as DMB, digital camera, video telephony, and full internet browsing are offered to consumers, and to meet these demands image quality has become increasingly important. A mobile phone is a portable device used both indoors and outdoors, so image-quality degradation caused by the ambient light source needs to be overcome. Furthermore, touch windows are now common on mobile display panels and cause contrast loss because of the low transmittance of the ITO film. This paper presents an image enhancement algorithm to be embedded in an image enhancement SoC. For contrast enhancement, we propose a clipped histogram stretching method that adapts to the input images, while an S-shaped curve and a gain/offset method are used for the static case. The CIELCh color space is used for sunlight readability enhancement by controlling the lightness and chroma components according to the value sensed by a light sensor. Finally, the performance of the proposed algorithm is evaluated using the histogram, RGB pixel distribution, entropy, and dynamic range of the resulting images. We expect the proposed algorithm to be suitable for image enhancement in embedded SoC systems for small-sized mobile displays.
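
A minimal sketch of clipped histogram stretching as described above: clip a small fraction of the darkest and brightest pixels and stretch the remaining range linearly. The clip percentiles and the use of NumPy are assumptions for illustration, not the SoC implementation.

```python
import numpy as np

def clipped_histogram_stretch(img: np.ndarray,
                              low_clip: float = 1.0,
                              high_clip: float = 99.0) -> np.ndarray:
    """Stretch 8-bit luminance after clipping the histogram tails (percentiles)."""
    lo, hi = np.percentile(img, [low_clip, high_clip])
    if hi <= lo:                      # nearly flat image: nothing to stretch
        return img.copy()
    stretched = (img.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Example on a synthetic low-contrast image.
rng = np.random.default_rng(1)
low_contrast = rng.integers(90, 160, size=(120, 160), dtype=np.uint8)
enhanced = clipped_histogram_stretch(low_contrast)
print("input range:", low_contrast.min(), low_contrast.max())
print("output range:", enhanced.min(), enhanced.max())
```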

Study on Structure Visual Inspection Technology using Drones and Image Analysis Techniques (드론과 이미지 분석기법을 활용한 구조물 외관점검 기술 연구)

  • Kim, Jong-Woo;Jung, Young-Woo;Rhim, Hong-Chul
    • Journal of the Korea Institute of Building Construction / v.17 no.6 / pp.545-557 / 2017
  • This study concerns an efficient alternative for visual inspection of concrete surfaces of deteriorated infrastructure. By combining industrial drones and deep-learning-based image analysis with traditional visual inspection, we aimed to reduce the manpower, time, and cost required and to overcome the limitations posed by height and dome-shaped structures. The on-board device mounted on the drone consists of a high-resolution camera capable of detecting cracks wider than 0.3 mm, a LiDAR sensor, and an embedded image-processing module. Mounted on an industrial drone, it captured sample images of damage on a site specimen through automatic flight navigation. In addition, the damaged parts of the site specimen were measured, covering not only the width and length of cracks but also white rust, and the measurements were compared with the final image-analysis results. Using the image analysis techniques, the damage in 54 sample images was analyzed through a segmentation - feature extraction - decision making process, and the analysis parameters were extracted using the supervised mode of the deep learning platform. Image analysis of 60 newly added, unlabeled image samples was then performed based on the extracted parameters, yielding a damage detection rate of 90.5%.
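
To give a feel for how a pixel-level crack measurement relates to the 0.3 mm detection requirement mentioned above, the sketch below converts a crack width in pixels to millimeters using a ground sample distance derived from focal length, pixel pitch, and a LiDAR-measured stand-off distance; all numeric values are illustrative assumptions, not the paper's hardware specification.

```python
def ground_sample_distance(distance_m: float,
                           focal_length_mm: float,
                           pixel_pitch_um: float) -> float:
    """Size of one pixel projected onto the target surface, in millimeters."""
    return (pixel_pitch_um / 1000.0) * (distance_m * 1000.0) / focal_length_mm

def crack_width_mm(width_px: float, distance_m: float,
                   focal_length_mm: float, pixel_pitch_um: float) -> float:
    return width_px * ground_sample_distance(distance_m, focal_length_mm,
                                             pixel_pitch_um)

# Illustrative numbers: 3 m stand-off from LiDAR, 50 mm lens, 2.4 um pixels.
gsd = ground_sample_distance(3.0, 50.0, 2.4)
print("GSD: %.3f mm/pixel" % gsd)                       # ~0.144 mm/pixel
print("2-pixel crack: %.2f mm" % crack_width_mm(2, 3.0, 50.0, 2.4))
```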

A Methodology for Evaluating Vehicle Driving Safety based on the Analysis of Interactions With Roads and Adjacent Vehicles (도로 및 인접차량과의 상호작용분석을 통한 차량의 주행안전성 평가기법 개발 연구)

  • PARK, Jaehong;OH, Cheol;YUN, Dukgeun
    • Journal of Korean Society of Transportation / v.35 no.2 / pp.116-128 / 2017
  • Traffic accidents can be defined as physical collision events between vehicles that occur instantaneously when drivers do not properly perceive the surrounding vehicles and roadway environment. Therefore, detecting high-potential events that lead to traffic accidents by continuously monitoring the driver's interactions with the surroundings is a prerequisite for accident prevention. For the analysis, basic data were collected using a test vehicle equipped with a GPS (Global Positioning System)-IMU (Inertial Measurement Unit), camera, radar, and LiDAR. From the collected data, highway geometric information and the surrounding traffic situation were analyzed, and a safety evaluation algorithm for the driving vehicle was then developed. To detect dangerous interaction events with surrounding vehicles, the locations and speeds of surrounding vehicles acquired from the radar sensor were used. Using the collected data, tangent and curve sections were distinguished, and a driving safety evaluation algorithm that considers highway geometric characteristics was developed. This study also proposes an algorithm that can assess the possibility of collision with surrounding vehicles considering the characteristics of the geometric road structure. The methodology proposed in this study is expected to be utilized in the field of autonomous vehicles in the future, since it can assess driving safety using data collectible from a vehicle's own sensors.
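
The abstract does not spell out the exact interaction metric, so the sketch below uses a common surrogate safety measure, time-to-collision computed from radar-style relative distance and closing speed, as an assumed stand-in rather than the paper's actual algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    gap_m: float        # longitudinal distance to the surrounding vehicle
    closing_mps: float  # positive when the gap is shrinking

def time_to_collision(track: Track) -> Optional[float]:
    """Return TTC in seconds, or None when the vehicles are not closing."""
    if track.closing_mps <= 0.0:
        return None
    return track.gap_m / track.closing_mps

def is_dangerous(track: Track, ttc_threshold_s: float = 2.0) -> bool:
    ttc = time_to_collision(track)
    return ttc is not None and ttc < ttc_threshold_s

# Example: 18 m gap closing at 10 m/s -> TTC = 1.8 s, flagged as dangerous.
print(is_dangerous(Track(gap_m=18.0, closing_mps=10.0)))
```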

Thermal Conductivity Effect of Heat Storage Layer using Porous Feldspar Powder (다공질 장석으로 제조한 축열층의 열전도 특성)

  • Kim, Sung-Wook;Go, Daehong;Choi, Eun-Kyeong;Kim, Sung-Hwan;Kim, Tae-Hyoung;Lee, Kyu-Hwan;Cho, Jinwoo
    • Economic and Environmental Geology / v.50 no.2 / pp.159-170 / 2017
  • The temporal and spatial temperature distribution of a heat storage mortar made of porous feldspar was measured, and its thermal properties and electricity consumption were analyzed. For the experiment, two real-size chambers (a control model and a test model) with hot-water pipes were constructed. The surface temperature change of the heat storage layer was remotely monitored during the heating and cooling process using an infrared thermal imaging camera and temperature sensors. The temperature was raised from 20°C to 30°C under the heating condition. The temperature of the heat storage layer of the test model was 2.0-3.5°C higher than that of the control model, and the time to reach the target temperature was shortened. As the distance from the hot-water pipe increased, the temperature gap increased from 4.0°C to 4.8°C. The power consumed until the surface temperature of the heat storage layer reached 30°C was 2.2 times that of the control model. From the heating experiment, the stepwise temperature and electricity consumption were calculated, and the electricity consumption of the heat storage layer of the test model was reduced by 66%. In the cooling experiment, the surface temperature of the heat storage layer of the test model remained 2°C higher than that of the control model. The heat storage effect of the porous feldspar mortar was confirmed by the temperature experiment. Considering that the interval before the heat storage layer must be reheated is extended, energy efficiency will be increased.

Viewing Environment-independent Color Reproduction System Development Based on Chromatic Adaptation (색순응을 기반하여 관촬환경에 독립한 색재현 시스템 개발)

  • An, Seong-A;Kim, Jong-Pil;An, Seok-Chul
    • Journal of the Korean Graphic Arts Communication Society / v.21 no.2 / pp.43-53 / 2003
  • As information-communication networks have developed rapidly, Internet users' environments have also improved, and as a result more information can be used than before. However, there are many differences between a real color and the color reproduced on a CRT. When we observe a material object, our eyes perceive the product of the light source and the object's spectral reflectance. When the photographed signal is reproduced, however, the illumination at the time of photographing and the spectral reflectance of the object have already been converted into a signal, and this converted RGB signal is observed on a CRT under a different illumination. Because the RGB signal carries the illumination at the time of photographing, it is also influenced by the current illumination and can therefore be perceived as a different color. To match color perception under different light sources, a color reproduction system using a neural network was reported by S.C. Ahn [1]. Furthermore, a color reproduction method independent of the viewing environment was reported by Y. Miyake [2]; it can produce the same appearance even when the viewing environment changes. To estimate the light source of the viewing environment, a color reproduction system using a CCD sensor was also studied by S.C. Ahn [3]. In these studies, a population is first fixed on the ab coordinates of CIE L*a*b*; color reproduction is then possible using this population and an existing digital camera. However, color changes along curves, not straight lines, as the values on the ab coordinates of CIE L*a*b* change. To solve these problems, this study first introduces a labeling technique. Next, the basis colors, based on the Munsell color system, are divided into 10 color fields, and 4 special colors (skin color, grass color, sky color, and gray) are added, fixing 14 color fields in total. After analyzing the principal components of the populations of the newly defined color fields, their utility and validity will be examined in a 3-band system.
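
As a rough illustration of the labeling step, the sketch below assigns a pixel to one of 10 hue-based fields or to one of the 4 special colors using simple thresholds in a cylindrical hue/lightness/saturation representation; the hue boundaries, saturation threshold, and special-color rules are invented for illustration and are not the paper's Munsell-based definitions.

```python
import colorsys

HUE_FIELDS = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def label_color(r: int, g: int, b: int) -> str:
    """Assign an RGB pixel to one of 10 hue fields or 4 special fields."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.10:                       # near-neutral pixels -> gray
        return "gray"
    hue_deg = h * 360.0
    # Invented special-color rules (stand-ins for skin, grass, sky).
    if 15.0 <= hue_deg < 50.0 and l > 0.45:
        return "skin"
    if 80.0 <= hue_deg < 150.0 and l < 0.6:
        return "grass"
    if 195.0 <= hue_deg < 240.0 and l > 0.5:
        return "sky"
    return HUE_FIELDS[int(hue_deg // 36.0) % 10]   # 10 equal hue sectors

print(label_color(210, 160, 130))   # classified as "skin" under these thresholds
print(label_color(60, 60, 60))      # "gray"
```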

Image-based Proximity Warning System for Excavator of Construction Sites (건설현장에 적합한 영상 기반 굴삭기 접근 감지 시스템)

  • Jo, Byung-Wan;Lee, Yun-Sung;Kim, Do-Keun;Kim, Jung-Hoon;Choi, Pyung-Ho
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.588-597 / 2016
  • According to the annual industrial accident report from the Ministry of Employment and Labor, among the various industrial sectors the number of accidents in the construction industry increases every year, accounting for 27.56% of the total as of 2014; in fact, this share has risen almost 3% over the last four years. Among industrial accidents, heavy machinery causes most of the casualties, through collisions or workers being caught between machines. As reported by the government, in most cases both the heavy machinery operators and the workers were unaware of each other's positions. Nowadays, however, when society demands highly complex structures in minimal time, it is inevitable that multiple pieces of heavy construction equipment operate simultaneously on a construction site. In this paper, we developed an approach detection system for excavators in order to reduce these accidents. The image-based approach detection system consists of a camera, an approach detection sensor, and an Around View Monitor (AVM). Because the system does not require additional communication infrastructure such as a server, it is also applicable to small-scale construction sites and to machinery other than excavators.
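
A minimal sketch of how such a proximity warning might be wired, assuming a generic distance-sensor reading and a hypothetical AVM display callback; the thresholds and function names are illustrative, not the system described in the paper.

```python
from typing import Callable

WARNING_DISTANCE_M = 5.0   # illustrative thresholds, not from the paper
DANGER_DISTANCE_M = 2.0

def check_proximity(distance_m: float,
                    show_on_avm: Callable[[str], None]) -> str:
    """Map a sensed worker distance to a warning level shown on the AVM."""
    if distance_m <= DANGER_DISTANCE_M:
        level = "DANGER: stop swing/travel"
    elif distance_m <= WARNING_DISTANCE_M:
        level = "WARNING: worker nearby"
    else:
        level = "CLEAR"
    show_on_avm(level)
    return level

# Example with a stand-in display callback.
check_proximity(1.6, show_on_avm=lambda msg: print("[AVM]", msg))
```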

Augmented Reality Authoring Tool with Marker & Gesture Interactive Features (마커 및 제스처 상호작용이 가능한 증강현실 저작도구)

  • Shim, Jinwook;Kong, Minje;Kim, Hayoung;Chae, Seungho;Jeong, Kyungho;Seo, Jonghoon;Han, Tack-Don
    • Journal of Korea Multimedia Society / v.16 no.6 / pp.720-734 / 2013
  • In this paper, we propose an augmented reality authoring tool with which users can easily create augmented reality content using hand-gesture and marker-based interaction methods. Previous augmented reality authoring tools focused on augmenting a virtual object, and to interact with such content users relied on markers or sensors. We address this limited interaction by combining a marker-based interaction method with a gesture interaction method that uses a depth-sensing camera, the Kinect. With the proposed system, users can easily develop simple marker-based augmented reality content through its interface. Beyond providing fragmentary content, the system also offers methods by which users can actively interact with the augmented reality content. This work provides two marker-based interaction methods, one using two markers and the other using marker occlusion. In addition, by recognizing and tracking the user's bare hand, the system provides gesture interactions that can zoom in, zoom out, move, and rotate an object. A heuristic evaluation of the authoring tool and a usability comparison between the marker and gesture interactions confirmed positive results.
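
One way to realize a marker-occlusion interaction like the one mentioned above is to treat a registered marker that disappears while a reference marker stays visible as a "pressed" event. The sketch below shows that logic over hypothetical per-frame detection results; the marker IDs and the per-frame set of visible IDs are assumptions, not the paper's tracker output.

```python
from typing import Set

class OcclusionButton:
    """Fires once when the button marker becomes hidden (e.g. covered by a
    hand) while the base marker of the same content stays visible."""

    def __init__(self, base_id: int, button_id: int) -> None:
        self.base_id = base_id
        self.button_id = button_id
        self.button_was_visible = False

    def update(self, visible_ids: Set[int]) -> bool:
        base_visible = self.base_id in visible_ids
        button_visible = self.button_id in visible_ids
        pressed = base_visible and self.button_was_visible and not button_visible
        self.button_was_visible = button_visible
        return pressed

# Hypothetical per-frame marker detections (ID 1 = base, ID 2 = button).
frames = [{1, 2}, {1, 2}, {1}, {1}, {1, 2}]
button = OcclusionButton(base_id=1, button_id=2)
for i, ids in enumerate(frames):
    if button.update(ids):
        print("occlusion press detected at frame", i)   # frame 2
```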

Place Recognition Using Ensemble Learning of Mobile Multimodal Sensory Information (모바일 멀티모달 센서 정보의 앙상블 학습을 이용한 장소 인식)

  • Lee, Chung-Yeon;Lee, Beom-Jin;On, Kyoung-Woon;Ha, Jung-Woo;Kim, Hong-Il;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.21 no.1 / pp.64-69 / 2015
  • Place awareness is essential for the location-based services that are widely provided to smartphone users. However, traditional GPS-based methods are only valid outdoors, where the GPS signal is strong, and also require symbolic place information for the physical location. In this paper, environmental sounds and images are used to recognize important aspects of each place. The proposed method extracts feature vectors from visual, auditory, and location data recorded by a smartphone with built-in camera, microphone, and GPS sensor modules. The heterogeneous feature vectors are then learned by an ensemble method that trains a separate classifier on each group of feature vectors and votes to produce the highest-weighted result. The proposed method is evaluated for place recognition on a dataset of 3,000 samples from six places, and the experimental results show remarkably improved recognition accuracy when using all kinds of sensory data compared with results using data from a single sensor or audio-visual integrated data only.
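
A minimal sketch of the per-modality ensemble idea, assuming scikit-learn is available: one classifier per sensory modality (visual, auditory, location), combined by weighted soft voting. The synthetic features, classifier choice, and weights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n, n_places = 300, 6
labels = rng.integers(0, n_places, size=n)

# Synthetic stand-ins for per-modality feature vectors.
modalities = {
    "visual":   rng.normal(labels[:, None], 1.0, size=(n, 16)),
    "auditory": rng.normal(labels[:, None], 2.0, size=(n, 8)),
    "location": rng.normal(labels[:, None], 0.5, size=(n, 2)),
}
weights = {"visual": 1.0, "auditory": 0.5, "location": 1.5}

# Train one classifier per modality, then combine by weighted soft voting.
models = {m: LogisticRegression(max_iter=1000).fit(X[:200], labels[:200])
          for m, X in modalities.items()}
proba = sum(weights[m] * models[m].predict_proba(modalities[m][200:])
            for m in modalities)
pred = np.argmax(proba, axis=1)
print("ensemble accuracy: %.2f" % (pred == labels[200:]).mean())
```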