• Title/Summary/Keyword: Smoke Sensor

A Fire Prevention System of the Nacelle of Wind Turbine Generator System Based on Broadband Powerline Communication (광대역 전력선통신 기반 풍력발전기 너셀 내부 화재예방시스템)

  • Kim, Hyun-Sik;Ju, Woo-Jin;Kang, Seog Geun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.6
    • /
    • pp.1229-1234
    • /
    • 2018
  • In this paper, a fire prevention system based on broadband powerline communication (PLC) is implemented, and a demonstration experiment is carried out to prevent, or promptly respond to, possible fires within the nacelle of a wind turbine generator system (WTGS). For this purpose, an inductive coupler with satisfactory attenuation characteristics in the frequency region used for high-speed PLC is also manufactured. It is confirmed that the implemented system can monitor environmental changes inside the nacelle in real time by transmitting the readings of the temperature, flame, and smoke sensors installed in the nacelle, together with thermal images recorded by a thermal camera, to the ground control center through the PLC system. Therefore, it is considered that the implemented system will significantly improve the reliability of the fire monitoring and prevention system of the WTGS in conjunction with the existing safety system. A hedged code sketch of such a nacelle-side monitoring loop follows this entry.
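A minimal sketch of the nacelle-side monitoring loop described in this abstract. The sensor read functions, thresholds, polling rate, and the ground-station address are hypothetical placeholders; the broadband PLC link is assumed to behave as a transparent TCP/IP bridge between the nacelle and the ground control center, which the paper does not spell out.

```python
# Hypothetical nacelle-side monitoring loop (not the authors' implementation).
import json
import socket
import time

GROUND_STATION = ("192.0.2.10", 5000)  # assumed address of the ground control center
POLL_INTERVAL_S = 1.0                   # assumed 1 Hz sensor polling


def read_sensors():
    """Return one sample from the nacelle sensors (stubbed values here)."""
    return {
        "timestamp": time.time(),
        "temperature_c": 42.5,    # temperature sensor reading (stub)
        "flame_detected": False,  # flame sensor reading (stub)
        "smoke_ppm": 3.1,         # smoke sensor reading (stub)
    }


def monitor_nacelle():
    """Continuously forward sensor samples over the PLC-bridged network link."""
    with socket.create_connection(GROUND_STATION) as link:
        while True:
            sample = read_sensors()
            # Each sample is sent as one JSON line so the ground side can
            # track the nacelle environment in near real time.
            link.sendall((json.dumps(sample) + "\n").encode("utf-8"))
            time.sleep(POLL_INTERVAL_S)


if __name__ == "__main__":
    monitor_nacelle()
```

The thermal-image stream mentioned in the abstract would travel over the same link but is omitted here for brevity.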

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지
    • /
    • v.21 no.3
    • /
    • pp.129-146
    • /
    • 2017
  • In an emergency such as a fire in a building, visually impaired and blind people are exposed to a greater level of danger than sighted people because they cannot become aware of the fire quickly. Current fire detection methods such as smoke detectors are slow and unreliable because they rely on chemical sensors to detect fire particles. By using a vision sensor instead, fire can be detected much faster, as shown in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because these techniques require hand-crafted features that do not generalize to various scenarios. With the help of recent advances in deep learning, this research addresses the problem with a deep learning-based object detector that detects fire in images from a security camera. Deep learning-based approaches learn features automatically, so they usually generalize well to various scenes. To maximize detection capability, we applied the latest computer vision technologies, such as the YOLO detector, to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks with slightly different model complexity to detect fire at different recall rates. Both models detect fire at 99% average precision, but one model achieves 76% recall at 30 FPS while the other achieves 61% recall at 50 FPS. We also compare the memory consumption of the two models and demonstrate their robustness by testing on various real-world scenarios. A hedged sketch of such a detection loop follows this entry.
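A hedged sketch of running a YOLO-style detector on security-camera frames to flag fire, in the spirit of this paper. The config and weights file names, the confidence threshold, and the use of OpenCV's DNN module with a Darknet-format model are assumptions, not the authors' exact pipeline.

```python
# Hypothetical fire-detection loop using a Darknet YOLO model via OpenCV DNN.
import cv2

CFG = "yolo-fire.cfg"          # hypothetical Darknet config
WEIGHTS = "yolo-fire.weights"  # hypothetical fire-trained weights
CONF_THRESHOLD = 0.5           # assumed detection threshold

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_names = net.getUnconnectedOutLayersNames()


def detect_fire(frame):
    """Return True if any detection exceeds the confidence threshold."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_names):
        for row in output:  # row = [cx, cy, w, h, objectness, class scores...]
            scores = row[5:]
            if row[4] * scores.max() > CONF_THRESHOLD:
                return True
    return False


cap = cv2.VideoCapture(0)  # security-camera stream (device 0 here)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if detect_fire(frame):
        print("Fire detected - raise an audible alarm for the user")
```

In practice the alert would be delivered through an accessible channel (audio or haptic feedback) rather than a console print.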

A Study on an Adaptive Guidance Plan by Quickest Path Algorithm for Building Evacuations due to Fire (건물 화재시 Quickest Path를 이용한 Adaptive 피난경로 유도방안)

  • Sin, Seong-Il;Seo, Yong-Hui;Lee, Chang-Ju
    • Journal of Korean Society of Transportation
    • /
    • v.25 no.6
    • /
    • pp.197-208
    • /
    • 2007
  • Enormous buildings are appearing worldwide with the advancement of construction techniques. Large-scale, complicated structures face increased difficulties in ensuring safety and demand well-matched safety measures. This research introduces up-to-date techniques and systems applied to buildings in other countries. Furthermore, it proposes a direct guidance plan for buildings in case of fire. Since it is possible to install wireless sensor networks that detect fires or the effects of fire, the plan makes use of this information. Accordingly, the authors developed a direct guidance plan based on omnidirectional guidance lights. Using the concept of a non-dominated path, it is possible to select a route that accounts for both time and capacity. Finally, case studies showed that quickest path algorithms were effective for guiding efficient dispersion routes; when certain links on preferred paths were restricted because of temperature and smoke, the algorithm could avoid those links and restrict demand in the network application. Consequently, the algorithms were able to maximize safety and minimize evacuation time, which was the purpose of this study. A hedged sketch of a quickest-path computation follows this entry.
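A minimal sketch of a quickest-path computation for evacuation guidance, illustrating the time-and-capacity trade-off the abstract mentions. It uses the classic formulation in which evacuating a demand of size d along a path with travel time T and bottleneck capacity c takes T + d/c. The corridor network, travel times, capacities, blocked-link set, and demand value are illustrative assumptions.

```python
# Hypothetical quickest-path routing that skips links flagged by heat/smoke sensors.
import heapq


def shortest_time(graph, source, target, min_cap, blocked):
    """Dijkstra over links with capacity >= min_cap, skipping blocked links."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, t, cap in graph.get(u, []):
            if cap < min_cap or (u, v) in blocked:
                continue
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")


def quickest_time(graph, source, target, demand, blocked=frozenset()):
    """Classic quickest-path scheme: for each capacity level c, restrict to links
    with capacity >= c and take the best value of travel_time(c) + demand / c."""
    capacities = {cap for edges in graph.values() for _, _, cap in edges}
    return min(
        shortest_time(graph, source, target, c, blocked) + demand / c
        for c in capacities
    )


# Toy corridor network: node -> [(neighbor, travel time in s, capacity in persons/s)]
corridors = {
    "room": [("hall", 10.0, 2.0), ("stair_a", 25.0, 5.0)],
    "hall": [("exit", 15.0, 1.0)],
    "stair_a": [("exit", 20.0, 4.0)],
}
smoke_blocked = {("room", "hall")}  # link reported impassable by sensors
print(quickest_time(corridors, "room", "exit", demand=60, blocked=smoke_blocked))
```

With the hall link blocked, the stairwell route wins despite its longer walking time because its higher capacity clears the demand faster, which is exactly the kind of dispersion behavior the paper's case studies describe.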

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.283-295
    • /
    • 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Due to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols owing to its reduced Rayleigh scattering effect, which allows a clearer view and more detailed information to be captured from hazed surfaces compared to the visible band. In this study, we proposed a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The purpose of the fusion method is to generate a single integrated image that incorporates complementary information, such as detailed background information from the visible band and land cover information in the haze region from the SWIR band. To this end, this study applied the Laplacian pyramid-based multi-resolution transform, a representative image decomposition approach for image fusion. Additionally, we modified the multi-resolution fusion method by incorporating a haze-guided weight map based on the prior knowledge that SWIR bands contain more information in pixels from the haze region. The proposed method was validated using very high-resolution satellite images from WorldView-3, containing multi-spectral visible and SWIR bands. Experimental data including hazed areas with limited visibility caused by smoke from wildfires were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results showed that the bright features from the SWIR bands in the hazed areas were successfully fused into the integrated images without any loss of detailed information from the visible bands. A hedged sketch of Laplacian-pyramid fusion with a weight map follows this entry.
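A hedged sketch of Laplacian-pyramid fusion of a visible and a SWIR band driven by a per-pixel weight map, loosely following the idea in this abstract. The haze score used here (smoothed brightness of the visible band) is a simplified stand-in for the paper's haze-guided weight map, and the file names and pyramid depth are assumptions.

```python
# Hypothetical visible/SWIR fusion via a Laplacian pyramid and a weight map.
import cv2
import numpy as np

LEVELS = 4  # assumed pyramid depth


def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr


def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])  # coarsest Gaussian level closes the pyramid
    return lp


def fuse(visible, swir):
    """Blend detail per pyramid level, favoring SWIR where haze is likely."""
    # Simplified haze proxy: bright, smooth visible regions get a higher SWIR weight.
    weight_swir = np.clip(cv2.GaussianBlur(visible, (31, 31), 0), 0.0, 1.0)
    wp = gaussian_pyramid(weight_swir, LEVELS)

    lp_vis = laplacian_pyramid(visible, LEVELS)
    lp_swir = laplacian_pyramid(swir, LEVELS)
    fused_pyr = [w * s + (1.0 - w) * v for v, s, w in zip(lp_vis, lp_swir, wp)]

    # Collapse the fused pyramid back into a single image.
    fused = fused_pyr[-1]
    for level in reversed(fused_pyr[:-1]):
        fused = cv2.pyrUp(fused, dstsize=(level.shape[1], level.shape[0])) + level
    return fused


visible = cv2.imread("visible_band.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
swir = cv2.imread("swir_band.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
cv2.imwrite("fused.tif", (fuse(visible, swir) * 255).clip(0, 255).astype(np.uint8))
```

Replacing the brightness-based weight with a proper haze estimate would bring the sketch closer to the haze-guided weight map the paper describes; the pyramid blending and reconstruction steps are the standard multi-resolution transform machinery.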