• Title/Summary/Keyword: Safety camera

Search results: 479

P2HYMN: Hybrid Network Systems for Maintenance Support in Power Plants (P2HYMN: 발전소 정비지원 하이브리드 네트워크 시스템)

  • Jin, Young-Hoon; Choo, Young-Yeol
    • Journal of Institute of Control, Robotics and Systems, v.20 no.7, pp.782-787, 2014
  • Due to their complicated steel structures and safety concerns, it is very difficult to deploy wireless networks in power plants. This paper presents a hybrid network, named P2HYMN (Power Plant HYbrid Maintenance Network), encompassing PLC (Power Line Communication), TLC (Telephone Line Communication), and Wireless LAN. The design goal of P2HYMN is to integrate multimedia data such as design drawings of control equipment, process data, and video image data for maintenance operations in electric power plants. A Multiplex Line Communication (MLC) device was designed and implemented to integrate PLC, TLC, and Wireless LAN into P2HYMN. Performance tests of P2HYMN were conducted on a testbed under various conditions. The throughput of TLC was shown to be 39 Mbps. Because the bandwidth requirement per camera is 8.5 Mbps on average, TLC is expected to support more than four video cameras at the same time.
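
As a quick back-of-the-envelope check of the camera-capacity claim in the abstract (a minimal sketch; only the 39 Mbps and 8.5 Mbps figures come from the abstract, the rest is illustrative):

```python
# Rough capacity check for the TLC link described in the abstract.
TLC_THROUGHPUT_MBPS = 39.0      # measured TLC throughput
CAMERA_BANDWIDTH_MBPS = 8.5     # average bandwidth per video camera

max_cameras = int(TLC_THROUGHPUT_MBPS // CAMERA_BANDWIDTH_MBPS)
headroom = TLC_THROUGHPUT_MBPS - max_cameras * CAMERA_BANDWIDTH_MBPS

print(f"Simultaneous camera streams: {max_cameras}")   # 4 full-rate streams
print(f"Remaining headroom: {headroom:.1f} Mbps")       # 5.0 Mbps spare
```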

Vision-based remote 6-DOF structural displacement monitoring system using a unique marker

  • Jeon, Haemin; Kim, Youngjae; Lee, Donghwa; Myung, Hyun
    • Smart Structures and Systems, v.13 no.6, pp.927-942, 2014
  • Structural displacement is an important indicator for assessing structural safety. For structural displacement monitoring, vision-based displacement measurement systems have been widely developed; however, most systems estimate only 1- or 2-DOF translational displacement. To monitor 6-DOF structural displacement with high accuracy, a vision-based displacement measurement system with a uniquely designed marker is proposed in this paper. The system is composed of the uniquely designed marker and a camera with zooming capability, and the relative translational and rotational displacement between the marker and the camera is estimated by finding a homography transformation. The novel marker is designed to make the system robust to measurement noise, based on a sensitivity analysis of the conventional marker, and this has been verified through Monte Carlo simulation. The performance of the displacement estimation has been verified through two kinds of experimental tests, using a shaking table and a motorized stage. The results show that the system estimates the structural 6-DOF displacement, especially the translational displacement along the Z-axis, with high accuracy in real time and is robust to measurement noise.
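
The homography-based pose step described in this abstract can be sketched with OpenCV's standard routines. This is only an illustration of the general technique, not the authors' implementation; the marker corner coordinates, detected pixel locations, and intrinsics `K` below are assumed placeholders:

```python
import cv2
import numpy as np

# Planar marker corners in the marker frame (metres) and their detected pixel
# locations in the current image -- both assumed example inputs.
marker_pts = np.array([[0, 0], [0.1, 0], [0.1, 0.1], [0, 0.1]], dtype=np.float32)
image_pts  = np.array([[412, 305], [604, 310], [598, 498], [408, 492]], dtype=np.float32)
K = np.array([[1200, 0, 640],          # assumed camera intrinsics (after zoom calibration)
              [0, 1200, 360],
              [0,    0,   1]], dtype=np.float64)

# Homography between the marker plane and the image plane.
H, _ = cv2.findHomography(marker_pts, image_pts)

# Decompose into candidate relative rotations/translations (translation is up to plane scale).
_, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t in zip(rotations, translations):
    rvec, _ = cv2.Rodrigues(R)
    print("candidate rotation (rad):", rvec.ravel(), "translation (scaled):", t.ravel())
```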

A Study on the Development of Automatic Ship Berthing System (선박 자동접안시스템 구축을 위한 기초연구)

  • Kim, Y.B.; Choi, Y.W.; Chae, G.H.
    • Journal of Power System Engineering, v.10 no.4, pp.139-146, 2006
  • In this paper, a vector code correlation (VCC) method and an algorithm to improve image processing performance are described for building an effective camera-based measurement system to automatically berth and control a ship equipped with side thrusters. To realize automatic ship berthing, it is indispensable that the berthing assistance system on the ship continuously track a target in the berth to measure the distance to the target and the ship attitude, so that the ship can be moved to the specified location. The considered system consists of four components: a CCD camera, a camera direction controller, a commodity PC with a built-in image processing board, and a signal conversion unit connected to the parallel port of the PC. The objective of this paper is to reduce the image processing time so that the berthing system can keep a safe schedule against risks while approaching the berth. This is achieved by composing a vector code image that utilizes the gradient of a plane approximated from the brightness of pixels in a local region of the image, and by verifying its effectiveness on a commonly used PC. The experimental results show that the proposed method can be applied to a measurement system for automatic ship berthing and achieves roughly a fourfold improvement in image processing time over the typical template matching method.
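
The vector-code idea (quantizing the local brightness gradient of small regions into direction codes and matching codes instead of raw pixels) can be sketched roughly as below. This is a loose reconstruction under assumptions, not the authors' exact VCC algorithm; `vector_code_image` and `match_codes` are hypothetical helpers:

```python
import cv2
import numpy as np

def vector_code_image(gray, block=4, levels=8):
    """Quantize the dominant gradient direction of each block into a small integer code."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    h, w = gray.shape
    codes = np.zeros((h // block, w // block), dtype=np.uint8)
    for i in range(codes.shape[0]):
        for j in range(codes.shape[1]):
            bx = gx[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            by = gy[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            angle = np.arctan2(by, bx) % (2 * np.pi)
            codes[i, j] = int(angle / (2 * np.pi) * levels) % levels
    return codes

def match_codes(scene_codes, target_codes):
    """Slide the target code map over the scene and count code agreements (coarse correlation)."""
    sh, sw = scene_codes.shape
    th, tw = target_codes.shape
    best, best_pos = -1, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            score = np.count_nonzero(scene_codes[y:y+th, x:x+tw] == target_codes)
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best
```

Matching small code maps rather than full-resolution pixel templates is what makes this kind of approach cheaper than plain template matching on a general-purpose PC.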

Hazy Particle Map-based Automated Fog Removal Method with Haziness Degree Evaluator Applied (Haziness Degree Evaluator를 적용한 Hazy Particle Map 기반 자동화 안개 제거 방법)

  • Sim, Hwi Bo; Kang, Bong Soon
    • Journal of Korea Multimedia Society, v.25 no.9, pp.1266-1272, 2022
  • With the recent development of computer vision technology, image-processing-based mechanical devices are being developed to realize autonomous driving. In foggy conditions, the images captured by the cameras of such machines become unclear due to scattering and absorption of light. This lowers the object recognition rate and causes malfunctions. The safety of the technology is critical because a malfunction in autonomous driving can lead to human casualties. To increase the stability of the technology, it is necessary to apply an efficient haze removal algorithm to the camera. In conventional haze removal methods, the haze removal operation is performed regardless of the haze concentration of the input image, so haze is removed excessively and the quality of the resulting image deteriorates. In this paper, we propose an automatic haze removal method that removes haze according to the haze density of the input image by applying Ngo's Haziness Degree Evaluator (HDE) to Kim's haze removal algorithm based on the Hazy Particle Map. Because the proposed method removes haze according to the haze concentration of the input image, it prevents quality degradation of input images that do not require haze removal and solves the problem of excessive haze removal. The superiority of the proposed method is verified through qualitative and quantitative evaluation.
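
The control flow described above (estimate haze density first, then dehaze only as much as needed) can be sketched as follows. The density proxy here is a simple dark-channel statistic standing in for Ngo's HDE, whose exact formula is not given in the abstract, and `remove_haze` is a placeholder for Kim's Hazy Particle Map step:

```python
import cv2
import numpy as np

def haziness_score(bgr, patch=15):
    """Crude haze-density proxy: mean of the dark channel (a bright dark channel suggests haze).
    Stands in for the Haziness Degree Evaluator; not the published formula."""
    dark = cv2.erode(bgr.min(axis=2), np.ones((patch, patch), np.uint8))
    return float(dark.mean()) / 255.0

def remove_haze(bgr, strength):
    """Placeholder for the Hazy Particle Map dehazing step."""
    raise NotImplementedError

def automatic_dehaze(bgr, low=0.2, high=0.6):
    score = haziness_score(bgr)
    if score < low:                                      # nearly haze-free: leave untouched
        return bgr
    strength = min(1.0, (score - low) / (high - low))    # scale removal with estimated density
    return remove_haze(bgr, strength)
```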

Identifying Specifications of Flat Type Signboards Using a Stereo Camera (스테레오 카메라를 이용한 판류형 간판의 규격 판별)

  • Kwon, Sang Il; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.1, pp.69-83, 2020
  • Signboards are standardized by national legislation for the safety of pedestrians and for disaster prevention in urban areas, and they should be installed according to the standard. However, it is not easy to manage signboards systematically because of the large number of signboards installed over a long period and the frequent turnover of stores. In this study, we propose a methodology for identifying signboards that deviate from the standard. To this end, a signboard was photographed with a stereo camera, the three-dimensional coordinates of the signboard were determined from the images, and its horizontal and vertical dimensions were calculated to determine its specifications. To determine the interior and relative orientation parameters of the stereo camera, an outdoor three-dimensional building was used as the test field. The image coordinates of the four vertices of the signboard were then extracted, using deep learning, from images taken at distances of about 15 m to 22 m. After determining the signboard's three-dimensional coordinates from the stereo camera's interior and relative orientation parameters and the image coordinates of the four vertices, the horizontal and vertical sizes were calculated, with an average error of about 2.7 cm. The specification check of ten flat-type signboards showed that all horizontal sizes complied with the standard, but the vertical sizes exceeded it by about 36.5 cm on average. This indicates that flat-type signboards generally need maintenance.
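
The measurement step (triangulating the four signboard vertices from the calibrated stereo pair and taking side lengths) might look roughly like the sketch below; the projection matrices and vertex pixel coordinates are assumed inputs, not the authors' code:

```python
import cv2
import numpy as np

def signboard_size(P_left, P_right, pts_left, pts_right):
    """Triangulate the four signboard vertices and return (width, height) in metres.
    P_left/P_right: 3x4 projection matrices from the interior/relative orientation.
    pts_left/pts_right: 4x2 pixel coordinates of the vertices (e.g. from deep learning),
    ordered top-left, top-right, bottom-right, bottom-left."""
    pts_l = np.asarray(pts_left, dtype=np.float64).T     # shape (2, 4)
    pts_r = np.asarray(pts_right, dtype=np.float64).T
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # homogeneous, shape (4, 4)
    X = (X_h[:3] / X_h[3]).T                                     # four 3D points

    width  = (np.linalg.norm(X[1] - X[0]) + np.linalg.norm(X[2] - X[3])) / 2
    height = (np.linalg.norm(X[3] - X[0]) + np.linalg.norm(X[2] - X[1])) / 2
    return width, height
```

The returned width and height could then be compared directly against the legislated maximum dimensions to flag non-compliant signboards.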

RESEARCH FOR ROBUSTNESS OF THE MIRIS OPTICAL COMPONENTS IN THE SHOCK ENVIRONMENT TEST (MIRIS 충격시험에서의 광학계 안정성 확보를 위한 연구)

  • Moon, B.K.; Kanai, Yoshikazu; Park, S.J.; Park, K.J.; Lee, D.H.; Jeong, W.S.; Park, Y.S.; Pyo, J.H.; Nam, U.W.; Lee, D.H.; Ree, S.W.; Matsumoto, Toshio; Han, W.
    • Publications of The Korean Astronomical Society, v.27 no.3, pp.39-47, 2012
  • MIRIS, the Multi-purpose Infra-Red Imaging System, is the main payload of STSAT-3 (Korea Science & Technology Satellite 3), which will be launched at the end of 2012 (exact date to be determined) by a Russian Dnepr rocket. MIRIS consists of two camera systems, the SOC (Space Observation Camera) and the EOC (Earth Observation Camera). During a shock test of the flight model's stability in the launch environment, some lenses of the SOC EQM (Engineering Qualification Model) were broken. To resolve the lens failure, cause analyses were performed together with visual inspections of the lenses and opto-mechanical parts. After modifications of the SOC opto-mechanical parts, the shock test was performed again and passed. In this paper, we introduce the solution for lens safety and report the test results.

A Study on the Decision of Influence Range of External Temperature in the Tunnel Using Thermal Camera (열화상카메라를 활용한 외부온도의 터널내 영향범위 산정방법 연구)

  • Lee, Yu-Seok; Lee, Tea-Jong; Park, Gwang-Rim; Oh, Young-Seok; Cha, Cheol-Jun
    • Journal of the Korea Institute for Structural Maintenance and Inspection, v.14 no.5, pp.136-143, 2010
  • Three parts of a tunnel are influenced by the outside temperature: the entrance, the exit, and the vents. These parts show a different tendency of deterioration (very rapid deterioration, a wide range of defects, etc.) compared with the rest of the tunnel. Therefore, civil engineers need a different point of view when analyzing defects in these parts and applying retrofit or rehabilitation methods to them. However, during maintenance work, precise inspection, and precise safety diagnosis, these defects have been neglected because they were considered unimportant and attributed to temporary weather and temperature changes. In this study, two urban tunnels were analyzed using a thermal camera to determine the range of the tunnel influenced by the outside temperature and to identify the causes of defects in these parts. Based on the results, the main points for maintenance are presented.

Design of a scintillator-based prompt gamma camera for boron-neutron capture therapy: Comparison of SrI2 and GAGG using Monte-Carlo simulation

  • Kim, Minho; Hong, Bong Hwan; Cho, Ilsung; Park, Chawon; Min, Sun-Hong; Hwang, Won Taek; Lee, Wonho; Kim, Kyeong Min
    • Nuclear Engineering and Technology, v.53 no.2, pp.626-636, 2021
  • Boron-neutron capture therapy (BNCT) is a cancer treatment method that exploits the high neutron reactivity of boron. Monitoring the prompt gamma rays (PGs) produced during neutron irradiation is essential for ensuring the accuracy and safety of BNCT. We investigate the imaging of PGs produced by the boron-neutron capture reaction through Monte Carlo simulations of a gamma camera with a SrI2 scintillator and a parallel-hole collimator; a GAGG scintillator is also used for comparison. The simulations allow the shapes of the energy spectra, which exhibit a peak at 478 keV, to be determined along with the PG images from a boron-water phantom. It is found that increasing the size of the water phantom results in a greater number of image counts and lower contrast. Additionally, a higher septal penetration ratio results in poorer image quality, and the SrI2 scintillator results in higher image contrast. Thus, we can simulate the BNCT process and obtain an energy spectrum with a reasonable shape, as well as suitable PG images. Both GAGG and SrI2 crystals are suitable for PG imaging during BNCT; however, for higher imaging quality, SrI2 and a collimator with a lower septal penetration ratio should be used.

YOLO-based lane detection system (YOLO 기반 차선검출 시스템)

  • Jeon, Sungwoo; Kim, Dongsoo; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.3, pp.464-470, 2021
  • Automobiles have long been used simply as a means of transportation, but as they rapidly become intelligent and smart and consumer expectations rise, research on converging them with IT technology is underway, and basic high-performance functions such as driver convenience and safety are required. As a result, autonomous and semi-autonomous vehicles have been developed; however, these vehicles sometimes deviate from the lane because of environmental problems or situations the vehicle cannot judge, and the lane detector may fail to recognize the lane. To address this lane departure problem in the lane detection system of autonomous vehicles, this paper exploits the fast recognition that is characteristic of YOLO (You Only Look Once) and uses a CSI camera that captures the surrounding environment. We propose a lane detection system that recognizes the driving situation and collects driving data to extract a region of interest.
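
A minimal sketch of the pipeline this abstract describes (CSI camera capture plus a YOLO detector whose boxes supply a lane region of interest). The GStreamer pipeline, the `ultralytics` package, and the `lane_model.pt` weights are assumptions for illustration; the paper's exact YOLO version and training are not specified here:

```python
import cv2
from ultralytics import YOLO   # assumed YOLO implementation; the paper's version may differ

# GStreamer pipeline for a Jetson-style CSI camera (parameters are illustrative).
CSI_PIPELINE = ("nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 "
                "! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")

model = YOLO("lane_model.pt")                      # hypothetical lane-trained weights
cap = cv2.VideoCapture(CSI_PIPELINE, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in model(frame)[0].boxes:              # lane detections for this frame
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        roi = frame[y1:y2, x1:x2]                  # region of interest for later lane logic
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("lanes", frame)
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break
cap.release()
```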

A Study on Radar Video Fusion Systems for Pedestrian and Vehicle Detection (보행자 및 차량 검지를 위한 레이더 영상 융복합 시스템 연구)

  • Sung-Youn Cho; Yeo-Hwan Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.1, pp.197-205, 2024
  • At a time when securing driving safety is the most important point in the development and commercialization of autonomous vehicles, AI and big-data-based algorithms are being studied to advance and optimize the recognition and detection of the various static and dynamic vehicles in front of and around the vehicle. However, although there are many studies on recognizing the same vehicle by exploiting the distinct advantages of radar and cameras, they either do not use deep-learning image processing or, because of radar performance limitations, can identify detections as the same target only at short range. Therefore, a convergence-based vehicle recognition method is needed that builds a dataset collected from radar and camera equipment, calculates the error between the two, and recognizes the detections as the same target. Because data errors occur when detections are judged to be the same object depending on the installation locations of the radar and the CCTV (video) camera, in this paper we aim to develop a technology that links location information according to the installation location.
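
The association step implied above (deciding whether a radar target and a camera detection are the same object after accounting for the installation offset between the two sensors) can be sketched as below; the function name, the ground-plane representation, and the gating threshold are illustrative assumptions, not the authors' method:

```python
import numpy as np

def associate(radar_targets, camera_targets, radar_to_camera_offset, gate=2.0):
    """Greedy nearest-neighbour association of radar and camera detections.
    radar_targets, camera_targets: lists of (x, y) ground-plane positions in metres.
    radar_to_camera_offset: (dx, dy) translation between the two installation points (assumed known).
    gate: maximum position error (metres) to accept two detections as the same object."""
    offset = np.asarray(radar_to_camera_offset, dtype=float)
    pairs, used = [], set()
    for i, r in enumerate(radar_targets):
        r_in_cam = np.asarray(r, dtype=float) + offset        # link location info across installs
        best_j, best_d = None, gate
        for j, c in enumerate(camera_targets):
            if j in used:
                continue
            d = np.linalg.norm(r_in_cam - np.asarray(c, dtype=float))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j, best_d))                  # matched as the same target
    return pairs
```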