• Title/Summary/Keyword: camera monitoring


A STUDY FOR THE DETERMINATION OF KOMPSAT I CROSSING TIME OVER KOREA (I): EXAMINATION OF SOLAR AND ATMOSPHERIC VARIABLES (다목적 실용위성 1호의 한반도 통과시각 결정을 위한 연구 (I): 태양 및 대기 변수 조사)

  • 권태영;이성훈;오성남;이동한
    • Journal of Astronomy and Space Sciences
    • /
    • v.14 no.2
    • /
    • pp.330-346
    • /
    • 1997
  • Korea Multi-Purpose Satellite I (KOMPSAT-I), the first multi-purpose Korean satellite, will be launched in the third quarter of 1999 and will operate in a sun-synchronous orbit for cartography, ocean color monitoring, and space environment monitoring. The main mission of the Electro-Optical Camera (EOC), one of the KOMPSAT-I sensors, is to provide images for the production of scale maps of Korea. The EOC collects panchromatic imagery with a ground sample distance of 6.6 m at nadir through a visible spectral band of 510~730 nm. To determine the KOMPSAT-I crossing time over Korea, this study examines the diurnal variation of solar and atmospheric variables that can greatly influence EOC imagery. The results are as follows: 1) After 10:30 a.m. at the winter solstice, the solar zenith angle is less than $70^{\circ}$, and the expected flux in the EOC spectral band over land under clear sky is greater than about $2.4mW/cm^2$. 2) During the daytime, the distribution of cloud cover (clear sky) shows a minimum (maximum) at about 11:00 a.m. Although the frequency of poor visibility due to fog decreases from early morning toward noon, its effect on the distribution of clear sky is negligible. From this examination, it is concluded that a KOMPSAT-I crossing time over Korea between 10:30 and 11:30 a.m. is adequate.
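The zenith-angle criterion above follows from the standard solar geometry relation cos θz = sin φ sin δ + cos φ cos δ cos h. A minimal sketch, assuming the textbook formula and representative values for Korea at the winter solstice (the paper's exact radiative model is not given here):

```python
import math

def solar_zenith_deg(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle from latitude, solar declination, and hour angle:
    cos(theta_z) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(h)."""
    lat, decl, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.acos(cos_z))

# Assumed values: central Korea (~37.5 N), winter-solstice declination
# (~ -23.44 deg), 10:30 local solar time -> hour angle -22.5 deg (15 deg/hour)
zenith = solar_zenith_deg(37.5, -23.44, -22.5)
print(round(zenith, 1))  # ~64.5, consistent with "less than 70 deg after 10:30"
```

At local solar noon (hour angle 0) the zenith angle reaches its daily minimum, which is why later crossing times relax the illumination constraint.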


A study on the osteoblast differentiation using osteocalcin gene promoter controlling luciferase expression (리포터유전자를 이용한 조골세포 분화정도에 관한 연구)

  • Kim, Kyoung-Hwa;Park, Yoon-Jeong;Lee, Yong-Moo;Han, Jung-Suk;Lee, Dong-Soo;Lee, Seung-Jin;Chung, Chong-Pyoung;Seol, Yang-Jo
    • Journal of Periodontal and Implant Science
    • /
    • v.36 no.4
    • /
    • pp.839-847
    • /
    • 2006
  • The aim of this study is to monitor reporter gene expression under the osteocalcin gene promoter, using a real-time molecular imaging system, as a tool to investigate osteoblast differentiation. The promoter region of mouse osteocalcin gene 2 (mOG2), the best-characterized osteoblast-specific gene, was inserted into a promoterless luciferase reporter vector. Expression of the reporter gene was confirmed, and the relationship between reporter gene expression and osteoblastic differentiation was evaluated. Gene expression during osteoblastic differentiation on biomaterials was monitored using a real-time molecular imaging system. Luciferase was expressed only in cells transduced with pGL4/mOGP, and the level of expression was statistically higher in cells cultured in mineralization medium than in cells in growth medium. The CCCD camera detected the luciferase expression, and differentiation-dependent luminescence intensity was visible. The cells produced osteocalcin with a time-dependent increase in BMP-2-treated cells, and a difference between BMP-2-treated and untreated cells appeared at 14 days. A difference in the level of luciferase expression under pGL4/mOGP between BMP-2-treated and untreated cells appeared at 3 days. The CCCD camera detected luciferase expression in cells transduced with pGL4/mOGP on a Ti disc, and differentiation-dependent luminescence intensity was visible. This study shows that 1) expression of luciferase is regulated by the mouse OC promoter, 2) the CCCD detection system is a reliable quantitative gene detection tool for osteoblast differentiation, and 3) the dynamics of mouse OC promoter regulation during osteoblast differentiation can be monitored in real time and quantitatively on biomaterials. The present system is very reliable for monitoring osteoblast differentiation in real time and may be used to monitor the effects of growth factors, drugs, cytokines, and biomaterials on osteoblast differentiation in animals.

A Study on Digital Color Reproduction for Recording Color Appearance of Cultural Heritage (문화유산의 현색(顯色) 기록화를 위한 디지털 색재현 연구)

  • Song, Hyeong Rok;Jo, Young Hoon
    • Journal of Conservation Science
    • /
    • v.38 no.2
    • /
    • pp.154-165
    • /
    • 2022
  • The color appearance of cultural heritage is an essential factor for interpreting manufacturing techniques, guiding conservation treatment, and monitoring condition. Therefore, this study systematically established color reproduction procedures based on a digital color management system for the portrait of Gwon Eungsu, and proposed various application strategies for recording and conserving cultural heritage. The overall color reproduction process was conducted in the following order: photography condition setting, standard color measurement, digital photography, color correction, and color space creation. Compared with the actual color appearance, the digital image processed with the camera maker's profile showed an average color difference of Δ10.1, whereas the digital reproduction based on the color management system exhibited an average color difference of Δ1.1, which is close to the actual color appearance. This means that even when digital photography conditions are optimized, recording the color appearance is difficult when relying on the correction algorithm developed by the camera maker. Therefore, digital color reproduction of cultural heritage requires color correction and color space creation based on the raw digital image, which is a crucial process for documenting color appearance. Additionally, recording color appearance through digital color reproduction is important for condition evaluation, conservation treatment, and restoration of cultural heritage, and the resulting standard imaging data can be used for discoloration monitoring.
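The Δ values quoted above are color differences; the abstract does not state which ΔE formula was used, so the sketch below assumes the simplest one, CIE76 (Euclidean distance in CIELAB), with hypothetical patch values:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) triples in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (62.0, 14.0, 21.0)   # hypothetical measured target patch
reproduced = (61.2, 14.5, 21.9)  # hypothetical corrected reproduction
print(round(delta_e76(reference, reproduced), 2))  # ~1.3, i.e. near Δ1.1
```

A ΔE of about 1 is commonly treated as barely perceptible, while Δ10 is an obvious mismatch, which is the practical gap the paper reports between maker-profile correction and full color management.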

Towards Efficient Aquaculture Monitoring: Ground-Based Camera Implementation for Real-Time Fish Detection and Tracking with YOLOv7 and SORT (효율적인 양식 모니터링을 향하여: YOLOv7 및 SORT를 사용한 실시간 물고기 감지 및 추적을 위한 지상 기반 카메라 구현)

  • TaeKyoung Roh;Sang-Hyun Ha;KiHwan Kim;Young-Jin Kang;Seok Chan Jeong
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.73-82
    • /
    • 2023
  • With 78% of current fisheries workers being elderly, there is a pressing need to address labor shortages. Consequently, active research on smart aquaculture technologies, centered on object detection and tracking algorithms, is underway. These technologies allow fish size analysis and behavior pattern forecasting, facilitating the development of real-time monitoring and automated systems. Our study utilized video data from cameras installed outside aquaculture facilities and implemented fish detection and tracking algorithms, aiming to avoid the high maintenance costs of underwater cameras caused by corrosion from ammonia and pH levels. We evaluated the performance of a real-time system using YOLOv7 for fish detection and the SORT algorithm for movement tracking. The YOLOv7 results demonstrated a trade-off between recall and precision, minimizing false detections caused by lighting, water currents, and shadows. Effective tracking was confirmed through re-identification. This research holds promise for enhancing the operational efficiency of smart aquaculture and improving fishery facility management.
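The core of SORT-style tracking is associating the detector's boxes in each new frame with existing tracks by bounding-box overlap. A minimal sketch of that association step (full SORT also uses a Kalman filter for motion prediction and the Hungarian algorithm for matching; the greedy version here only illustrates the idea):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing track boxes to new detections by IoU.
    Returns (track_index, detection_index) pairs above the threshold."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= iou_threshold and ti not in used_t and di not in used_d:
            matched.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matched
```

Unmatched detections would start new tracks and unmatched tracks would be aged out, which is how identities persist across frames for the re-identification check mentioned above.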

Development of Image-map Generation and Visualization System Based on UAV for Real-time Disaster Monitoring (실시간 재난 모니터링을 위한 무인항공기 기반 지도생성 및 가시화 시스템 구축)

  • Cheon, Jangwoo;Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_2
    • /
    • pp.407-418
    • /
    • 2018
  • The frequency and risk of disasters are increasing due to environmental and social factors. To respond effectively to disasters that occur unexpectedly, it is very important to quickly obtain up-to-date information about the target area. An image-map generated at high speed makes it possible to judge the situation intuitively and thus to cope with the disaster quickly and effectively. In this study, we propose an image-map generation and visualization system based on UAV images for real-time disaster monitoring. The proposed system consists of an aerial segment and a ground segment. In the aerial segment, the UAV system acquires sensory data from a digital camera and a GPS/IMU sensor, and a communication module transmits the data to the ground server in real time. In the ground segment, the transmitted sensor data are processed to generate image-maps, which are visualized on a geo-portal. We conducted experiments to check the accuracy of the image-maps produced by the system, using checkpoints obtained through a ground survey in the data acquisition area. When calculating the difference between adjacent image-maps, the relative accuracy was 1.58 m. We also confirmed the absolute accuracy of positions measured from individual image-maps: the maps matched the existing base map with an absolute accuracy of 0.75 m. Finally, we measured the processing time of each step up to visualization of the image-map; when an image-map was generated with a GSD of 10 cm, it took 1.67 seconds to visualize. The proposed system is expected to be applicable to real-time monitoring for disaster response.
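The relative and absolute accuracies quoted above are positional differences at surveyed checkpoints. A minimal sketch of that evaluation, assuming a simple mean 2-D distance over corresponding checkpoint coordinates (the paper's exact error statistic is not stated here, and the coordinates below are hypothetical):

```python
import math

def mean_position_error(points_a, points_b):
    """Mean 2-D distance between corresponding checkpoint coordinates,
    e.g. the same checkpoint measured on two adjacent image-maps
    (relative accuracy) or on an image-map and a reference map (absolute)."""
    dists = [math.hypot(xa - xb, ya - yb)
             for (xa, ya), (xb, yb) in zip(points_a, points_b)]
    return sum(dists) / len(dists)

# hypothetical checkpoint coordinates (metres) on two adjacent image-maps
map1 = [(100.0, 200.0), (150.0, 260.0)]
map2 = [(101.2, 200.9), (151.0, 258.8)]
print(round(mean_position_error(map1, map2), 2))  # ~1.53 m
```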

An Illumination-Robust Driver Monitoring System Based on Eyelid Movement Measurement (조명에 강인한 눈꺼풀 움직임 측정기반 운전자 감시 시스템)

  • Park, Il-Kwon;Kim, Kwang-Soo;Park, Sangcheol;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.3
    • /
    • pp.255-265
    • /
    • 2007
  • In this paper, we propose a new illumination-robust drowsy driver monitoring system using a single CCD (Charge-Coupled Device) camera for intelligent vehicles, operating both day and night. For this system, which monitors the driver's eyes while driving, eye detection and the measurement of eyelid movement are important preprocessing steps. We therefore propose an efficient illumination compensation algorithm to improve eye detection performance, together with an eyelid movement measuring method for efficient drowsiness detection under various illumination conditions. For real-time operation, a cascaded SVM (Support Vector Machine) is applied as an efficient eye verification method. Furthermore, to evaluate the proposed algorithm, we collected video data of drivers under various illumination conditions during the day and at night. On these data we achieved an average eye detection rate of over 98%, and PERCLOS (the percentage of eye-closed time during a period) is reported as the drowsiness detection result of the proposed system.
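PERCLOS, the drowsiness measure named above, is simply the proportion of time within a window during which the eye is judged closed. A minimal sketch over per-frame eyelid states (the frame labels here are hypothetical; the paper's closed/open decision comes from its eyelid movement measurement):

```python
def perclos(closed_flags):
    """PERCLOS: percentage of frames in a window where the eye is closed.
    `closed_flags` is a sequence of 1 (closed) / 0 (open) per frame."""
    return 100.0 * sum(closed_flags) / len(closed_flags)

# hypothetical per-frame eyelid states over a sliding window
window = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
print(perclos(window))  # 40.0
```

In practice the window slides over the video stream, and a PERCLOS value above a chosen threshold triggers the drowsiness alarm.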

A Study of Baby Sleeping Positions Sensing and Safety Band Using an Accelerometer (가속도 센서를 이용한 아기 수면자세 감지 및 안전 밴드에 관한 연구)

  • Yoon, Ji-Min;Lim, Chae-Young;Kim, Kyung-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.6
    • /
    • pp.11-18
    • /
    • 2010
  • In this paper, we introduce a device fabricated to monitor the sleeping positions of infants with a 3-axis accelerometer. Sleep monitoring studies have usually been conducted in two ways: the first is to monitor sleeping posture by installing a camera in the sleeping room and continuously recording sleep, and the other is to monitor the output of pressure sensors. Both approaches can obtain relatively accurate sleeping posture data, but they have many disadvantages, such as constraints on spaces and places, the installation of sensors or cameras, and high cost, and they are neither easy to apply to babies nor comfortable for them. The proposed method uses the X-, Y-, and Z-axis output values of a 3-axis accelerometer to recognize a hazardous face-down sleeping position and triggers a buzzer alarm. The measured data are transmitted over Bluetooth for real-time wireless monitoring of sleeping posture, which helps prevent safety hazards such as infants choking while sleeping face down.
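With a body-worn 3-axis accelerometer, a resting infant's posture can be read off from which axis gravity (about 1 g) falls along. A minimal sketch, assuming the sensor is mounted on the torso with +Z pointing away from the chest (the paper's actual mounting and thresholds are not stated here):

```python
def sleeping_posture(x, y, z, threshold=0.5):
    """Classify posture from 3-axis accelerometer output in g.

    Assumption: sensor on the chest, +Z away from the body, so gravity
    along +Z means lying on the back (supine), along -Z means face down
    (prone), and otherwise the infant is lying on a side.
    """
    if z >= threshold:
        return "supine"
    if z <= -threshold:
        return "prone"  # hazardous position: trigger the buzzer alarm
    return "side"

def alarm_needed(x, y, z):
    """Sound the buzzer only for the hazardous face-down position."""
    return sleeping_posture(x, y, z) == "prone"
```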

A Development of Active Monitoring and Approach Alarm System for Marine Buoy Protection and Ship Accident Prevention based on Trail Cameras and AIS (해상 부이 보호 및 선박 사고 예방을 위한 트레일 카메라-AIS 연계형 능동감시 및 접근경보 시스템 개발)

  • Hwang, Hun-Gyu;Kim, Bae-Sung;Kim, Hyen-Woo;Gang, Yong-Soo;Kim, Dae-Han
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.7
    • /
    • pp.1021-1029
    • /
    • 2018
  • Marine buoys are operated in various roles: as navigation route and danger markers, for weather and environmental monitoring, as military strategic elements, etc. If a marine buoy is damaged, its recovery or replacement consumes significant cost and time because of the severe environmental conditions, and the damage creates a risk of secondary accidents. In this paper, we develop an active monitoring and approach alarm system using trail cameras and AIS for the protection of marine buoys. To do this, we analyzed existing research and similar systems, extracted requirements for enhancement, and designed a system architecture that applies the enhanced elements. The main considerations for system enhancement are: integration of AIS and trail cameras, adoption of a phased alarm technique for approaching ships, application of a selective communication module, image processing of ships for alarm generation, and application of thermal cameras. We then implemented the system based on the designed architecture and verified its effectiveness through laboratory- and field-level tests.
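The phased alarm technique mentioned above escalates the alert level as a ship (tracked via AIS position or camera detection) closes on the buoy. A minimal sketch with entirely hypothetical distance thresholds (the paper does not state its actual phase boundaries):

```python
def alarm_level(distance_m):
    """Phased approach alarm by ship-to-buoy distance.
    The threshold values are illustrative, not from the paper."""
    if distance_m <= 100:
        return "danger"    # imminent collision risk: strongest alert
    if distance_m <= 300:
        return "warning"   # ship closing on the buoy
    if distance_m <= 1000:
        return "caution"   # ship inside the monitored zone
    return "normal"
```

Tying the level to distance rather than a single on/off trigger avoids alarm floods from distant traffic while still escalating early enough to warn an approaching vessel.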

A Study of Hazard Analysis and Monitoring Concepts of Autonomous Vehicles Based on V2V Communication System at Non-signalized Intersections (비신호 교차로 상황에서 V2V 기반 자율주행차의 위험성 분석 및 모니터링 컨셉 연구)

  • Baek, Yun-soek;Shin, Seong-geun;Ahn, Dae-ryong;Lee, Hyuck-kee;Moon, Byoung-joon;Kim, Sung-sub;Cho, Seong-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.6
    • /
    • pp.222-234
    • /
    • 2020
  • Autonomous vehicles are equipped with a wide range of sensors such as GPS, RADAR, LIDAR, cameras, and IMUs, and drive by recognizing and judging various transportation systems at intersections in the city. Intersections account for 88% of all accidents involving autonomous vehicles, owing to the limits of prediction and judgment for areas outside the sensing range. Research on non-signalized intersection collision avoidance strategies through V2V and V2I, as well as on safe intersection driving under failure conditions, is underway, but existing verification covers only simple intersection scenarios and typical V2V failures. In this paper, we analyze the architecture of the V2V module, analyze the causal factors for each V2V component, and define the failure modes. We present intersection scenarios for various road conditions and traffic volumes, follow the ISO 26262 Part 3 process, and perform HARA (Hazard Analysis and Risk Assessment) to analyze the risk of the autonomous vehicle based on simulation. We present the ASIL resulting from the risk analysis, propose a monitoring concept for each component of the V2V module, and present the monitoring coverage.
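In the ISO 26262-3 HARA named above, each hazardous event is classified by severity (S1-S3), exposure (E1-E4), and controllability (C1-C3), and the ASIL is read from a fixed table. A minimal sketch using the commonly cited equivalence that the table reduces to summing the class indices (this is a convenient shorthand for the standard table, not the paper's own tooling):

```python
def asil(severity, exposure, controllability):
    """ISO 26262-3 ASIL determination from S (1-3), E (1-4), C (1-3).

    The standard table is equivalent to summing the indices:
    10 -> ASIL D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM.
    """
    total = severity + exposure + controllability
    return {10: "D", 9: "C", 8: "B", 7: "A"}.get(total, "QM")
```

For example, a V2V message loss that is highly exposed and hard to control at a busy non-signalized intersection (S3/E4/C3) maps to ASIL D, the most demanding integrity level.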

Class 1·3 Vehicle Classification Using Deep Learning and Thermal Image (열화상 카메라를 활용한 딥러닝 기반의 1·3종 차량 분류)

  • Jung, Yoo Seok;Jung, Do Young
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.6
    • /
    • pp.96-106
    • /
    • 2020
  • To overcome the limitations of traffic monitoring with embedded sensors such as loop and piezo sensors, a thermal imaging camera was installed on the roadside. As the length of Class 1 vehicles (passenger cars) increases, it is becoming difficult to distinguish them from Class 3 vehicles (2-axle trucks) using an embedded sensor. The collected images were labeled to generate training data; in total, 17,536 vehicle images (640x480 pixels) were produced. A CNN (Convolutional Neural Network) was used to classify vehicles based on thermal images. Despite the limited data volume and quality, a classification accuracy of 97.7% was achieved, showing the feasibility of an AI-based traffic monitoring system. If more training data are collected in the future, 12-class classification will be possible, and AI-based traffic monitoring will be able to recognize not only the 12 standard classes but also new categories such as eco-friendly vehicles, vehicles in violation, and motorcycles, which can serve as statistical data for national policy, research, and industry.
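The 97.7% figure above is overall classification accuracy. A minimal sketch of how that metric (plus per-class recall, useful when Class 1 and Class 3 counts are imbalanced) is computed from a model's predictions; the labels below are hypothetical and the CNN itself is out of scope here:

```python
from collections import Counter

def class_metrics(y_true, y_pred):
    """Overall accuracy (%) and per-class recall (%) for a classifier."""
    acc = 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    totals = Counter(y_true)
    hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    recall = {c: 100.0 * hits[c] / totals[c] for c in totals}
    return acc, recall

# hypothetical labels: "c1" = passenger car, "c3" = 2-axle truck
truth = ["c1", "c1", "c3", "c3", "c1", "c3"]
pred  = ["c1", "c3", "c3", "c3", "c1", "c3"]
acc, recall = class_metrics(truth, pred)
```

Reporting recall per class would reveal whether long passenger cars are still being confused with 2-axle trucks even when overall accuracy looks high.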