• Title/Summary/Keyword: Camera localization


Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.1-20 / 2022
  • As unmanned aerial vehicle (UAV) technology has grown in popularity, it has been introduced for air quality monitoring, where it can be used to estimate sidewalk emission concentrations by calculating road traffic emission factors for different vehicle types. These calculations require simulating the spread of pollutants from one or more given sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emissions Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the concentration of emissions on the sidewalk from different types of vehicles, treated as a line source characterized by vehicle parameters, wind speed and direction, and pollutant concentration, all sampled over an hourly interval using a UAV equipped with a monocular camera. A YOLOv5 deep learning model detects vehicles, Deep SORT (Simple Online and Realtime Tracking) tracks them, a homography transformation matrix localizes each vehicle and yields its speed and acceleration, and finally a Gaussian plume dispersion model estimates the CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values provide a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.
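
As a rough illustration of the dispersion step, the standard Gaussian point-source plume equation can be sketched in a few lines (this is a generic textbook form with hypothetical parameter values, not the paper's MOVES-coupled implementation):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m^3) at a receptor offset y crosswind and z above
    ground, for a continuous source of strength Q (g/s) in wind speed u (m/s).
    sigma_y, sigma_z are dispersion coefficients evaluated at the receptor's
    downwind distance; H is the effective release height (near 0 for exhaust).
    The (z + H) term reflects the plume off the ground."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

A road can then be approximated as a line source by summing such point-source contributions along the lane.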

Anomaly detection of isolating switch based on single shot multibox detector and improved frame differencing

  • Duan, Yuanfeng;Zhu, Qi;Zhang, Hongmei;Wei, Wei;Yun, Chung Bang
    • Smart Structures and Systems / v.28 no.6 / pp.811-825 / 2021
  • High-voltage isolating switches play a paramount role in ensuring the safety of power supply systems. However, their exposure to outdoor environmental conditions may cause serious physical defects, posing great risk to power supply systems and society. Image processing-based methods have been used for anomaly detection, but their accuracy is affected by numerous uncertainties arising from manually extracted features, which keeps anomaly detection of isolating switches challenging. In this paper, a vision-based anomaly detection method for isolating switches is proposed that uses the rotational angle of the switch system for more accurate and direct anomaly detection, combining deep learning (DL) and image processing methods: the Single Shot Multibox Detector (SSD), an improved frame differencing method, and the Hough transform. The SSD is a deep learning method for object classification and localization. In addition, an improved frame differencing method is introduced for better feature extraction, and a Hough transform is adopted for rotational angle calculation. A number of experiments were conducted for anomaly detection of single and multiple switches using video frames. The results demonstrate that the SSD outperforms the You-Only-Look-Once network. The effectiveness and robustness of the proposed method were demonstrated under various conditions, such as different illumination levels and camera locations, using 96 videos from the experiments.
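
The basic differencing step underlying the authors' improved variant can be sketched as follows (a plain absolute-difference mask over list-of-lists grayscale frames; the threshold of 25 is an assumption for illustration):

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask by absolute differencing of two grayscale frames.
    prev, curr: 2-D lists of intensities (0-255); returns 1 where the pixel
    changed by more than the threshold (e.g. the moving switch arm)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]
```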

Sensor System for Autonomous Mobile Robot Capable of Floor-to-floor Self-navigation by Taking On/off an Elevator (엘리베이터를 통한 층간 이동이 가능한 실내 자율주행 로봇용 센서 시스템)

  • Min-ho Lee;Kun-woo Na;Seungoh Han
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.118-123 / 2023
  • This study presents a sensor system for an autonomous mobile robot capable of floor-to-floor self-navigation. The robot was built on the Turtlebot3 hardware platform and ROS2 (Robot Operating System 2). It uses the Navigation2 package to estimate and calibrate the moving path, acquiring a map with SLAM (simultaneous localization and mapping). For elevator boarding, ultrasonic sensor data are compared against a threshold distance to determine whether the elevator door is open. The elevator's current floor is determined through image processing of a ceiling-fixed camera capturing the elevator's LCD (liquid crystal display)/LED (light emitting diode) display. To realize seamless communication anywhere in the building, a LoRa (long-range) communication module was installed on the robot to support deciding whether the elevator door is open, when to get off the elevator, and how to reach the destination.
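
The door-open test described above, comparing ultrasonic readings with a threshold distance, might look like this (the threshold and voting ratio are hypothetical values, not the paper's calibrated ones):

```python
def door_is_open(readings_cm, threshold_cm=150.0, min_ratio=0.8):
    """Declare the elevator door open when most ultrasonic range samples
    exceed the threshold, i.e. the beam passes through the doorway into
    the car instead of reflecting off the closed door."""
    if not readings_cm:
        return False
    over = sum(1 for r in readings_cm if r > threshold_cm)
    return over / len(readings_cm) >= min_ratio
```

Voting over several samples, rather than trusting a single reading, guards against the occasional spurious echo.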

Patient Setup Aid with Wireless CCTV System in Radiation Therapy (무선 CCTV 시스템을 이용한 환자 고정 보조기술의 개발)

  • Park, Yang-Kyun;Ha, Sung-Whan;Ye, Sung-Joon;Cho, Woong;Park, Jong-Min;Park, Suk-Won;Huh, Soon-Nyung
    • Radiation Oncology Journal / v.24 no.4 / pp.300-308 / 2006
  • Purpose: To develop a wireless CCTV system in a semi-beam's-eye view (BEV) to monitor daily patient setup in radiation therapy. Materials and Methods: To obtain patient images in semi-BEV, CCTV cameras were installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via a radio frequency signal (~2.4 GHz, 10 mW RF output). An expected problem with this system is radio frequency interference, which was solved using RF shielding with Cu foils and median filtering software. The images are analyzed by custom-made software in which a user indicates three anatomical landmarks on the patient surface; the corresponding 3-dimensional structures are then automatically obtained and registered by a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm was applied to phantom images to investigate the setup accuracy. A respiratory gating system was also investigated using real-time image processing: a line-laser marker projected on the patient's surface is extracted by binary image processing, and the breathing pattern is calculated and displayed in real time. Results: More than 80% of the camera noise from the linear accelerator was eliminated by wrapping the cameras with copper foils. The accuracy of the localization procedure was on the order of 1.5±0.7 mm with a point phantom, and sub-millimeters and sub-degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring was possible with a delay time of ~0.17 sec. Conclusion: The wireless CCTV camera system is a novel tool for monitoring daily patient setup, and a respiratory gating system based on it appears feasible.
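
The median filtering used against impulsive RF interference can be illustrated with a 1-D sketch (the actual system filters 2-D video frames; the window size here is an assumption):

```python
def median_filter(samples, window=3):
    """1-D median filter: replaces each sample with the median of its
    neighborhood, suppressing impulsive RF-interference spikes while
    preserving edges. Windows are clipped at the signal boundaries."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        hood = sorted(samples[max(0, i - half):i + half + 1])
        out.append(hood[len(hood) // 2])
    return out
```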

Estimation of Local Strain Distribution of Shear-Compressive Failure Type Beam Using Digital Image Processing Technology (화상계측기법에 의한 전단압축파괴형 보의 국부변형률분포 추정)

  • Kwon, Yong-Gil;Han, Sang-Hoon;Hong, Ki-Nam
    • Journal of the Korea Concrete Institute / v.21 no.2 / pp.121-127 / 2009
  • The failure behavior of RC structures is strongly affected by the size of the failure zone and the local strain distribution within it, owing to strain localization in tension-softening materials. However, it is very difficult to quantify and assess the local strain occurring in the failure zone with conventional test methods. In this study, an image processing technique, which can measure strain up to the complete failure of RC structures, was used to estimate the local strain distribution and the size of the failure zone. To verify the reliability and validity of the technique, strains acquired by image processing were compared with values measured by concrete gauges on uniaxial compressive specimens. Based on this verification, the size and local strain distribution of the failure zone of a deep beam were measured using image processing, and principal tensile/compressive strain contours were drawn. Using these contours, the size of the failure zone and the local strain distribution at failure of the deep beam were evaluated. The strain contours showed that image processing is suitable for assessing the failure behavior of deep beams and for obtaining local strain values in the post-peak regime.
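
The core computation behind such image-based strain measurement can be sketched as a virtual gauge between two tracked points (a simplification of full digital image correlation, with illustrative coordinates):

```python
import math

def gauge_strain(p0_ref, p1_ref, p0_def, p1_def):
    """Normal strain along a virtual gauge line defined by two tracked
    image points before (ref) and after (def) deformation:
    epsilon = (L - L0) / L0."""
    L0 = math.dist(p0_ref, p1_ref)  # undeformed gauge length
    L = math.dist(p0_def, p1_def)   # deformed gauge length
    return (L - L0) / L0
```

Evaluating many such gauges over a grid of tracked points yields the strain field from which contours are drawn.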

Visual-Attention Using Corner Feature Based SLAM in Indoor Environment (실내 환경에서 모서리 특징을 이용한 시각 집중 기반의 SLAM)

  • Shin, Yong-Min;Yi, Chu-Ho;Suh, Il-Hong;Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.90-101 / 2012
  • Landmark selection is crucial to successful SLAM (Simultaneous Localization and Mapping) with a monocular camera. In particular, in an unknown environment, automatic landmark selection is needed because there is no prior information about landmarks. In this paper, a visual attention system modeled on the human vision system is used to select landmarks automatically. The edge feature is one of the most important elements for attention in previous visual attention systems. However, when the edge feature is used in cluttered indoor areas, the response in cluttered regions disappears while the response between flat surfaces grows, and the computational cost increases because responses for four directions raise the dimensionality. This paper proposes using a corner feature to solve these problems. Using a corner feature also increases the accuracy of data association by concentrating on areas that are more cluttered and informative in indoor environments. Finally, experiments show that a visual attention system based on the corner feature is more effective for SLAM than the previous method.
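
A common way to score corners of the kind the paper relies on is the Harris response; a minimal per-window sketch (the constant k = 0.04 is a conventional choice, not taken from the paper):

```python
def harris_response(Ix, Iy, k=0.04):
    """Harris corner response for one window, given the per-pixel image
    gradients inside it. R > 0 for corners (strong gradients in two
    directions), R < 0 for edges (gradient in one direction only)."""
    Sxx = sum(ix * ix for ix in Ix)
    Syy = sum(iy * iy for iy in Iy)
    Sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    det = Sxx * Syy - Sxy * Sxy      # determinant of the structure tensor
    trace = Sxx + Syy                # trace of the structure tensor
    return det - k * trace * trace
```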

Gamma Ray Detection Processing in PET/CT scanner (PET/CT 장치의 감마선 검출과정)

  • Park, Soung-Ock;Ahn, Sung-Min
    • Journal of radiological science and technology / v.29 no.3 / pp.125-132 / 2006
  • The PET/CT scanner is an evolution in imaging technology; the two modalities are complementary, combining CT and PET images. PET images are well known for providing only low-resolution anatomic landmarks, but interpretation is helped by a detailed anatomic framework such as that provided by a CT scan. PET/CT offers several advantages: improved lesion localization and identification, more accurate tumor staging, etc. Conventional PET employs a transmission scan requiring around 4 min per bed position and 30 min for a whole-body scan, but a PET/CT scanner can reduce the whole-body scan time by 50%. Modern PET scanners use an LSO scintillator in place of BGO, operate without septa in 3-D acquisition mode, and are combined with multidetector CT. The PET/CT image fusion problem is solved through hardware rather than software: such a device can acquire accurately aligned anatomic and functional images in a single scan. Effective detection of gamma rays from the source in the PET detector is very important and enables high-quality diagnostic images. Therefore, we studied the detection processing of the PET detector and the high-quality imaging process.
Implementation of Pattern Recognition Algorithm Using Line Scan Camera for Recognition of Path and Location of AGV (무인운반차(AGV)의 주행경로 및 위치인식을 위한 라인스캔카메라를 이용한 패턴인식 알고리즘 구현)

  • Kim, Soo Hyun;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems / v.23 no.1 / pp.13-21 / 2018
  • AGVS (Automated Guided Vehicle System) is a core technology of logistics automation which automatically moves specific objects or goods within a certain work space. A conventional AGVS generally requires an indoor localization system, and each AGV is equipped with expensive sensors such as laser, magnetic, and inertial sensors for route recognition and automatic navigation; thus, a high installation cost is inevitable and there are many restrictions on route (path) modification or expansion. To address this issue, we propose a cost-effective and scalable AGV based on a lightweight pattern recognition technique. The proposed pattern recognition technology not only enables autonomous driving by recognizing the route (path), but also provides a way for the AGV to determine its own location by recognizing simple bar-code-like patterns installed along the route. This significantly reduces the cost of implementing an AGVS and simplifies route modification and expansion. To verify the effectiveness of the proposed technique, we first implement the pattern recognition algorithm on a lightweight MCU (Micro Control Unit) and then verify the results with an MCU-controlled AGV prototype.
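
Reading a bar-code-like marker with a line-scan camera reduces to run-length decoding of one pixel row; a hedged sketch (the binarization threshold and dark-is-1 convention are assumptions, not the paper's encoding):

```python
def scan_to_runs(pixels, threshold=128):
    """Collapse one line-scan row into (bit, run_length) pairs: dark
    pixels (below threshold) become 1-bits. The resulting run pattern
    serves as a bar-code-like location/route marker."""
    runs = []
    for px in pixels:
        bit = 1 if px < threshold else 0
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([bit, 1])  # start a new run
    return [tuple(r) for r in runs]
```

Matching the decoded run pattern against a table of known markers then yields the AGV's position on the route.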

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost by using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges due to the inherent characteristics of color and texture. On the other hand, depth information, which locates human body parts in 3D coordinates, can usefully address them; however, depth-based HPE requires a depth-dependent device that has space constraints and is costly. Moreover, depth-based HPE results are less reliable due to the need for pose initialization and less stable frame tracking. Therefore, this paper proposes a new HPE method that is robust in estimating self-occlusion. Many body parts can be occluded by others, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method using the COCO Object Keypoint Similarity library. By taking advantage of both approaches, our RGB-D based HPE method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both RGB-based and depth-based HPE.
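
The Object Keypoint Similarity metric used for the evaluation can be sketched in simplified form (visibility flags, which the official COCO metric includes, are omitted here):

```python
import math

def oks(pred, gt, kappas, area):
    """Simplified COCO Object Keypoint Similarity for one person.
    pred, gt: lists of (x, y) keypoints; kappas: per-keypoint falloff
    constants; area: object segment area (the scale s^2). Each keypoint
    scores exp(-d^2 / (2 * s^2 * kappa^2)); OKS is their mean."""
    scores = [math.exp(-((px - gx) ** 2 + (py - gy) ** 2)
                       / (2 * area * kappa ** 2))
              for (px, py), (gx, gy), kappa in zip(pred, gt, kappas)]
    return sum(scores) / len(scores)
```

A perfect prediction scores 1.0; thresholding OKS at several levels and averaging yields the mAP/mAR figures reported above.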

Door Recognition using Visual Fuzzy System in Indoor Environments (시각 퍼지 시스템을 이용한 실내 문 인식)

  • Yi, Chu-Ho;Lee, Sang-Heon;Jeong, Seung-Do;Suh, Il-Hong;Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.73-82 / 2010
  • A door is an important object for understanding a given environment and can be used to distinguish corridors from rooms. Doors are widely used natural landmarks in mobile robotics for localization and navigation. However, most camera-based door recognition algorithms are difficult to apply in real time because feature extraction and matching are computationally heavy. This paper proposes a method to recognize a door in a corridor. First, we extract distinctive lines that are likely to form a door using the Hough transform. Then, we detect door region candidates by applying the extracted lines to a first-stage visual fuzzy system. Finally, door regions are determined by verifying the knob region within each candidate using a second-stage visual fuzzy system.
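
The Hough voting used to extract door-frame lines can be sketched with a minimal rho-theta accumulator (the bin resolution is an illustrative choice; real implementations pre-compute the trigonometric tables):

```python
import math

def hough_accumulate(points, n_theta=180):
    """Vote each edge point (x, y) into (rho, theta-index) bins using
    rho = x*cos(theta) + y*sin(theta); accumulator peaks correspond to
    straight lines such as the edges of a door frame."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc
```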