• Title/Summary/Keyword: Vision sensor


Development of monocular video deflectometer based on inclination sensors

  • Wang, Shuo; Zhang, Shuiqiang; Li, Xiaodong; Zou, Yu; Zhang, Dongsheng
    • Smart Structures and Systems / v.24 no.5 / pp.607-616 / 2019
  • The video deflectometer based on digital image correlation is a non-contact optical measurement method that has become a useful tool for characterizing the vertical deflections of large structures. In this study, a novel imaging model has been established that considers the variation of pitch angles across the full image. The new model allows deflection measurement over a wide range of working distances with high accuracy. A monocular video deflectometer has accordingly been developed with an inclination sensor, which facilitates dynamic determination of the orientation and rotation of the camera's optical axis. This layout is more convenient than video deflectometers based on theodolites. Experiments are presented to show the accuracy of the new imaging model and the performance of the monocular video deflectometer in outdoor applications. Finally, the equipment has been applied to real-time measurement of the vertical deflection of the Yingwuzhou Yangtze River Bridge at a distance of hundreds of meters. The results show good agreement with the embedded GPS outputs.
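
The pinhole-camera relation that such a video deflectometer builds on can be sketched as below. This is a simplified illustration with a flat cos(θ) pitch correction; the paper's contribution is a full variable-pitch imaging model, which is not reproduced here, and all parameter values are hypothetical.

```python
import math

def vertical_deflection(dv_pixels, pixel_size_m, focal_length_m,
                        working_distance_m, pitch_angle_rad):
    """Convert an image-plane displacement (in pixels) to a physical
    vertical deflection (in meters) using the pinhole magnification
    D/f, with a simple cos(theta) correction for a camera pitched by
    theta. A hypothetical simplification of the paper's model."""
    magnification = working_distance_m / focal_length_m
    return dv_pixels * pixel_size_m * magnification / math.cos(pitch_angle_rad)

# Example: 10 px shift, 5 um pixels, 50 mm lens, 100 m working distance
deflection = vertical_deflection(10, 5e-6, 0.05, 100.0, 0.0)  # -> 0.1 m
```

Note how measurement sensitivity degrades linearly with working distance: at hundreds of meters, sub-pixel accuracy in the image correlation is essential.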

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun; Kang, Jungyu; Min, Kyoung-Wook; Choi, Jungdan
    • ETRI Journal / v.43 no.4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so fast execution speed can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with mathematical analysis, two key-point sampling methods using the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
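
The core SRI projection described in the abstract, mapping each 3D point to a (row, column) cell holding its range, can be sketched in a few lines. The image size and vertical field of view below are illustrative values (roughly a 64-beam LiDAR), not the paper's parameters.

```python
import math

def to_spherical_range_image(points, h=64, w=1024,
                             fov_up=math.radians(3.0),
                             fov_down=math.radians(-25.0)):
    """Project a 3-D point cloud onto a 2-D spherical range image.
    Rows index elevation, columns index azimuth, and each cell stores
    the range to the point that lands there (0.0 = empty)."""
    sri = [[0.0] * w for _ in range(h)]
    fov = fov_up - fov_down
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)             # azimuth in [-pi, pi]
        pitch = math.asin(z / r)           # elevation
        col = int(0.5 * (1.0 - yaw / math.pi) * w) % w
        row = int((1.0 - (pitch - fov_down) / fov) * h)
        if 0 <= row < h:
            sri[row][col] = r
    return sri

sri = to_spherical_range_image([(10.0, 0.0, 0.0), (0.0, 10.0, 0.0)])
```

Working on this dense 2D grid is what lets vision-style direct methods run on sparse LiDAR data.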

Human Activity Recognition with LSTM Using the Egocentric Coordinate System Key Points

  • Wesonga, Sheilla; Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_1 / pp.693-698 / 2021
  • As technology advances, there is an increasing need for research in the different fields where it is applied. One of the most researched topics in computer vision is human activity recognition (HAR), which has been widely implemented in fields including healthcare, video surveillance, and education. In this paper we present a human activity recognition system that is invariant to scale and rotation, employing Kinect depth sensors to obtain the human skeleton joints. In contrast to previous approaches that use joint angles, we propose using the angles each limb makes with the X, Y, and Z axes as feature vectors. The use of these limb angles makes our system scale invariant. We further calculate the body's relative direction in egocentric coordinates in order to provide rotation invariance. For the system parameters, we employ 8 limbs, each with its corresponding angles to the X, Y, and Z axes of the coordinate system, as feature vectors. The extracted features are finally trained and tested with a long short-term memory (LSTM) network, which gives an average accuracy of 98.3%.
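
The per-limb feature described above, the angle a limb vector makes with each coordinate axis, can be sketched with direction cosines. The joint coordinates in the example are illustrative, not from the paper's dataset.

```python
import math

def limb_axis_angles(joint_a, joint_b):
    """Angles (radians) between the limb vector joint_a -> joint_b and
    the X, Y, Z axes, computed as direction cosines. Because the
    vector is normalized, the angles are unchanged if the skeleton is
    uniformly scaled, which is what gives scale invariance."""
    vx = joint_b[0] - joint_a[0]
    vy = joint_b[1] - joint_a[1]
    vz = joint_b[2] - joint_a[2]
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (math.acos(vx / norm),
            math.acos(vy / norm),
            math.acos(vz / norm))

# A limb pointing along +X makes 0 with X and 90 degrees with Y and Z.
angles = limb_axis_angles((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

With 8 limbs and 3 angles each, the frame-level feature vector fed to the LSTM would have 24 components.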

Collision Avoidance Sensor System for Mobile Crane (전지형 크레인의 인양물 충돌방지를 위한 환경탐지 센서 시스템 개발)

  • Kim, Ji-Chul; Kim, Young Jea; Kim, Mingeuk; Lee, Hanmin
    • Journal of Drive and Control / v.19 no.4 / pp.62-69 / 2022
  • Construction machinery is exposed to accidents such as collisions, entrapment, and overturns during operation. In particular, a mobile crane is operated using only the driver's vision and limited information from an assistant worker, so the risk of an accident is high. Recently, some collision avoidance devices using sensors such as cameras and LiDAR have been applied, but they are still insufficient to prevent collisions in omnidirectional 3D space. In this study, a rotating LiDAR device was developed and applied to a 250-ton crane to obtain a full-space point cloud, and an algorithm was developed to provide distance information and safety status to the driver. A deep-learning segmentation algorithm was also used to classify human workers. The developed device could recognize obstacles within 100 m over a 360-degree range. In the experiment, the safety distance was calculated with an error of 10.3 cm at 30 m, giving the operator an accurate distance and collision alarm.
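
The distance-and-alarm step might look like the following sketch: find the nearest point-cloud obstacle to the lifted load and raise an alarm inside a safety radius. The 2 m threshold is an assumed value for illustration, not the paper's.

```python
import math

def nearest_obstacle(load_xyz, cloud, safety_radius_m=2.0):
    """Return (distance to nearest obstacle point, alarm flag) for a
    load position against a LiDAR point cloud. Brute-force search for
    clarity; a real system would use a spatial index. The safety
    radius is a hypothetical threshold."""
    best = min(math.dist(load_xyz, p) for p in cloud)
    return best, best < safety_radius_m

dist_m, alarm = nearest_obstacle((0.0, 0.0, 0.0),
                                 [(3.0, 4.0, 0.0), (10.0, 0.0, 0.0)])
```

Points segmented as human workers would presumably use a larger radius than inert obstacles, but the abstract does not specify per-class thresholds.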

Intelligent Robust Base-Station Research in Harsh Outdoor Wilderness Environments for Wildsense

  • Ahn, Junho; Mysore, Akshay; Zybko, Kati; Krumm, Caroline; Lee, Dohyeon; Kim, Dahyeon; Han, Richard; Mishra, Shivakant; Hobbs, Thompson
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.814-836 / 2021
  • Wildlife ecologists and biologists either recapture deer to collect tracking data from deer collars, or wait for the drop-off of a collar designed to detach and disconnect automatically. Research teams must therefore manage a base camp with medical trailers, helicopters, and airplanes to capture deer, or wait several months until the collar drops off the deer's neck. We propose an intelligent, robust base station as a low-cost, time-saving method for retrieving recorded sensor data from the collars via a listener node; readings are obtained without opening the weatherproof deer collar. We successfully designed and implemented a robust base station system for automatically collecting data from the collars and listener motes in harsh wilderness environments. Intelligent solutions were also analyzed for improved data collection and pattern prediction with drone-based detection and tracking algorithms.

Business Model Framework for IoT: Case Studies and Strategic Implications for IoT Businesses

  • Kim, Dongwook; Kim, Sungbum; Lee, Junghwan
    • Journal of Information Technology Applications and Management / v.29 no.1 / pp.1-28 / 2022
  • To realize the vision of the internet of things (IoT), which is expected to have a significant impact on the global economy in the future, business models must be considered in the IoT context. This research builds an enhanced business model framework, grounded in definitions of IoT and the business model literature, for analyzing IoT businesses. The framework is used to analyze four types of players: owners of things, vendors of devices, providers of connectivity, and providers of IoT application services. The findings suggest that owners of things tend to partner with ICT players, most often connectivity providers, to complement their weaknesses. Device vendors leverage the strength of their devices and device platforms to attract third-party sensors and devices to interconnect, while service providers aim to penetrate customer premises. These findings lead to the following recommendations for non-IT players expanding into the IoT business: 1) take into account the differences in product development processes between IT and non-IT businesses when expanding into the IoT market; and 2) collaborate with ICT players that acknowledge and understand those differences.

Semi-Supervised Domain Adaptation on LiDAR 3D Object Detection with Self-Training and Knowledge Distillation (자가학습과 지식증류 방법을 활용한 LiDAR 3차원 물체 탐지에서의 준지도 도메인 적응)

  • Jungwan Woo; Jaeyeul Kim; Sunghoon Im
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.346-351 / 2023
  • With the release of numerous open driving datasets, demand for domain adaptation in perception tasks has increased, particularly for transferring knowledge from rich datasets to novel domains. However, it is difficult to handle the change 1) in the sensor domain caused by heterogeneous LiDAR sensors and 2) in the environmental domain caused by differing environmental factors. We overcome these domain differences in the semi-supervised setting with three-stage model parameter training. First, we pre-train the model on the source dataset with object scaling based on statistics of the object sizes. Then we fine-tune the partially frozen model weights with copy-and-paste augmentation, in which the 3D points inside box labels are copied from one scene and pasted into other scenes. Finally, we use knowledge distillation to update the student network with a moving average from the teacher network, along with self-training on pseudo labels. Test-time augmentation with varying z values is employed to predict the final results. Our method achieved 3rd place in the ECCV 2022 workshop's 3D Perception for Autonomous Driving challenge.
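
The moving-average weight update in such teacher-student distillation is commonly formulated as an exponential moving average (EMA), sketched below. The decay value and the exact direction of the update (which network accumulates the average) are assumptions; the paper may differ in detail.

```python
def ema_update(avg_weights, new_weights, decay=0.999):
    """One exponential-moving-average step over flat weight lists:
    avg <- decay * avg + (1 - decay) * new. With decay near 1, the
    averaged network changes slowly, smoothing out noisy pseudo-label
    gradients. decay=0.999 is a typical value, not the paper's."""
    return [decay * a + (1.0 - decay) * n
            for a, n in zip(avg_weights, new_weights)]

# Toy example with two scalar "weights".
updated = ema_update([1.0, 2.0], [0.0, 0.0], decay=0.9)
```

In practice this runs once per training step over every parameter tensor, and the averaged network produces the pseudo labels for self-training.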

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John; Jang, Minseok; Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • This paper presents an approach that fuses multiple RGB cameras, used for deep-learning-based visual object recognition with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, helping the AV navigate toward its goal. Running object detection on numerous cameras can slow real-time processing, so the convolutional neural network chosen to address this must also suit the capacity of the hardware. The classified detected objects are localized within the 3D point cloud environment: the LiDAR point cloud data first undergo parsing, and the algorithm used is based on 3D Euclidean clustering, which localizes the objects accurately. We evaluated the method using our own dataset collected from a VLP-16 and multiple cameras, and the results demonstrate the feasibility of the method and the multi-sensor fusion strategy.
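
The 3D Euclidean clustering step named above can be sketched as naive single-linkage grouping: points closer than a distance threshold end up in the same cluster. The threshold below is illustrative, and production implementations (e.g., PCL's EuclideanClusterExtraction) use k-d trees instead of this brute-force neighbor search.

```python
import math

def euclidean_cluster(points, tol=0.5):
    """Group 3-D points so that any two points within `tol` meters of
    each other (directly or through a chain of neighbors) share a
    cluster. Returns a list of clusters as sorted index lists."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            neighbors = [j for j in unvisited
                         if math.dist(points[i], points[j]) <= tol]
            for j in neighbors:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters

pts = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (5.0, 5.0, 5.0)]
groups = euclidean_cluster(pts)  # two clusters: {0, 1} and {2}
```

Each resulting cluster's centroid then serves as the localized position of a detected object on the map.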


Recognition of Occupants' Cold Discomfort-Related Actions for Energy-Efficient Buildings

  • Song, Kwonsik; Kang, Kyubyung; Min, Byung-Cheol
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.426-432 / 2022
  • HVAC systems play a critical role in reducing energy consumption in buildings. Integrating occupants' thermal comfort evaluation into HVAC control strategies is believed to reduce building energy consumption while minimizing occupants' thermal discomfort. Advanced technologies, such as visual sensors and deep learning, enable the recognition of occupants' discomfort-related actions, making it possible to estimate their thermal discomfort. Unfortunately, it remains unclear how accurately a deep-learning-based classifier can recognize occupants' discomfort-related actions in a working environment. This research therefore evaluates the classification performance on occupants' discomfort-related actions while they sit at a computer desk. To achieve this objective, this study collected RGB video data of nine college students' cold discomfort-related actions and then trained a deep-learning-based classifier using the collected data. The classification results are threefold. First, the trained classifier has an average accuracy of 93.9% for classifying six cold discomfort-related actions. Second, each discomfort-related action is recognized with more than 85% accuracy. Third, classification errors are mostly observed among similar discomfort-related actions. These results indicate that using human action data will enable facility managers to estimate occupants' thermal discomfort and, in turn, adjust the operational settings of HVAC systems to improve building energy efficiency in conjunction with occupants' thermal comfort levels.


Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model

  • Hussain, A.; Balaji Srikaanth, P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.959-979 / 2024
  • Rice pest identification is essential in modern agriculture for the health of rice crops. As global rice consumption rises, yields and quality must be maintained. Various methodologies have been employed to identify pests, encompassing sensor-based technologies, deep learning, and remote sensing models. Visual inspection by professionals and farmers remains essential, but integrating technology such as satellites, IoT-based sensors, and drones enhances efficiency and accuracy. A computer vision system processes images to detect pests automatically, giving real-time data for proactive and targeted pest management. With this motive in mind, this research provides a novel farmland fertility algorithm with a deep learning-based automated rice pest detection and classification (FFADL-ARPDC) technique. The FFADL-ARPDC approach classifies rice pests from rice plant images. Before processing, FFADL-ARPDC removes noise and enhances contrast using bilateral filtering (BF). Rice crop images are then processed using the NASNetLarge deep learning architecture to extract image features, and the farmland fertility algorithm (FFA) is used for hyperparameter tweaking of NASNetLarge, which helps enhance classification performance. Using an Elman recurrent neural network (ERNN), the model accurately categorises 14 types of pests. The FFADL-ARPDC approach is thoroughly evaluated using a benchmark dataset available in a public repository; with an accuracy of 97.58%, the FFADL-ARPDC model exceeds existing pest detection methods.