• Title/Summary/Keyword: Vision-based analytics

7 search results

Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.;Nguyen, Phong H.;Kwon, Ki-Ryong
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.329-332
    • /
    • 2019
  • Recently, smart lighting systems have combined sensors and lights: they turn lights on/off and adjust brightness based on object motion and ambient brightness, and are often deployed in buildings, rooms, garages, and parking lots. However, these lighting systems are controlled by lighting and motion sensors that respond to ambient illumination and detected movement. In this paper, we propose an automatic lighting control system that uses a single camera for buildings, rooms, and garages. The proposed system integrates digital image processing results, namely motion detection and hand gesture detection, to control and dim the lighting system. The experimental results showed that the proposed system works very well and could be considered for automatic lighting of such spaces.
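The control loop this abstract describes (frame-difference motion detection gating a gesture-based dimmer) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-difference threshold and the finger-count-to-brightness mapping are assumptions.

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    threshold: float = 10.0) -> bool:
    """Flag motion when the mean absolute pixel difference between two
    grayscale frames exceeds a threshold (threshold is illustrative)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > threshold

def lamp_brightness(motion: bool, finger_count: int) -> int:
    """Map a detected hand gesture (hypothetical 0-5 finger-count encoding)
    to a dimming level in percent; lights stay off without motion."""
    if not motion:
        return 0
    return min(finger_count, 5) * 20  # 0-100 % in 20 % steps
```

In a real system the frames would come from the single camera the paper uses, and the gesture detector would replace the hypothetical `finger_count` input.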

BIM and Thermographic Sensing: Reflecting the As-is Building Condition in Energy Analysis

  • Ham, Youngjib;Golparvar-Fard, Mani
    • Journal of Construction Engineering and Project Management
    • /
    • v.5 no.4
    • /
    • pp.16-22
    • /
    • 2015
  • This paper presents an automated computer vision-based system to update BIM data by leveraging multi-modal visual data collected from existing buildings under inspection. Currently, visual inspections are conducted for building envelopes or mechanical systems, and auditors analyze energy-related contextual information to examine whether their performance is maintained as expected by the design. By translating 3D surface thermal profiles into energy performance metrics such as actual R-values at the point level, and by mapping such properties to the associated BIM elements using the XML Document Object Model (DOM), the proposed method narrows the energy performance modeling gap between the architectural information in the as-designed BIM and the as-is building condition, which improves the reliability of building energy analysis. Several case studies were conducted to experimentally evaluate the impact of the updated data on BIM-based energy load calculation. The experimental results on existing buildings show that (1) point-level thermography-based thermal resistance measurements can be automatically matched with the associated BIM elements; and (2) their corresponding thermal properties are automatically updated in the gbXML schema. This paper provides practitioners with insight into how multi-modal visual data can be used to improve the accuracy of building energy modeling for retrofit analysis. Open research challenges and lessons learned from real-world case studies are discussed in detail.
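The gbXML update step described above (mapping a measured R-value onto the matching BIM element via the XML DOM) can be sketched with the standard library. The element and attribute names below are a simplified stand-in for the actual gbXML schema, not taken from the paper.

```python
import xml.etree.ElementTree as ET

# Simplified gbXML-like fragment (real gbXML surfaces carry many more fields).
GBXML = (
    '<gbXML><Surface id="wall-1">'
    '<RValue unit="hrSqFtFPerBTU">13.0</RValue>'
    '</Surface></gbXML>'
)

def update_r_value(xml_text: str, surface_id: str, measured_r: float) -> str:
    """Replace the as-designed R-value of one surface with a measured,
    thermography-derived value, returning the updated XML."""
    root = ET.fromstring(xml_text)
    for surface in root.iter("Surface"):
        if surface.get("id") == surface_id:
            surface.find("RValue").text = f"{measured_r:.2f}"
    return ET.tostring(root, encoding="unicode")
```

The same pattern extends to batch updates: iterate over point-level measurements and patch each matched `Surface` element in turn.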

Updating BIM: Reflecting Thermographic Sensing in BIM-based Building Energy Analysis

  • Ham, Youngjib;Golparvar-Fard, Mani
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.532-536
    • /
    • 2015
  • This paper presents an automated computer vision-based system to update BIM data by leveraging multi-modal visual data collected from existing buildings under inspection. Currently, visual inspections are conducted for building envelopes or mechanical systems, and auditors analyze energy-related contextual information to examine whether their performance is maintained as expected by the design. By translating 3D surface thermal profiles into energy performance metrics such as actual R-values at the point level, and by mapping such properties to the associated BIM elements using the XML Document Object Model (DOM), the proposed method narrows the energy performance modeling gap between the architectural information in the as-designed BIM and the as-is building condition, which improves the reliability of building energy analysis. The experimental results on existing buildings show that (1) point-level thermography-based thermal resistance measurements can be automatically matched with the associated BIM elements; and (2) their corresponding thermal properties are automatically updated in the gbXML schema. This paper provides practitioners with insight into how multi-modal visual data can be used to improve the accuracy of building energy modeling for retrofit analysis. Open research challenges and lessons learned from real-world case studies are discussed in detail.


Automated Training Database Development through Image Web Crawling for Construction Site Monitoring (건설현장 영상 분석을 위한 웹 크롤링 기반 학습 데이터베이스 구축 자동화)

  • Hwang, Jeongbin;Kim, Jinwoo;Chi, Seokho;Seo, JoonOh
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.39 no.6
    • /
    • pp.887-892
    • /
    • 2019
  • Many researchers have developed vision-based technologies to monitor construction sites automatically. To achieve high performance, it is essential to build a large, high-quality training image database (DB). To do so, researchers usually visit construction sites, install cameras at the jobsites, and collect images for the training DB. However, such a human- and site-dependent approach requires a huge amount of time and cost, and it is difficult for it to represent the range of characteristics of different construction sites and resources. To address these problems, this paper proposes a framework that automatically constructs a training image DB using web crawling techniques. For validation, the authors conducted two experiments with the automatically generated DB: construction work type classification and equipment classification. The results showed that the method could successfully build the training image DB for both classification problems, and the findings of this study can be used to reduce the time and effort required to develop vision-based technologies for construction sites.
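Two building blocks of such a crawling pipeline (query generation per target class, and deduplication of downloaded images) can be sketched as below. The keyword lists and the hash-based dedup are illustrative assumptions; the paper does not specify these details.

```python
import hashlib

def build_queries(work_types: list[str], modifiers: list[str]) -> list[str]:
    """Cross class keywords (e.g. 'excavator') with context modifiers to
    widen crawl coverage; each query inherits its class label."""
    return [f"{w} {m}" for w in work_types for m in modifiers]

def deduplicate(images: dict[str, bytes]) -> dict[str, bytes]:
    """Drop byte-identical downloads (url -> raw bytes) by content hash,
    keeping the first occurrence of each image."""
    seen: set[str] = set()
    unique: dict[str, bytes] = {}
    for url, data in images.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique[url] = data
    return unique
```

A production pipeline would add near-duplicate detection (perceptual hashing) and a relevance filter before the images enter the training DB.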

Design and Implementation of OPC UA-based Collaborative Robot Guard System Using Sensor and Camera Vision (센서 및 카메라 비전을 활용한 OPC UA 기반 협동로봇 가드 시스템의 설계 및 구현)

  • Kim, Jeehyeong;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.47-55
    • /
    • 2019
  • Driven by the manufacturing paradigm shift, robots are creating new markets and new forms of collaboration. Collaborative robots are easier to manage than conventional industrial robots, and demand for them is increasing across industries as they improve productivity and substitute for manual labor. However, accidents are frequent at sites where industrial robots work alongside people, threatening operator safety. A robot guard system is therefore needed that ensures operator safety and reliable communication in environments where robots are deployed. The proposed system uses sensors and computer vision to monitor the robot's working radius, preventing accidents and reducing risk. It is based on OPC UA, an international communication protocol for industrial production equipment, and combines ultrasonic sensors with a Convolutional Neural Network (CNN) for video analytics. We propose this collaborative robot guard system; when it judges that a worker is in an unsafe situation, it controls the robot accordingly.
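The core decision step (fusing an ultrasonic range reading with a CNN person-detection score to choose a robot action) might look like the sketch below. The distance thresholds, probability threshold, and action names are assumptions for illustration; the paper does not publish these values.

```python
def guard_action(distance_cm: float, person_prob: float,
                 stop_dist: float = 50.0, slow_dist: float = 120.0,
                 prob_threshold: float = 0.5) -> str:
    """Fuse an ultrasonic distance reading with a CNN person-detection
    probability into one of three robot commands (illustrative thresholds)."""
    person = person_prob >= prob_threshold
    if person and distance_cm <= stop_dist:
        return "stop"   # worker dangerously close: halt the robot
    if person and distance_cm <= slow_dist:
        return "slow"   # worker nearby: reduce speed
    return "run"        # no worker in the guarded radius
```

In the described system, this decision would be published over OPC UA so the robot controller and other equipment can react to it.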

Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi;Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Mohammad R. Jahanshahi
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.335-349
    • /
    • 2023
  • Bridges constantly undergo deterioration and damage, the most common being concrete damage and exposed rebar. Periodic inspection of bridges to identify damage can aid in its quick remediation. Likewise, identifying components can provide context for damage assessment and help gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this avenue, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damages. The experiments lead to several interesting discoveries. The Nested Reg-UNet architecture is found to outperform five other state-of-the-art architectures in both damage and component segmentation tasks. The architecture is built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder. In terms of the mean Intersection over Union (mIoU) metric, the Nested Reg-UNet architecture provides an improvement of 2.86% on the damage segmentation task and 1.66% on the component segmentation task compared to the state-of-the-art UNet architecture. Furthermore, it is demonstrated that incorporating the Lovasz-Softmax loss function to counter class imbalance can boost performance by 3.44% in the component segmentation task over the most commonly employed alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling is found to be quite effective when used synchronously with the Nested Reg-UNet architecture, providing an mIoU improvement of 0.74% in the component segmentation task and 1.14% in the damage segmentation task over a single-architecture baseline. Overall, the best mIoU of 92.50% for the component segmentation task and 84.19% for the damage segmentation task validate the feasibility of these techniques for autonomous bridge component and damage segmentation using RGB images.
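The weighted softmax ensembling step the abstract credits with the final mIoU gain can be illustrated as a weighted average of per-pixel class probabilities followed by an argmax. This is a generic sketch of the technique, assuming each model already outputs softmax probability maps; how the paper derives the per-model weights is not specified here.

```python
import numpy as np

def weighted_softmax_ensemble(prob_maps: list[np.ndarray],
                              weights: list[float]) -> np.ndarray:
    """Fuse per-pixel class probabilities from several segmentation models.

    prob_maps: list of arrays shaped (H, W, classes), each a softmax output.
    weights:   one non-negative weight per model (normalized internally).
    Returns the fused per-pixel class labels, shaped (H, W).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize model weights
    stacked = np.stack(prob_maps)        # (models, H, W, classes)
    fused = np.tensordot(w, stacked, axes=1)  # weighted average over models
    return fused.argmax(axis=-1)
```

With equal weights this reduces to plain softmax averaging; unequal weights let a stronger model dominate ambiguous pixels.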

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.263-278
    • /
    • 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, weight measurement, which is important for automatic feeding, is difficult: load-cell scales are easily damaged or broken by pet behavior, the 3D camera method has cost disadvantages, and the 2D camera method is relatively inaccurate compared to the 3D camera method. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight while simply using a 2D camera. Various convolutional neural networks were tested, and among them, a ResNet101-based model showed the best performance: a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which is commercially viable in both technical and financial terms. The results of this study can help practitioners predict the weight of a standardized object such as feed using only a simple 2D image.
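The two evaluation metrics the abstract reports (mean absolute error in grams and mean absolute percentage error) can be computed as below. This sketch covers only the evaluation step, not the ResNet101 regression model itself.

```python
import numpy as np

def regression_errors(y_true, y_pred) -> tuple[float, float]:
    """Return (MAE, MAPE%) for predicted vs. actual feed weights in grams,
    the two metrics reported for the best-performing model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    abs_err = np.abs(y_pred - y_true)
    mae = float(abs_err.mean())
    mape = float((abs_err / y_true).mean() * 100.0)
    return mae, mape
```

Running this over a held-out set of weighed feed portions would reproduce numbers comparable to the 3.06 g / 3.40% figures cited above.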