• Title/Summary/Keyword: vision-based technology

A Study on 3D Geospatial Information Model based Influence Factor Management Application in Earthwork Plan (3차원 지형공간정보모델기반 토공사 계획 및 관리에 미치는 영향요인 관리 애플리케이션 연구)

  • Park, Jae-woo;Yun, Won Gun;Kim, Suk Su;Song, Jae Ho
    • Journal of the Korean Society of Industry Convergence / v.22 no.2 / pp.125-135 / 2019
  • In recent years, the digital transformation represented by the "Fourth Industrial Revolution", in which digitalization is becoming universal across all industries, has become a reality. In the construction sector, the Ministry of Land, Infrastructure and Transport established the Smart Construction 2025 vision in 2018 and published the 'Smart Construction Technology Roadmap', which aims to complete construction automation by 2030. In the construction stage in particular, field monitoring technology using drones is needed to support construction equipment automation and on-site control, and a 3D geospatial information model can serve as a base tool for this. The purpose of this study is to investigate the factors affecting earthwork, which accounts for a considerable share of construction time and cost as a single work type, in order to manage changes in site conditions and improve communication between managers and workers during earthwork planning. Based on these factors, field management procedures and an application were developed.

Analysis of the Design Elements for the Children's Picture Books Based on VR

  • Lu, Kai;Cho, Dong Min
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.953-965 / 2021
  • Combining virtual reality technology with the design elements of children's picture book education is a relatively new research topic. Based on combining picture book design elements with virtual reality technology and developing a children's picture book teaching game, this article analyzes the effectiveness of applying virtual reality technology to children's teaching and explores the usability of picture book design elements in teaching [1]. Through literature research, practical research, and survey methods, this paper studies in depth the application of virtual reality technology to the design elements of children's picture book education, providing theoretical and practical support for the research theme. The spatial positioning, vision, sound, and functional requirements of children's picture book games play a leading role in teaching. Practical statistics prove that a virtual environment makes it easier to promote children's mastery of the taught material. Moreover, VR's game management and setting functions are used to overcome the tedium of traditional teaching methods and the limitations of the teaching environment. The feasibility of game operation provides a virtual teaching platform for children's education, and the teaching effect is remarkable.

Panoramic Image Stitching using Feature Extracting and Matching on Mobile Device (모바일 기기에서 특징적 추출과 정합을 활용한 파노라마 이미지 스티칭)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.15 no.4 / pp.97-102 / 2016
  • Image stitching is the process of combining two or more images with overlapping areas to create a panorama of the input images; it is an active research area in computer vision, especially in the field of augmented reality with 360-degree images. Image stitching techniques fall into two general approaches: direct and feature-based. Direct techniques compare all the pixel intensities of the images with each other, while feature-based approaches determine a relationship between the images through distinct features extracted from them. This paper proposes a novel image stitching method based on feature pixels with an approximated clustering filter. When features are extracted from the input images, we estimate the significance of each feature point and apply an effective feature extraction algorithm to improve the processing time. Evaluation of the results shows that the proposed method is accurate and effective compared with previous approaches.
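
For orientation, a minimal sketch of a generic feature-based stitching pipeline is shown below, using ORB features, brute-force matching, and RANSAC homography estimation in OpenCV. The paper's approximated clustering filter is not reproduced; the file names and parameter values are illustrative assumptions.

```python
# A generic feature-based stitching sketch (not the paper's exact method).
import cv2
import numpy as np

left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# Hamming distance suits ORB's binary descriptors; cross-check prunes weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences before estimating the homography.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right image into the left image's frame, then paste the left image in.
canvas = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
canvas[:left.shape[0], :left.shape[1]] = left
cv2.imwrite("panorama.jpg", canvas)
```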

Reinforced Feature of Dynamic Search Area for the Discriminative Model Prediction Tracker based on Multi-domain Dataset (다중 도메인 데이터 기반 구별적 모델 예측 트레커를 위한 동적 탐색 영역 특징 강화 기법)

  • Lee, Jun Ha;Won, Hong-In;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.6 / pp.323-330 / 2021
  • Visual object tracking is a challenging area of study in computer vision due to many difficult problems, including fast variation of target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing the features of the dynamic search area to outperform conventional discriminative model prediction trackers in conditions where accuracy deteriorates because of low feature discrimination. We propose a reinforced input-feature method that acts like a spotlight on the dynamic search area during target tracking. The method can improve the performance of deep-learning-based discriminative model prediction trackers, as well as various other trackers that infer the center of the target in visual object tracking. The proposed method shows improved tracking performance over the baseline trackers, achieving a relative gain of 38 %, from 0.433 to 0.601 F-score, in the visual object tracking evaluation.
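
The abstract describes the reinforcement as a spotlight effect on the search-area features. A minimal numpy sketch of that idea, assuming a Gaussian weight map centered on the previous target position applied over a DiMP-like (C, H, W) feature map, might look as follows; the sigma and floor values are illustrative, not the paper's.

```python
# Spotlight-style feature reinforcement sketch (illustrative, not the paper's code).
import numpy as np

def spotlight_weights(h, w, center, sigma):
    """2D Gaussian weight map peaking at the previous target centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def reinforce(features, center, sigma=8.0, floor=0.5):
    """Scale a (C, H, W) feature map; `floor` keeps background context alive."""
    c, h, w = features.shape
    weights = floor + (1.0 - floor) * spotlight_weights(h, w, center, sigma)
    return features * weights[None, :, :]

# Example: a 256-channel, 22x22 search-area feature map with the target
# last seen near cell (11, 11).
feat = np.random.randn(256, 22, 22).astype(np.float32)
reinforced = reinforce(feat, center=(11, 11))
```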

Hazy Particle Map-based Automated Fog Removal Method with Haziness Degree Evaluator Applied (Haziness Degree Evaluator를 적용한 Hazy Particle Map 기반 자동화 안개 제거 방법)

  • Sim, Hwi Bo;Kang, Bong Soon
    • Journal of Korea Multimedia Society / v.25 no.9 / pp.1266-1272 / 2022
  • With the recent development of computer vision technology, image-processing-based mechanical devices are being developed to realize autonomous driving. In foggy conditions, images captured by the cameras of such machines become unclear due to scattering and absorption of light, which lowers the object recognition rate and causes malfunctions. Safety is critical here, because a malfunction in autonomous driving can lead to human casualties. To increase the stability of the technology, an efficient haze removal algorithm must be applied to the camera. Conventional haze removal methods perform the removal operation regardless of the haze concentration of the input image, so haze is removed excessively and the quality of the resulting image deteriorates. In this paper, we propose an automatic haze removal method that removes haze according to the haze density of the input image by applying Ngo's Haziness Degree Evaluator (HDE) to Kim's haze removal algorithm based on the Hazy Particle Map. Because the proposed method removes haze according to the haze concentration of the input image, it prevents quality degradation of input images that do not require haze removal and solves the problem of excessive haze removal. The superiority of the proposed method is verified through qualitative and quantitative evaluation.
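
A minimal sketch of the density-gated idea follows. Ngo's HDE and Kim's hazy-particle-map algorithm are not public here, so a crude dark-channel mean stands in for the haziness score and CLAHE contrast restoration stands in for the dehazing step; the threshold is an illustrative assumption.

```python
# Density-gated dehazing sketch with stand-in components (not the paper's algorithm).
import cv2
import numpy as np

def haze_density(bgr, patch=15):
    """Crude haziness proxy: mean of the dark channel (higher = hazier)."""
    dark = cv2.erode(bgr.min(axis=2), np.ones((patch, patch), np.uint8))
    return float(dark.mean()) / 255.0

def dehaze_if_needed(bgr, threshold=0.4):
    if haze_density(bgr) < threshold:
        return bgr  # clear enough: avoid degrading a haze-free image
    # Stand-in restoration: contrast-limited equalisation on the L channel.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

img = cv2.imread("road_scene.jpg")
cv2.imwrite("restored.jpg", dehaze_if_needed(img))
```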

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and it has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources, and when a single host system uses multiple cameras, the data transfer speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its own view and sends it to the central server. The central server then synchronizes the received 2D human pose data based on their timestamps and reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
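
A minimal sketch of the server-side steps, assuming known 3x4 projection matrices from calibration and a simple nearest-timestamp pairing rule, might look like this; the tolerance and data shapes are illustrative assumptions.

```python
# Server-side sketch: timestamp pairing plus two-view triangulation.
import numpy as np
import cv2

def synchronize(stream_a, stream_b, tol=0.02):
    """Pair (timestamp, joints) samples whose timestamps differ by < tol seconds."""
    pairs = []
    for t_a, j_a in stream_a:
        t_b, j_b = min(stream_b, key=lambda s: abs(s[0] - t_a))
        if abs(t_b - t_a) < tol:
            pairs.append((j_a, j_b))
    return pairs

def triangulate(P1, P2, joints_a, joints_b):
    """joints_*: (N, 2) pixel coordinates -> (N, 3) world coordinates."""
    pts4 = cv2.triangulatePoints(P1, P2, joints_a.T.astype(np.float64),
                                 joints_b.T.astype(np.float64))
    return (pts4[:3] / pts4[3]).T  # dehomogenise the 4xN result
```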

Solitary Work Detection of Heavy Equipment Using Computer Vision (컴퓨터비전을 활용한 건설현장 중장비의 단독작업 자동 인식 모델 개발)

  • Jeong, Insoo;Kim, Jinwoo;Chi, Seokho;Roh, Myungil;Biggs, Herbert
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.4 / pp.441-447 / 2021
  • Construction sites are complex and dangerous because heavy equipment and workers perform various operations simultaneously within limited working areas. Solitary work by heavy equipment on complex job sites can cause fatal accidents, so operators should interact with spotters and obtain information about the surrounding environment during operations. Recently, many computer vision technologies have been developed to automatically monitor construction equipment and detect its interactions with other resources. However, previous methods did not take into account the interactions between equipment and spotters, which are crucial for identifying solitary work by heavy equipment. To address this drawback, this research develops a computer-vision-based solitary work detection model that considers interactive operations between heavy equipment and spotters. To validate the proposed model, the research team performed experiments using image data collected from actual construction sites. The results showed that the model was able to detect workers and equipment with 83.4 % accuracy, classify workers and spotters with 84.2 % accuracy, and analyze equipment-to-spotter interactions with 95.1 % accuracy. The findings of this study can be used to automate the manual monitoring of heavy equipment operations and reduce the time and costs required for on-site safety management.
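
As a sketch of the final rule stage: given equipment and spotter bounding boxes from any object detector, equipment with no spotter within a distance threshold could be flagged as working solitarily. The threshold below is an illustrative assumption, not the paper's calibrated value.

```python
# Proximity-rule sketch for flagging solitary equipment (illustrative only).
import math

def center(box):  # box = (x1, y1, x2, y2) in pixels
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def solitary_equipment(equipment_boxes, spotter_boxes, max_dist=300.0):
    """Return indices of equipment with no spotter within max_dist pixels."""
    solitary = []
    for i, eq in enumerate(equipment_boxes):
        ex, ey = center(eq)
        if not any(math.dist((ex, ey), center(sp)) < max_dist
                   for sp in spotter_boxes):
            solitary.append(i)
    return solitary
```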

Development of Control Simulator for Integrated Sensor Module of Vehicle (차량용 통합 센서 모듈 제어를 위한 시뮬레이터 개발)

  • Jeon, Jin-Young;Park, Jeong-Yeon;Byun, Hyung-Gi
    • Journal of Sensor Science and Technology / v.22 no.1 / pp.65-70 / 2013
  • The integrated sensor module of a vehicle combines the functions of a rain sensor, an auto defog sensor, and a sun angle sensor into a single module; these functions were originally implemented in separate modules. The integrated sensor module must match or exceed the performance of each individual module, so it is important to verify its stability and accuracy under various situations, taking the module's characteristics into account. Such verification requires measured data from the integrated sensor module, but collecting data under the wide variety of operating circumstances costs considerable time and money. By developing a simulator for controlling the integrated sensor module, the various situations can be reproduced and used effectively for initial verification. In this paper, a simulator has been developed for controlling an integrated sensor module that combines a vision-based rain sensor, an auto defog sensor, an auto light sensor, and a sun angle sensor.

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE: Software and Applications / v.26 no.1 / pp.157-157 / 1999
  • Gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications, such as HCI (Human-Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as data gloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm in a polar-coordinate space, into which the point-tokens on the Cartesian space are converted. The gesture vocabulary consists of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred samples per gesture were collected from twenty persons: fifty for training and another fifty for the recognition experiment. The recognition result shows about a 95 % recognition rate, as well as the possibility of applying these results to several potential gesture-operated systems. The developed system runs in real time for editing basic graphic primitives in a hardware environment comprising a Pentium Pro (200 MHz), a Matrox Meteor graphics board, and a CCD camera, with a Windows 95 and Visual C++ software environment.
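
A minimal sketch of the preprocessing described above, converting a Cartesian point-token trajectory to polar coordinates and quantizing it with an LBG-style codebook to produce the discrete observation sequence for the HMM (the HMM itself is omitted), might look as follows; the codebook size and iteration counts are illustrative assumptions.

```python
# Polar conversion plus LBG-style vector quantisation sketch (illustrative).
import numpy as np

def to_polar(trajectory):
    """(N, 2) trajectory -> (N-1, 2) array of (r, theta) displacement vectors."""
    d = np.diff(trajectory, axis=0)
    return np.stack([np.hypot(d[:, 0], d[:, 1]),
                     np.arctan2(d[:, 1], d[:, 0])], axis=1)

def lbg_codebook(vectors, size=16, iters=20, eps=1e-3):
    """LBG: split the codebook by perturbation, then refine with k-means."""
    book = vectors.mean(axis=0, keepdims=True)
    while len(book) < size:
        book = np.vstack([book * (1 + eps), book * (1 - eps)])  # split step
        for _ in range(iters):  # k-means refinement
            labels = np.argmin(
                ((vectors[:, None] - book[None]) ** 2).sum(-1), axis=1)
            for k in range(len(book)):
                if np.any(labels == k):
                    book[k] = vectors[labels == k].mean(axis=0)
    return book

def quantize(vectors, book):
    return np.argmin(((vectors[:, None] - book[None]) ** 2).sum(-1), axis=1)

traj = np.cumsum(np.random.randn(100, 2), axis=0)  # stand-in gesture trajectory
polar = to_polar(traj)
symbols = quantize(polar, lbg_codebook(polar))  # discrete HMM observations
```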

A Study of the Shaft Power Measuring System Using Cameras (카메라를 이용한 축계 비틀림 계측 장치 개발)

  • Jeong, Jeong-Soon;Kim, Young-Bok;Choi, Myung-Soo
    • Journal of Ocean Engineering and Technology / v.24 no.4 / pp.72-77 / 2010
  • This paper presents a method for measuring the shaft power of a marine main engine. Traditional systems for measuring shaft power usually use a strain gauge, even though it has several disadvantages: it is difficult to mount the strain gauge on the shaft and to acquire a clean signal for analysis, and the equipment is expensive and complicated. For these reasons, we investigated alternative approaches and propose a new method based on a vision-based measurement system. In this study, templates for image processing were installed at both ends of the shaft and observed by CCD cameras. A trigger mark and an optical sensor were used so that the cameras captured their images synchronously. The positions of the templates in the first and second camera images were compared to calculate the torsion angle. The proposed measurement system can be installed more easily than traditional systems and is suitable for any shaft because it does not contact the shaft. With this approach, it is possible to measure the shaft power while a ship is operating.
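
For intuition, the physics behind the measurement can be worked through with the standard elastic torsion formula: the twist angle θ between the two template planes gives torque T = GJθ/L, and shaft power follows as P = Tω. The numeric values in the sketch below are illustrative assumptions, not data from the paper.

```python
# Worked example of shaft power from a measured twist angle (illustrative values).
import math

G = 79.3e9                   # shear modulus of steel, Pa
d = 0.40                     # shaft diameter, m
L = 3.0                      # distance between the two template planes, m
theta = math.radians(0.05)   # measured twist angle between templates, rad
rpm = 120.0                  # shaft speed, rev/min

J = math.pi * d**4 / 32.0            # polar moment of inertia, m^4
T = G * J * theta / L                # torque, N*m
omega = 2.0 * math.pi * rpm / 60.0   # angular speed, rad/s
P = T * omega                        # shaft power, W

print(f"torque = {T/1e3:.1f} kN*m, power = {P/1e3:.1f} kW")
```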