• Title/Summary/Keyword: camera vision

Search Results: 1,386

Example of Application of Drone Mapping System based on LiDAR to Highway Construction Site (드론 LiDAR에 기반한 매핑 시스템의 고속도로 건설 현장 적용 사례)

  • Seung-Min Shin;Oh-Soung Kwon;Chang-Woo Ban
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_3 / pp.1325-1332 / 2023
  • Recently, much research has been conducted on point cloud data to drive innovations such as construction automation in the transportation field and virtual national spaces. This data is often acquired by remote control, using devices such as UAVs and UGVs, in terrain that is difficult for humans to access. Drones, a type of UAV, are mainly used to acquire point cloud data, but photogrammetry with a vision camera takes a long time to produce a point cloud map and is therefore difficult to apply at construction sites where the terrain changes periodically and surveying is difficult. In this paper, we develop a point cloud mapping system based on non-repetitive scanning LiDAR and verify its improvements through field application. For accuracy analysis, a point cloud map was created from a 2-minute-40-second flight and about 30 seconds of software post-processing over a site measuring 144.5 × 138.8 m. Compared with field-measured distances on structures averaging 4 m, an average error of 4.3 cm was recorded, confirming that the performance is within an error range applicable in the field.
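
The accuracy figure in this abstract comes down to comparing distances measured on the LiDAR point cloud against distances surveyed in the field. A minimal sketch of that comparison, with hypothetical distance values rather than the paper's data:

```python
import numpy as np

# Hypothetical ground-truth (field-surveyed) and point-cloud-derived
# distances for the same structures, in metres.
measured = np.array([4.02, 3.95, 4.10, 4.05])   # field survey
from_map = np.array([3.98, 3.99, 4.15, 4.01])   # picked from the LiDAR map

errors = np.abs(from_map - measured)
print(f"mean error: {errors.mean() * 100:.1f} cm")   # average absolute error
print(f"max error:  {errors.max() * 100:.1f} cm")
```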

The Study on the Analysis of Road Surface Brightness of Low Mounted Road Lighting System (낮은 도로 조명의 노면 휘도 실태 분석에 대한 연구)

  • Kiho Nam;Chung Hyeok Kim
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.37 no.3 / pp.314-321 / 2024
  • Low-mounted road lighting is a lighting device that complements the shortcomings of existing pillar-type street lights: it emits light from the side of the road surface and adjusts road-surface luminance like a light carpet. In this paper, with full commercialization in view, we analyze the luminance of road surfaces where such lighting is installed and study whether it can replace existing road lighting. The LMK (Luminance Measurement Camera) LABSOFT program was used to measure and analyze road-surface luminance, and the RELUX program was used to simulate and evaluate the lighting conditions. The study examines whether replacing pillar-type road lighting with low-mounted road lighting in a real environment ensures comfortable and safe night vision for drivers.
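
Road-lighting performance of this kind is usually summarized by statistics of the measured luminance map, such as average luminance L_avg and overall uniformity U0 = Lmin / Lavg. A minimal sketch of those statistics over an exported luminance grid (the file name and CSV format are assumptions, not the LMK LABSOFT export format):

```python
import numpy as np

# Hypothetical luminance map (cd/m^2) exported from a luminance camera,
# one value per measurement grid point on the road surface.
luminance = np.loadtxt("road_luminance_grid.csv", delimiter=",")

l_avg = luminance.mean()          # average road-surface luminance
u0 = luminance.min() / l_avg      # overall uniformity U0 = Lmin / Lavg

print(f"L_avg = {l_avg:.2f} cd/m^2, U0 = {u0:.2f}")
```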

Convolutional GRU and Attention based Fall Detection Integrating with Human Body Keypoints and DensePose

  • Yi Zheng;Cunyi Liao;Ruifeng Xiao;Qiang He
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.9 / pp.2782-2804 / 2024
  • The integration of artificial intelligence technology with medicine has evolved rapidly alongside increasing demands for quality of life. However, falls remain a significant risk leading to severe injuries and fatalities, especially among the elderly, so the development and application of computer-vision-based fall detection technologies has become increasingly important. In this paper, the keypoint detection algorithm ViTPose++ is first used to obtain the coordinates of human body keypoints from camera images, and human skeletal feature maps are generated from this keypoint information. Meanwhile, human dense feature maps are produced with the DensePose algorithm. These two types of feature maps are then fused as dual-channel inputs to the model. A convolutional gated recurrent unit (ConvGRU) is introduced to extract frame-to-frame relevance during the course of a fall. To further integrate features across three dimensions (spatio-temporal-channel), a dual-channel fall detection algorithm based on video streams is proposed by combining the Convolutional Block Attention Module (CBAM) with the ConvGRU. Finally, experiments on the public UR Fall Detection Dataset demonstrate that the improved ConvGRU-CBAM achieves an F1 score of 92.86% and an AUC of 95.34%.
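
CBAM is a published module with a standard structure: channel attention from globally pooled descriptors, then spatial attention from channel-wise mean and max maps. A minimal PyTorch sketch of that structure (layer sizes are illustrative; this is not the paper's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Shared MLP for channel attention over global avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Conv over stacked channel-wise mean and max maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                           self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca                                   # channel attention
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.max(1, keepdim=True).values], dim=1)))
        return x * sa                                # spatial attention

print(CBAM(64)(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])
```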

Autonomous Surveillance-tracking System for Workers Monitoring (작업자 모니터링을 위한 자동 감시추적 시스템)

  • Ko, Jung-Hwan;Lee, Jung-Suk;An, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea - IE (전자공학회논문지 IE) / v.47 no.2 / pp.38-46 / 2010
  • In this paper, an autonomous surveillance-tracking system for worker monitoring based on stereo vision is proposed. After analyzing the characteristics of a cross-axis camera system through experiments, an optimized stereo vision system is constructed; using this system, an intelligent worker surveillance-tracking system is implemented in which a target worker moving through the environment can be detected and tracked, and the worker's stereo location coordinates and moving trajectory in world space extracted. Experiments on moving-target surveillance-tracking show that the tracked target-center location maintains very low average error ratios of 1.82% and 1.11% in the horizontal and vertical directions, respectively, and that the error ratio between the calculated and measured 3D location coordinates of the target person averages a very low 2.5% over the test scenario. Accordingly, this paper suggests the practical feasibility of an intelligent stereo surveillance system for real-time tracking of a moving target worker and robust detection of the target's 3D location coordinates and moving trajectory in the real world.
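
Stereo localization of this kind rests on triangulation: for a rectified pair with focal length f and baseline B, a horizontal disparity d gives depth Z = fB/d. A minimal sketch of the geometry under a parallel-axis simplification (the paper's cross-axis rig would add a rectification step; all parameter values are illustrative):

```python
def triangulate(xl: float, xr: float, y: float,
                f: float = 800.0,   # focal length in pixels (illustrative)
                b: float = 0.12):   # camera baseline in metres (illustrative)
    """Recover (X, Y, Z) in metres for a matched point in a rectified stereo pair.

    xl, xr, y are pixel offsets from each image's principal point.
    """
    d = xl - xr          # horizontal disparity in pixels
    z = f * b / d        # similar triangles: Z = f * B / d
    return xl * z / f, y * z / f, z

print(triangulate(xl=42.0, xr=2.0, y=-15.0))   # (X, Y, Z) of the target point
```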

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, recent growing demand for mobile augmented reality requires efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand serves as the interface for the marker-less mobile AR system. To implement a marker-less mobile augmented system within the limited resources of a mobile device compared with desktop environments, we propose a method that extracts an optimal hand region to play the role of the marker and augments the object in real time using the camera attached to the mobile device. Optimal hand-region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the Rotating Calipers algorithm; the extracted rectangle takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. Experiments show that the proposed framework can effectively construct and control an augmented virtual object in mobile environments.
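
The two detection steps named above map onto standard OpenCV calls: an in-range mask in YCrCb space for skin, and cv2.minAreaRect, a rotating-calipers implementation, for the optimal rectangle. A minimal sketch, using commonly cited YCrCb skin thresholds rather than the paper's values (the input file name is hypothetical):

```python
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")                        # hypothetical camera frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Commonly used YCrCb skin thresholds (approximate, not the paper's values).
mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)             # largest skin blob = hand
rect = cv2.minAreaRect(hand)                          # rotating-calipers min-area rectangle
box = cv2.boxPoints(rect).astype(np.int32)            # 4 corners used as the "marker"
cv2.drawContours(frame, [box], 0, (0, 255, 0), 2)
```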

Distance and Speed Measurements of Moving Object Using Difference Image in Stereo Vision System (스테레오 비전 시스템에서 차 영상을 이용한 이동 물체의 거리와 속도측정)

  • 허상민;조미령;이상훈;강준길;전형준
    • Journal of the Korea Computer Industry Society / v.3 no.9 / pp.1145-1156 / 2002
  • A method to measure the speed and distance of a moving object using a stereo vision system is proposed. One of the most important factors in measuring the speed and distance of a moving object is the accuracy of object tracking. Accordingly, a background-image algorithm is adopted to track a rapidly moving object, and a local opening-operator algorithm is used to remove the object's shadow and noise. The extraction efficiency for the moving object is improved with an adaptive threshold algorithm that is independent of brightness variation. Because the left and right central points are compensated, the object's speed and distance can be measured more exactly. Using the background-image and local opening-operator algorithms reduces the computational load, making real-time processing of the moving object's speed and distance possible. Simulation results show that the background-image algorithm tracks the moving object more rapidly than the other algorithms, and that the adaptive threshold algorithm improves target-extraction efficiency by reducing the candidate areas. Since the target's central point is compensated using binocular parallax, the measurement error for the object's speed and distance is reduced: the error rates for the distance from the stereo camera to the moving object and for the object's speed are 2.68% and 3.32%, respectively.
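
The extraction pipeline described here is essentially frame differencing followed by data-driven thresholding and morphological cleanup. A minimal OpenCV sketch of that pipeline (Otsu thresholding stands in for the paper's adaptive threshold; file names and kernel size are illustrative):

```python
import cv2

prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)  # hypothetical frames
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(curr, prev)                        # difference image
# Otsu picks the threshold from the data, so it adapts to brightness changes.
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Opening removes small noise and shadow fragments (illustrative 3x3 kernel).
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

m = cv2.moments(mask)                                 # centroid of the moving region
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
```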


Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires a perception system that is more capable and robust than the conventional perception systems of intelligent vehicles. Single-sensor perception systems using cameras and laser radar, the most representative perception sensors, have been widely studied; these sensors provide object information such as distance and object features. The laser radar's distance information is used to perceive the road environment, including road structures, vehicles, and pedestrians, while the camera's image information is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor perception systems suffer from false positives and missed detections caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception in harsh environments. This paper describes a perception system for autonomous vehicles that fuses information to recognize road environments; in particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and under various environmental conditions with an autonomous vehicle.
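
Camera-LiDAR fusion of this kind typically starts by projecting laser returns into the image plane through the extrinsic calibration and camera intrinsics. A minimal sketch of that projection (K, R, and t are placeholders, not a calibrated rig):

```python
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # placeholder intrinsics
R = np.eye(3)                                 # placeholder lidar->camera rotation
t = np.array([0.0, -0.3, 0.1])                # placeholder translation (metres)

def project(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to pixel coordinates, keeping points in front of the camera."""
    cam = points_lidar @ R.T + t              # into the camera frame (z forward)
    cam = cam[cam[:, 2] > 0]                  # drop points behind the image plane
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]             # perspective divide

print(project(np.array([[5.0, 0.5, 0.2], [10.0, -1.0, 0.0]])))
```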

Vision-based Mobile Robot Localization and Mapping using Fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot, and these features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map-building process, features are calculated for each segmented region and stored in the map database. Features are then calculated continuously for sequential input images and matched against the map; when some features are not matched, they are added to the map. This matching-and-updating process continues until map building is finished. Localization is used both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched to the existing map to estimate the robot's real position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50 m² region, the positioning accuracy is ±13 cm, and the error in the robot's positioning angle is ±3° for localization.
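
The radial-distortion removal step corresponds to the fisheye camera model in OpenCV, where calibrated intrinsics K and distortion coefficients D drive a fixed undistortion map. A minimal sketch (K and D are placeholders, not a real calibration):

```python
import cv2
import numpy as np

img = cv2.imread("ceiling.jpg")                       # hypothetical fisheye frame
h, w = img.shape[:2]

K = np.array([[300.0, 0, w / 2], [0, 300.0, h / 2], [0, 0, 1]])  # placeholder intrinsics
D = np.array([0.1, -0.05, 0.0, 0.0])                  # placeholder fisheye coefficients

# Build the undistortion maps once, then remap every incoming frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```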


Positive Random Forest based Robust Object Tracking (Positive Random Forest 기반의 강건한 객체 추적)

  • Cho, Yunsub;Jeong, Soowoong;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.107-116 / 2015
  • With the growth of digital devices, the proliferation of high-performance computers, and the availability of high-quality, inexpensive video cameras, demand for automated video analysis is increasing, especially in intelligent monitoring systems, video compression, and robot vision; this is why object tracking has come into the spotlight in computer vision. Tracking is the process of locating a moving object over time using a camera, and handling the object's scale, rotation, and shape deformation is the most important issue in robust tracking. In this paper, we propose a robust object tracking scheme using a random forest. Specifically, an object detection scheme based on region covariance and ZNCC (zero-mean normalized cross-correlation) is adopted to estimate an accurate object location. The detected region is then divided into five regions for random-forest-based learning, and these five regions are verified by the random forest; verified regions are put into the model pool. Finally, the input model is updated to correct the object location when the region does not contain the object. Experiments show that the proposed method locates the object more accurately than existing methods.
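
ZNCC matching is available directly in OpenCV: cv2.matchTemplate with TM_CCOEFF_NORMED computes a zero-mean normalized cross-correlation score at every offset. A minimal sketch of locating a target template in a frame (file names are hypothetical):

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical inputs
template = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

# TM_CCOEFF_NORMED subtracts each patch's mean and normalizes, i.e. ZNCC.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(scores)

h, w = template.shape
print(f"best ZNCC score {max_val:.3f} at top-left {max_loc}, box {w}x{h}")
```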

Laver(Kim) Thickness Measurement and Control System Design (해태(김)두께측정 및 조절 장치 설계)

  • Lee, Bae-Kyu;Choi, Young-Il;Kim, Jung-Hwa
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.226-233 / 2013
  • This study concerns a laver (dried seaweed) thickness measurement and control device for an automatic laver-drying machine. After the water is drained off, a fixed amount of the water-laver mixture is poured into a mold. During this process, the size and thickness (weight) of the laver sheet are determined by a vision sensor (camera): a constant LED light source illuminates the sheet, the camera captures its image, and the image values are transmitted in real time to an embedded computer. A built-in measurement and control application displays the measurements for each channel separately on a monitor and sends servo signals to each channel so that the set thickness is maintained. Previously, workers adjusted laver thickness manually with a lever, relying on their experience; by installing an actuator to drive the lever on each channel, the proposed device improves product quality and additionally yields productivity gains and labor savings.
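
The control loop sketched in this abstract amounts to estimating sheet thickness per channel from the camera image and nudging that channel's servo toward a setpoint. A minimal sketch under assumed names; the intensity-to-thickness calibration, gain, and setpoint are all hypothetical:

```python
import cv2

SETPOINT_MM = 0.55   # target sheet thickness (hypothetical)
GAIN = 2.0           # proportional gain (hypothetical)

def estimate_thickness_mm(gray) -> float:
    """Thicker laver transmits less of the LED backlight, so the image is darker.
    The linear calibration constant here is a placeholder, not the paper's model."""
    darkness = 1.0 - gray.mean() / 255.0
    return 1.2 * darkness

# Hypothetical per-channel camera frame of the sheet over the LED light source.
gray = cv2.imread("channel3.png", cv2.IMREAD_GRAYSCALE)
thickness = estimate_thickness_mm(gray)
correction = GAIN * (SETPOINT_MM - thickness)   # signed adjustment for this channel's servo
print(f"thickness {thickness:.2f} mm -> servo correction {correction:+.2f}")
```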