• Title/Summary/Keyword: camera vision

Search Results: 1,386

Traffic Data Calculation Solution for Moving Vehicles using Vision Tracking (Vision Tracking을 이용한 주행 차량의 교통정보 산출 기법)

  • Park, Young ki;Im, Sang il;Jo, Ik hyeon;Cha, Jae sang
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.5 / pp.97-105 / 2020
  • Recently, smart cities have created demand for technologies that acquire and manage traffic information using intelligent road infrastructure. Various technologies such as loop detectors, ultrasonic detectors, and image detectors have been used to analyze road traffic, but they have difficulty collecting the diverse information, such as traffic density and queue length, required to build a traffic information DB for moving vehicles. Therefore, this paper assumes a smart city built on a camera infrastructure such as intelligent roadside CCTV and presents a solution for calculating a traffic DB of moving vehicles using vision tracking with road CCTV cameras. Simulations verifying basic performance were conducted, and the solution is expected to be useful in related fields as a new intelligent traffic DB calculation method that reflects road-mounted CCTV cameras and moving vehicles in variable smart city road environments.
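
As an illustration of the two quantities the abstract names (traffic density and queue length), the following minimal sketch derives both from a hypothetical list of tracked vehicles; the function name, input format, and stop-speed threshold are assumptions for illustration, not from the paper:

```python
def traffic_stats(vehicles, segment_len_m, stop_speed_mps=1.0):
    """Derive density (veh/km) and queue length (m) from tracked vehicles.

    vehicles: list of (position_m, speed_mps) tuples, with position measured
    from the stop line along the observed road segment.
    """
    density = len(vehicles) / (segment_len_m / 1000.0)  # vehicles per km
    stopped = [p for p, v in vehicles if v <= stop_speed_mps]
    queue_len = max(stopped) if stopped else 0.0  # farthest stopped vehicle
    return density, queue_len
```

For example, three tracked vehicles on a 100 m segment give a density of 30 veh/km, and the queue extends to the farthest vehicle below the stop-speed threshold.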

Precision Evaluation of Expressway Incident Detection Based on Dash Cam (차량 내 영상 센서 기반 고속도로 돌발상황 검지 정밀도 평가)

  • Sanggi Nam;Younshik Chung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.114-123 / 2023
  • With the development of computer vision technology, video sensors such as CCTV cameras are being used to detect incidents. However, most incidents are currently detected with fixed imaging equipment, so detection is limited in shaded areas beyond the coverage of fixed cameras. With the recent development of edge computing, real-time analysis of mobile image data has become possible. The purpose of this study is to evaluate the feasibility of detecting expressway incidents by applying computer vision technology to dash cams. To this end, annotation data were constructed from 4,388 dash cam still frames collected by the Korea Expressway Corporation and analyzed with the YOLO algorithm. The prediction accuracy for all objects exceeded 70%, and the precision for traffic accidents was about 85%. The mAP (mean Average Precision) was 0.769; by object, the AP (Average Precision) was highest for traffic accidents at 0.904 and lowest for debris at 0.629.
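
The mAP figure quoted above is simply the unweighted mean of the per-class APs. A minimal sketch; the two APs given in the abstract are real, but the other class names and values here are placeholders for illustration:

```python
def mean_average_precision(ap_per_class):
    """mAP is the unweighted mean of the per-class average precisions."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# "accident" and "debris" APs are from the abstract; the rest are invented.
aps = {"accident": 0.904, "debris": 0.629, "stopped_vehicle": 0.80, "pedestrian": 0.74}
overall = mean_average_precision(aps)
```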

Reliability Evaluation of ACP Component under a Radiation Environment (방사선환경에서 ACP 주요부품의 신뢰도 평가)

  • Lee, Hyo-Jik;Yoon, Kwang-Ho;Lim, Kwang-Mook;Park, Byung-Suk;Yoon, Ji-Sup
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.5 no.4 / pp.309-322 / 2007
  • This study deals with irradiation effects on selected components used in the Advanced Spent Fuel Conditioning Process (ACP). Irradiation-tested components have a high reliability priority because their degradation or failure can critically affect the performance of ACP equipment. The components chosen for the irradiation tests were an AC servo motor, potentiometers, thermocouples, an accelerometer, and a CCD camera. The ACP facility uses a number of AC servo motors to move the joints of a manipulator and to operate process equipment. Potentiometers measure several joint angles in a manipulator. Thermocouples measure temperature in an electrolytic reduction reactor, a vol-oxidation reactor, and a molten salt transfer line. An accelerometer is installed in a slitting machine to forecast incipient failure during the slitting process. A small CCD camera is used for in-situ vision monitoring between ACP campaigns. A gamma-irradiation facility with a cobalt-60 source was used because, among the various types of radiation, gamma rays are the most significant for electric, electronic, and robotic components. Irradiation tests were carried out long enough for the total doses to exceed the expected threshold values. All components except the CCD camera showed very high radiation-hardening characteristics. Characteristic changes at different total doses were investigated, and the threshold doses below which performance is warranted without deterioration were evaluated from the irradiation tests.


3D Reconstruction using a Moving Planar Mirror (움직이는 평면거울을 이용한 3차원 물체 복원)

  • 장경호;이동훈;정순기
    • Journal of KIISE:Software and Applications / v.31 no.11 / pp.1543-1550 / 2004
  • Modeling from images is a cost-effective means of obtaining 3D geometric models, which can be constructed with the classical Structure from Motion (SfM) algorithm. However, it is difficult to reconstruct whole scenes with SfM, since general sites contain very complex shapes and colors. To overcome this difficulty, this paper proposes a new reconstruction method based on a moving planar mirror. We use the mirror's posture, instead of the scene itself, as a cue for reconstructing the geometry; in effect, geometric cues are forcibly inserted into the scene. With this method, geometric details can be obtained regardless of scene complexity. We first capture image sequences through the moving mirror containing the scene of interest and then calibrate the camera from the mirror's posture. Since the calibration results remain inaccurate due to detection error, the camera pose is refined using frame correspondences of corner points, which are easily obtained from the initial camera posture. Finally, 3D information is computed from the set of calibrated image sequences. We validate our approach with experiments on several complex objects.
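
The final step, computing 3D information from calibrated image sequences, typically reduces to triangulation. The following is a minimal linear (DLT) triangulation sketch assuming known 3x4 projection matrices; it is the generic textbook method, not necessarily the paper's exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) observations of the same point in each image.
    Returns the 3D point in non-homogeneous coordinates.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null vector of A (least-squares solution)
    return X[:3] / X[3]      # dehomogenize
```

With noise-free observations the recovered point matches the ground truth; in practice the result seeds a nonlinear refinement.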

Augmented Reality System using Planar Natural Feature Detection and Its Tracking (동일 평면상의 자연 특징점 검출 및 추적을 이용한 증강현실 시스템)

  • Lee, A-Hyun;Lee, Jae-Young;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.49-58 / 2011
  • Typically, vision-based AR systems operate on prior knowledge of the environment, such as a square marker. Traditional marker-based AR systems are limited in that the marker must lie within the sensing range. There have therefore been considerable research efforts on real-time camera tracking, in which the system adds unknown 3D features to its feature map so that registration is maintained even when the reference map is out of the sensing range. In this paper, we describe a real-time camera tracking framework designed to track a monocular camera in a desktop workspace. The basic idea of the proposed scheme is that real-time camera tracking is achieved on the basis of a plane tracking algorithm. We also suggest a method for re-detecting features to maintain registration of virtual objects, which copes with features that can no longer be tracked once they leave the sensing range. The main advantages of the proposed system are low computational cost and convenience, and it is applicable to augmented reality systems in mobile computing environments.
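
Plane tracking of this kind generally rests on estimating the homography that maps feature points on the tracked plane between views. A minimal DLT homography sketch under that assumption (a generic method, not the paper's implementation):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst (>= 4 points)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear constraints per correspondence on the 9 entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

Once the plane-to-image homography is known per frame, the camera pose relative to the plane can be decomposed from it.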

Technical-note : Real-time Evaluation System for Quantitative Dynamic Fitting during Pedaling (단신 : 페달링 시 정량적인 동적 피팅을 위한 실시간 평가 시스템)

  • Lee, Joo-Hack;Kang, Dong-Won;Bae, Jae-Hyuk;Shin, Yoon-Ho;Choi, Jin-Seung;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics / v.24 no.2 / pp.181-187 / 2014
  • In this study, a real-time evaluation system for quantitative dynamic fitting during pedaling was developed. The system consists of LED markers, a digital camera connected to a computer, and a marker-detection program. LED markers are attached to the hip, knee, and ankle joints and the fifth metatarsal in the sagittal plane. The PlayStation 3 Eye, selected as the main digital camera, has many merits for motion capture, such as a high frame rate of about 180 FPS (frames per second), 320×240 resolution, and low cost with ease of use. The marker-detection program was written in LabVIEW 2010 with Vision Builder and is made up of three parts: image acquisition and processing, marker detection and joint-angle calculation, and output. The camera image was acquired at 95 FPS, and the program measures the lower-limb joint angles in real time, displays them to the user as a graph, and saves them as a text file. The system was verified by pedaling at three saddle heights (knee angle: 25, 35, 45°) and three cadences (30, 60, 90 rpm) at each saddle height, using the Holmes method of measuring lower-limb angles to determine saddle height. The results showed a low average error of 1.18±0.44° and a strong correlation of 0.99±0.01. There was little error due to changes in saddle height, but absolute error varied with cadence. Considering that the average error is approximately 1°, the system is suitable for quantitative dynamic fitting evaluation. In future work, error should be reduced by using two digital cameras covering the frontal and sagittal planes.
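
A joint angle of the kind the program computes follows from three marker positions via the angle at the middle marker. A minimal sketch assuming 2D sagittal-plane coordinates; the function is illustrative, not taken from the paper's LabVIEW code:

```python
import math

def joint_angle(a, b, c):
    """Inner angle (degrees) at vertex b formed by 2D markers a-b-c,
    e.g. the knee angle from hip (a), knee (b) and ankle (c) markers."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))
```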

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.263-278 / 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, because of pets' behavior, scale-based weight measurement, which is important in automatic feeding, is easily damaged and broken. The 3D camera method is costly, and the 2D camera method has relatively poor accuracy compared to the 3D method. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight using only a 2D camera. Various convolutional neural networks were compared, and among them a ResNet101-based model showed the best performance: a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which is commercially viable in both technical and financial terms. The results of this study can help practitioners predict the weight of a standardized object such as feed from a simple 2D image.
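
The two reported metrics, mean absolute error and mean absolute percentage error, are standard regression metrics and can be computed as follows (a generic sketch, not the paper's code):

```python
def mae(pred, true):
    """Mean absolute error, in the units of the target (here: grams)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def mape(pred, true):
    """Mean absolute percentage error; assumes no true value is zero."""
    return 100.0 * sum(abs(p - t) / t for p, t in zip(pred, true)) / len(true)
```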

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho;Song, Eui-Han;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Bio-Environment Control / v.24 no.3 / pp.210-216 / 2015
  • The purpose of this study was to develop an image processing algorithm that recognizes paprika and acquires its 3D coordinates from stereo images, in order to precisely control the end-effector of an automatic paprika harvester. First, H and S thresholds were set using HSI histogram analysis to extract ROIs (regions of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to match the extracted ROIs between corresponding images. Epipolar lines were obtained from the F matrix, and an 11×11 mask was used to compare pixels along each line. Distances between the extracted corresponding points were calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to establish the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program calculates the horizontal (X) and vertical (Y) coordinates from the stereo camera's geometry. The average error was 5.3 mm in the horizontal coordinate, 18.8 mm in the vertical coordinate, and 5.4 mm in depth. Most of the error occurred at depths of 400~450 mm and in distorted regions of the image.
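
The disparity-depth relation that the nonlinear regression models is, for an ideal rectified stereo rig, Z = f·B/d. A minimal sketch of that ideal formula; the paper fits a regression precisely because real rigs deviate from it:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Ideal rectified-stereo depth: Z = f * B / d.

    focal_px: focal length in pixels; baseline_mm: camera separation;
    disparity_px: horizontal pixel offset between matched points.
    """
    return focal_px * baseline_mm / disparity_px
```

For example, a 700 px focal length and 60 mm baseline with a 105 px disparity give a depth of 400 mm, in the range where the paper reports most of its error.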

Active Object Tracking System based on Stereo Vision (스테레오 비젼 기반의 능동형 물체 추적 시스템)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.4 / pp.159-166 / 2016
  • In this paper, an active object tracking system based on a pan/tilt-embedded stereo camera is proposed and implemented. In the proposed system, the face area of a target is detected from the input stereo image using a YCbCr color model and a phase-type correlation scheme; then, using this data together with the geometric information of the tracking system, the distance and 3D information of the target are extracted in real time. Based on these data, the pan/tilt-embedded stereo camera is adaptively controlled, so the proposed system can track the target under various circumstances. In experiments using 480 frames of test input stereo images, the standard deviation between the measured and estimated target height was kept to a very low 1.03 on average, and the error ratio between the measured and computed 3D coordinates of the target to 1.18% on average. These good experimental results suggest the feasibility of a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme.
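
Adaptive pan/tilt control of this kind amounts to turning the pixel offset of the detected target into pan and tilt corrections. A minimal sketch under a pinhole camera model; the function and parameters are illustrative assumptions, not the paper's controller:

```python
import math

def pan_tilt_correction(u, v, cx, cy, focal_px):
    """Pan/tilt angles (degrees) that bring pixel (u, v) to the image
    centre (cx, cy), assuming a pinhole camera with focal length focal_px."""
    pan = math.degrees(math.atan2(u - cx, focal_px))
    tilt = math.degrees(math.atan2(v - cy, focal_px))
    return pan, tilt
```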

Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min;Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.1 / pp.53-62 / 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (Unmanned Aerial System) fitted with a low-cost compact navigation device and camera has attracted great attention for fast and accurate acquisition of geospatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry because it can produce DEMs (Digital Elevation Models) and orthophotos rapidly, owing to the large amounts of high-resolution imagery collected by a low-cost camera and to image processing software combining computer vision techniques. With these advantages, UAV photogrammetry is being applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents the results of an accuracy test using images with 4 cm GSD from a fixed-wing UAS to demarcate parcel boundaries in an agricultural area. The accuracy of boundary points extracted from the UAS orthoimage was within 8 cm of terrestrial cadastral surveying, which satisfies the tolerance limit of distance error in cadastral surveying at a scale of 1:500. The area deviation was also negligibly small, about 0.2% (3.3 m²), against the true area of 1,969 m² from cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
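
The quoted area deviation of about 0.2% follows directly from the reported figures (a 3.3 m² deviation against the 1,969 m² reference area):

```python
def area_deviation_pct(measured_m2, reference_m2):
    """Percentage deviation of a measured parcel area from the reference."""
    return 100.0 * abs(measured_m2 - reference_m2) / reference_m2

# 3.3 m2 deviation on the 1,969 m2 parcel reported in the abstract
print(round(area_deviation_pct(1969.0 + 3.3, 1969.0), 1))  # 0.2
```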