• Title/Summary/Keyword: camera vision


The Container Pose Measurement Using Computer Vision (컴퓨터 비젼을 이용한 컨테이너 자세 측정)

  • 주기세
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.702-707
    • /
    • 2004
  • This article is concerned with container pose estimation using a CCD camera and a range sensor. In particular, the issues of characteristic point extraction and image noise reduction are addressed. The Euler-Lagrange equation for Gaussian and random noise reduction is introduced, and the alternating direction implicit (ADI) method is applied to solve this equation, which is based on a partial differential equation (PDE). The vertex points of a container and a spreader are found as characteristic points using a k-order curvature calculation algorithm, since the golden section and bisection algorithms cannot handle the local minimum and maximum problems. The proposed preprocessing algorithm is effective for image denoising. Furthermore, the proposed system using a camera and a range sensor is very low-cost, since the existing system can be reused without reconstruction.
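As a rough illustration of the k-order curvature idea mentioned in the abstract, the sketch below (our own minimal version, not the paper's implementation) computes a k-order curvature along a closed contour; vertex points show up as curvature maxima:

```python
import numpy as np

def k_curvature(contour: np.ndarray, k: int) -> np.ndarray:
    """k-order curvature of a closed contour (N x 2 array of points):
    the turning angle at each point, measured between the vectors to its
    k-th predecessor and k-th successor. 0 = straight, pi/2 = right angle."""
    n = len(contour)
    idx = np.arange(n)
    prev = contour[(idx - k) % n] - contour   # vector to k-th predecessor
    nxt = contour[(idx + k) % n] - contour    # vector to k-th successor
    cos = (prev * nxt).sum(axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))
```

Vertex (corner) points of a container outline would then be selected as local maxima of this curvature.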

Vision-based Real-time Velocity Detection Method (비젼 베이스 실시간 속도 검출 방법)

  • Kim Beom-Seok;Park Sung-Il;Ko Young-Hyuk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.301-304
    • /
    • 2006
  • Unlike the previously used fixed-camera method, this paper proposes a method that can measure the speed and count of vehicles in video. Vehicles driving at 50 km/h, 80 km/h, and 90 km/h were recorded on videotape; by drawing a 'begin-line mark' and an 'end-line mark' and measuring the times at which each vehicle crossed them, speeds of 47.57 km/h, 81.20 km/h, and 90.00 km/h were calculated from time and distance, demonstrating vehicle tracking and velocity detection in video.
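The time-and-distance speed computation described above reduces to the following trivial sketch; the mark distance and frame rate in the example are illustrative assumptions:

```python
def speed_kmh(distance_m: float, frames: int, fps: float) -> float:
    """Vehicle speed from the distance between the 'begin-line mark' and
    'end-line mark' and the number of video frames taken to cross it."""
    elapsed_s = frames / fps             # traversal time in seconds
    return distance_m / elapsed_s * 3.6  # m/s -> km/h
```

For example, a 20 m gap crossed in 45 frames at 30 fps gives 48 km/h.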


Repeatability Test for the Asymmetry Measurement of Human Appearance using General-purpose Depth Cameras (범용 깊이 카메라를 이용한 인체 외형 비대칭 측정의 반복성 평가)

  • Jang, Jun-Su
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.30 no.3
    • /
    • pp.184-189
    • /
    • 2016
  • Human appearance analysis is an important part of both Eastern and Western medical fields, such as Sasang constitutional medicine, rehabilitation medicine, and dental medicine. With the rapid growth of depth camera technology, 3D measurement has become popular in many applications, including the medical area. In this study, the feasibility of using depth cameras for asymmetry analysis of human appearance is examined. We introduce a 3D measurement system using two Microsoft Kinect depth cameras and fully automated asymmetry analysis algorithms based on computer vision technology. We compare the proposed automated method to the manual method usually used in asymmetry analysis. As a measure of repeatability, the standard deviations of the asymmetry indices are examined over 10 repeated experiments. Experimental results show that the standard deviation of the automated method (1.00 mm for the face, 1.22 mm for the body) is better than that of the manual method (2.06 mm for the face, 3.44 mm for the body) on the same 3D measurements. We conclude that the automated method using depth cameras is applicable to practical asymmetry analysis and can contribute to reliable human appearance analysis.
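The repeatability measure used above, the standard deviation of an asymmetry index over repeated measurements, can be sketched as follows; whether the paper uses the sample or population standard deviation is not stated, so the sample version here is an assumption:

```python
import numpy as np

def repeatability_sd(asymmetry_mm: np.ndarray) -> float:
    """Repeatability of an asymmetry index: sample standard deviation
    over repeated measurements of the same subject, in millimeters."""
    return float(np.std(asymmetry_mm, ddof=1))
```

A smaller value means the measurement pipeline reproduces the same index more consistently.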

Surface Inspection Algorithm using Oriented Bounding Box (회전 윤곽 상자를 이용한 표면 검사 알고리즘)

  • Hwang, Myun Joong;Chung, Seong Youb
    • Journal of Institute of Convergence Technology
    • /
    • v.6 no.1
    • /
    • pp.23-26
    • /
    • 2016
  • DC motor shafts have several defects, such as double cuts, deep scratches on the surface, and defects in diameter and length. The deep scratches are due to collisions among shafts, so they are long and thin, but their orientations are random. If the smallest enclosing box, i.e., the oriented bounding box of a defective point group, is found, then the size of the corresponding defect can be modeled as its diagonal length. This paper proposes a surface inspection algorithm for DC motor shafts using the oriented bounding box. To evaluate the proposed algorithm, a test bed was built with a line-scan CCD camera (4096 pixels/line) and a two-roller mechanism to rotate the shaft. Experimental results on images pre-processed with a contrast stretching algorithm show that the proposed algorithm successfully finds 150 surface defects, and its computation time (0.291 ms) is fast enough for the 4-second requirement.
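A minimal sketch of the diagonal-length defect measure: here the oriented box is approximated by the principal axes of the point group (PCA), a common approximation rather than the paper's exact method; an exact minimum-area box would use rotating calipers (e.g. OpenCV's `cv2.minAreaRect`):

```python
import numpy as np

def obb_diagonal(points: np.ndarray) -> float:
    """Defect size modeled as the diagonal length of an oriented bounding
    box fitted to a defective point group (N x 2), using PCA axes."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
    proj = centered @ vt.T                        # coordinates in rotated frame
    extent = proj.max(axis=0) - proj.min(axis=0)  # box width and height
    return float(np.hypot(extent[0], extent[1]))
```

Because scratches are long and thin but randomly oriented, the oriented (rather than axis-aligned) box keeps the diagonal close to the true scratch length.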

Development of Vision Algorithm Contents Using an AI Camera Block (AI Camera Block을 사용한 비전 알고리즘 콘텐츠 개발)

  • Lim, Tae Yoon;An, Jae-Yong;Oh, Junhyeok;Kim, Dong-Yeon;Won, JinSub;Hwang, Jun Ho;Do, Youngchae;Woo, Deok Ha;Lee, Seok
    • Annual Conference of KIPS
    • /
    • 2019.10a
    • /
    • pp.840-843
    • /
    • 2019
  • As the IoT industry develops, smart toys that combine conventional toys with IoT technology are gaining attention. Unlike passive conventional toys, smart toys support interaction between toys and, using electronic sensors, can provide coding-based contents to the children who use them. Existing smart toys stimulate curiosity at first, but interest tends to drop once children become accustomed to them. In this paper, to increase the fun factor of existing smart toys and enable the development of diverse contents, we developed new contents using an AI camera block that integrates Artificial Intelligence (AI) functions into a smart toy.

A Real-time Vision Inspection System at a Laver Production Line (해태 생산라인에서의 실시간 시각검사 시스템)

  • Kim, Gi-Weon;Kim, Bong-Gi
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.6
    • /
    • pp.1136-1140
    • /
    • 2007
  • This paper performs laver surface inspection using real-time image processing. The system detects defective laver on a laver production line. First, a laver image is read in real time using an area-scan CCD camera. The image is then converted into a binary image using a high-speed image processing board, and laver features are extracted from the binary image. Finally, surface defect detection is performed using the laver features; in this paper, we use the area feature of the laver image.
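The pipeline described above (binarize, then use the area feature) can be sketched as follows; the threshold value and the dark-on-bright assumption are ours, not from the paper:

```python
import numpy as np

def area_feature(gray: np.ndarray, threshold: int = 128) -> int:
    """Binarize a grayscale laver image and return the area feature:
    the number of foreground (dark) pixels."""
    return int((gray < threshold).sum())

def is_defective(gray: np.ndarray, lo: int, hi: int) -> bool:
    """Flag a sheet whose area feature falls outside an accepted range
    (the range would be tuned on known-good samples)."""
    area = area_feature(gray)
    return not (lo <= area <= hi)
```

A torn or folded sheet shifts the foreground pixel count out of the accepted range and is flagged.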

A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing
    • /
    • v.9 no.2
    • /
    • pp.7-19
    • /
    • 1993
  • In stereo vision, when two parallel-axis images are used, only a small portion of the object is contained, the B/H (base-line to height) ratio is limited by the size of the object, and the depth information is inaccurate. To overcome these difficulties, we take a non-parallel-axis image, rotated by θ about the y-axis, and match it against a parallel-axis image. The epipolar lines of the non-parallel-axis image differ from those of the parallel-axis image, so the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines become aligned in parallel. NCC (normalized cross-correlation) is used as the match measure, an area-based matching technique is used to find correspondences, and a 9×9 window size, chosen experimentally, is used. The focal length, which is necessary to obtain depth information for a given object, is calculated with the least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points from an object whose elevation varies up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
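The NCC match measure used above is standard; a minimal version for two equally sized windows (e.g. 9×9):

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized image windows:
    +1 for a perfect match (up to brightness gain and offset), -1 for an
    inverted one."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Area-based matching slides one window along the (rectified) epipolar line and keeps the disparity with the highest NCC.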

Real-Time Earlobe Detection System on the Web

  • Kim, Jaeseung;Choi, Seyun;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • v.10 no.4
    • /
    • pp.110-116
    • /
    • 2021
  • This paper proposes a real-time earlobe detection system using deep learning on the web. Existing deep learning-based detection methods typically find independent objects such as cars, mugs, cats, and people. We propose a way to receive an image through the camera of the user's device in a web environment and detect the earlobe on the server. First, we photograph the user's face with the device camera on the web so that the ears are visible. The photographed face is then sent to the server to locate the earlobe. Based on the detected results, an earring model is rendered on the user's earlobe on the web. We trained an existing YOLO v5 model using a dataset of about 200 images with bounding boxes annotated on the earlobes, and estimated the position of the earlobe through the trained model. The proposed method showed the performance of detecting earlobes and loading 3D models on the web in real time.

Hazy Particle Map-based Automated Fog Removal Method with Haziness Degree Evaluator Applied (Haziness Degree Evaluator를 적용한 Hazy Particle Map 기반 자동화 안개 제거 방법)

  • Sim, Hwi Bo;Kang, Bong Soon
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.9
    • /
    • pp.1266-1272
    • /
    • 2022
  • With the recent development of computer vision technology, image processing-based mechanical devices are being developed to realize autonomous driving. In foggy conditions, the images taken by the cameras of such machines become unclear due to the scattering and absorption of light, which lowers the object recognition rate and causes malfunctions. Safety is critical because malfunctions in autonomous driving lead to human casualties, so an efficient haze removal algorithm must be applied to the camera to increase the stability of the technology. Conventional haze removal methods perform haze removal regardless of the haze concentration of the input image, so excessive haze is removed and the quality of the resulting image deteriorates. In this paper, we propose an automatic haze removal method that removes haze according to the haze density of the input image by applying Ngo's Haziness Degree Evaluator (HDE) to Kim's haze removal algorithm using a Hazy Particle Map. The proposed method removes haze according to the haze concentration of the input image, thereby preventing quality degradation of images that do not require haze removal and solving the problem of excessive haze removal. The superiority of the proposed method is verified through qualitative and quantitative evaluation.
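The automation the abstract describes is essentially a gating step: estimate haziness first, then dehaze only as strongly as needed. The sketch below shows only that control flow; the actual HDE and Hazy Particle Map computations are in the cited papers, and `dehaze_fn` here is a placeholder:

```python
def dehaze_if_needed(img, haziness, dehaze_fn, threshold=0.5):
    """Run haze removal only when the estimated haziness (0 = clear,
    1 = dense fog) exceeds a threshold, scaling strength by the estimate;
    clear images pass through untouched, avoiding over-removal."""
    if haziness <= threshold:
        return img                           # no removal needed
    return dehaze_fn(img, strength=haziness)
```

The threshold value 0.5 is an illustrative assumption, not a figure from the paper.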

Development of Color Recognition Algorithm for Traffic Lights using Deep Learning Data (딥러닝 데이터 활용한 신호등 색 인식 알고리즘 개발)

  • Baek, Seoha;Kim, Jongho;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.14 no.2
    • /
    • pp.45-50
    • /
    • 2022
  • Vehicle motion in urban environments is determined by the surrounding traffic flow, which makes understanding that flow a dominant factor in the motion planning of the vehicle. The traffic flow in urban environments is assessed using various urban infrastructure information. This paper presents a color recognition algorithm for traffic lights to perceive the traffic condition, which is key information among the various urban infrastructure data. An open-source deep learning-based vision model localizes the traffic lights around the host vehicle. The detections are processed into input data based on whether they lie on the route of the ego vehicle. The colors of the traffic lights are estimated from pixel values in the camera image. The proposed algorithm is validated in intersection situations with traffic lights on a test track. The results show that the proposed algorithm guarantees precise recognition of the traffic lights associated with the ego vehicle's path in urban intersection scenarios.
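A toy stand-in for the pixel-value-based color estimation step; the channel ratios below are illustrative assumptions, not the paper's thresholds:

```python
import numpy as np

def light_color(rgb_roi: np.ndarray) -> str:
    """Classify a traffic-light region of interest (H x W x 3, RGB)
    by its mean channel values."""
    r, g, b = rgb_roi.reshape(-1, 3).mean(axis=0)
    if r > 1.5 * g and r > 1.5 * b:
        return "red"
    if g > 1.2 * r and g > 1.2 * b:
        return "green"
    if r > 1.2 * b and g > 1.2 * b:   # red and green both high -> yellow
        return "yellow"
    return "unknown"
```

In practice the ROI would be the lamp region cropped from the detector's bounding box, restricted to lights on the ego vehicle's route.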