• Title/Summary/Keyword: camera vision

Search Results: 1,389

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il; Kang, Rae-Won; Lee, Sang-Jun; Ju, Woo-Suk; Lee, Joan-Jae
    • Journal of Korea Multimedia Society, v.10 no.6, pp.716-725, 2007
  • In this paper, we describe a motion tracking algorithm for 3D human animation using a stereo vision system. Motion data for the end effectors of the human body are extracted by tracking their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at any position and then separate, an adaptive algorithm recognizes which is the left and which is the right. Real motion is motion in 3-D coordinates, whereas a monocular image provides only 2-D coordinates and cannot capture distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion data such as left-right and vertical motion as well as the distance of objects from the camera. Transforming the x- and y-coordinates of a mono image into 3D coordinates requires a depth value; this depth (z-axis) value is calculated from the stereo disparity of only the end effectors in the images. The positions of the inner joints are then calculated, and the 3D character is visualized using inverse kinematics.
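The depth recovery the abstract describes follows the standard stereo relation: for a rectified camera pair with focal length f (in pixels) and baseline B, depth is Z = f·B/d, where d is the disparity in pixels. A minimal sketch of that step (the numbers are illustrative, not taken from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair.

    f_px         -- focal length in pixels
    baseline_m   -- distance between the camera centers, in meters
    disparity_px -- horizontal pixel offset of the same point
                    between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# An end effector seen with 32 px disparity by an 800 px focal length,
# 0.12 m baseline rig lies 3.0 m from the cameras.
z = depth_from_disparity(800, 0.12, 32)
print(z)  # 3.0
```

Only the end effectors need this computation, as the abstract notes; the inner joints are filled in afterwards by inverse kinematics.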


An Optimal Position and Orientation of Stereo Camera (스테레오 카메라의 최적 위치 및 방향)

  • Choi, Hyeung-Sik; Kim, Hwan-Sung; Shin, Hee-Young; Jung, Sung-Hun
    • Journal of Advanced Navigation Technology, v.17 no.3, pp.354-360, 2013
  • A stereo vision analysis was performed for the motion and depth control of unmanned vehicles. In stereo vision, depth information in three-dimensional coordinates can be obtained by triangulation after identifying corresponding points between the stereo images. However, triangulation always contains errors arising from several causes. Such errors can be alleviated by careful arrangement of the camera position and orientation. In this paper, an approach to determining the optimal camera position and orientation for unmanned vehicles is presented.
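The sensitivity of triangulation to camera arrangement can be made concrete with the usual first-order bound: for a rectified pair, a disparity error of Δd produces a depth error of roughly ΔZ ≈ Z²·Δd/(f·B), so error grows quadratically with range and shrinks as the baseline widens. A hedged sketch of that relation (values are illustrative, not the paper's):

```python
def depth_error(z_m, f_px, baseline_m, disparity_err_px=0.5):
    """First-order depth uncertainty of stereo triangulation:
    dZ ~ Z^2 * dd / (f * B).

    z_m              -- true depth of the point, in meters
    f_px             -- focal length in pixels
    baseline_m       -- stereo baseline in meters
    disparity_err_px -- assumed matching error in pixels
    """
    return z_m**2 * disparity_err_px / (f_px * baseline_m)

# Doubling the baseline halves the expected depth error at 5 m range.
e_narrow = depth_error(5.0, 800, 0.10)
e_wide = depth_error(5.0, 800, 0.20)
print(e_narrow, e_wide)  # 0.15625 0.078125
```

This is the quantity an optimal-placement study trades off against field-of-view overlap and occlusion.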

A Study on the Control Characteristics of Line Scan Light Source for Machine Vision Line Scan Camera (머신 비전 라인 스캔 카메라를 위한 라인 스캔 광원의 제어 특성에 관한 연구)

  • Kim, Tae-Hwa; Lee, Cheon
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers, v.34 no.5, pp.371-381, 2021
  • A machine vision inspection system consists of a camera, optics, illumination, and an image acquisition system. In particular, a scanning system must be built to measure a large inspection area, so a machine vision line scan camera needs a line scan light source, which should provide high intensity and a uniform intensity distribution. In this paper, offset calibration and slope calibration methods are introduced to obtain a uniform light intensity profile. Offset calibration removes the intensity deviation among channels by adding the intensity difference; slope calibration removes the variation among channels in the slope of intensity versus control step by applying a multiplicative slope correction. Applying offset and slope calibration together yields an improved light intensity profile, which helps produce clearer, higher-precision images in a machine vision inspection system.
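The two corrections can be read as a per-channel affine model: each channel's output is I = offset + slope·step, and calibration remaps every channel onto a common reference response. A minimal sketch under that interpretation (the channel values and reference line are invented, not from the paper):

```python
def uniform_profile(raw, offsets, slopes, ref_offset, ref_slope):
    """Remap per-channel intensities onto a common reference response.

    raw[i] is the measured intensity of channel i, modeled as
    raw[i] = offsets[i] + slopes[i] * step.  The offset correction
    cancels offsets[i]; the slope correction rescales by the ratio
    ref_slope / slopes[i], so all channels follow the same line.
    """
    out = []
    for intensity, off, slp in zip(raw, offsets, slopes):
        step = (intensity - off) / slp        # recover the control step
        out.append(ref_offset + ref_slope * step)  # reference response
    return out

# Two channels driven at the same control step (4) but with different
# offsets and slopes produce unequal raw readings; after calibration
# both report the same intensity.
print(uniform_profile([18, 22], [10, 12], [2.0, 2.5], 10, 2.0))  # [18.0, 18.0]
```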

Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKF Methods for Slender Bar Placement (얇은막대 배치작업에 대한 N-R 과 EKF 방법을 이용하여 개발한 로봇 비젼 제어알고리즘의 평가)

  • Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun
    • Transactions of the Korean Society of Mechanical Engineers A, v.37 no.4, pp.447-459, 2013
  • Many problems must be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model in robot control algorithms based on visual information, active compensation of the camera's focal length and orientation during robot motion, and the mapping of physical 3-D space into 2-D camera coordinates. An algorithm is proposed that enables the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the calibration problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R (Newton-Raphson) and EKF (extended Kalman filter) methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKF and N-R methods are compared experimentally by having the robot perform a slender-bar placement task.
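The N-R method referenced here is ordinary Newton-Raphson iteration on the residual between predicted and observed image coordinates. As a toy illustration of the iteration itself (a 1-D stand-in, not the paper's six-parameter camera model):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson: x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Recover a scale parameter s from the residual f(s) = s**2 - 2 = 0.
s = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(round(s, 6))  # 1.414214
```

In the multi-parameter vision case the scalar derivative becomes a Jacobian of image residuals with respect to the camera parameters, and the EKF alternative updates the same parameters recursively as each new image arrives.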

Vision Inspection for Flexible Lens Assembly of Camera Phone (카메라 폰 렌즈 조립을 위한 비전 검사 방법들에 대한 연구)

  • Lee I.S.; Kim J.O.; Kang H.S.; Cho Y.J.; Lee G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference, 2006.05a, pp.631-632, 2006
  • The assembly of camera lens modules for mobile phones has not yet been automated; they are still assembled manually because of the high precision required of all parts and the difficulty of recognizing the lens with a vision camera. In addition, the very short life cycle of camera phone lenses requires flexible and intelligent automation. This study proposes a fast and accurate part identification system that distributes cameras over a 4-degree-of-freedom assembly robot system. Single or multiple cameras can be installed according to the part's image capture and processing mode. The system has an agile structure that allows adaptation with minimal job changes. The framework is proposed, and experimental results demonstrate its effectiveness.


Sensitivity Analysis of Excavator Activity Recognition Performance based on Surveillance Camera Locations

  • Yejin SHIN; Seungwon SEO; Choongwan KOO
    • International conference on construction engineering and project management, 2024.07a, pp.1282-1282, 2024
  • Given the widespread use of intelligent surveillance cameras at construction sites, recent studies have introduced vision-based deep learning approaches focused on enhancing the performance of excavator activity recognition, so that productivity metrics such as activity time and work cycle can be monitored automatically. However, developing such a model requires a large amount of training data, i.e., videos captured from actual construction sites, and the complexity of dynamic working environments and security concerns at construction sites limit the collection of such videos from various surveillance camera locations. This degrades the performance of excavator activity recognition models, reducing the accuracy and efficiency of heavy-equipment productivity analysis. To address these limitations, this study conducted a sensitivity analysis of excavator activity recognition performance with respect to surveillance camera location, using synthetic videos generated in a game-engine-based virtual environment (Unreal Engine). Various camera placement scenarios were devised, considering horizontal distance (20 m, 30 m, and 50 m), vertical height (3 m, 6 m, and 10 m), and horizontal angle (0° front view, 90° side view, and 180° backside view). Performance analysis employed a 3D ResNet-18 model with transfer learning, yielding approximately 90.6% accuracy. The main findings revealed that horizontal distance significantly impacted model performance: overall accuracy decreased with increasing distance (76.8% at 20 m, 60.6% at 30 m, and 35.3% at 50 m), and videos captured from a 20 m horizontal distance (close range) exhibited accuracy above 80% in most scenarios. Accuracy trends also varied with vertical height and horizontal angle: at 0° (front view), accuracy mostly decreased with increasing height, while at 90° (side view) it increased with height; at 180° (backside view), feature extraction was limited because the excavator's bucket and arm were occluded. Based on these results, future studies should focus on enhancing the performance of vision-based recognition models by determining optimal surveillance camera locations at construction sites, applying deep learning algorithms for video super-resolution, and establishing large training datasets from synthetic videos generated in game-engine-based virtual environments.

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.8 no.6, pp.26-34, 1999
  • This study presents an alternative method for determining an object's position based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves six bilinear view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is determined using a sequential estimation scheme that combines data for the unknown points from the 2-D image plane of each camera. This vision control method is robust and reliable, overcoming difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and compatible.


Development of multi-line laser vision sensor and welding application (멀티 라인 레이저 비전 센서를 이용한 고속 3차원 계측 및 모델링에 관한 연구)

  • 성기은; 이세헌
    • Proceedings of the Korean Society of Precision Engineering Conference, 2002.05a, pp.169-172, 2002
  • A vision sensor measures range data using a laser light source. Such sensors generally use a patterned laser shaped as a single line, but a single-line sensor cannot satisfy the trend toward faster and more precise processing. The sensor's sampling rate increases as image processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured in a single image. With a camera of the same sampling rate, the number of 2D range profiles per second is directly proportional to the number of laser lines; for example, a vision sensor using 5 laser lines can sample 150 profiles per second under ideal conditions.
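The throughput claim at the end is simple arithmetic: with the camera capped at 30 fps, profiles per second scale linearly with the number of projected laser lines. A sketch:

```python
def profiles_per_second(camera_fps, num_laser_lines):
    """Each frame yields one range profile per projected laser line,
    so ideal profile throughput = fps * lines."""
    return camera_fps * num_laser_lines

print(profiles_per_second(30, 1))  # 30
print(profiles_per_second(30, 5))  # 150, the 5-line sensor in the abstract
```

The trade-off, not shown here, is that the lines in each image must be disambiguated from one another, which adds image-processing cost per frame.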


Automatic Alignment and Mounting of FPCs Using Machine Vision (머신비전을 이용한 FPC의 자동정렬 및 장착)

  • Shin, Dong-Won
    • Journal of the Korean Society of Manufacturing Process Engineers, v.6 no.3, pp.24-30, 2007
  • FPCs (Flexible Printed Circuits) are currently used in several electronic products, such as digital cameras and cellular phones, because of their flexible material characteristics. Because an FPC is usually small and flexible, individual FPCs do not enter the chip mounting process one at a time; instead, several FPCs are placed on a large rigid pallet that enters the chip mounting process. Currently, mounting FPCs on the pallet is done entirely manually. Thus, the goal of this research is to develop an automatic machine for mounting FPCs on a pallet using vision alignment. Instead of using two cameras or one moving camera, the proposed vision system uses only one fixed camera. Moreover, two picker heads that can handle two FPCs simultaneously are used to shorten the process time. The procedure is first to measure the alignment error of the FPC, then to correct the error, and finally to mount the well-aligned FPC on the pallet. Vision technology is used to measure the alignment error accurately, and precision motion control is used to correct the error and mount the FPC.


A study on the automatic wafer alignment in semiconductor dicing (반도체 절단 공정의 웨이퍼 자동 정렬에 관한 연구)

  • 김형태; 송창섭; 양해정
    • Journal of the Korean Society for Precision Engineering, v.20 no.12, pp.105-114, 2003
  • In this study, a dicing machine with a vision system was built, and an automatic alignment algorithm was developed for a dual-camera system. The system has macro and micro inspection tools, and the algorithm was formulated from geometric relations. When a wafer is placed on the cutting stage within a certain range, it is inspected by the vision system and compared with a standard pattern. The difference between the patterns is analyzed and evaluated, and the stage is then moved along its x, y, and θ axes to compensate for the difference. The amount of compensation is calculated from the vision inspection result through the automatic alignment algorithm. The stage moves to the compensated position and is inspected again by the vision system to verify the result. The accuracy and validity of the algorithm are discussed based on these data.
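The x, y, θ compensation step amounts to recovering the planar offset between the observed wafer pattern and the standard pattern and commanding the stage to cancel it. A hedged sketch of that bookkeeping (pure geometry; the machine's actual interfaces and fiducial measurement are not shown):

```python
def alignment_correction(observed, reference):
    """Stage compensation for wafer alignment.

    observed, reference -- (x_mm, y_mm, theta_deg) poses of a fiducial
    pattern: the vision measurement and the standard pattern.
    Returns the (dx, dy, dtheta) stage motion that cancels the error.
    """
    dx = reference[0] - observed[0]
    dy = reference[1] - observed[1]
    dtheta = reference[2] - observed[2]
    return dx, dy, dtheta

# Wafer measured 0.2 mm right, 0.1 mm up, and rotated 0.8 degrees
# relative to the standard pattern.
dx, dy, dth = alignment_correction((10.2, 5.1, 0.8), (10.0, 5.0, 0.0))
print(round(dx, 3), round(dy, 3), round(dth, 3))  # -0.2 -0.1 -0.8
```

After the move, the abstract's re-inspection step would repeat the measurement and confirm the residual error is within tolerance.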