• Title/Summary/Keyword: Real-time camera tracking

Development of Vision Sensor Module for the Measurement of Welding Profile (용접 형상 측정용 시각 센서 모듈 개발)

  • Kim C.H.;Choi T.Y.;Lee J.J.;Suh J.;Park K.T.;Kang H.S.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2006.05a / pp.285-286 / 2006
  • The essential tasks in operating a welding robot are acquiring the position and/or shape of the parent metal. Many kinds of contact and non-contact sensors are used for seam tracking and robot automation, and the vision sensor has recently become the most popular. This paper describes the development of a system that measures the profile of the welding part. The complete system will be assembled into a compact module that can be attached to the head of a welding robot. It uses a line-type structured laser diode together with a vision sensor and implements Direct Linear Transformation (DLT) for camera calibration as well as radial distortion correction. The three-dimensional shape of the parent metal is obtained by a simple linear transformation, so the system operates in real time. Experiments are carried out to evaluate the performance of the developed system.
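
The calibration described above rests on the Direct Linear Transformation. As a rough illustration only (not the authors' implementation, and with the radial distortion correction omitted), a homogeneous least-squares DLT estimate of the 3x4 projection matrix from known world/image correspondences could look like this Python sketch:

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 world/image point pairs
    via the Direct Linear Transformation (homogeneous least squares)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P)          # remove the arbitrary scale

def project(P, world_pt):
    """Reproject a 3-D point with the estimated matrix (reprojection check)."""
    x = P @ np.append(np.asarray(world_pt, dtype=float), 1.0)
    return x[:2] / x[2]
```

Once the camera is calibrated, each detected point of the laser stripe can be converted to a 3-D point on the known laser plane by a single linear mapping, which is what lets the profile measurement run in real time.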

Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.556-565 / 2014
  • This paper proposes a method that tracks an object in real-time video transferred through a single web camera and recognizes risk situations and human actions. The proposed method recognizes basic actions that humans perform in daily life and detects risk situations such as fainting and falling down, in order to distinguish usual actions from risk situations. The method models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes human actions. Object tracking uses the moment information of the extracted object, and the recognition features are the change in moments and the ratio of the object's size between frames. Four of the most common daily actions are classified: walking, walking diagonally, sitting down, and standing up; suddenly falling down is classified as a risk situation. To test the proposed method, we applied it to web-camera video of eight participants, classified the human actions, and recognized the risk situations. The test results showed a recognition rate of more than 97 percent for each action and 100 percent for risk situations.
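
Purely to illustrate the kind of pipeline this abstract describes (background model, difference image, object extraction, then moment and size-ratio features), a minimal OpenCV sketch could look like the following; the MOG2 background model, the thresholds, and the largest-blob assumption are stand-ins, not the authors' method:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                          # single web camera
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
prev_area = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                          # difference w.r.t. modelled background
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)      # assume the largest blob is the person
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid from moments
            x, y, w, h = cv2.boundingRect(c)
            aspect = h / float(w)                   # object size ratio (height / width)
            area_change = (w * h) / prev_area if prev_area else 1.0
            prev_area = float(w * h)
            # The per-frame change of (cx, cy), aspect and area_change are the
            # kind of features a classifier could map to walking / sitting /
            # standing, with an abrupt aspect drop flagged as a fall.
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
```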

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used today. GPS can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS relies on satellite signals, it cannot be used indoors, where those signals cannot be received. For this reason, studies on indoor location tracking use very short range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive to deploy as the density of the target area increases. In addition, the typical exhibition environment contains many obstacles to the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on these older device technologies cannot provide a natural service to users. Moreover, because the system relies on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, this study suggests a technology that obtains the exact location of users through location mapping using Wi-Fi from the users' smartphones together with 3D cameras. We use the signal strength of wireless LAN access points to develop a lower-priced indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone itself can serve as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect can discriminate depth and human information inside its field of view, so it is well suited to extracting a user's body, motion vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the exact location, status, and behavior pattern information of the audience.
The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination in each area through discrimination based on body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.
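
As a toy illustration of the cell-ID idea (not the paper's system), the strongest visible access point can be mapped to an exhibition cell before the 3D camera client covering that cell refines the position; the AP names, cells, and RSSI values below are hypothetical:

```python
# Hypothetical AP-to-cell table; a real deployment would be surveyed on site.
CELL_BY_AP = {"ap-entrance": "cell-1", "ap-gallery-a": "cell-2", "ap-gallery-b": "cell-3"}

def locate_cell(scan_results):
    """scan_results: list of (ap_name, rssi_dBm) pairs from the phone's Wi-Fi scan."""
    visible = [(ap, rssi) for ap, rssi in scan_results if ap in CELL_BY_AP]
    if not visible:
        return None
    strongest_ap, _ = max(visible, key=lambda pair: pair[1])  # least negative RSSI wins
    return CELL_BY_AP[strongest_ap]

# Example: the phone sees three APs; the strongest known one selects the cell,
# and the Kinect client assigned to that cell supplies the fine position.
print(locate_cell([("ap-entrance", -71), ("ap-gallery-a", -54), ("ap-cafe", -40)]))  # cell-2
```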

Development of CanSat System for Vehicle Tracking based on Jetson Nano (젯슨 나노 기반의 차량 추적 캔위성 시스템 개발)

  • Lee, Younggun;Lee, Sanghyun;You, Seunghoon;Lee, Sangku
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.556-558 / 2022
  • This paper proposes a CanSat system with a vehicle tracking function based on the Jetson Nano, a high-performance small computer capable of running artificial intelligence algorithms. The CanSat system consists of a CanSat and a ground station. The CanSat falls through the atmosphere and transmits the data obtained from its onboard sensors to the ground station over wireless communication. Existing CanSats are limited to simply transmitting the collected information to the ground station, and the limited descent time and wireless bandwidth make it hard to perform missions efficiently. The Jetson Nano based CanSat proposed in this paper uses a pre-trained neural network model to detect the location of a vehicle in each image taken from the air in real time, and then uses a 2-axis motor to move the camera so that it tracks the vehicle.
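
A rough sketch of such a tracking loop is shown below; the detect_vehicle() neural-network call and the move_pan_tilt() motor command are hypothetical stubs standing in for the parts the abstract does not detail:

```python
import cv2

FRAME_W, FRAME_H = 1280, 720
GAIN = 0.05  # proportional gain in degrees per pixel of offset (assumed value)

def detect_vehicle(frame):
    """Stub for the pre-trained detector; should return (x, y, w, h) or None."""
    return None  # plug the actual neural network model in here

def move_pan_tilt(dpan_deg, dtilt_deg):
    """Stub for the 2-axis motor interface."""
    print(f"pan {dpan_deg:+.2f} deg, tilt {dtilt_deg:+.2f} deg")

def pixel_error_to_angles(bbox):
    """Convert the detection's offset from the image centre into motor commands."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    return (cx - FRAME_W / 2.0) * GAIN, (cy - FRAME_H / 2.0) * GAIN

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    bbox = detect_vehicle(frame)
    if bbox is not None:
        move_pan_tilt(*pixel_error_to_angles(bbox))  # keep the vehicle centred
cap.release()
```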

Real Time Face Tracking and Recognition using SVM-SMO with a Pan-Tilt Web-Camera (SVM-SMO와 Pan-Tilt 웹 카메라를 이용한 실시간 얼굴 추적과 얼굴 인식)

  • 이호근;김명훈;이지근;정성태
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.679-681 / 2004
  • Real-time face recognition from video captured by a web camera requires a fast and accurate system. This paper therefore focuses on implementing a system capable of real-time multi-face recognition using SVM, an object classification technique. The system consists of a face candidate region detection step using skin/non-skin color information, a face/non-face detection step, and a face recognition step. SVM is applied at each step; each SVM consists of an offline training part and an online testing part, and the SMO learning algorithm is applied to solve the SVM's QP optimization problem. Face recognition is performed while the face position is automatically tracked and followed using a low-cost web camera with pan-tilt control.
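
As a loose illustration of one stage of this cascade (face/non-face classification with an SMO-trained SVM), the sketch below uses scikit-learn's SVC, whose libsvm backend solves the SVM QP with an SMO-style decomposition; the training data are random placeholders, not the paper's skin-color or face features:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: each row stands for a flattened 24x24 grey patch.
rng = np.random.default_rng(0)
X_faces, X_nonfaces = rng.random((50, 576)), rng.random((50, 576))
X = np.vstack([X_faces, X_nonfaces])
y = np.array([1] * 50 + [0] * 50)

# SVC's libsvm solver uses an SMO-type decomposition for the QP, standing in
# here for the offline SMO training the paper applies at each stage.
face_vs_nonface = SVC(kernel="rbf", C=10.0).fit(X, y)

def is_face(patch_24x24_grey):
    """Online test step: classify one flattened candidate patch."""
    return face_vs_nonface.predict(patch_24x24_grey.reshape(1, -1))[0] == 1
```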

Non-constraining Online Signature Reconstruction System for Persons with Handwriting Problems

  • Abbadi, Belkacem;Mostefai, Messaoud;Oulefki, Adel
    • ETRI Journal / v.37 no.1 / pp.138-146 / 2015
  • This paper presents a new non-constraining online optical handwritten signature reconstruction system that, in the main, makes use of a transparent glass pad placed in front of a color camera. The reconstruction approach allows efficient exploitation of hand activity during a signing process; thus, the system as a whole can be seen as a viable alternative to other similar acquisition tools. This proposed system allows people with physical or emotional problems to carry out their own signatures without having to use a pen or sophisticated acquisition system. Moreover, the developed reconstruction signature algorithms have low computational complexity and are therefore well suited for a hardware implementation on a dedicated smart system.

H∞ Control on the Optical Image Stabilizer Mechanism in Mobile Phone Cameras (이동통신 단말기 카메라의 손떨림 보정 장치의 H∞ 제어)

  • Lee, Chibum
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.23 no.3 / pp.266-272 / 2014
  • This study proposes a closed-loop shaping control method with H∞ optimization for optical image stabilization (OIS) in mobile phone cameras. The image stabilizer is composed of a horizontal stage constrained by ball bearings and actuated by the magnetic force from voice coil motors. The displacement of the stage is measured by Hall effect sensors. From the OIS frequency response experiment, transfer function models of the stage and the Hall effect sensor were identified. The weighting functions were determined considering tracking performance, noise attenuation, and stability with considerable margins. The H∞ optimal controller was synthesized using closed-loop shaping while limiting the controller order, which should be less than 6 for real-time implementation. The control algorithm was verified experimentally and proved to operate as designed.
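
For illustration only, a generic mixed-sensitivity H∞ synthesis of this kind can be set up with the python-control package (hinfsyn additionally needs slycot installed); the plant and weighting functions below are invented placeholders, not the identified OIS model:

```python
import control as ct

# Stand-in 2nd-order model for the stage + Hall sensor path (placeholder values).
G = ct.tf([400.0], [1.0, 20.0, 400.0])
# Performance weight on the sensitivity (tracking) and a weight on the
# complementary sensitivity (noise attenuation / robustness) -- both invented.
W1 = ct.tf([0.5, 100.0], [1.0, 1.0])
W3 = ct.tf([1.0, 50.0], [0.1, 500.0])

P = ct.augw(G, w1=W1, w3=W3)                 # generalized plant for mixed sensitivity
K, CL, gamma, _ = ct.hinfsyn(P, nmeas=1, ncon=1)
print("achieved gamma:", gamma, "| controller order:", K.nstates)
# The resulting controller would then be order-reduced (e.g. balanced
# truncation) to stay below the order of 6 required for real-time use.
```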

Development of a vision sensor for measuring the weld groove parameters in arc welding process (자동 아크 용접공정의 용접개선변수 측정을 위한 시각 시스템)

  • 김호학;부광석;조형석
    • Journal of Welding and Joining / v.8 no.2 / pp.58-69 / 1990
  • In conventional arc welding, position errors of the weld torch with respect to the weld seam and variations in groove dimensions are induced by inaccurate fit-up and fixturing. In this study, a vision system has been developed to recognize and compensate for this position error and dimensional inaccuracy. The system uses a structured laser light projected onto the weld groove and perceived by a CCD camera. A new algorithm for detecting the edge of the reflected laser light is introduced for real-time processing. The developed system was applied to arbitrary weld paths with various joint types in the arc welding process. The experimental results show that the proposed system can detect the weld groove parameters with good accuracy and yields good tracking performance.
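
As a minimal stand-in for the stripe-detection step (not the paper's edge-detection algorithm), the reflected laser line can be localized by taking the brightest row in each image column and rejecting columns where no laser is visible:

```python
import numpy as np

def stripe_profile(grey, min_intensity=80):
    """grey: 2-D uint8 image containing a roughly horizontal laser stripe.
    Returns one row index per column (NaN where the stripe is not visible);
    groove parameters would then be derived from breakpoints of this profile."""
    peak_rows = grey.argmax(axis=0).astype(float)   # brightest pixel per column
    peak_vals = grey.max(axis=0)
    peak_rows[peak_vals < min_intensity] = np.nan   # column too dark: no laser there
    return peak_rows
```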

Convergence Control of Moving Object using Opto-Digital Algorithm in the 3D Robot Vision System

  • Ko, Jung-Hwan;Kim, Eun-Soo
    • Journal of Information Display / v.3 no.2 / pp.19-25 / 2002
  • In this paper, a new target extraction algorithm is proposed, in which the coordinates of the target are obtained adaptively by using difference image information and an optical BPEJTC (binary phase extraction joint transform correlator), with which the target object can be segmented from the input image and background noise removed in the stereo vision system. First, the proposed algorithm extracts the target object by removing background noise through the difference image information of the sequential left images, and then controls the pan/tilt and convergence angle of the stereo camera by using the coordinates of the target position obtained from the optical BPEJTC between the extracted target image and the input image. Experimental results show that the proposed algorithm can extract the target object from an input image with background noise and then effectively track the target object in real time. Finally, the possibility of implementing an adaptive stereo object tracking system using the proposed algorithm is also suggested.
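
A purely digital stand-in for the optical correlator (a binarized joint-power-spectrum JTC rather than the optical BPEJTC itself) hints at how the correlation peaks localize the target:

```python
import numpy as np

def binary_jtc(reference, scene):
    """reference, scene: 2-D float arrays of equal shape, placed side by side
    in one joint input plane; returns the correlation plane."""
    h, w = scene.shape
    joint = np.zeros((h, 2 * w))
    joint[:, :w], joint[:, w:] = reference, scene          # joint input plane
    jps = np.abs(np.fft.fft2(joint)) ** 2                  # joint power spectrum
    binarized = np.where(jps > np.median(jps), 1.0, -1.0)  # binarize the spectrum
    return np.fft.fftshift(np.abs(np.fft.ifft2(binarized)))

# The off-axis cross-correlation peaks in the returned plane give the target
# offset, which would then drive the pan/tilt and convergence-angle control.
```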

Real-Time Specific Object Tracking Algorithm by using Multi-Camera (멀티카메라를 이용한 실시간 특정객체 추적 알고리즘)

  • Min, Byoung-Muk;Lee, Kwang-Hyoung;Oh, Hae-Seok
    • Proceedings of the KAIS Fall Conference / 2006.11a / pp.229-232 / 2006
  • Tracking an object in real-time video input through a single camera is heavily constrained by the environment: there should be only one moving object in the input video, and when many movements occur at the same time it becomes difficult to distinguish the object to be tracked. This paper presents a method in which two cameras monitoring the same space exchange data with each other and track a specific target object without error. Real-time object tracking requires a fast search algorithm that locates the object in the input video as quickly as possible. In this paper, by extracting object motion from the real-time video and having two cameras at different positions cooperate in tracking, the computation required for object tracking was reduced significantly. Moreover, the specific object being tracked was not lost even in a space with many moving objects. Experimental results show that the proposed method achieved a high object tracking rate of over 97%.
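
As a rough sketch of the cooperation described above (not the authors' algorithm), each camera can run a cheap frame-difference detector and accept a search-window hint from the other camera; the confidence measure and the use of contour area are assumptions:

```python
import cv2
import numpy as np

class CameraTracker:
    def __init__(self, index):
        self.cap = cv2.VideoCapture(index)
        self.prev = None

    def detect(self, search_box=None):
        """Return (bounding_box, confidence); search_box is a hint from the peer camera."""
        ok, frame = self.cap.read()
        if not ok:
            return None, 0.0
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if self.prev is None:
            self.prev = grey
            return None, 0.0
        diff = cv2.absdiff(grey, self.prev)                 # frame difference
        self.prev = grey
        if search_box is not None:                          # restrict search to the hint
            x, y, w, h = search_box
            mask = np.zeros_like(diff)
            mask[y:y + h, x:x + w] = 255
            diff = cv2.bitwise_and(diff, mask)
        _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None, 0.0
        c = max(contours, key=cv2.contourArea)
        return cv2.boundingRect(c), cv2.contourArea(c)      # box and a crude confidence

# Whichever camera reports the higher confidence broadcasts its box so the
# other camera can narrow its search window in the next frame.
```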
