• Title/Summary/Keyword: Camera Model

Search results: 1,515

On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kohiyama, Kenji;Uchikado, Shigeru;Sun, Lili
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1121-1125 / 2003
  • In this paper we deal with visual servoing that controls a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole camera model for the camera. The essential notions are introduced, namely epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing the visual servoing. For ease of understanding, the proposed method is first shown as a design for the case of a calibrated camera. The design consists of four steps, and the motion of the robot arm is fixed to a single constant direction. This means that the estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space. (A minimal numerical sketch of the epipole computation follows this entry.)

  • PDF
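
The epipolar notions this abstract relies on are easy to make concrete. Below is a minimal sketch, assuming synthetic correspondences generated from two hypothetical pinhole cameras (the intrinsics, rotation, and translation are invented for illustration): the fundamental matrix is estimated with OpenCV's 8-point algorithm, and the epipole is recovered as the right null vector of F.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Hypothetical setup: random 3D points seen by two pinhole cameras.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(30, 3))  # points in front of the cameras
R, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))  # small rotation for the second camera
t = np.array([[0.3], [0.0], [0.05]])             # translation of the second camera

def project(P, pts):
    """Project Nx3 points with a 3x4 camera matrix P to Nx2 pixels."""
    x = (P @ np.c_[pts, np.ones(len(pts))].T).T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

pts1 = project(K @ np.c_[np.eye(3), np.zeros((3, 1))], X)
pts2 = project(K @ np.c_[R, t], X)

# Fundamental matrix from correspondences (normalized 8-point algorithm).
F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# The epipole in image 1 is the right null vector of F (F e = 0).
_, _, Vt = np.linalg.svd(F)
e = Vt[-1] / Vt[-1, 2]
print("epipole in image 1:", e[:2])

# Epipolar constraint: x2^T F x1 should be ~0 for a true correspondence.
x1 = np.append(pts1[0], 1.0)
x2 = np.append(pts2[0], 1.0)
print("epipolar residual:", float(x2 @ F @ x1))
```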

New Method of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Morita, Masahiko;Uchikado, Shigeru;Osa, Yasuhiro
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.41.4-41 / 2002
  • In this paper we deal with visual servoing that controls a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We consider two coordinate systems, the world coordinate system and the camera coordinate system, and we use a pinhole camera model for the camera. First of all, the essential notions are introduced, namely epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing the visual servoing in the later chapters. A statement of the problem is given. Provided two a priori... (A minimal sketch of the world-to-pixel projection chain follows this entry.)

  • PDF
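
As a companion to the two coordinate systems mentioned in this abstract, here is a minimal sketch of the pinhole projection chain from the world frame through the camera frame to pixels; the intrinsic values and camera pose are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical camera pose: rigid transform from the world frame to the camera frame.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])        # camera 5 units back from the world origin

def pinhole_project(X_world):
    """Map a 3D world point to pixel coordinates via the pinhole model."""
    X_cam = R @ X_world + t          # world frame -> camera frame
    x = K @ X_cam                    # perspective projection (homogeneous)
    return x[:2] / x[2]              # normalize by depth

print(pinhole_project(np.array([0.2, -0.1, 1.0])))  # -> pixel (u, v)
```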

Range and Velocity Estimation of the Object using a Moving Camera (움직이는 카메라를 이용한 목표물의 거리 및 속도 추정)

  • Byun, Sang-Hoon;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.12 / pp.1737-1743 / 2013
  • This paper proposes a method for estimating the range and velocity of an object using a moving camera. Structure and motion (SaM) estimation recovers the Euclidean geometry of the object as well as the relative motion between the camera and the object. Unlike previous works, the proposed method relaxes the constraints on camera and object motion. To this end, we arrange the dynamics of the moving-camera/moving-object relative motion model in a form suitable for a nonlinear observer to perform the SaM estimation. Both simulations and experiments confirm the validity of the proposed estimation algorithm.
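
The paper's observer design is not reproduced in the abstract, so the sketch below is only a much-simplified stand-in: a linear Luenberger observer that recovers range and range-rate from noisy range measurements under an assumed constant-velocity relative motion model. It illustrates the predict-correct structure such estimators build on, not the authors' nonlinear observer.

```python
import numpy as np

# Much-simplified stand-in for the paper's nonlinear observer: a Luenberger
# observer recovering range and range-rate from noisy range measurements,
# assuming a constant-velocity relative motion model.
dt = 0.01
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # state: [range, range-rate]
L = np.array([20.0, 100.0])           # observer gains, hand-tuned for this sketch

x_true = np.array([5.0, -0.1])        # hypothetical truth: 5 m away, closing slowly
x_hat = np.zeros(2)                   # observer starts with no knowledge

rng = np.random.default_rng(1)
for _ in range(2000):                 # 20 s of simulated data
    y = x_true[0] + rng.normal(0.0, 0.01)            # noisy range measurement
    innovation = y - x_hat[0]                        # residual (we measure range only)
    x_hat = x_hat + dt * (A @ x_hat + L * innovation)  # observer update
    x_true = x_true + dt * (A @ x_true)              # propagate the true motion

print("estimate [range, rate]:", x_hat)
print("truth    [range, rate]:", x_true)
```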

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Proceedings of the KSME Conference / 2000.11a / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every configuration of the robot. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed. (A baseline hand-eye calibration sketch follows this entry.)

  • PDF
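
For comparison with the one-step approach described here, the sketch below runs OpenCV's stock hand-eye calibration (the conventional pipeline, not the paper's method) on synthetic poses; the ground-truth transform and robot configurations are invented so the recovered result can be checked.

```python
import numpy as np
import cv2

rng = np.random.default_rng(2)

def T(rvec, tvec):
    """Build a 4x4 homogeneous transform from a rotation vector and translation."""
    M = np.eye(4)
    M[:3, :3] = cv2.Rodrigues(np.asarray(rvec, float))[0]
    M[:3, 3] = tvec
    return M

# Hypothetical ground truth: camera mounted on the hand (X = cam -> gripper)
# and a calibration target fixed in the robot base frame.
X_true = T([0.1, -0.2, 0.05], [0.03, 0.01, 0.10])
T_target2base = T([0.0, 0.0, 0.3], [0.5, 0.0, 0.2])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                            # ten stand-in robot configurations
    T_g2b = T(rng.normal(0, 0.5, 3), rng.normal(0, 0.3, 3))
    # target -> camera follows from the chain: inv(X) @ inv(gripper2base) @ target2base
    T_t2c = np.linalg.inv(X_true) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

# OpenCV recovers X (cam -> gripper) from the pose lists.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X_true[:3, :3], atol=1e-6), t_est.ravel())
```

In a real pipeline the target-to-camera poses would come from cv2.solvePnP on a calibration pattern, after cv2.calibrateCamera has estimated the intrinsics and lens distortion coefficients that the paper folds directly into its perspective transformation matrix.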

A High Precision Camera Operating Parameter Measurement System and Its Application to Image Motion Inferring

  • Zheng, Wentao;Shishikui, Yoshiaki;Kanatsugu, Yasuaki;Tanaka, Yutaka
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.77-82 / 1999
  • Information about camera operations such as zoom, focus, pan, tilt, and tracking is useful not only for efficient video coding but also for content-based video representation. A camera operating parameter measurement system designed specifically for these applications has therefore been developed. This system, implemented in real time and synchronized with the video signal, measures precise camera operating parameters. We calibrated the camera lens using a camera model that accounts for radial lens distortion. The system is then applied to infer image motion from the pan and tilt operating parameters. The experimental results show that the inferred motion coincides with the actual motion very well, with an error of less than 0.5 pixel even for motions as large as 80 pixels.
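
Inferring image motion from pan and tilt readings reduces, for a camera rotating about its optical center and ignoring lens distortion, to a tangent relation scaled by the focal length. A minimal sketch with an assumed focal length follows; the numbers are illustrative, not the system's.

```python
import numpy as np

# Assumed focal length in pixels (hypothetical; a real system would take this
# from the zoom/focus calibration) and measured pan/tilt deltas between fields.
f = 1200.0
dpan, dtilt = np.deg2rad(0.8), np.deg2rad(-0.3)

# For rotation about the optical center, a point near the principal point
# moves by roughly f * tan(angle) in the image.
dx = f * np.tan(dpan)    # horizontal image motion in pixels
dy = f * np.tan(dtilt)   # vertical image motion in pixels
print(f"inferred image motion: ({dx:.2f}, {dy:.2f}) pixels")
```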

On Design of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Uchikado, Shigeru;Morita, Masahiko;Osa, Yasuhiro;Mabuchi, Tetsuo;Tanya, Kanya
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.23.2-23 / 2001
  • In this paper we deal with visual servoing that controls a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole camera model for the camera. The essential notions are introduced, namely epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing the visual servoing. For ease of understanding, the proposed method is first shown as a design for the case of a calibrated camera. The design consists of four steps, and the motion of the robot arm is fixed to a single constant direction. This means that the estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space. (A complementary epipolar-line sketch follows this entry.)

  • PDF
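
This entry describes essentially the same design as the 2003 paper above, so rather than repeating the epipole computation, here is a complementary sketch: given a fundamental matrix (the values below are invented stand-ins), OpenCV can produce the epipolar line along which a correspondence must lie.

```python
import numpy as np
import cv2

# Given a fundamental matrix F (e.g. from cv2.findFundamentalMat, as in the
# earlier sketch), each point in image 1 maps to an epipolar line in image 2.
F = np.array([[ 0.0,  -1e-5,  4e-3],
              [ 1e-5,  0.0,  -8e-3],
              [-4e-3,  8e-3,  1.0]])           # hypothetical stand-in values

pt1 = np.float32([[320.0, 240.0]]).reshape(-1, 1, 2)
line = cv2.computeCorrespondEpilines(pt1, 1, F).reshape(3)
a, b, c = line                                 # line a*u + b*v + c = 0 in image 2
print(f"epipolar line in image 2: {a:.5f}u + {b:.5f}v + {c:.5f} = 0")
```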

Development of Urban Wildlife Detection and Analysis Methodology Based on Camera Trapping Technique and YOLO-X Algorithm (카메라 트래핑 기법과 YOLO-X 알고리즘 기반의 도시 야생동물 탐지 및 분석방법론 개발)

  • Kim, Kyeong-Tae;Lee, Hyun-Jung;Jeon, Seung-Wook;Song, Won-Kyong;Kim, Whee-Moon
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.4 / pp.17-34 / 2023
  • Camera trapping has been used as a non-invasive survey method that minimizes anthropogenic disturbance to ecosystems, but it is labor-intensive and time-consuming because researchers must quantify species and populations by hand. In this study, we aimed to improve the preprocessing of camera trapping data by utilizing an object detection algorithm. Wildlife monitoring using unmanned sensor cameras was conducted in an urban forest and in a green space on a university campus in Cheonan City, Chungcheongnam-do, Korea. The collected camera trapping data were classified by a researcher to identify the occurrence of species, and then used to test the performance of the YOLO-X object detection algorithm for wildlife detection. The camera trapping yielded 10,500 images of the urban forest and 51,974 images of the campus green space. Of the total 62,474 images, 52,993 (84.82%) were false triggers containing no wildlife, while 9,481 (15.18%) contained wildlife. The monitoring recorded 19 bird species, 5 mammal species, and 1 reptile species within the study area. In addition, there were statistically significant differences in the frequency of occurrence of the following species according to the type of urban greenery: Parus varius (t = -3.035, p < 0.01), Parus major (t = 2.112, p < 0.05), Passer montanus (t = 2.112, p < 0.05), Paradoxornis webbianus (t = 2.112, p < 0.05), Turdus hortulorum (t = -4.026, p < 0.001), and Sitta europaea (t = -2.189, p < 0.05). The YOLO-X model correctly classified 94.2% of the camera trapping data, with 7,809 true positive and 51,044 true negative predictions. The model was used with a filter activated to detect 10 specific animal taxa out of the 80 classes trained on the COCO dataset, without any additional training. In future studies, training data for the key occurrence species should be created and applied to make the model suitable for wildlife monitoring.
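
The class filter described at the end of this abstract is simple to reproduce: COCO's 80 classes contain exactly ten animal categories, so detections outside that set can be discarded before human review. The sketch below assumes detections arrive as (class_id, score, box) tuples, a hypothetical format rather than YOLO-X's actual output.

```python
# COCO (80-class, 0-indexed) animal categories - the ten taxa a COCO-trained
# detector such as YOLO-X can flag without extra training.
COCO_ANIMALS = {14: "bird", 15: "cat", 16: "dog", 17: "horse", 18: "sheep",
                19: "cow", 20: "elephant", 21: "bear", 22: "zebra", 23: "giraffe"}

def filter_wildlife(detections, score_thresh=0.3):
    """Keep only confident detections of COCO animal classes.

    `detections` is assumed to be an iterable of (class_id, score, box)
    tuples, e.g. post-processed detector output; the format is hypothetical.
    """
    return [(COCO_ANIMALS[c], s, box) for c, s, box in detections
            if c in COCO_ANIMALS and s >= score_thresh]

# Made-up example detections: one bird, one person (class 0), one low-score dog.
dets = [(14, 0.91, (10, 20, 80, 90)), (0, 0.88, (5, 5, 50, 120)),
        (16, 0.12, (30, 40, 70, 80))]
print(filter_wildlife(dets))   # -> [('bird', 0.91, (10, 20, 80, 90))]
```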

Noncontact 3-dimensional measurement using He-Ne laser and CCD camera (He-Ne 레이저와 CCD 카메라를 이용한 비접촉 3차원 측정)

  • Kim, Bong-chae;Jeon, Byung-cheol;Kim, Jae-do
    • Transactions of the Korean Society of Mechanical Engineers A / v.21 no.11 / pp.1862-1870 / 1997
  • A fast and precise technique to measure the 3-dimensional coordinates of an object is proposed. Taking 3-dimensional measurements of an object is essential in design and inspection, and with the developed system a surface model of a complex shape can be constructed. 3-dimensional world coordinates are projected onto the camera plane by the perspective transformation, which plays an important role in this measurement system. Two measuring methods are proposed depending on the shape of the object: one rotates the object and the other translates the measuring unit. The measuring speed, limited by image processing time, is 200 points per second. Measurement resolution was examined with respect to two parameters: the angle between the laser beam plane and the camera, and the distance between the camera and the object. These experiments show that the measurement resolution ranges from 0.3 mm to 1.0 mm. The constructed surface model can be used in manufacturing tools such as rapid prototyping machines.
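
The depth measurement in a laser-stripe system of this kind reduces to intersecting the camera ray through a stripe pixel with the calibrated laser plane. The sketch below assumes hypothetical intrinsics and plane parameters; the paper's own calibration values are not given in the abstract.

```python
import numpy as np

# Hypothetical intrinsics and laser-plane calibration for a stripe scanner.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
n = np.array([0.0, -0.5, 1.0])     # laser plane normal, camera frame
n /= np.linalg.norm(n)
d = 0.8                            # plane offset: points X on the plane satisfy n.X = d

def triangulate_stripe_pixel(u, v):
    """Intersect the camera ray through pixel (u, v) with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    s = d / (n @ ray)                                # scale at which the ray hits the plane
    return s * ray                                   # 3D point in camera coordinates

print(triangulate_stripe_pixel(400.0, 260.0))
```

The resolution dependence the abstract reports falls out of this geometry: a shallower angle between the laser plane and the viewing ray, or a larger camera-object distance, makes `s` more sensitive to pixel error.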

3-D shape and motion recovery using SVD from image sequence (동영상으로부터 3차원 물체의 모양과 움직임 복원)

  • 정병오;김병곤;고한석
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.3 / pp.176-184 / 1998
  • We present a sequential factorization method using singular value decomposition (SVD) for recovering both the three-dimensional shape of an object and the motion of the camera from a sequence of images. We employ paraperspective projection [6] as the camera model in order to handle significant translational motion toward the camera or across the image. The proposed method not only gives robust and accurate results quickly, but also provides a result at each frame because it is sequential. These properties make our method applicable to real-time applications. Considerable research has been devoted to the problem of recovering the motion and shape of an object from images [2][3][4][5][6][7][8][9]. Among the many different approaches, we adopt a factorization method using SVD because of its robustness and computational efficiency. The factorization method, originally proposed by Tomasi and Kanade [1], is based on batch-type computation and factors the feature trajectory matrix using SVD. Morita and Kanade [10] extended [1] to a sequential-type solution. However, both methods use an orthographic projection and cannot be applied to image sequences containing significant translational motion toward the camera or across the image. Poelman and Kanade [11] developed a batch-type factorization method using a paraperspective camera model; although this is a useful technique, it cannot be employed for real-time applications because it is based on batch-type computation. This work presents a sequential factorization method using SVD for paraperspective projection. Initial experimental results show that the performance of our method is almost equivalent to that of [11], although it is sequential. (A toy batch version of the underlying rank-3 factorization follows this entry.)

  • PDF
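
The rank-3 SVD factorization underlying this family of methods [1][10][11] can be shown in a few lines. The sketch below is the batch, orthographic Tomasi-Kanade version on synthetic tracks, not the paper's sequential paraperspective variant, and it stops at the affine factorization (the metric upgrade step is omitted).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic feature tracks: P points observed in F frames under a toy
# orthographic camera (the Tomasi-Kanade setting).
P, F = 20, 12
S_true = rng.normal(size=(3, P))                  # 3D shape
W = np.zeros((2 * F, P))                          # measurement matrix
for f in range(F):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal camera axes
    W[2*f:2*f+2] = Q[:2] @ S_true                 # orthographic projection rows

# Register: subtract each row's mean (removes translation).
W -= W.mean(axis=1, keepdims=True)

# Rank-3 factorization W ~ M S via SVD.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])                     # motion rows, up to an affine ambiguity
S = np.sqrt(s[:3])[:, None] * Vt[:3]              # shape, up to the same ambiguity

print("rank-3 residual:", np.linalg.norm(W - M @ S))  # ~0 for noise-free tracks
```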

Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.22 no.4 / pp.427-433 / 2019
  • This paper proposes a fuzzy inference model for map building and navigation of a mobile robot with an active camera, which navigates intelligently to a goal location in unknown environments using sensor fusion, based on situational commands from the active camera sensor. Active cameras provide a mobile robot with the capability to estimate and track feature images over a hallway field of view. Instead of a "physical sensor fusion" method, which generates the trajectory of the robot from an environment model and sensory data, a command fusion method is used to govern the robot navigation. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the sensory data of the active camera sensor from the navigation experiments are fused into the identification process. Navigation performance improves on that achieved using fuzzy inference alone and shows significant advantages over existing command fusion techniques. Experimental evidence is provided, demonstrating that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.
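
Command fusion of the kind described here can be illustrated by letting each behavior score candidate steering commands and fusing the scores before defuzzification. Everything in the sketch below (membership shapes, obstacle and goal bearings, the product fusion rule) is invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy command-fusion sketch: each behavior scores candidate steering angles,
# the scores are fused (here by a product, a common fuzzy "AND"), and the
# crisp command is the centroid of the fused distribution.
angles = np.linspace(-90, 90, 181)                   # candidate steering commands (deg)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

goal_approach = gaussian(angles, 30.0, 25.0)         # goal seen ~30 deg to the right
obstacle_avoid = 1.0 - gaussian(angles, 45.0, 15.0)  # obstacle detected ~45 deg right

fused = goal_approach * obstacle_avoid               # product-style command fusion
command = np.sum(angles * fused) / np.sum(fused)     # centroid defuzzification
print(f"fused steering command: {command:.1f} deg")
```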