• Title/Summary/Keyword: single camera

Search Result 776

Sensors Comparison for Observation of floating structure's movement

  • Trieu, Hang Thi;Han, Dong Yeob
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2014.10a
    • /
    • pp.219-221
    • /
    • 2014
  • The objective of this paper is to simulate the dynamic behavior of a floating structure model using image processing and close-range photogrammetry instead of contact sensors. Previously, the movement of the structure was obtained through exterior orientation estimation of a single camera by space resection. The inverse resection yields the six orientation parameters of the floating structure with respect to the camera coordinate system. A single-camera solution is of interest in applications characterized by cost restrictions, unfavorable observation conditions, or the synchronization demands that arise when using multiple cameras. This article discusses the theoretical determination of camera exterior orientation based on the Direct Linear Transformation (DLT) and photogrammetric resection using least-squares adjustment. The proposed method was used to monitor the motion of a floating model. The six-degree-of-freedom (6-DOF) results obtained by inverse resection show that suitable initial values from the DLT can be applied effectively in the least-squares adjustment to obtain precise exterior orientation parameters. Additionally, the close-range photogrammetry results were verified against total station measurements. The proposed method can therefore be considered an efficient solution for simulating the movement of a floating structure.
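
A minimal sketch of single-camera exterior orientation from known control points is given below. OpenCV's solvePnP (iterative least squares on top of a DLT-like initial estimate) is used as a stand-in for the DLT-initialized photogrammetric resection described in the abstract; the control-point coordinates, image measurements and intrinsics are illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch: 6-DOF exterior orientation of a single camera from
# known control points. solvePnP stands in for the DLT + least-squares
# resection described above; all numbers are illustrative placeholders.
import numpy as np
import cv2

# Known 3D control points on the floating structure (object frame, metres).
object_points = np.array([
    [0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.3, 0.0],
    [0.0, 0.3, 0.0], [0.25, 0.15, 0.1], [0.1, 0.05, 0.05],
], dtype=np.float64)

# Their measured image coordinates (pixels) in one frame.
image_points = np.array([
    [320.0, 240.0], [410.0, 238.0], [412.0, 300.0],
    [318.0, 302.0], [365.0, 260.0], [340.0, 250.0],
], dtype=np.float64)

# Assumed intrinsics from a prior calibration; lens distortion neglected here.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Iterative PnP refines a DLT-like initial estimate by least squares.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)

# Roll/pitch/yaw from a ZYX decomposition of the rotation matrix.
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
pitch = np.degrees(np.arcsin(-R[2, 0]))
roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
print("translation (m):", tvec.ravel())
print("roll/pitch/yaw (deg):", roll, pitch, yaw)
```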

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3981-4004
    • /
    • 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from the image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by giving all objects a unified size.
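
The core perspective-correction step can be sketched as follows: four detected corners of the region of interest are mapped to a rectangle whose pixel size is proportional to the known real-space dimensions, after which pixel distances convert directly to metres. The corner and object coordinates below are placeholders, and the Darknet-based detection stage is assumed to have already produced them.

```python
# Hypothetical sketch: warp-perspective mapping of a known-size region,
# then distance measurement between two detected objects. All coordinates
# are placeholders; corner/object detection is assumed done elsewhere.
import numpy as np
import cv2

# Real-space size of the region of interest (metres) and output scale.
REAL_W, REAL_H = 4.0, 3.0            # known in advance
PX_PER_M = 200                        # rectified image: 200 px per metre
out_w, out_h = int(REAL_W * PX_PER_M), int(REAL_H * PX_PER_M)

# Four detected ROI corners in the source image (pixels), ordered
# top-left, top-right, bottom-right, bottom-left.
corners_src = np.float32([[112, 84], [508, 96], [540, 402], [90, 388]])
corners_dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])

# Homography that makes the ROI proportional to the real space.
H = cv2.getPerspectiveTransform(corners_src, corners_dst)

# Two object centres detected in the source image (pixels).
objects_src = np.float32([[[260, 220]], [[430, 310]]])
objects_flat = cv2.perspectiveTransform(objects_src, H).reshape(-1, 2)

# In the rectified view, pixel distance divided by the scale gives metres.
dist_px = np.linalg.norm(objects_flat[0] - objects_flat[1])
print("real-world distance: %.3f m" % (dist_px / PX_PER_M))
```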

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.2044-2059
    • /
    • 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing image-level constraints between pose pairs, which are weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without requiring ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which constructs the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps EpiLoc achieve state-of-the-art results in end-to-end camera localization.
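
To make the pixel-level constraint concrete, the following sketch evaluates the epipolar residual |x2^T F x1| between corresponding pixels of two views given their predicted poses; in a framework like EpiLoc such a term would act as an auxiliary loss alongside pose regression. The world-to-camera pose convention, intrinsics and correspondences are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a pixel-level epipolar residual between two views.
# Poses are world-to-camera (x_cam = R @ x_world + t); correspondences and
# intrinsics below are illustrative placeholders.
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_residual(R1, t1, R2, t2, K, pts1, pts2):
    """Mean |x2^T F x1| over correspondences (pts in pixel coords, Nx2)."""
    # Relative pose taking camera-1 coordinates to camera-2 coordinates.
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    E = skew(t_rel) @ R_rel                 # essential matrix
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ E @ K_inv                 # fundamental matrix
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])            # homogeneous pixels, view 1
    x2 = np.hstack([pts2, ones])            # homogeneous pixels, view 2
    return np.mean(np.abs(np.sum(x2 * (x1 @ F.T), axis=1)))

# Toy example: identity first pose, small sideways translation for the second.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([0.1, 0.0, 0.0])
pts1 = np.array([[300.0, 220.0], [350.0, 260.0]])
pts2 = np.array([[298.0, 220.0], [348.0, 260.0]])
print("epipolar residual:", epipolar_residual(R1, t1, R2, t2, K, pts1, pts2))
```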

Development of Green-Sheet Measurement Algorithm by Image Processing Technique (영상처리기법을 이용한 그린시트 측정알고리즘 개발)

  • Pyo, C.R.;Yang, S.M.;Kang, S.H.;Yoon, S.M.
    • Transactions of Materials Processing
    • /
    • v.16 no.4 s.94
    • /
    • pp.313-316
    • /
    • 2007
  • The purpose of this paper is the development of a measurement algorithm for green sheets based on digital image processing. Low Temperature Co-fired Ceramic (LTCC) technology can be employed to produce multilayer circuits from single tapes, onto which conductive, dielectric and/or resistive pastes are applied. These single green sheets must be laminated together and fired at the same time. The main function of the green-sheet film measurement algorithm is to measure the position and size of the punched holes in each single layer. A line-scan camera coupled with a motorized X-Y stage is used. In order to measure the entire film area in several scanning steps, an overlapping method is used.
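
A simplified sketch of the hole-measurement step is given below: threshold the scanned strip, label connected components, and report each hole's centroid and size, converting pixels to physical units with the line-scan resolution. The threshold, scale factor and file name are assumptions, and the stitching of overlapping scan strips is omitted.

```python
# Hypothetical sketch: measuring punched-hole position and size in a
# green-sheet scan image. Threshold, scale and file name are placeholders;
# stitching of the overlapping line-scan strips is not shown.
import numpy as np
import cv2

UM_PER_PX = 10.0                     # assumed scan resolution: 10 um per pixel

img = cv2.imread("greensheet_strip.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "placeholder path; supply a real scan image"
# Holes appear dark against the bright sheet; invert so holes become blobs.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected-component analysis gives bounding box, area and centroid per hole.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
for i in range(1, n):                # label 0 is the background
    area_px = stats[i, cv2.CC_STAT_AREA]
    if area_px < 20:                 # ignore small specks of noise
        continue
    cx, cy = centroids[i]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    # Equivalent circular diameter from the blob area.
    dia_um = 2.0 * np.sqrt(area_px / np.pi) * UM_PER_PX
    print("hole at (%.1f, %.1f) um, box %.1f x %.1f um, dia %.1f um"
          % (cx * UM_PER_PX, cy * UM_PER_PX, w * UM_PER_PX, h * UM_PER_PX, dia_um))
```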

Danger Alert Surveillance Camera Service using AI Image Recognition technology (인공지능 이미지 인식 기술을 활용한 위험 알림 CCTV 서비스)

  • Lee, Ha-Rin;Kim, Yoo-Jin;Lee, Min-Ah;Moon, Jae-Hyun
    • Annual Conference of KIPS
    • /
    • 2020.11a
    • /
    • pp.814-817
    • /
    • 2020
  • The number of single-person households is increasing every year, and concerns about crime and the safety of single-person households are also high. In particular, crimes targeting women are increasing. Whereas home surveillance camera applications, which are mostly used by single-person households, provide only intrusion detection, this service utilizes AI image recognition technologies such as face recognition and object detection to detect theft, violence, strangers, and intrusion. Through this service, users can receive security-related notifications, relieve their anxiety, and prevent crimes.

3D reconstruction using a method of the planar homography from uncalibrated camera

  • Yoon, Yong-In;Choi, Jong-Soo;Kwon, Jun-Sik;Kwon, Oh-Keun
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.804-809
    • /
    • 2004
  • Camera calibration is essential for recovering a 3-dimensional reconstruction from uncalibrated images. This paper proposes a new camera calibration technique that uses homographies among three planar patterns captured by the camera in a single image. Since the proposed method computes the calibration from the homographies among the three planar patterns in a single image, it recovers the 3D object more easily and simply than conventional methods. Experimental results show that the performance of the proposed method is better than that of conventional methods. We demonstrate examples of 3D reconstruction from an image sequence using the proposed algorithm.
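
As background for homography-based calibration, the sketch below shows the standard Zhang-style step of recovering the intrinsic matrix K from plane-to-image homographies via the image of the absolute conic B = K^-T K^-1; the paper obtains its homographies from three planar patterns in one image. This is a generic reconstruction of the classical constraints, not the authors' exact formulation, and the homographies are assumed to be supplied from elsewhere (e.g. cv2.findHomography per pattern).

```python
# Hypothetical sketch: intrinsics from plane homographies (Zhang-style).
# For each homography H = [h1 h2 h3], the constraints h1^T B h2 = 0 and
# h1^T B h1 = h2^T B h2 hold, where B = K^-T K^-1.
import numpy as np

def _v(H, i, j):
    """Row of the linear system encoding h_i^T B h_j."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    """Estimate K from three or more plane-to-image homographies."""
    V = []
    for H in Hs:
        V.append(_v(H, 0, 1))
        V.append(_v(H, 0, 0) - _v(H, 1, 1))
    _, _, vt = np.linalg.svd(np.array(V))
    b = vt[-1]                              # null vector of V
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                         # fix the overall sign of b
        B = -B
    # B = K^-T K^-1, so the Cholesky factor of B gives K^-1 up to scale.
    L = np.linalg.cholesky(B)               # B = L L^T, L lower triangular
    K = np.linalg.inv(L.T)
    return K / K[2, 2]

# Usage (homographies would come e.g. from cv2.findHomography per pattern):
# K = intrinsics_from_homographies([H1, H2, H3])
```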

A 3D Foot Scanner Using Mirrors and Single Camera (거울 및 단일 카메라를 이용한 3차원 발 스캐너)

  • Chung, Seong-Youb;Park, Sang-Kun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.16 no.1
    • /
    • pp.11-20
    • /
    • 2011
  • A structured-beam laser is often used to scan an object and build a 3D model. Multiple cameras are usually required to see occluded areas, which is the main reason for the high price of such scanners. In this paper, a low-cost 3D foot scanner is developed using one camera and two mirrors. The camera and the two mirrors are located below and above the foot, respectively. The occluded area, the top of the foot, is reflected by the mirrors, so the camera measures 3D point data of the bottom and the top of the foot at the same time. The whole foot model is then reconstructed after a symmetrical transformation of the data reflected by the mirrors. The reliability of the scan data depends on the accuracy of the parameters between the camera and the laser, so a calibration method is also proposed and verified by experiments. The results of the experiments show that the worst errors of the system are 2 mm along the x, y, and z directions.
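
The mirror-viewed part of the point cloud has to be reflected back across each mirror plane before it is merged with the directly-viewed points; the sketch below shows that reflection for a plane expressed in the camera frame. The plane parameters and point values are placeholders, and the camera-laser-mirror calibration described in the paper is assumed to have been done already.

```python
# Hypothetical sketch: reflecting mirror-viewed 3D points back across the
# mirror plane so they merge with directly-viewed points in one model.
# Plane parameters below are placeholders from a prior calibration step.
import numpy as np

def reflect_points(points, n, d):
    """Reflect Nx3 points across the plane n . x + d = 0 (n need not be unit)."""
    n = n / np.linalg.norm(n)
    dist = points @ n + d               # signed distance of each point
    return points - 2.0 * dist[:, None] * n

# Mirror plane above the foot, expressed in the camera frame (placeholder).
n_mirror = np.array([0.0, -1.0, 0.2])
d_mirror = 0.35

# Points measured via the mirror (apparent, virtual positions).
virtual_pts = np.array([[0.02, 0.40, 0.55],
                        [0.05, 0.41, 0.60]])
top_of_foot = reflect_points(virtual_pts, n_mirror, d_mirror)
print(top_of_foot)
```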

A novel visual servoing techniques considering robot dynamics (로봇의 운동특성을 고려한 새로운 시각구동 방법)

  • Lee, Jun-Soo;Suh, Il-Hong;Kim, Tae-Won
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.410-414
    • /
    • 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested by employing a perspective projection viewing model to estimate the relative pitching and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the camera in hand, not over the entire workspace, but along a single linear path along which the robot moves under the control of a commercially provided linear-motion function. Control actions of the camera are then approximated by fuzzy-neural networks to follow these desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented using a four-axis SCARA robot with a B/W CCD camera.
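
For context, the sketch below implements the classical image-based visual servoing law v = -lambda * J^+ * (s - s*) for point features; it is a generic baseline, not the fuzzy-neural controller or the learning-based line-of-sight scheme proposed in the paper. The feature coordinates, depth estimates and gain are assumed placeholders.

```python
# Hypothetical sketch of the classical image-based visual servoing (IBVS)
# law v = -lambda * J^+ * (s - s*), shown only as a generic baseline.
# Features, depths and the gain are illustrative placeholders.
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized point feature."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist [vx vy vz wx wy wz] driving features toward desired."""
    J = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(J) @ error

# Current and desired normalized image coordinates of two features.
s = [(0.10, 0.05), (-0.08, 0.02)]
s_star = [(0.00, 0.00), (-0.15, 0.00)]
Z = [1.2, 1.1]                        # rough depth estimates (metres)
print(ibvs_velocity(s, s_star, Z))
```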

The Indoor Position Detection Method using a Single Camera and a Parabolic Mirror (볼록 거울 및 단일 카메라를 이용한 실내에서의 전 방향 위치 검출 방법)

  • Kim, Jee-Hong;Kim, Hee-Sun;Lee, Chang-Goo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.2
    • /
    • pp.161-167
    • /
    • 2008
  • This article describes a method for determining the location that a user indicates with an optical device such as a laser pointer, and for moving a robot to that location. Using a conic mirror and a CCD camera sensor, the robot observes the spot at the point the user wants, computes its location and azimuth, and moves to that position. The system supplies concise data to the processor using simple devices, which reduces the image-processing time needed to find the target the user points at and to drive the robot. The user points a laser spot at the location to be reached, and the sensor system on the robot detects the spot through the conic mirror, which is mounted on the robot and viewed by the camera. The camera is attached to the robot's upper body and fixed parallel to the ground and to the conic mirror.
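
A minimal sketch of the spot-localization step follows: find the bright laser blob in the omnidirectional mirror image and convert its position to an azimuth about the mirror axis. The image centre, threshold and file name are assumptions, and the mapping from radial pixel offset to range depends on the mirror profile, so it is left as a hypothetical calibration function.

```python
# Hypothetical sketch: locating a laser spot in the omnidirectional mirror
# image and converting it to an azimuth about the mirror axis. The centre,
# threshold and range calibration are placeholders, not the paper's values.
import numpy as np
import cv2

CX, CY = 320.0, 240.0                 # assumed image centre of the mirror

img = cv2.imread("mirror_view.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "placeholder path; supply a real mirror image"
_, mask = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)   # bright laser spot
M = cv2.moments(mask, binaryImage=True)
if M["m00"] > 0:
    u = M["m10"] / M["m00"]           # spot centroid (pixels)
    v = M["m01"] / M["m00"]
    azimuth = np.degrees(np.arctan2(v - CY, u - CX))
    radius_px = np.hypot(u - CX, v - CY)
    # Range would follow from a mirror-profile calibration, e.g.
    # range_m = radial_calibration(radius_px)   # hypothetical lookup
    print("laser spot azimuth: %.1f deg, radial offset: %.1f px"
          % (azimuth, radius_px))
```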

Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera (카메라를 이용한 3차원 공간상의 이동 목표물의 거리정보기반 모션추정)

  • Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.12
    • /
    • pp.2057-2060
    • /
    • 2016
  • Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object in order to estimate it, the proposed method relaxes these constraints so that a more general object motion can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera so that the object velocity and the unknown camera velocity can be estimated. Stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.