• Title/Summary/Keyword: camera image


A Real-time Plane Estimation in Virtual Reality Using a RGB-D Camera in Indoors (RGB-D 카메라를 이용한 실시간 가상 현실 평면 추정)

  • Yi, Chuho;Cho, Jungwon
    • Journal of Digital Convergence / v.14 no.11 / pp.319-324 / 2016
  • For robot and augmented reality applications that use a camera, plane estimation is a very important technology. An RGB-D camera can obtain three-dimensional measurements even on a flat surface that has no texture information; however, processing the point-cloud data of the image requires an enormous amount of computation. Furthermore, the number of planes currently observed is not known in advance, and an additional operation is required to estimate each three-dimensional plane. In this paper, we propose a real-time method that automatically determines the number of planes and estimates the three-dimensional planes by using the continuous data of an RGB-D camera. Experimental results show that the proposed method is approximately 22 times faster than processing the entire data.
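The abstract does not spell out the estimation procedure, but a common baseline for plane fitting in RGB-D point clouds is RANSAC; the NumPy-only sketch below illustrates that idea for a single plane. The thresholds, iteration count, and synthetic data are assumptions for illustration, not the authors' settings, and the paper's automatic plane counting over continuous frames is not reproduced.

```python
# Minimal RANSAC plane fit for an RGB-D point cloud (illustrative sketch only).
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """points: (N, 3) array of 3-D points from an RGB-D frame (meters)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample three points and compute the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Point-to-plane distances and inlier mask.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

if __name__ == "__main__":
    # Synthetic floor plane z = 0 with noise, plus random clutter.
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)]
    clutter = rng.uniform(-1, 1, (100, 3))
    (normal, d), inliers = fit_plane_ransac(np.vstack([floor, clutter]), rng=1)
    print("normal:", normal.round(3), "d:", round(d, 3), "inliers:", inliers.sum())
```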

Vision-Based Displacement Measurement System Operable at Arbitrary Positions (임의의 위치에서 사용 가능한 영상 기반 변위 계측 시스템)

  • Lee, Jun-Hwa;Cho, Soo-Jin;Sim, Sung-Han
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.18 no.6 / pp.123-130 / 2014
  • In this study, a vision-based displacement measurement system is developed to accurately measure the displacement of a structure with the camera located at an arbitrary position. Previous vision-based systems introduce error when the optical axis of the camera is at an angle to the measured structure, which limits their applicability to large structures. The developed system measures displacement by processing images of a target plate attached at the measurement position of the structure. To measure displacement regardless of the angle between the optical axis of the camera and the target plate, planar homography is employed to match the two planes in the image and world coordinate systems. To validate the performance of the system, a laboratory test is carried out using a small two-story shear building model. The result shows that the system measures accurate displacement of the structure even with the camera significantly angled to the target plate.
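As a rough illustration of the homography step (not the authors' code), the sketch below uses OpenCV's findHomography and perspectiveTransform to map pixel coordinates of a target plate to coordinates on the plate's own plane; the corner coordinates and pixel values are hypothetical.

```python
# Map image coordinates of a target plate to plate-plane world coordinates via
# a planar homography, so displacement can be measured even when the optical
# axis is not perpendicular to the plate. All coordinates below are assumed.
import cv2
import numpy as np

# Known world coordinates of the four target-plate corners (mm), on the plate plane.
world_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)

# Corresponding pixel coordinates detected in the reference image (hypothetical).
image_pts = np.array([[212, 148], [418, 161], [409, 365], [204, 352]], dtype=np.float32)

# Homography from image plane to world (plate) plane.
H, _ = cv2.findHomography(image_pts, world_pts)

def to_world(pixel_xy):
    """Map a pixel coordinate on the plate to world coordinates in mm."""
    p = np.array([[pixel_xy]], dtype=np.float32)        # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

# Displacement = difference of the marker's world positions in two frames.
before = to_world((310, 255))
after = to_world((318, 257))
print("displacement (mm):", after - before)
```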

Real-Time Camera Tracking for Virtual Studio (가상스튜디오 구현을 위한 실시간 카메라 추적)

  • Park, Seong-Woo;Seo, Yong-Duek;Hong, Ki-Sang
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.90-103 / 1999
  • In this paper, we present an overall algorithm for real-time camera parameter extraction, which is one of the key elements in implementing a virtual studio. The prevailing mechanical methods for tracking cameras have several disadvantages, such as price, calibration with the camera, and operability. To overcome these disadvantages, we calculate camera parameters directly from the input image using computer-vision techniques. When zoom lenses are used, lens distortion must be calculated in real time; in the Tsai algorithm adopted for camera calibration, it is obtained through nonlinear optimization in a three-parameter space, which usually takes a long computation time. We propose a new method that separates the lens distortion parameter from the other two parameters, so the problem is reduced to nonlinear optimization in a one-parameter space and can be computed fast enough for real-time application.
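The abstract's one-parameter reduction is not detailed, so the sketch below only illustrates the general idea of estimating a single radial distortion coefficient by one-dimensional nonlinear optimization; the distortion model, the synthetic data, and the use of scipy.optimize.minimize_scalar are assumptions, not the paper's formulation.

```python
# Estimate one radial distortion coefficient k1 by a 1-D bounded search,
# given matched ideal (undistorted) and observed (distorted) points.
import numpy as np
from scipy.optimize import minimize_scalar

def distort(points, k1):
    """Apply a one-parameter radial distortion about the image center (0, 0)."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2)

# Synthetic ideal points (normalized coordinates) and observations with k1 = -0.12.
rng = np.random.default_rng(0)
ideal = rng.uniform(-0.5, 0.5, (50, 2))
observed = distort(ideal, -0.12) + rng.normal(0, 1e-4, (50, 2))

def cost(k1):
    # Sum of squared residuals between predicted and observed distorted points.
    return np.sum((distort(ideal, k1) - observed) ** 2)

res = minimize_scalar(cost, bounds=(-0.5, 0.5), method="bounded")
print("estimated k1:", round(res.x, 4))
```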


DOF Correction of Heterogeneous Stereoscopic Cameras (이종 입체영상 카메라의 피사계심도 일치화)

  • Choi, Sung-In;Park, Soon-Yong
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.7 / pp.169-179 / 2014
  • In this paper, we propose a DOF (depth of field) correction technique that determines the internal parameters of a 3-D camera consisting of stereoscopic cameras with different optical properties. Any difference in the size or the depth range of focused objects between the left and right stereoscopic images can cause visual fatigue for human viewers. The object size in the stereoscopic images is corrected by the LUT of the zoom lenses, and the forward and backward DOF are corrected by the object distance. The F-numbers are then determined to adjust the optical properties of the camera for DOF correction. By applying the proposed technique to a main-sub type 3-D camera using a GUI-based DOF simulator, the DOF of the camera is corrected automatically.
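As background for the DOF quantities mentioned above, the sketch below evaluates the standard thin-lens near/far depth-of-field limits from focal length, F-number, circle of confusion, and object distance; the numeric values are illustrative assumptions and the paper's LUT-based correction is not reproduced.

```python
# Standard thin-lens depth-of-field limits (illustrative values only).
def dof_limits(f_mm, f_number, coc_mm, subject_mm):
    """Return (near_limit_mm, far_limit_mm) of the depth of field."""
    hyperfocal = f_mm**2 / (f_number * coc_mm) + f_mm
    near = subject_mm * (hyperfocal - f_mm) / (hyperfocal + subject_mm - 2 * f_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")                      # everything beyond the near limit is sharp
    else:
        far = subject_mm * (hyperfocal - f_mm) / (hyperfocal - subject_mm)
    return near, far

# Example: 50 mm lens at f/2.8, 0.03 mm circle of confusion, subject at 3 m.
near, far = dof_limits(50.0, 2.8, 0.03, 3000.0)
print(f"near: {near/1000:.2f} m, far: {far/1000:.2f} m, total DOF: {(far-near)/1000:.2f} m")
```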

Underwater Docking of an AUV Using a Visual Servo Controller (비쥬얼 서보 제어기를 이용한 자율무인잠수정의 도킹)

  • Lee, Pan-Mook;Jeon, Bong-Hwan;Lee, Chong-Moo
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference / 2002.10a / pp.142-148 / 2002
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels that autonomously investigate sea environments, oceanography, and deep-sea resources. Docking systems are required to extend the capability of AUVs by recharging batteries and transmitting data in real time for specific underwater tasks, such as repeated jobs at the seabed. This paper presents a visual servo control system for an AUV to dock into an underwater station using a camera mounted at the nose center of the AUV. To build the visual servo control system, the paper derives an optical flow model of the camera, in which the projected motions on the image plane are described in terms of the rotational and translational velocities of the AUV. It combines the optical flow equation of the camera with the AUV's equation of motion, derives a state equation for the visual servoing AUV, and proposes a discrete-time MIMO controller that minimizes a cost function. The control inputs of the AUV are generated automatically from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and control law, simulations of docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
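The paper's controller is a discrete-time MIMO controller minimizing a cost function; as a generic stand-in (not the authors' design), the sketch below applies a discrete-time LQR to a placeholder linear state equation, with A, B, Q, and R chosen arbitrarily.

```python
# Discrete-time LQR on a toy state equation x[k+1] = A x[k] + B u[k]
# (placeholder matrices, not the AUV/optical-flow model from the paper).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])     # state weight (e.g., image-plane error and its rate)
R = np.array([[0.1]])        # control effort weight

# Solve the discrete algebraic Riccati equation and form the LQR gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop simulation from an initial image-plane error.
x = np.array([1.0, 0.0])
for k in range(50):
    u = -K @ x
    x = A @ x + B @ u
print("final state:", x.round(4))
```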


Geometric Modelling and Coordinate Transformation of Satellite-Based Linear Pushbroom-Type CCD Camera Images (선형 CCD카메라 영상의 기하학적 모델 수립 및 좌표 변환)

  • 신동석;이영란
    • Korean Journal of Remote Sensing / v.13 no.2 / pp.85-98 / 1997
  • A geometric model of pushbroom-type linear CCD camera images is proposed in this paper. At present, this type of camera is used to obtain almost all high-resolution optical images from satellites. The proposed geometric model includes both a forward transformation and an inverse transformation. The inverse transformation function cannot be derived analytically in closed form because the focal point of the image varies with time; therefore, an iterative algorithm in which the focal point converges to a given pixel position is proposed. Although the proposed model can be applied to any pushbroom-type linear CCD camera image, the geometric model of the high-resolution multispectral camera on board KITSAT-3 is used as an example. The flight model of KITSAT-3 is currently in development and is due to be launched in late 1998.
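To illustrate the iterative inverse transformation in spirit only, the sketch below inverts a toy time-varying forward model with a finite-difference Newton iteration; the forward model and its constants are invented for the example and are unrelated to the KITSAT-3 camera model.

```python
# Invert a toy pushbroom-style forward model (ground -> pixel) numerically,
# since the sensor position changes with each scan line and no closed-form
# inverse is assumed. Purely illustrative constants.
import numpy as np

def forward(ground):
    """Toy forward model: ground (x, y) in meters -> image (line, column) in pixels."""
    x, y = ground
    line = y / 10.0                      # 10 m along-track ground sampling distance
    platform_x = 0.02 * line             # platform drifts across-track as it flies
    column = (x - platform_x) / 10.0     # 10 m across-track ground sampling distance
    return np.array([line, column])

def inverse(pixel, guess=(0.0, 0.0), n_iters=10, eps=1e-3):
    """Invert forward() for a given (line, column) by finite-difference Newton iteration."""
    g = np.array(guess, dtype=float)
    for _ in range(n_iters):
        r = forward(g) - pixel
        if np.linalg.norm(r) < 1e-9:
            break
        # Finite-difference Jacobian of the forward model.
        J = np.column_stack([
            (forward(g + [eps, 0]) - forward(g)) / eps,
            (forward(g + [0, eps]) - forward(g)) / eps,
        ])
        g -= np.linalg.solve(J, r)
    return g

target_pixel = np.array([120.0, 350.0])
ground = inverse(target_pixel)
print("ground (m):", ground.round(2), "reprojects to:", forward(ground).round(3))
```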

An Education Plan for Camera Drone (촬영용 드론 교육 방안)

  • Park, Sung-Dae;Han, Kun-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1206-1213 / 2021
  • Drones, originally invented for military use, have expanded their range of applications with the development of related technology, and their influence now extends to the private sector. Currently, the use of drones is increasing in many areas, such as agriculture, unmanned parcel delivery, production of image content, and architecture. In 2021, South Korea introduced and began operating a drone certification system for drone flight. For drones with a maximum takeoff weight of 2 kg or more, flight experience and a practical examination are required, whereas for drones lighter than 2 kg, completing an online education course is sufficient to operate them without flight experience or a practical examination. Recently, drone-related accidents have been increasing with the rapid spread of camera drones with a maximum takeoff weight of less than 2 kg. This paper introduces the characteristics of the camera drone to meet this burgeoning demand and discusses an education plan for the camera drone.

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance on autonomous vehicle platforms by fusing objects detected individually by the camera and LiDAR through a late fusion approach. For object detection with the camera sensor, the YOLOv3 model was employed as a one-stage detector, and distance estimation of the detected objects is based on the perspective projection matrix. Object detection with LiDAR is based on the K-means clustering method. Camera-LiDAR calibration was carried out with PnP-RANSAC to calculate the rotation and translation matrix between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane together with the distance and angle in world coordinates were estimated, and the three attributes (IoU, distance, and angle) were fused using logistic regression. The performance evaluation in the sensor fusion scenario showed an effective 5% improvement in object detection performance compared to using a single sensor.
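The sketch below illustrates the final fusion step in a hedged way: a logistic regression over IoU, distance difference, and angle difference decides whether a camera detection and a LiDAR cluster correspond to the same object. The training data are synthetic placeholders, not the paper's dataset.

```python
# Late fusion of camera/LiDAR detection pairs with logistic regression
# over (IoU, |distance difference|, |angle difference|). Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic matched pairs: high IoU, small distance/angle gaps.
pos = np.c_[rng.uniform(0.5, 1.0, 200),        # IoU
            np.abs(rng.normal(0.3, 0.2, 200)), # |distance difference| (m)
            np.abs(rng.normal(1.0, 0.5, 200))] # |angle difference| (deg)
# Synthetic non-matched pairs: low IoU, large gaps.
neg = np.c_[rng.uniform(0.0, 0.4, 200),
            np.abs(rng.normal(3.0, 1.0, 200)),
            np.abs(rng.normal(8.0, 3.0, 200))]

X = np.vstack([pos, neg])
y = np.r_[np.ones(200), np.zeros(200)]

clf = LogisticRegression().fit(X, y)

# Fuse a new candidate pair: IoU 0.72, 0.4 m and 1.5 deg apart.
prob = clf.predict_proba([[0.72, 0.4, 1.5]])[0, 1]
print(f"probability the camera and LiDAR detections are the same object: {prob:.2f}")
```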

Voice Assistant for Visually Impaired People (시각장애인을 위한 음성 도우미 장치)

  • Chae, Jun-Gy;Jang, Ji-Woo;Kim, Dong-Wan;Jung, Su-Jin;Lee, Ik Hyun
    • The Journal of Korean Institute of Information Technology / v.17 no.4 / pp.131-136 / 2019
  • People with impaired vision face many inconveniences in daily life, such as distinguishing colors, identifying currency notes, and sensing the ambient temperature. To assist visually impaired people, we therefore propose a system that utilizes optical and infrared cameras. In the proposed system, an optical camera collects features related to colors and currency notes, while an infrared camera obtains temperature information. The user selects the desired service by pushing a button, and the appropriate voice information is provided through the speaker. The device can distinguish 16 colors, four different currency notes, and temperature information in four levels, and its current accuracy is around 90%. This can be improved further through block-wise processing of the input image, machine learning, and a higher-grade infrared camera. In addition, the device will be attached to a walking stick for easy carrying and more convenient use.
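One piece of such a system, color naming, can be sketched as nearest-neighbor matching of a frame's center patch against a small reference palette; the palette, patch size, and synthetic frame below are assumptions for illustration, and the device's actual classifier is not shown.

```python
# Name the dominant color of a camera frame by nearest-neighbor matching of
# the center patch against a small RGB palette (illustrative only).
import numpy as np

# Reference palette: a handful of (name, RGB) entries; the device holds 16.
PALETTE = {
    "red": (220, 30, 30), "green": (40, 170, 60), "blue": (40, 70, 200),
    "yellow": (235, 220, 50), "black": (20, 20, 20), "white": (240, 240, 240),
    "gray": (128, 128, 128), "orange": (240, 140, 30),
}

def name_color(frame_rgb):
    """frame_rgb: (H, W, 3) uint8 image; returns the palette name closest to the center patch."""
    h, w, _ = frame_rgb.shape
    patch = frame_rgb[h // 2 - 20:h // 2 + 20, w // 2 - 20:w // 2 + 20]
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)
    names = list(PALETTE)
    dists = [np.linalg.norm(mean_rgb - np.array(PALETTE[n], float)) for n in names]
    return names[int(np.argmin(dists))]

# Example with a synthetic mostly-orange frame.
frame = np.full((240, 320, 3), (245, 150, 40), dtype=np.uint8)
print(name_color(frame))   # -> "orange"
```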

A Study on the Development of YOLO-Based Maritime Object Detection System through Geometric Interpretation of Camera Images (카메라 영상의 기하학적 해석을 통한 YOLO 알고리즘 기반 해상물체탐지시스템 개발에 관한 연구)

  • Kang, Byung-Sun;Jung, Chang-Hyun
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.4 / pp.499-506 / 2022
  • For autonomous ships to be commercialized and able to navigate in coastal waters, they must be able to detect maritime obstacles. One of the most common obstacles seen in coastal areas is the farm buoy. In this study, a maritime object detection system was developed that detects buoys using the YOLO algorithm and visualizes the distance and bearing between the buoys and the ship through geometric interpretation of camera images. After training the maritime object detection model with 1,224 pictures of buoys, the precision of the model was 89.0%, the recall 95.0%, and the F1-score 92.0%. Camera calibration was conducted to calculate the distance and bearing of an object from the camera using the obtained image coordinates, and Experiments A and B were designed to verify the performance of the system. The verification results show that the maritime object detection system is superior to radar in short-range detection capability, so it can be used as a navigational aid alongside the radar.
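A hedged sketch of the geometric interpretation step is given below: with assumed intrinsics, camera height, tilt, and a flat-sea approximation, the waterline pixel of a detected buoy is converted into an approximate distance and relative bearing. None of the numbers come from the paper's calibration.

```python
# Convert a detected buoy's waterline pixel into approximate distance and
# relative bearing, assuming calibrated intrinsics and a flat sea surface.
import math

FX, FY = 1000.0, 1000.0        # focal lengths in pixels (assumed)
CX, CY = 960.0, 540.0          # principal point (assumed, 1920x1080 image)
CAM_HEIGHT = 10.0              # camera height above the sea surface (m, assumed)
CAM_TILT = math.radians(5.0)   # downward tilt of the optical axis (assumed)

def distance_and_bearing(u, v):
    """u, v: pixel of the object's waterline point. Returns (distance_m, bearing_deg)."""
    # Vertical angle below the optical axis, then below the horizon.
    depression = math.atan2(v - CY, FY) + CAM_TILT
    if depression <= 0:
        return None                       # at or above the horizon: no flat-sea solution
    distance = CAM_HEIGHT / math.tan(depression)
    # Horizontal angle from the optical axis (relative bearing, starboard positive).
    bearing = math.degrees(math.atan2(u - CX, FX))
    return distance, bearing

d, b = distance_and_bearing(1100.0, 700.0)
print(f"distance ~ {d:.1f} m, relative bearing ~ {b:.1f} deg")
```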