• Title/Summary/Keyword: one camera

Search Results: 1,583

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.541-548
    • /
    • 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced using the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for the positional information of the pixels, or can be integrated with ground control points (GCPs) measured directly on the ground. However, due to the high cost of establishing GCPs prior to georeferencing, or for inaccessible areas, it is often necessary to derive the positions without such reference information. This study aims to improve the georeferencing performance of multispectral camera images without such ground reference points, using instead a high-resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone camera are first estimated through bundle adjustment and compared with reference values derived from GCPs. The results showed that incorporating the images from the high-resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor showed that including the RGB images can reduce the angle errors by an order of magnitude.

Performance Evaluation of Smartphone Camera App with Multi-Focus Shooting and Focus Post-processing Functions (다초점 촬영과 초점후처리 기능을 가진 스마트폰 카메라 앱의 성능평가)

  • Chae-Won Park;Kyung-Mi Kim;Song-Yeon Yoo;Yu-Jin Kim;Kitae Hwang;In-Hwang Jung;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.35-40
    • /
    • 2024
  • In this paper, we validate the practicality of the OnePIC app implemented in our previous study by analyzing its execution and storage performance. The OnePIC app is a camera app that lets the user obtain a photo with the desired focus after taking photos focused at various points. To evaluate performance, we analyzed the distance-focus shooting time and the object-focus shooting time in detail. The performance was measured on an actual smartphone. The distance-focus shooting time for 5 photos was about 0.84 seconds, the object detection time was about 0.19 seconds regardless of the number of objects, and the object-focus shooting time for 5 photos was about 4.84 seconds. When we compared the size of a single All-in-JPEG file storing the multi-focus photos to the total size of the individually stored JPEG files, there was no significant saving in storage space because the All-in-JPEG file size was only slightly smaller. However, All-in-JPEG has the great advantage of simplifying the management of multi-focus photos. We therefore conclude that the OnePIC app is practical in terms of shooting time, photo storage size, and photo management.

IoT-enabled Solutions for Tour Photography Services

  • Jeong, Isu;Baek, Seungwoo;An, Eunsol;Kim, Yujin;Choi, Jiwoo;Yun, Jaeseok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.127-135
    • /
    • 2020
  • In this paper, we propose an IoT-enabled solution for tour photography services that requires only a small investment and limited resources in the travel and tourism industries, and that can create economic, social, and cultural value. An IoT-enabled camera is developed on an open hardware and software platform complying with oneM2M; a middleware solution called TAS (thing adaptation software) can turn traditional embedded systems into oneM2M-compliant devices. IoT cameras deployed around photo zones at a tour site can be remotely controlled via an IoT gateway with a Web-based application on a smartphone. Users can pan and tilt a camera if they wish, and then take and download a perfect photo (even when they are away from the tour site). We expect that the proposed solution will promote the deployment of IoT-enabled technologies in the tour and travel industries, which are important parts of the tertiary sector.

Video Camera Characterization with White Balance (기준 백색 선택에 따른 비디오 카메라의 전달 특성)

  • 김은수;박종선;장수욱;한찬호;송규익
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.23-34
    • /
    • 2004
  • A video camera can be a useful tool for capturing images for colorimetric use. However, the RGB signals generated by different video cameras are not equal for the same scene. A video camera used as a colorimeter is characterized with respect to the CIE standard colorimetric observer. One method of deriving a colorimetric characterization matrix between the camera RGB output signals and the CIE XYZ tristimulus values is least-squares polynomial modeling. However, it requires tedious experiments to obtain the camera transfer matrix under the various white balance points of the same camera. In this paper, a new method is proposed for obtaining the camera transfer matrix under a different white balance from the 3×3 camera transfer matrix measured at a certain white balance point. According to the proposed method, the camera transfer matrix under any other white balance can be obtained using the colorimetric coordinates of the phosphors derived from the 3×3 linear transfer matrix at the known white balance point. The experimental results demonstrate that the proposed method yields a 3×3 linear transfer matrix under any other white balance with a reasonable degree of accuracy compared with the transfer matrix obtained by experiment.
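The least-squares characterization step described above can be sketched in a few lines. The 3×3 matrix, the 24 synthetic colour patches, and the noise-free camera model below are all illustrative assumptions, not the paper's data:

```python
import numpy as np

# Hypothetical training data: camera RGB outputs for a set of colour
# patches and the CIE XYZ values "measured" for the same patches.
rng = np.random.default_rng(1)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])     # assumed RGB -> XYZ transfer matrix
rgb = rng.uniform(0.0, 1.0, size=(24, 3))   # 24 synthetic colour patches
xyz = rgb @ M_true.T                        # ideal, noise-free camera model

# Least-squares fit of the 3x3 characterization matrix:
# minimize || rgb @ M.T - xyz || over M.
M_fit, _, _, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
print(np.allclose(M_fit, M_true))           # exact recovery in the noise-free case
```

With real chart measurements the fit is only approximate, and higher-order polynomial terms can be appended as extra columns of the design matrix.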

A Visual Calibration Scheme for Off-Line Programming of SCARA Robots (스카라 로봇의 오프라인 프로그래밍을 위한 시각정보 보정기법)

  • Park, Chang-Kyoo;Son, Kwon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.1
    • /
    • pp.62-72
    • /
    • 1997
  • High flexibility and productivity with industrial robots are being achieved in manufacturing lines through off-line robot programming. A good off-line programming system should provide robot modelling, trajectory planning, graphical teach-in, and kinematic and dynamic simulation. Simulated results, however, can hardly be applied to on-line tasks unless a calibration procedure accompanies them. This paper proposes a visual calibration scheme in order to provide a calibration tool for our own off-line programming system for SCARA robots. The suggested scheme is based on position-based visual servoing and perspective projection. It requires only one camera, as it uses saved kinematic data for three-dimensional visual calibration. Predicted images are generated and then compared with camera images to update the positions and orientations of objects. The scheme is simple and effective enough to be used in real-time robot programming.
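Since the scheme rests on perspective projection, a minimal pinhole-projection sketch may help. The intrinsic matrix `K`, the pose `(R, t)`, and the points below are made-up illustration values, not the paper's calibration:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole perspective projection of world points: u ~ K (R X + t)."""
    cam = (R @ points_3d.T).T + t        # world frame -> camera frame
    uvw = (K @ cam.T).T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide by depth

K = np.array([[800.0,   0.0, 320.0],     # fx, skew, cx (hypothetical)
              [  0.0, 800.0, 240.0],     # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera aligned with world axes
t = np.array([0.0, 0.0, 2.0])            # camera 2 m in front of the origin
pts = np.array([[0.0,  0.0, 0.0],
                [0.1, -0.1, 0.5]])
print(project(pts, K, R, t))             # pixel coordinates of the two points
```

Calibration then amounts to adjusting the assumed pose until such predicted pixel positions match the ones observed in the camera image.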

An Obstacle Detection and Avoidance Method for Mobile Robot Using a Stereo Camera Combined with a Laser Slit

  • Kim, Chul-Ho;Lee, Tai-Gun;Park, Sung-Kee;Kim, Jai-Hie
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.871-875
    • /
    • 2003
  • Detecting and avoiding obstacles is one of the important tasks of mobile navigation. In a real environment, when a mobile robot encounters dynamic obstacles, it must detect and avoid them simultaneously to keep its body safe. In previous vision systems, mobile robots have used the camera as either a passive sensor or an active sensor. This paper proposes a new obstacle detection algorithm that uses a stereo camera as both a passive and an active sensor. Our system estimates the distances to obstacles by both passive correspondence and active correspondence using a laser slit. The system operates in three steps. First, a far-off obstacle is detected from the disparity given by stereo correspondence. Next, a close obstacle is detected from the laser slit beam projected in the same stereo image. Finally, we implement an obstacle avoidance algorithm that adopts a modified Dynamic Window Approach (DWA) using the acquired obstacle distances.
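In both the passive and the active step, distance follows from the standard rectified-stereo relation Z = f·B/d. The focal length and baseline below are hypothetical values chosen purely for illustration:

```python
# Depth from stereo disparity for a rectified pair: Z = f * B / d.
f_px = 700.0   # focal length in pixels (assumed)
base = 0.12    # stereo baseline in metres (assumed)

def depth_from_disparity(d_px):
    """Metric depth of a point observed with horizontal disparity d_px pixels."""
    return f_px * base / d_px

for d in (70.0, 14.0, 7.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):5.2f} m")
```

Near obstacles produce large disparities and small depths; because depth error grows roughly with the square of the distance, a second cue such as a laser slit is attractive at close range.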


Moving Target Indication using an Image Sensor for Small UAVs (소형 무인항공기용 영상센서 기반 이동표적표시 기법)

  • Yun, Seung-Gyu;Kang, Seung-Eun;Ko, Sangho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.12
    • /
    • pp.1189-1195
    • /
    • 2014
  • This paper addresses a Moving Target Indication (MTI) algorithm for small Unmanned Aerial Vehicles (UAVs) equipped with image sensors. MTI is a system (or algorithm) that detects moving objects. The principle of the MTI algorithm is to analyze the difference between successive image frames. It is difficult to detect moving objects in images recorded by dynamic cameras attached to moving platforms, such as UAVs flying at low altitudes over varied terrain, since the acquired images contain two motion components: camera motion and object motion. Therefore, the motion of independent objects can be obtained only after the camera motion has been thoroughly compensated through proper manipulation. In this study, the camera motion effects are removed using Wiener-filter-based image registration, one of the non-parametric methods. In addition, an image pyramid structure is adopted to reduce the computational complexity for UAVs. We demonstrate the effectiveness of our method with experimental results on outdoor video sequences.
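The register-then-difference idea can be sketched with phase correlation standing in for the Wiener-filter registration used in the paper (a closely related frequency-domain technique). The frames, the global shift, and the small "moving object" below are all synthetic:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame a to frame b."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                      # map wrapped indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(2)
frame0 = rng.uniform(size=(64, 64))
frame1 = np.roll(frame0, (3, -5), axis=(0, 1))  # simulated camera motion
frame1[40:44, 40:44] += 1.0                     # small independently moving object

dy, dx = phase_correlation_shift(frame0, frame1)
compensated = np.roll(frame1, (-dy, -dx), axis=(0, 1))  # undo camera motion
diff = np.abs(compensated - frame0)             # residual highlights the object
print(dy, dx, int(np.count_nonzero(diff > 0.5)))
```

After compensation, the frame difference is zero everywhere except on the independently moving patch, which is the MTI detection.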

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.18 no.3
    • /
    • pp.226-233
    • /
    • 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points, together with an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose, into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to solve the resulting optimization problem neatly. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
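The core numerical step, solving one stacked inhomogeneous linear system with the SVD, can be sketched as follows. The 12×6 system and its entries are made-up stand-ins for the paper's feature and inertial constraints:

```python
import numpy as np

# Hypothetical stacked constraints A x = b, where x holds the six unknown
# pose parameters (rotation and translation in the paper's formulation).
rng = np.random.default_rng(3)
A = rng.standard_normal((12, 6))    # 12 linear constraints on a 6-DOF pose
x_true = rng.standard_normal(6)
b = A @ x_true                      # consistent right-hand side

# Least-squares solution via the SVD: x = V diag(1/s) U^T b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_est = Vt.T @ ((U.T @ b) / s)
print(np.allclose(x_est, x_true))   # exact for this consistent, full-rank system
```

For noisy or nearly rank-deficient systems, one would zero out the reciprocals of tiny singular values before back-substituting, which is what makes the SVD route numerically robust.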

Flip Chip Interconnection Method Applied to Small Camera Module

  • Segawa, Masao;Ono, Michiko;Karasawa, Jun;Hirohata, Kenji;Aoki, Makoto;Ohashi, Akihiro;Sasaki, Tomoaki;Kishimoto, Yasukazu
    • Proceedings of the International Microelectronics And Packaging Society Conference
    • /
    • 2000.10a
    • /
    • pp.39-45
    • /
    • 2000
  • A small camera module fabricated using bare-chip bonding methods is utilized to realize advanced mobile devices. One of the driving forces is the TOG (Tape On Glass) bonding method, which reduces the packaging size of the image sensor chip. The TOG module is a new, thinner and smaller image sensor module using a flip-chip interconnection method with ACP (Anisotropic Conductive Paste). The TOG production process was established by determining the optimum bonding conditions for both the optical glass bonding and the bonding of the image sensor chip to the flexible PCB. The bonding conditions, including sufficient bonding margins, were studied. Another bonding method is the flip-chip bonding method for the DSP (Digital Signal Processor) chip. A new ACP was developed to enable a short resin curing time of 10 s. The bonding mechanism of the resin curing method was evaluated using FEM analysis. By using these flip-chip bonding techniques, a small camera module was realized.


Hybrid Cepstral Filter for Precise Vergence Control of Parallel Stereoscopic Camera (수평이동방식 입체카메라의 주시각 제어를 위한 Hybrid Cepstral Filter에 의한 시차정보 추출)

  • Kwon, Ki-Chul;Kim, Nam
    • Journal of Broadcast Engineering
    • /
    • v.9 no.1
    • /
    • pp.91-94
    • /
    • 2004
  • Vergence control of a parallel stereoscopic camera needs only the disparity information between the left and right images in the horizontal direction. This paper proposes a fast and precise algorithm that extracts the horizontal disparity of a stereoscopic image pair, together with its sign, through an HCF (Hybrid Cepstral Filter). The proposed disparity-extraction algorithm obtains an accurate horizontal disparity value and its sign by using both a one-dimensional cepstral filter, which uses the vertical projection data of the left and right images, and a two-dimensional cepstral filter, which uses a down-sampled image.
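A single-row power-cepstrum sketch of the idea is below. The full HCF combines 1-D and 2-D cepstral filters and also recovers the disparity sign, whereas this simplified synthetic example estimates only the magnitude:

```python
import numpy as np

def cepstral_disparity(left_row, right_row, max_disp=24):
    """Estimate the horizontal disparity magnitude between two image rows.

    The rows are juxtaposed; a shift of d between them appears as an
    'echo' peak at quefrency n + d in the power cepstrum of the composite.
    """
    n = len(left_row)
    s = np.concatenate([left_row, right_row])       # composite signal
    power = np.abs(np.fft.fft(s)) ** 2              # power spectrum
    ceps = np.fft.ifft(np.log(power + 1e-12)).real  # power cepstrum
    window = np.abs(ceps[n - max_disp: n + max_disp + 1])
    return abs(int(np.argmax(window)) - max_disp)   # peak offset from quefrency n

rng = np.random.default_rng(4)
left = rng.standard_normal(256)       # synthetic image row
right = np.roll(left, 7)              # right row shifted by 7 pixels
print(cepstral_disparity(left, right))
```

Because the real cepstrum is symmetric, peaks appear at quefrencies n ± d; resolving which of the two is the true echo is exactly the sign-information problem the hybrid filter addresses.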