• Title/Summary/Keyword: single camera


Single Photo Resection Using Cosine Law and Three-dimensional Coordinate Transformation (코사인 법칙과 3차원 좌표 변환을 이용한 단사진의 후방교회법)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.3, pp.189-198, 2019
  • In photogrammetry, single photo resection is a method of determining the exterior orientation parameters, i.e. the position and attitude of the camera at the time a photograph was taken, from known interior orientation parameters, ground coordinates, and image coordinates. In this study, we proposed a single photo resection algorithm that determines the exterior orientation parameters of the camera using the cosine law and a linear-equation-based three-dimensional coordinate transformation. The proposed algorithm first calculates the scale between the ground coordinates and the corresponding normalized image coordinates using the cosine law. The exterior orientation parameters are then determined by applying the linear-equation-based three-dimensional coordinate transformation to the normalized coordinates and the ground coordinates, taking the calculated scale into account. Although partial derivatives are still required for the nonlinear equations, the algorithm is not sensitive to the initial values because each ground coordinate is divided by the longest distance among all pairs of ground control points. In addition, since the exterior orientation parameters can be determined from only three points, the method is stable with respect to the geometrical arrangement of the control points.
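
A minimal sketch of the cosine-law step described above, assuming three ground control points and their normalized image coordinates are available; the function names and the use of SciPy's numerical solver are illustrative, not the authors' implementation:

```python
# Minimal sketch (not the authors' implementation): the cosine-law step of
# single photo resection. Given three ground points and their normalized image
# coordinates, recover the camera-to-point distances d_i from
#   d_i^2 + d_j^2 - 2*d_i*d_j*cos(theta_ij) = |P_i - P_j|^2,
# where theta_ij is the angle between the two image rays.
import numpy as np
from scipy.optimize import fsolve

def ray(xy_norm):
    """Unit ray through a normalized image point (x, y, 1)."""
    v = np.array([xy_norm[0], xy_norm[1], 1.0])
    return v / np.linalg.norm(v)

def resection_distances(ground_pts, norm_pts, d0=None):
    P = np.asarray(ground_pts, float)                    # 3x3 ground coordinates
    rays = np.array([ray(p) for p in norm_pts])
    cos = rays @ rays.T                                  # cosines of inter-ray angles
    D2 = np.sum((P[:, None] - P[None, :])**2, axis=-1)   # squared ground distances

    def eqs(d):
        return [d[i]**2 + d[j]**2 - 2*d[i]*d[j]*cos[i, j] - D2[i, j]
                for i, j in ((0, 1), (1, 2), (0, 2))]

    # Rough initial guess: mean ground distance (the solution is not very
    # sensitive to this starting value in practice).
    if d0 is None:
        d0 = np.full(3, np.sqrt(D2[D2 > 0].mean()))
    return fsolve(eqs, d0)
```

Once the distances are known, the scaled normalized coordinates and the ground coordinates can be related by the linear three-dimensional coordinate transformation described in the abstract.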

A Fast Vision-based Head Tracking Method for Interactive Stereoscopic Viewing

  • Putpuek, Narongsak;Chotikakamthorn, Nopporn
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2004.08a, pp.1102-1105, 2004
  • In this paper, the problem of tracking a viewer's head in a desktop-based interactive stereoscopic display system is considered. A fast and low-cost approach to the problem is important for such a computing environment. The system under consideration uses shutter glasses for stereoscopic display. The proposed method makes use of an image taken from a single low-cost video camera. Using a simple feature extraction algorithm, the points obtained from the image of the user-worn shutter glasses are used to estimate the glasses center, the local 'yaw' angle measured about the glasses center, and the global 'yaw' angle measured with respect to the camera location. The stereoscopic image synthesis program uses these estimates to interactively adjust the two-view stereoscopic image pair displayed on the computer screen. The adjustment is carried out so that the resulting stereoscopic picture, when viewed from the current user position, provides close-to-real perspective and depth perception. However, because the algorithm and device used are designed for fast computation, the estimation is typically not precise enough to provide flicker-free interactive viewing. An error concealment method is thus proposed to alleviate the problem. This concealment method should be sufficient for applications that do not require a high degree of visual realism and interaction.
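
As an illustration of this kind of estimation, the sketch below recovers the glasses center, a global yaw (bearing of the center relative to the camera axis), and a local yaw from three assumed feature points on the shutter glasses (left edge, bridge, right edge). The geometry, field of view, and glasses width are assumptions, not the paper's exact formulation:

```python
# Illustrative sketch only: head pose cues from three detected points on the
# shutter glasses (left edge, bridge, right edge) in a single camera image.
import numpy as np

def glasses_pose(p_left, p_bridge, p_right, image_width,
                 fov_deg=60.0, half_width_mm=70.0):
    p_left, p_bridge, p_right = (np.asarray(p, float)
                                 for p in (p_left, p_bridge, p_right))
    center = 0.5 * (p_left + p_right)

    # Focal length in pixels from an assumed horizontal field of view.
    f_px = (image_width / 2.0) / np.tan(np.radians(fov_deg / 2.0))

    # Global yaw: bearing of the glasses center relative to the optical axis.
    global_yaw = np.degrees(np.arctan2(center[0] - image_width / 2.0, f_px))

    # Local yaw from the perspective asymmetry of the two half-widths:
    # for a bar of half-length L at depth Z, sin(theta) ~ Z*(wR - wL)/(L*(wR + wL)).
    w_left = abs(p_bridge[0] - p_left[0])
    w_right = abs(p_right[0] - p_bridge[0])
    width_px = max(w_left + w_right, 1e-6)
    depth_mm = 2.0 * f_px * half_width_mm / width_px          # rough range estimate
    s = depth_mm * (w_right - w_left) / (half_width_mm * width_px)
    local_yaw = np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

    return center, global_yaw, local_yaw
```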


Experimental Setup for Autonomous Navigation of Robotic Vehicle for University Campus (대학 캠퍼스용 로봇차량의 자율주행을 위한 실험환경 구축)

  • Cho, Sung Taek;Park, Young Jun;Jung, Seul
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.2, pp.105-112, 2016
  • This paper presents the experimental setup for autonomous navigation of a robotic vehicle for touring a university campus. The robotic vehicle is developed for navigating specific areas such as university campuses or parks and can carry two passengers over short distances. To enable the robotic vehicle to navigate autonomously along the route from the main gate to the administrative building of the university, an experimental setup for SLAM is presented. As an initial step, a simple method of following a line detected by a single camera is implemented for part of the route. The center line on the pavement, painted in two colors, red and yellow, is detected by image processing, and the robotic vehicle is commanded to follow the line. Experimental studies are conducted to demonstrate the navigation performance as a possible touring vehicle.
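
A minimal sketch of such single-camera line following, assuming an OpenCV pipeline with HSV thresholds for the red and yellow paint; the threshold values and steering gain are placeholders, not values from the paper:

```python
# Illustrative sketch: detect a red/yellow center line in a camera frame and
# turn its lateral offset into a proportional steering command.
import cv2
import numpy as np

def steering_from_frame(frame_bgr, k_steer=0.005):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Rough HSV ranges for red and yellow paint (placeholder values).
    red1 = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    mask = cv2.bitwise_or(cv2.bitwise_or(red1, red2), yellow)

    # Look only at the lower part of the image, where the pavement is.
    h, w = mask.shape
    roi = mask[int(0.6 * h):, :]

    m = cv2.moments(roi, binaryImage=True)
    if m["m00"] < 1e-3:
        return None                       # line not visible
    cx = m["m10"] / m["m00"]              # centroid column of the line
    error_px = cx - w / 2.0               # lateral offset from image center
    return -k_steer * error_px            # proportional steering command
```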

Solving the Correspondence Problem by Multiple Stereo Image and Error Analysis of Computed Depth (다중 스테레오영상을 이용한 대응문제의 해결과 거리오차의 해석)

  • 이재웅;이진우;박광일
    • Transactions of the Korean Society of Mechanical Engineers, v.19 no.6, pp.1431-1438, 1995
  • In this paper, we present a multiple-view stereo matching method for the case where a stereo camera moves along its optical axis. We also analyze the obtainable depth precision to show that multiple-view stereo increases the virtual baseline compared with single-view stereo. The method determines candidate points for correspondence in each image pair and then searches for the correct combinations of correspondences among them using the geometrical consistency they must satisfy. The advantages of this method are increased matching accuracy through the use of multiple stereo images and reduced computation due to local processing. The method computes 3-D depth by averaging the depths obtained from each stereo view. We show that the resulting depth is more precise than the depth obtainable from each independent stereo pair when the position of an image feature is uncertain due to image noise. This paper first defines a multiple-view stereo algorithm for a stereo camera moving along its optical axis and analyzes the obtainable precision of the computed depth. We then show the effect of removing incorrect matching candidates and the precision enhancement with experimental results.
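
As a hedged illustration of the depth averaging and the precision gain from the enlarged virtual baseline, the sketch below uses the standard pinhole relation Z = f·B/d for each stereo pair and fuses the resulting depths; the first-order error propagation shown is the usual textbook form, not the authors' exact derivation:

```python
# Illustrative sketch: depth from several stereo pairs with growing baselines,
# combined by inverse-variance weighted averaging.
import numpy as np

def depth_from_pairs(f_px, baselines_m, disparities_px, sigma_d_px=0.5):
    baselines = np.asarray(baselines_m, float)
    disparities = np.asarray(disparities_px, float)

    depths = f_px * baselines / disparities               # Z = f*B/d per pair
    # First-order error propagation: sigma_Z = Z^2 * sigma_d / (f*B).
    sigmas = depths**2 * sigma_d_px / (f_px * baselines)

    weights = 1.0 / sigmas**2
    z_hat = np.sum(weights * depths) / np.sum(weights)    # fused depth
    sigma_hat = np.sqrt(1.0 / np.sum(weights))            # fused uncertainty
    return z_hat, sigma_hat

# Example: three pairs whose effective baselines grow as the camera advances.
z, s = depth_from_pairs(f_px=800.0,
                        baselines_m=[0.10, 0.20, 0.30],
                        disparities_px=[16.0, 32.0, 48.0])
```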

The Entrance Authentication System Using Real-Time Object Extraction and the RFID Tag (얼굴 인식과 RFID를 이용한 실시간 객체 추적 및 인증 시스템)

  • Jung, Young Hoon;Lee, Chang Soo;Lee, Kwang Hyung;Jun, Moon Seog
    • Journal of Korea Society of Digital Industry and Information Management, v.4 no.4, pp.51-62, 2008
  • In this paper, the proposed system achieves greater safety than general RFID systems by adding a two-step authentication procedure. After the RFID tag is authenticated, the system additionally extracts characteristic information from the user's image captured by a camera to obtain a second authentication factor. The proposed system thus strengthens the security of an automatic entrance and exit authentication system by combining the identification characteristics of the RFID tag with the characteristic information extracted from the user's image. The RFID subsystem, which uses an active tag and reader in the 2.4 GHz band, can recognize the tag in various output modes. In addition, when the RFID subsystem fails, the characteristic information of the user's image is designed to replace it by comparing the color, outline, and input image information against records previously stored in the database. Experimental results show that the system obtains more accurate results than a single-factor authentication system when the RFID tag and the color characteristic information are used together.
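
A minimal sketch of the image-based fallback comparison, assuming the characteristic information is a color histogram stored per user in the database; the similarity threshold, histogram configuration, and helper names are hypothetical:

```python
# Illustrative sketch: when the RFID read fails, compare the captured user
# image against color signatures previously stored in the database.
import cv2
import numpy as np

def color_signature(image_bgr, bins=(8, 8, 8)):
    """HSV color histogram used as a simple per-user signature."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def authenticate_by_image(image_bgr, enrolled_signatures, threshold=0.8):
    """enrolled_signatures: {user_id: signature} built with color_signature()."""
    probe = color_signature(image_bgr).astype(np.float32)
    best_id, best_score = None, -1.0
    for user_id, sig in enrolled_signatures.items():
        score = cv2.compareHist(probe, sig.astype(np.float32),
                                cv2.HISTCMP_CORREL)
        if score > best_score:
            best_id, best_score = user_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```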

A real-time robust body-part tracking system for intelligent environment (지능형 환경을 위한 실시간 신체 부위 추적 시스템 -조명 및 복장 변화에 강인한 신체 부위 추적 시스템-)

  • Jung, Jin-Ki;Cho, Kyu-Sung;Choi, Jin;Yang, Hyun S.
    • The HCI Society of Korea: Conference Proceedings, 2009.02a, pp.411-417, 2009
  • We proposed a robust body part tracking system for intelligent environments that does not limit the freedom of users. Unlike previous gesture recognizers, we improved the generality of the system by adding the ability to recognize details such as the difference between long and short sleeves. For precise tracking of each body part, we obtain images of the hands, head, and feet separately from a single camera, and when detecting each body part we select the feature appropriate to that part. Using a calibrated camera, we convert the detected 2D body parts into a 3D posture. In experiments, the system showed improved hand tracking performance in real time (50 fps).
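
A minimal sketch of lifting a 2D detection to 3D with a calibrated camera, assuming the common approach of intersecting the back-projected ray with a known plane (e.g., the floor plane for the feet); the plane-intersection choice is an assumption, not necessarily the paper's method:

```python
# Illustrative sketch: back-project an image point through a calibrated camera
# and intersect the ray with a known plane to obtain a 3D position.
import numpy as np

def backproject_to_plane(uv, K, R, t, plane_n, plane_d):
    """uv: pixel (u, v). K: 3x3 intrinsics. R, t: world-to-camera pose.
    Plane: {X | plane_n . X + plane_d = 0} in world coordinates."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    ray_cam = np.linalg.inv(K) @ uv1               # ray direction in camera frame
    ray_world = R.T @ ray_cam                      # rotate ray into world frame
    cam_center = -R.T @ t                          # camera center in world frame

    denom = plane_n @ ray_world
    if abs(denom) < 1e-9:
        return None                                # ray parallel to the plane
    s = -(plane_n @ cam_center + plane_d) / denom
    return cam_center + s * ray_world              # 3D point on the plane
```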


Analysis method of signal model for synthetic aperture integral imaging (합성 촬영 집적 영상의 신호 모델 해석 방법)

  • Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.11, pp.2563-2568, 2010
  • SAII (synthetic aperture integral imaging) is a useful technique for recording many multi-view images of 3D objects with a moving camera and reconstructing 3D depth images from the recorded multi-views. It consists largely of two processes: a pickup process that provides elemental images of the 3D objects, and a reconstruction process that generates 3D depth images computationally. In this paper, a signal model for SAII is presented. We define the granular noise and analyze its characteristics. Our signal model reveals that the noise in the reconstructed images can be reduced and the computational speed increased by reducing the shifting distance of the single camera.
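
A hedged sketch of the computational reconstruction step, assuming the usual back-projection of elemental images by a depth-dependent pixel shift followed by averaging; the parameter names are illustrative, and the paper's signal model analyzes how the camera shifting distance affects the granular noise in this kind of reconstruction:

```python
# Illustrative sketch: reconstruct one depth plane from SAII elemental images
# by shifting each image according to its camera position and averaging.
import numpy as np

def reconstruct_plane(elemental_images, camera_positions_m, f_px, depth_m):
    """elemental_images: list of HxW arrays; camera_positions_m: (x, y) per image."""
    acc = np.zeros_like(elemental_images[0], dtype=float)
    for img, (cx, cy) in zip(elemental_images, camera_positions_m):
        # Pixel shift that registers the chosen depth plane across views.
        dx = int(round(f_px * cx / depth_m))
        dy = int(round(f_px * cy / depth_m))
        # np.roll wraps at the border; a real implementation would pad or crop.
        acc += np.roll(np.roll(img.astype(float), -dy, axis=0), -dx, axis=1)
    return acc / len(elemental_images)
```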

The navigation method of a mobile robot using an omni-directional position detection system (전방향 위치검출 시스템을 이용한 이동로봇의 주행방법)

  • Ryu, Ji-Hyoung;Kim, Jee-Hong;Lee, Chang-Goo
    • Journal of the Korea Academia-Industrial cooperation Society, v.10 no.2, pp.237-242, 2009
  • Compared with fixed robots, mobile robots have the advantage of an extended workspace. This advantage, however, requires sensors to detect the mobile robot's position and to find its goal point. This article describes a navigation teaching method for a mobile robot using an omni-directional position detection system. The system provides concise position data to a processor using simple devices. In other words, when the user points to a goal point, the system corrects the error by comparing the robot's heading angle and position with the goal. For these processes, the system uses a conic mirror and a single camera. As a result, the system reduces the image processing time needed to search for the target in user-commanded mobile robot navigation.
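
As a hedged illustration: with a conic mirror the azimuth of a target is preserved in the image, so a bearing toward the goal can be read directly from the detected pixel relative to the mirror center. The sketch below turns that bearing into a heading correction; the gain and names are assumptions, not the paper's controller:

```python
# Illustrative sketch: bearing of a goal point in a conic-mirror omnidirectional
# image and a proportional heading correction for the mobile robot.
import numpy as np

def heading_correction(goal_px, mirror_center_px, robot_heading_deg, k_turn=0.5):
    dx = goal_px[0] - mirror_center_px[0]
    dy = goal_px[1] - mirror_center_px[1]
    # With a conic mirror the image azimuth matches the real-world azimuth
    # around the robot (up to a fixed mounting offset, ignored here).
    bearing_deg = np.degrees(np.arctan2(dy, dx))
    error_deg = (bearing_deg - robot_heading_deg + 180.0) % 360.0 - 180.0
    return k_turn * error_deg          # turn-rate command toward the goal
```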

Reconstruction of Wide FOV Image from Hyperbolic Cylinder Mirror Camera (실린더형 쌍곡면 반사체 카메라 광각영상 복원)

  • Kim, Soon-Cheol;Yi, Soo-Yeong
    • The Journal of Korea Robotics Society, v.10 no.3, pp.146-153, 2015
  • In order to contain as much information as possible in a single image, a wide FOV (Field-Of-View) imaging system is required. A catadioptric imaging system with a hyperbolic cylinder mirror can acquire a real-time panoramic image with more than 180 degrees of horizontal FOV using a conventional camera. Because the hyperbolic cylinder mirror has a curved surface along the horizontal axis, the original image acquired from the imaging system has geometrical distortion, which requires an image processing algorithm for reconstruction. In this paper, image reconstruction algorithms for two cases are studied: (1) obtaining an image with uniform angular resolution and (2) obtaining a horizontally rectilinear image. The image acquisition model of the hyperbolic cylinder mirror imaging system is analyzed by geometrical optics, and the image reconstruction algorithms are proposed based on this model. Experiments are carried out to show the validity of the proposed algorithms. The results show that the reconstructed images have uniform angular resolution and a horizontally rectilinear form, which appear natural to the human eye.
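
A minimal sketch of the reconstruction step, assuming the mirror's acquisition model has already been reduced to a per-column mapping from output azimuth to source image column; cv2.remap then resamples the distorted image into a uniform-angular-resolution panorama. The mapping function here is a placeholder for the model derived in the paper:

```python
# Illustrative sketch: remap a hyperbolic-cylinder-mirror image into a panorama
# with uniform angular resolution, given a column mapping from the mirror model.
import cv2
import numpy as np

def reconstruct_panorama(src_img, column_of_azimuth, out_width=1440):
    """column_of_azimuth(theta) -> source column for azimuth theta (radians).
    Stands in for the mapping derived from the image acquisition model."""
    h = src_img.shape[0]
    thetas = np.linspace(-np.pi / 2, np.pi / 2, out_width)   # 180 deg span
    map_x = np.tile(np.array([column_of_azimuth(t) for t in thetas],
                             dtype=np.float32), (h, 1))
    map_y = np.tile(np.arange(h, dtype=np.float32).reshape(-1, 1), (1, out_width))
    return cv2.remap(src_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```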

2-Dimensional Visualization of the Flame Propagation in a Four-Valve Spark-Ignition Engine (가솔린엔진에서의 2차원 화염 가시화)

  • Bae, Choong-Sik
    • Journal of the Korean Society of Combustion, v.1 no.1, pp.65-73, 1996
  • Flame propagation in a four-valve spark-ignition optical engine was visualized under lean-burn conditions with A/F = 18 at 2000 rpm. The early flame development in a four-valve pentroof-chamber single-cylinder engine was examined by imaging the laser-induced Mie-scattered light with an image-intensified CCD camera. Flame profiles along the line of sight were also visualized through a quartz piston window. Two-dimensional flame structures were visualized with a Proxitronic HF-1 fast motion camera system by Mie scattering from titanium dioxide particles along a planar laser sheet generated by a copper vapor laser. The flame propagation images were subsequently analysed with an image processing programme to obtain information about the flame structure under different tumble flow conditions generated by sleeved and non-sleeved intake ports. This allowed enhancement of the flame images and calculation of the enflamed area and the displacement of its center as a function of the tumble flow induced by the pentroof chamber in the vicinity of the spark plug. Image processing of the early flame development quantified the correlation between flame and flow characteristics near the spark plug at the time of ignition, which is known to be one of the most important factors in cyclic combustion variations in lean-burn engines. The results were also compared with direct flame images obtained from the natural flame luminosity of the lean mixture.
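
A minimal sketch of the kind of image processing described, assuming the Mie-scattering flame images are thresholded to a binary flame region whose area and centroid displacement from the spark plug are then measured; the threshold value, scale, and spark-plug coordinates are placeholders:

```python
# Illustrative sketch: enflamed area and centroid displacement from a
# thresholded planar Mie-scattering flame image.
import cv2
import numpy as np

def flame_metrics(gray_img, spark_plug_xy, threshold=60, mm_per_px=0.2):
    _, binary = cv2.threshold(gray_img, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:
        return 0.0, None                                  # no flame detected
    area_mm2 = cv2.countNonZero(binary) * mm_per_px**2    # enflamed area
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]     # flame centroid (px)
    disp_mm = mm_per_px * np.hypot(cx - spark_plug_xy[0], cy - spark_plug_xy[1])
    return area_mm2, disp_mm
```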
