• Title/Summary/Keyword: camera image

Search Result 4,917

Tele-presence System using Homography-based Camera Tracking Method (호모그래피기반의 카메라 추적기술을 이용한 텔레프레즌스 시스템)

  • Kim, Tae-Hyub;Choi, Yoon-Seok;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.3
    • /
    • pp.27-33
    • /
    • 2012
  • Tele-presence and tele-operation techniques build an immersive scene and a control environment for a distant user. This paper presents a novel tele-presence system that uses camera tracking based on planar homography. In the first step, the user wears an HMD (head-mounted display) fitted with a camera, and his or her head motion is estimated. From the panoramic image captured by the omni-directional camera mounted on a mobile robot, the user's viewing image is generated and displayed through the HMD. The homography of a 3D plane with markers is used to obtain the user's head motion. For performance evaluation, the camera tracking results of ARToolkit and of the homography-based method are compared with the actually measured camera positions.
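The abstract does not spell out how the plane homography is estimated from the markers; a common approach, sketched below under that assumption, is the direct linear transform (DLT) from four or more marker correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT method.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector = homography entries
    return H / H[2, 2]         # normalize so H[2, 2] = 1

# Four marker corners on the 3D plane and their images under a known
# transform (a synthetic similarity, so the estimate can be checked).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
true_H = np.array([[2.0, 0.0, 3.0],
                   [0.0, 2.0, 5.0],
                   [0.0, 0.0, 1.0]])
pts_h = np.column_stack([src, np.ones(4)]) @ true_H.T
dst = pts_h[:, :2] / pts_h[:, 2:]

H = estimate_homography(src, dst)
print(np.allclose(H, true_H, atol=1e-6))  # → True
```

Decomposing the recovered H with known camera intrinsics would then yield the head rotation and translation that the system feeds to the robot.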

Augmented Reality System in Real Space using Mobile Projection (이동 투사를 통한 실제 공간에서의 증강현실 시스템)

  • Kim, Moran;Kim, Jun-Sik
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.622-627
    • /
    • 2018
  • In this paper, we introduce an integrated augmented reality system that uses a small camera and a projector. We extract three-dimensional information about an object with a small portable camera and a projector using a structured-light system. We develop the concept of a virtual camera to generalize the projection method so that an image can be projected at a desired position using only the mesh of the target object, without computing a mapping between specific point sets. It is therefore possible to project not only onto simple planes but also onto complex curved surfaces at desired positions, without complicated geometric calculation. Using a robot equipped with a small camera and a projector, we explain the projector-camera system calibration, the calculation of the position of the recognized object, and the image projection method based on the virtual camera concept.
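The paper's "virtual camera" treats the projector as a pinhole camera run in reverse: mesh vertices are projected through assumed intrinsics and pose to find where each vertex lands in the projector image. A minimal sketch of that projection step, with hypothetical intrinsics:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D points through a pinhole model: x ~ K (R X + t)."""
    cam = points_3d @ R.T + t          # world -> camera/projector frame
    img = cam @ K.T                    # apply intrinsics
    return img[:, :2] / img[:, 2:]     # perspective divide

# Hypothetical intrinsics for a 640x480 projector treated as a camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

mesh = np.array([[0.0, 0.0, 2.0],     # vertex straight ahead
                 [0.5, 0.0, 2.0]])    # vertex 0.5 m to the right
pts = project(mesh, K, R, t)
print(pts)  # → [[320. 240.] [445. 240.]]
```

Rendering the content texture from this virtual viewpoint is what lets the system handle curved surfaces with only the object mesh.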

Controller for Single Line Tracking Autonomous Guidance Vehicle Using Machine Vision

  • Shin, Beom-Soo;Choi, Young-Dae;Ying, Yibin
    • Agricultural and Biosystems Engineering
    • /
    • v.6 no.2
    • /
    • pp.47-53
    • /
    • 2005
  • Machine vision is a promising tool for the autonomous guidance of farm machinery. A conventional CCD camera for machine vision needs a desktop PC with a frame grabber installed, whereas a web camera is ready to use when plugged into a USB port, so a web camera with a notebook PC can replace the existing camera system. The autonomous steering control system in this research was intended for a combine harvester. If the web camera can recognize the cut/uncut edge of the crop, which serves as the reference for steering control, then the position of the machine can be determined in terms of lateral offset and heading angle. In this research, a white line was used as the cut/uncut crop edge for steering control. An image processing algorithm, including image capture from the web camera, was developed to determine the desired travel path. An experimental vehicle was constructed to evaluate system performance. Since the vehicle adopted a differential-drive steering mechanism, it is steered by the difference in rotation speed between the left and right wheels. A steering algorithm based on the position of the vehicle was developed as well. Evaluation tests showed that the experimental vehicle could travel within an RMS error of 0.8 cm along the desired path at ground speeds of 9~41 cm/s. Even when the vehicle started with an initial offset or a tilted heading angle, it could quickly return to the desired path after traveling 1.52~3.5 m. In the turning section, i.e., the curved path with a curvature of 3 m, the vehicle completed its turn securely.
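The abstract does not give the steering law, but a differential-drive vehicle corrected by lateral offset and heading angle is commonly driven by a proportional controller of the following shape (gains and speeds here are illustrative, not the paper's values):

```python
def wheel_speeds(lateral_offset, heading_angle, base_speed=0.25,
                 k_offset=2.0, k_heading=1.0):
    """Proportional steering for a differential-drive vehicle.

    lateral_offset: metres to the left of the desired path (negative = right)
    heading_angle:  radians of counter-clockwise deviation from the path
    Returns (left_speed, right_speed) in m/s; the speed difference
    between the wheels is what turns the vehicle back onto the path.
    """
    correction = k_offset * lateral_offset + k_heading * heading_angle
    return base_speed + correction, base_speed - correction

# Vehicle has drifted 5 cm left and points 0.1 rad left of the line:
left, right = wheel_speeds(0.05, 0.1)
print(left > right)  # → True: the faster left wheel turns the vehicle right
```

In practice the offset and heading would come from the white-line position detected in each web-camera frame.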

The Design and Implementation Navigation System For Visually Impaired Person (시각 장애인을 위한 Navigation System의 설계 및 구현)

  • Kong, Sung-Hun;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.12
    • /
    • pp.2702-2707
    • /
    • 2012
  • With the rapid growth of cities, roads carry heavy traffic and many buildings are under construction. Such environments make it harder for a visually impaired person to walk comfortably. To alleviate this problem, we introduce a navigation system that assists visually impaired pedestrians: through it, a service center provides real-time monitoring and guidance. The navigation system provides GPS, a camera, audio, and Wi-Fi (wireless fidelity), so GPS location and camera image information can be sent to the service center over the Wi-Fi network. Specifically, the transmitted GPS location enables the service center to determine the visually impaired person's whereabouts and mark the location on a map, while the delivered camera images let the service center monitor the person's view. The center can also offer live voice guidance through the audio channel. In sum, this Android-based portable navigation system is a specialized navigation system that makes walking more comfortable for visually impaired people.

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems with a wide viewing angle are widely used in surveillance and robotics. Most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views beforehand. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by applying the epipolar constraint to the matched feature points. After choosing interest points adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
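The epipolar constraint used for the initial parameter estimate can be stated on viewing rays, which suits omnidirectional images where unit direction vectors replace pixel coordinates. A minimal sketch of evaluating that constraint (the paper's actual minimization over contour endpoints is not reproduced here):

```python
import numpy as np

def skew(v):
    """Cross-product matrix, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_residual(x1, x2, R, t):
    """Residual of the epipolar constraint x2^T E x1 with E = [t]x R.

    x1, x2: unit viewing rays of one correspondence in each camera frame.
    Zero residual means both rays and the baseline are coplanar.
    """
    E = skew(t) @ R
    return float(x2 @ E @ x1)

# A scene point seen by a second camera translated 1 m along +x (no rotation):
X = np.array([0.0, 0.0, 5.0])
t = np.array([1.0, 0.0, 0.0])
x1 = X / np.linalg.norm(X)
x2 = (X - t) / np.linalg.norm(X - t)
print(abs(epipolar_residual(x1, x2, np.eye(3), t)) < 1e-12)  # → True
```

Minimizing such residuals (in the paper, angular errors of the epipolar plane) over all matched contours refines the rotation R and translation t.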

Development of a Multi-View Camera System Prototype (다각사진촬영시스템 프로토타입 개발)

  • Park, Seon-Dong;Seo, Sang-Il;Yoon, Dong-Jin;Shin, Jin-Soo;Lee, Chang-No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.2
    • /
    • pp.261-271
    • /
    • 2009
  • Due to the recent rise in demand for 3-dimensional geospatial information on urban areas, general interest in aerial multi-view cameras has been increasing. The conventional geospatial information system depends solely upon vertical images, while a multi-view camera can take both vertical and oblique images from multiple directions, making it easier for the user to interpret the object. Through our research we developed a prototype of a multi-view camera system that includes the camera system itself, GPS/INS, a flight management system, and a control system. We also studied and experimented with the camera viewing angles, the synchronization of image capture, the exposure delay, and the data storage that must be considered in developing a multi-view camera system.

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.8
    • /
    • pp.1-9
    • /
    • 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass production technologies have developed rapidly. However, crime involving these technologies has increased, and forensic studies are conducted to counter it. Identification techniques for image acquisition devices have been widely studied, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained on images. Through training and analysis that take the frame characteristics of video into account, we showed the superiority of a model that uses P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment with 5 video camera models, we obtained up to 96.18% accuracy for per-frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
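The majority-based decision over per-frame CNN predictions is straightforward to sketch; the label names below are made up for illustration:

```python
from collections import Counter

def identify_camera_model(frame_predictions):
    """Majority-based decision: the video's camera model is the label
    predicted most often across its (e.g. P-) frames."""
    counts = Counter(frame_predictions)
    model, _ = counts.most_common(1)[0]
    return model

# Per-frame classifier outputs for one clip; a few frames are
# misclassified, but the majority vote still recovers the true model.
preds = ["GalaxyS8", "GalaxyS8", "iPhone7", "GalaxyS8", "GalaxyS8"]
print(identify_camera_model(preds))  # → GalaxyS8
```

This is how a 96.18% per-frame accuracy can still yield a 100% per-video identification rate: occasional frame errors are outvoted by the correct majority.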

On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kenji, Kohiyama;Shigeru, Uchikado;Lili, Sun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1121-1125
    • /
    • 2003
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole model for the camera. The essential notions are epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing visual servoing. For easy understanding of the proposed method, we first show a design for the case of a calibrated camera. The design consists of 4 steps, and the motion of the robot arm is restricted to a single constant direction. This means that an estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.
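The epipole that encodes the arm's translation direction is the right null vector of the fundamental matrix. A minimal sketch of extracting it (using a synthetic rank-2 matrix with a known null vector in place of a matrix estimated from correspondences):

```python
import numpy as np

def epipole(F):
    """Right epipole of fundamental matrix F: the vector e with F e = 0,
    recovered as the last right singular vector and scaled to (x, y, 1)."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e / e[2]

# A rank-2 matrix whose null vector is (100, 50, 1), standing in for a
# fundamental matrix estimated from uncalibrated image correspondences.
e_true = np.array([100.0, 50.0, 1.0])
a = np.array([1.0, 0.0, -100.0])   # both rows annihilate e_true
b = np.array([0.0, 1.0, -50.0])
rng = np.random.default_rng(0)
F = np.outer(rng.normal(size=3), a) + np.outer(rng.normal(size=3), b)

print(np.allclose(epipole(F), e_true))  # → True
```

In the servoing design, driving the arm so that the target's image moves along the line toward this epipole steers it along the fixed 3D translation direction.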

An Application Based on Smart Device for Special Effect Shooting of Movies

  • Chung, Myoungbeom
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.7
    • /
    • pp.39-46
    • /
    • 2016
  • In this paper, we propose an application that uses the Bluetooth interface of a smart device to capture repeated, identical camera moves for special effect shots in movies or dramas. After the control data recorded through the application during a take is saved, the application can drive the motorized camera through the same motion several times over Bluetooth. During these repeated moves, manual control of the camera motors is disabled so that the start and end positions stay exactly the same; the motors are driven remotely only by the saved data. To evaluate performance, we developed the proposed application and a hardware rig that moves the camera with motors, and confirmed that the application drove the motorized camera through exactly the same motion several times according to the saved data. Because the proposed application can reproduce identical shots by remotely controlling the camera motors, it will be a useful technology for special effect shooting in movies and dramas.
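The record-then-replay idea can be sketched as a timestamped command log that is resent verbatim on every take; the pan/tilt command format and timing scheme below are assumptions, not the paper's protocol:

```python
import time

class MotorRecorder:
    """Record timestamped motor commands once, then replay them so every
    take repeats the identical camera move, start and end pose included."""

    def __init__(self):
        self._track = []  # list of (elapsed_seconds, pan, tilt) samples

    def record(self, elapsed, pan, tilt):
        self._track.append((elapsed, pan, tilt))

    def replay(self, send_command, speedup=1.0):
        start = time.monotonic()
        for elapsed, pan, tilt in self._track:
            # Wait until this sample's timestamp, then resend it verbatim.
            delay = elapsed / speedup - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            send_command(pan, tilt)

rec = MotorRecorder()
rec.record(0.0, 0, 0)
rec.record(0.1, 5, 2)
rec.record(0.2, 10, 4)

sent = []  # in the real system, send_command would write to Bluetooth
rec.replay(lambda pan, tilt: sent.append((pan, tilt)), speedup=100.0)
print(sent)  # → [(0, 0), (5, 2), (10, 4)]
```

Locking out manual input during replay, as the paper describes, is what guarantees the same start and end positions across takes.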

REAL-TIME DETECTION OF MOVING OBJECTS IN A ROTATING AND ZOOMING CAMERA

  • Li, Ying-Bo;Cho, Won-Ho;Hong, Ki-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.71-75
    • /
    • 2009
  • In this paper, we present a real-time method to detect moving objects with a rotating and zooming camera. It is useful for surveillance with a fixed but rotating camera, a camera on a moving car, and so on. We first compensate for the global motion and then exploit the displaced frame difference (DFD) to find block-wise boundaries. For robust detection, we propose a kind of accumulated image that combines the detections from consecutive frames. We use block-wise detection to achieve real-time speed, except for the pixel-wise DFD. In addition, a fast block-matching algorithm is proposed to obtain local motions and then the global affine motion. In the experimental results, we demonstrate that the proposed algorithm handles real-time detection of a common object, a small object, multiple objects, objects in a low-contrast environment, and an object under a zooming camera.
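The core step, warping the previous frame by the estimated global affine motion and differencing it against the current frame, can be sketched as follows (nearest-neighbour warping on a tiny synthetic frame; the paper's block-matching estimator is not reproduced):

```python
import numpy as np

def displaced_frame_difference(prev, curr, affine):
    """Warp prev by the global affine motion, then difference against curr.
    Large residuals mark pixels whose motion disagrees with the camera's."""
    h, w = curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    wx, wy = (affine @ coords)[:2]            # where each pixel came from
    wx = np.clip(np.round(wx).astype(int), 0, w - 1)
    wy = np.clip(np.round(wy).astype(int), 0, h - 1)
    warped = prev[wy, wx].reshape(h, w)       # nearest-neighbour warp
    return np.abs(curr.astype(float) - warped)

# The camera pans, shifting the background 2 px; one 2x2 bright region
# does not follow that global motion, so the DFD flags it.
prev = np.zeros((8, 8)); prev[3:5, 3:5] = 255
curr = np.zeros((8, 8)); curr[3:5, 3:5] = 255
affine = np.array([[1.0, 0.0, 2.0],           # global shift of +2 in x
                   [0.0, 1.0, 0.0]])
dfd = displaced_frame_difference(prev, curr, affine)
print(dfd.max() > 0)  # → True: the independently moving region stands out
```

Thresholding this residual block-wise, and accumulating it over consecutive frames as the paper proposes, gives the real-time object mask.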