• Title/Summary/Keyword: camera image

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.187-194 / 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeatedly applies inlier-set refinement and motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system weights the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments on the TUM benchmark datasets and the implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
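
The pipeline this abstract describes (match features, refine an inlier set and the motion estimate, then weight the result by inlier count) can be illustrated with a short sketch. This is a minimal monocular approximation using OpenCV's ORB and essential-matrix RANSAC, not the paper's RGB-D method; `K`, a 3x3 camera intrinsic matrix, is a hypothetical input.

```python
# Minimal sketch of feature-based frame-to-frame motion estimation with
# inlier refinement. ORB + essential-matrix RANSAC stand in for the
# paper's RGB-D pipeline; K is a hypothetical 3x3 camera intrinsic matrix.
import cv2
import numpy as np

def estimate_motion(img_prev, img_curr, K):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC plays the role of inlier-set refinement: the pose is
    # estimated from the consensus set only.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Echoing the paper's idea: trust the estimate in proportion to the
    # size of the surviving inlier set.
    confidence = n_inliers / max(len(matches), 1)
    return R, t, confidence
```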

Smart Phone Picture Recognition Algorithm Using Electronic Maps of Architecture Configuration (건물 배치 전자도면을 이용한 모바일 폰의 피사체 인지 방법)

  • Yim, Jae-Geol;Joo, Jae-Hun;Lee, Gye-Young
    • The Journal of Society for e-Business Studies / v.17 no.3 / pp.1-14 / 2012
  • As electronics and information technology advance, smart phones are gaining more computing power and larger storage capacity, and various new, useful services are becoming available on them. Context-aware services and mobile augmented reality have recently been among the most popular research topics. For these services, identifying the object in a picture taken by the phone's camera plays an extremely important role. Many approaches to identifying objects in pictures have been published, and most rely on time-consuming image recognition techniques. In contrast, this paper introduces a very fast and effective method of identifying the objects in a photo using the sensor data available on the smart phone together with electronic maps. Our method estimates the camera's line of sight from the location and orientation information provided by the smart phone, then finds the map elements that intersect the line of sight. By investigating those intersecting elements, our method identifies the objects in the photo.
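
The geometric core of this method (cast the camera's sight line from the phone's position along its compass heading and test which map elements it crosses) fits in a few lines. A minimal 2D sketch under simplifying assumptions: building outlines are plain line segments in a local planar frame, and all names are hypothetical.

```python
# Minimal 2D sketch: cast the camera's sight line from the phone position
# along the compass heading and find the nearest map segment it crosses.
# All names are hypothetical; a real map would use geodetic coordinates.
import math

def ray_segment_hit(origin, heading_deg, seg_a, seg_b):
    """Return distance along the ray to segment (a, b), or None if missed."""
    ox, oy = origin
    # Heading convention: 0 deg = north (+y), 90 deg = east (+x).
    dx, dy = math.sin(math.radians(heading_deg)), math.cos(math.radians(heading_deg))
    (ax, ay), (bx, by) = seg_a, seg_b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                              # ray parallel to segment
        return None
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom       # distance along ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom       # position on segment
    return t if t >= 0 and 0 <= u <= 1 else None

def identify_object(origin, heading_deg, buildings):
    """Pick the building whose outline the sight line hits first."""
    best = None
    for name, segments in buildings.items():
        for a, b in segments:
            d = ray_segment_hit(origin, heading_deg, a, b)
            if d is not None and (best is None or d < best[0]):
                best = (d, name)
    return best[1] if best else None
```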

Remote monitoring of light environment using web-camera for protected chrysanthemum production (웹 카메라를 이용한 시설 내 국화생산 광 환경 원격 모니터링)

  • Chung, Sun-Ok;Kim, Yong-Joo;Lee, Kyu-Ho;Sung, Nam-Seok;Lee, Cheol-Hwi;Noh, Hyun-Kwon
    • Korean Journal of Agricultural Science / v.42 no.4 / pp.447-453 / 2015
  • Rising household incomes have increased demand for high-quality, year-round horticultural products, including chrysanthemum. To meet this demand, farmers have introduced protected facilities, such as greenhouses, whose environmental conditions can be monitored and controlled. Environment management during the first three weeks after transplanting is critical for chrysanthemum quality. Artificial lighting and light-blocking screens are especially important for the long-day (day period > 13 hours) and short-day (night period > 13 hours) treatments. In this study, a web camera was installed, and its images were transmitted to mobile phones to monitor the status of the three-wavelength (RGB) lighting environment. RGB pixel values were used to detect malfunctioning lamps and to check for leaking and incoming light during the short-day and long-day treatment periods. Normal lamps produced RGB pixel values of 240~255. During the long-day treatment period, G pixel values were useful for detecting abnormal lighting conditions (e.g., leaking); during the short-day treatment period, R pixel values were useful for detecting incoming light (e.g., sunlight). The results of this study should provide useful information for remote monitoring of light conditions in protected chrysanthemum production under artificial lighting.
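
The decision rules the abstract reports are simple channel statistics: lamp pixels in the 240~255 band mean normal operation, elevated G values during the long-day period suggest leaking light, and elevated R values during the short-day period suggest incoming sunlight. A minimal sketch; only the 240~255 band comes from the abstract, the other thresholds are placeholders.

```python
# Minimal sketch of the channel-threshold checks the abstract describes.
# The 240-255 "normal lamp" band is from the paper; the leak/sunlight
# thresholds below are hypothetical placeholders.
import numpy as np

NORMAL_BAND = (240, 255)   # pixel range reported for healthy lamps

def lamp_ok(image_rgb, lamp_region):
    """Judge lamp health from the mean pixel value in the lamp's region."""
    y0, y1, x0, x1 = lamp_region
    mean_val = image_rgb[y0:y1, x0:x1].mean()
    return NORMAL_BAND[0] <= mean_val <= NORMAL_BAND[1]

def light_leak_long_day(image_rgb, threshold=50):
    """During long-day treatment, an elevated G mean suggests a light leak."""
    return image_rgb[..., 1].mean() > threshold

def sunlight_short_day(image_rgb, threshold=50):
    """During short-day treatment, an elevated R mean suggests sunlight entering."""
    return image_rgb[..., 0].mean() > threshold
```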

Gesture-based Table Tennis Game in AR Environment (증강현실과 제스처를 이용한 비전기반 탁구 게임)

  • Yang, Jong-Yeol;Lee, Sang-Kyung;Kyoung, Dong-Wuk;Jung, Kee-Chul
    • Journal of Korea Game Society / v.5 no.3 / pp.3-10 / 2005
  • We present a computer table tennis game driven by the player's swing motion. To hit the virtual ball, real-world coordinates must be transformed into virtual-world coordinates. A correct 3D racket position cannot be obtained with a single camera and simple image processing alone, so we use the Augmented Reality (AR) concept to develop the game. This paper presents a gesture-based AR table tennis game and a method for developing a 3D interaction game using only one camera, without any motion detection device or stereo cameras. We also use a scan-line method to recognize gestures quickly. The game is developed using ARToolKit and DirectX, a popular SDK for game development.
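
The coordinate transform the abstract singles out (mapping a real-world racket position into the virtual game world via the marker pose an ARToolKit-style tracker reports) reduces to chaining homogeneous transforms. A minimal sketch with hypothetical 4x4 matrices, not the game's actual code:

```python
# Minimal sketch of the real-to-virtual coordinate mapping: marker
# tracking yields a 4x4 transform from marker space to camera space,
# which the game chains with its own camera-to-game transform.
# T_marker_to_camera below is a hypothetical placeholder matrix.
import numpy as np

def to_virtual(point_marker, T_marker_to_camera, T_camera_to_game):
    """Map a 3D point in marker coordinates into game-world coordinates."""
    p = np.append(np.asarray(point_marker, dtype=float), 1.0)  # homogeneous
    return (T_camera_to_game @ T_marker_to_camera @ p)[:3]

# Example: identity camera-to-game mapping, marker shifted 0.5 m along z.
T_mc = np.eye(4); T_mc[2, 3] = 0.5
print(to_virtual([0.1, 0.0, 0.0], T_mc, np.eye(4)))  # -> [0.1, 0.0, 0.5]
```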

Development of Integrated Vision System for Unmanned-Crane Automation System (무인 크레인 자동화 시스템 구축을 위한 통합 비전 시스템 개발)

  • Lee, Ji-Hyun;Kim, Mu-Hyun;Park, Mu-Hun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.259-263 / 2010
  • This paper introduces an integrated vision system that detects images of slabs and coils and obtains complete three-dimensional location data, free of interference, for an unmanned-crane automation system. Existing approaches based on a laser scanner alone are easily influenced by the workplace environment and cannot provide exact location information, while a CCD camera alone has trouble recognizing patterns because of the illumination conditions in an industrial setting. To overcome these two weaknesses, this paper proposes combining a laser scanner with a CCD camera into an integrated vision system. The system produces clearer images and improved 3D location information, and is expected to help build an unmanned-crane automation system.

Development of Fire Detection Algorithm using Intelligent context-aware sensor (상황인지 센서를 활용한 지능형 화재감지 알고리즘 설계 및 구현)

  • Kim, Hyeng-jun;Shin, Gyu-young;Oh, Young-jun;Lee, Kang-whan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.93-96 / 2015
  • In this paper, we introduce a fire detection system using context-aware sensors. Existing vision-based fire detection systems extract fire features from camera images by converting them into the HSI (Hue, Saturation, Intensity) color space, which is robust to illumination changes. However, a single camera sensor covering a wide area has difficulty detecting the occurrence of a fire, and it is also hard to isolate a fire in complex scenes or to set a continuous boundary around the area that must be monitored. We propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and weights them to determine whether a fire has occurred. In addition, by dividing the monitored space into zones according to fire state, zones that require intensive fire detection can be managed differentially.
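
The proposed decision step (compare temperature, humidity, CO2, and flame readings against multiple conditions and combine them with weights) can be sketched as a weighted score. The weights and thresholds below are hypothetical placeholders, not values from the paper:

```python
# Minimal sketch of the weighted multi-sensor fire decision the abstract
# describes. Weights and thresholds are hypothetical placeholders; the
# paper derives its own from context-aware analysis.
WEIGHTS = {"temp": 0.35, "humidity": 0.15, "co2": 0.25, "flame": 0.25}

def fire_score(temp_c, humidity_pct, co2_ppm, flame_detected):
    score = 0.0
    if temp_c > 60:          score += WEIGHTS["temp"]      # abnormal heat
    if humidity_pct < 20:    score += WEIGHTS["humidity"]  # dry air
    if co2_ppm > 1500:       score += WEIGHTS["co2"]       # combustion gas
    if flame_detected:       score += WEIGHTS["flame"]     # direct evidence
    return score             # e.g. raise an alarm when score >= 0.5

# Zones needing intensive detection can simply use a lower alarm threshold.
```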

Design of Imaging Optical System with 24mm Focal length for MWIR (MWIR용 24mm 초점거리를 가지는 결상광학계의 설계)

  • Lee, Sang-Kil;Lee, Dong-Hee
    • Journal of the Korea Convergence Society / v.9 no.6 / pp.203-207 / 2018
  • This paper deals with the design and development of a lens system capable of imaging infrared light in the 3~5 μm wavelength band, with a focal length of 24 mm and good atmospheric transmission characteristics. The design used CodeV, a commercial optical design program, and the optimization was carried out with weighting to eliminate chromatic aberration, spherical aberration, and distortion. The designed system consists of two lenses, one Si and one Ge, each with an aspherical surface on one side. The optical system achieves an MTF of 0.40 at a line width of 29 lp/mm and an MTF of 0.25 at a line width of 20 lp/mm. It is therefore considered capable of serving MWIR thermal imaging cameras using a 206×156 array infrared detector with 25 μm pixels or a 320×240 array infrared detector with 17 μm pixels.
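
A note on the quoted spatial frequencies: 20 lp/mm and 29 lp/mm match the Nyquist limits of the two detectors, since f_Nyquist = 1/(2 × pixel pitch) gives 1/(2 × 0.025 mm) = 20 lp/mm for the 25 μm pixels and 1/(2 × 0.017 mm) ≈ 29.4 lp/mm for the 17 μm pixels. A quick check (our reading of the abstract, not a statement from the paper):

```python
# Quick check: the reported 20 and 29 lp/mm appear to be the Nyquist
# frequencies of the 25 um and 17 um pixel detectors, f = 1 / (2 * pitch).
for pitch_um in (25, 17):
    f_nyquist = 1.0 / (2 * pitch_um * 1e-3)   # pitch converted to mm -> lp/mm
    print(f"{pitch_um} um pixels: {f_nyquist:.1f} lp/mm")
# -> 25 um pixels: 20.0 lp/mm
# -> 17 um pixels: 29.4 lp/mm
```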

Robust Estimation of Camera Motion Using A Local Phase Based Affine Model (국소적 위상기반 어파인 모델을 이용한 강인한 카메라 움직임 추정)

  • Jang, Suk-Yoon;Yoon, Chang-Yong;Park, Mig-Non
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.128-135 / 2009
  • Techniques that track the same region of physical space across a temporal image sequence by matching contours of constant phase are more robust and stable than tracking techniques that use or assume constant intensity. Using this property, we describe an algorithm for robustly estimating the motion parameters caused by global camera motion. First, we compute the optical flow from the phase of spatially filtered image sequences, in the direction orthogonal to the orientation of each component of a Gabor filter bank. We then apply the least-squares method to the optical flow to determine the affine motion parameters. We demonstrate that the proposed method can be applied to a vision-based pointing device that estimates its motion from images of a display device, which introduce varying lighting conditions and noise.
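
The final fitting step (least squares on the optical flow to recover the six affine motion parameters) is standard and easy to sketch. Below, the phase-based flow (u, v) at pixel positions (x, y) is assumed to be already computed; the Gabor filtering itself is omitted:

```python
# Minimal sketch of the least-squares affine fit applied to an optical-flow
# field, as in the abstract's final step. Inputs are 1-D arrays: pixel
# coordinates (x, y) and the flow components (u, v) measured there.
import numpy as np

def fit_affine(x, y, u, v):
    """Fit u = a0 + a1*x + a2*y and v = b0 + b1*x + b2*y by least squares."""
    A = np.column_stack([np.ones_like(x), x, y])   # design matrix
    a, *_ = np.linalg.lstsq(A, u, rcond=None)      # horizontal-flow params
    b, *_ = np.linalg.lstsq(A, v, rcond=None)      # vertical-flow params
    return a, b                                    # six affine parameters
```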

Text Region Detection Method in Mobile Phone Video (휴대전화 동영상에서의 문자 영역 검출 방법)

  • Lee, Hoon-Jae;Sull, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.192-198 / 2010
  • With the popularization of mobile phones with built-in cameras, there has been much effort to provide useful information to users by detecting and recognizing text in video captured by the phone's camera, which requires detecting the text regions in such video. In this paper, we propose a method for detecting text regions in mobile phone video. We apply a morphological operation as preprocessing and obtain a binarized image using modified k-means clustering. Candidate text regions are then obtained through connected-component analysis and an analysis of general text characteristics. In addition, we increase the precision of the detection by examining the frequency of the candidate regions. Experimental results show that the proposed method detects text regions in mobile phone video with high precision and recall.
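
The stages the abstract lists (morphological preprocessing, binarization, connected-component analysis, text-shape filtering) map onto standard OpenCV calls. A minimal sketch in which plain Otsu thresholding stands in for the paper's modified k-means, and the size/aspect heuristics are hypothetical placeholders:

```python
# Minimal sketch of the pipeline stages the abstract lists. Otsu
# thresholding stands in for the paper's modified k-means binarization;
# the size/aspect heuristics are hypothetical placeholders.
import cv2

def candidate_text_regions(gray):
    # Morphological opening as preprocessing to suppress background texture.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    # Binarize (paper: modified k-means; here: Otsu as a stand-in).
    _, binary = cv2.threshold(opened, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected-component analysis plus simple text-shape heuristics.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    regions = []
    for i in range(1, n):                          # label 0 is background
        x, y, w, h, area = stats[i]
        if 8 <= h <= 80 and 0.1 <= w / h <= 15 and area > 30:
            regions.append((x, y, w, h))
    return regions
```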

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a map-based localization procedure for mobile robots that uses a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities is advantageous because the sensors complement and cooperate with each other to obtain better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured-light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on each sensor's probabilistic reliability function, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from camera images, which serve as natural landmark points. With the laser structured-light sensor, it utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments was performed, and the results are discussed in detail.
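
The fusion rule the abstract describes (combine each sensor's evidence, weighted by an experimentally predefined reliability, in a Bayesian update) can be sketched over a discrete set of candidate poses. The reliability weights and likelihoods below are hypothetical placeholders:

```python
# Minimal sketch of Bayesian fusion over a discrete set of candidate poses.
# Reliability weights and likelihoods are hypothetical placeholders; the
# paper derives per-sensor reliability functions from experiments.
import numpy as np

def fuse(prior, lik_vision, lik_laser, w_vision=0.6, w_laser=0.8):
    # Tempering each likelihood by its reliability weight before the
    # Bayes update down-weights the less trustworthy sensor.
    posterior = prior * (lik_vision ** w_vision) * (lik_laser ** w_laser)
    return posterior / posterior.sum()

# Example: three candidate poses; vision is ambiguous, the laser is not.
prior = np.array([1/3, 1/3, 1/3])
print(fuse(prior, np.array([0.4, 0.4, 0.2]), np.array([0.7, 0.2, 0.1])))
```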