• Title/Summary/Keyword: Captured Image

Search Results: 984

A Study on the Improvement of the Facial Image Recognition by Extraction of Tilted Angle (기울기 검출에 의한 얼굴영상의 인식의 개선에 관한 연구)

  • 이지범;이호준;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.935-943
    • /
    • 1993
  • In this paper, a recognition system that is robust to tilted facial images was developed. First, a standard facial image and a tilted facial image are captured by a CCTV camera and converted to binary images. Each binary image is processed with a Laplacian edge operator to obtain a contour image. The outermost edge line is traced and deleted, and the inner contour lines are used. Four inner contour lines are labeled in order, the left and right eyes are extracted using their known distance relationship, and the tilt slope is calculated from the two eye coordinates. Finally, the tilted image is rotated according to this slope (a brief code sketch of this step follows below), and ten distances between facial elements are calculated as features. To make the system invariant to image scale, these features are normalized by the distance between the left and right eyes. Experimental results show an 88% recognition rate for twenty-five face images when the tilt angle is considered, and 60% when it is not.

  • PDF
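
A minimal Python sketch of the slope-correction step referenced above, under the assumption that the left and right eye coordinates have already been extracted by the contour analysis; it is an illustration, not the authors' implementation, and deskew_by_eyes is a hypothetical name.

    # Illustrative sketch only: rotate a tilted face upright from two eye
    # coordinates; the eye extraction itself is assumed to be done already.
    import math
    import cv2

    def deskew_by_eyes(image, left_eye, right_eye):
        """Rotate `image` so the line joining the eyes becomes horizontal.

        left_eye, right_eye: (x, y) pixel coordinates from the contour step.
        """
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        angle_deg = math.degrees(math.atan2(dy, dx))   # slope of the eye line
        center = ((left_eye[0] + right_eye[0]) / 2.0,
                  (left_eye[1] + right_eye[1]) / 2.0)
        rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
        h, w = image.shape[:2]
        upright = cv2.warpAffine(image, rot, (w, h))
        eye_dist = math.hypot(dx, dy)                  # used to normalize distances
        return upright, eye_dist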

Epipolar Resampling for High Resolution Satellite Imagery Based on Parallel Projection (평행투영 기반의 고해상도 위성영상 에피폴라 재배열)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Chang, Hwi-Jeong;Jeong, Ji-Yeon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.4
    • /
    • pp.81-88
    • /
    • 2007
  • The geometry of a satellite image captured by a linear CCD sensor differs from that of a frame camera image, because the exterior orientation parameters of a linear CCD image vary from scan line to scan line. Consequently, the epipolar geometry of a linear CCD image also differs from that of a frame camera image. In this paper, we propose a method for resampling linear CCD satellite images into epipolar geometry under the assumption that the image is formed by parallel projection rather than perspective projection, using a 2D affine sensor model based on parallel projection (a sketch of this model is given below). IKONOS stereo images, which are high-resolution linear CCD images, were used in the experiments. As results, the spatial accuracy of the 2D affine sensor model was investigated, and the accuracy of the image epipolar-resampled with an RFM was presented.

  • PDF
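
A minimal Python/NumPy sketch of the 8-parameter 2D affine sensor model mentioned above (x = a1·X + a2·Y + a3·Z + a4, y = a5·X + a6·Y + a7·Z + a8), fitted from ground control points by least squares; this is an assumed illustration of the parallel-projection model, not the paper's implementation.

    # Illustrative sketch: fit and apply a 2D affine sensor model
    # (parallel-projection assumption) from ground control points.
    import numpy as np

    def fit_affine_sensor_model(ground_xyz, image_xy):
        """ground_xyz: (n, 3) object-space control points, n >= 4.
        image_xy: (n, 2) corresponding image coordinates.
        Returns the parameter vectors (a1..a4) and (a5..a8)."""
        X = np.asarray(ground_xyz, dtype=float)
        xy = np.asarray(image_xy, dtype=float)
        A = np.hstack([X, np.ones((X.shape[0], 1))])        # columns [X Y Z 1]
        px, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)   # a1..a4
        py, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)   # a5..a8
        return px, py

    def project(px, py, ground_xyz):
        """Project object-space points into the image with the fitted model."""
        A = np.hstack([np.asarray(ground_xyz, dtype=float),
                       np.ones((len(ground_xyz), 1))])
        return np.column_stack([A @ px, A @ py])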

Sequence Images Registration by using KLT Feature Detection and Tracking (KLT특징점 검출 및 추적에 의한 비디오영상등록)

  • Ochirbat, Sukhee;Park, Sang-Eon;Shin, Sung-Woong;Yoo, Hwan-Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.16 no.2
    • /
    • pp.49-56
    • /
    • 2008
  • Image registration is one of the critical techniques of image mosaicking, which has many applications such as panorama generation, video monitoring, and image rendering and reconstruction. The fundamental tasks of image registration are point feature extraction and tracking, which require much computation time. The KLT (Kanade-Lucas-Tomasi) feature tracker has been proposed for extracting and tracking features through image sequences. The aim of this study is to demonstrate an effective and robust KLT feature detector and tracker for image registration using sequential image frames captured by a UAV video camera (a usage sketch follows below). As a result, with an iterative implementation of the KLT tracker, the features extracted from the first frame of the sequence could be successfully tracked through all frames. Feature tracking across frames with rotation, translation, and small scaling was improved by a careful choice of processing conditions and a pyramidal KLT implementation.

  • PDF
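
A minimal Python sketch of the detect-and-track loop described above, using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker as a stand-in for the KLT detector/tracker; the video file name and parameter values are assumptions for illustration.

    # Illustrative sketch: extract features in the first frame and track them
    # through the sequence with a pyramidal KLT-style tracker.
    import cv2

    cap = cv2.VideoCapture("uav_sequence.mp4")            # hypothetical input file
    ok, first = cap.read()
    prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)

    lk_params = dict(winSize=(21, 21), maxLevel=3,         # pyramid implementation
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                               30, 0.01))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None, **lk_params)
        good_prev = prev_pts[status.ravel() == 1]
        good_next = next_pts[status.ravel() == 1]
        # The matched pairs (good_prev, good_next) can feed a frame-to-frame
        # transform estimate (e.g. cv2.estimateAffinePartial2D) for registration.
        prev_gray, prev_pts = gray, good_next.reshape(-1, 1, 2)
    cap.release()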

A Trial Toward Marine Watch System by Image Processing

  • Shimpo, Masatoshi;Hirasawa, Masato;Ishida, Keiichi;Oshima, Masaki
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.1
    • /
    • pp.41-46
    • /
    • 2006
  • This paper describes a marine watch system on a ship aided by image processing. The system detects other ships in a navigational image sequence to prevent oversights and measures their bearings to keep track of their movements. The proposed method is described, the detection and bearing-measurement techniques are derived, and the results are reported. The image is divided into small regions on the basis of brightness and then labeled; each region is treated as a template and assumed to be a ship. Each template is then compared with frames of the original image after a selected time, the moving vector of the regions is calculated using an Excel table, and ships are detected from the characteristics of the moving vector. The video camera captures 30 frames per second. One frame was segmented into approximately 5000 regions; of these, approximately 100 regions were presumed to be ships and used as templates. Each template was compared with frames captured 0.33 s or 0.66 s later, and to improve accuracy this interval was changed according to the magnification of the video camera. Ships' bearings also need to be determined. The proposed method measures them from three parameters: (1) the own ship's course, (2) the arrangement between the camera and the hull, and (3) the image coordinates of the detected ships (a sketch of this computation follows below). The own ship's course is obtained from a gyrocompass, the camera axis is calibrated along a known direction using a stable position on the bridge, and the field of view of the video camera is measured from the size of a known structure on the hull in the image. The ships' bearings can then be calculated from these parameters.

  • PDF
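
A minimal Python sketch of the bearing computation outlined above, combining the own ship's course, the camera mounting offset, and the in-image position of a detected ship; the pinhole field-of-view model and the example numbers are assumptions for illustration.

    # Illustrative sketch: true bearing of a detected ship from three parameters.
    import math

    def target_bearing(course_deg, camera_offset_deg, x_pixel,
                       image_width_px, fov_deg):
        """course_deg        : own ship's course from the gyrocompass
        camera_offset_deg : camera axis relative to the bow (calibration)
        x_pixel           : horizontal image coordinate of the detected ship
        image_width_px    : frame width in pixels
        fov_deg           : horizontal field of view of the camera"""
        center = image_width_px / 2.0
        focal_px = center / math.tan(math.radians(fov_deg / 2.0))  # pinhole model
        off_axis = math.degrees(math.atan2(x_pixel - center, focal_px))
        return (course_deg + camera_offset_deg + off_axis) % 360.0

    # Example: course 045 deg, camera aligned with the bow, detection near the
    # right edge of a 1280-pixel frame with a 60-degree field of view.
    print(target_bearing(45.0, 0.0, 1200, 1280, 60.0))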

Development of Computer Vision System for Individual Recognition and Feature Information of Cow (I) - Individual recognition using the speckle pattern of cow - (젖소의 개체인식 및 형상 정보화를 위한 컴퓨터 시각 시스템 개발 (I) - 반문에 의한 개체인식 -)

  • 이종환
    • Journal of Biosystems Engineering
    • /
    • v.27 no.2
    • /
    • pp.151-160
    • /
    • 2002
  • Cow image processing techniques are useful not only for recognizing individuals but also for building image databases and analyzing cow shape. A Holstein cow usually has a unique speckle pattern. In this study, individual cows were recognized using the speckle pattern and a content-based image retrieval technique. Sixty images of sixteen cows were captured under outdoor illumination; these images were complicated by shadows, obstacles, and the walking posture of the cows. Sixteen images were selected as reference images, one per cow, and 44 query images were used to evaluate the efficiency of individual recognition by matching against each reference image. Run-lengths and positions of runs across the speckle area were calculated from 40 horizontal line profiles of the ROI (region of interest) in the cow-body image after three passes of 5×5 median filtering. A similarity measure for recognizing individual cows was calculated using the Euclidean distance of the normalized G-frame histogram (GH), the normalized speckle run-length (BRL), and the normalized x and y positions (BRX, BRY) of the speckle runs. The efficiency of individual recognition was evaluated using recall (success rate) and AVRR (average rank of relevant images). The success rate of individual recognition was 100% when GH, BRL, BRX, and BRY were used as image query indices. It was concluded that the histogram as a global property and the speckle-run information as local properties are good image features for individual recognition, and that the developed individual recognition system is reliable.
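
A minimal Python sketch of the kind of similarity measure described above: a normalized histogram plus normalized speckle run-lengths compared by Euclidean distance. It is an assumed simplification (the BRX/BRY position features are omitted), not the study's code.

    # Illustrative sketch: build a query index from a grayscale cow-body ROI and
    # rank reference images by Euclidean distance.
    import numpy as np

    def feature_vector(gray_roi, n_lines=40, bins=32):
        """Normalized intensity histogram plus the largest speckle run-lengths
        sampled along n_lines horizontal line profiles of the ROI."""
        hist, _ = np.histogram(gray_roi, bins=bins, range=(0, 255))
        hist = hist / max(hist.sum(), 1)
        rows = np.linspace(0, gray_roi.shape[0] - 1, n_lines).astype(int)
        run_lengths = []
        for r in rows:
            dark = (gray_roi[r] < 128).astype(np.int8)     # speckle = dark pixels
            edges = np.diff(np.concatenate(([0], dark, [0])))
            starts = np.where(edges == 1)[0]
            ends = np.where(edges == -1)[0]
            run_lengths.extend(ends - starts)              # run-lengths on this line
        rl = np.zeros(n_lines)
        top = sorted(run_lengths, reverse=True)[:n_lines]
        rl[:len(top)] = top
        rl = rl / (rl.max() + 1e-9)                        # normalize run-lengths
        return np.concatenate([hist, rl])

    def rank_references(query_vec, reference_vecs):
        """Indices of reference images sorted by Euclidean distance (best first)."""
        return np.argsort([np.linalg.norm(query_vec - ref) for ref in reference_vecs])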

The Verification of Image Merging for Lumber Scanning System (제재목 화상입력시스템의 화상병합 성능 검증)

  • Kim, Byung Nam;Kim, Kwang Mo;Shim, Kug-Bo;Lee, Hyoung Woo;Shim, Sang-Ro
    • Journal of the Korean Wood Science and Technology
    • /
    • v.37 no.6
    • /
    • pp.556-565
    • /
    • 2009
  • An automated visual grading system for lumber needs a correct input image. To create a correct image of domestic red pine lumber 3.6 m long fed on a conveyor, part images were captured with an area sensor and a template matching algorithm was applied to merge them. Two template matching algorithms and six template sizes were tested. The feature-extraction method showed better image merging performance than the fixed-template method. Errors in merged length were attributed to a decline in similarity caused by partial brightness differences within a part image, by specific grain patterns, and by the template size; mismatches occurred repeatedly at long grain. The best template size for image merging was 100×100 pixels. Further study on selecting an exact template size and on preprocessing to reduce brightness differences will be needed to improve the image merging.
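
A minimal Python sketch of merging two overlapping part images by template matching with OpenCV. The 100×100 template size follows the reported best result; the trailing-edge template location and the fixed vertical registration are assumptions for illustration, not the paper's procedure.

    # Illustrative sketch: locate a trailing-edge patch of the left part image
    # inside the right part image and stitch the two at the matched offset.
    import cv2
    import numpy as np

    def merge_pair(left_img, right_img, tpl_size=100):
        """Merge two horizontally overlapping part images of the same board.
        The vertical offset is ignored, assuming the conveyor keeps the board
        vertically registered between captures."""
        h, w = left_img.shape[:2]
        y0 = h // 2 - tpl_size // 2
        template = left_img[y0:y0 + tpl_size, w - tpl_size:w]   # trailing patch
        result = cv2.matchTemplate(right_img, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)          # best similarity
        tx, _ty = max_loc
        overlap = tx + tpl_size                 # columns already covered by left_img
        merged = np.hstack([left_img, right_img[:, overlap:]])
        return merged, max_val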

Behavior Monitoring System based on WAP for the Elderly and the Disabled (장애인 및 노약자를 위한 WAP 기반 행동 모니터링 시스템)

  • 김택현;이희영
    • Proceedings of the IEEK Conference
    • /
    • 2003.07c
    • /
    • pp.2565-2568
    • /
    • 2003
  • This paper presents a behavioral data monitoring system based on the WAP (Wireless Application Protocol) service for 24-hour continuous health monitoring of the elderly and the disabled. When an emergency occurs, the developed system transmits a text message to a predefined mobile phone through the SMS service. Simultaneously, the image captured by a CCD camera is transmitted to a server computer running the WAP service program. The user of the mobile phone who received the message can then access the server and view the transmitted image (a sketch of this client-side flow is given below). This system can be used for effective health monitoring of the elderly and the disabled.

  • PDF
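
A minimal Python sketch of the client-side flow described above: on an emergency, push the captured CCD frame to the monitoring server and trigger the text notification. The server URL, the upload endpoint, and the send_sms callable are hypothetical placeholders, not part of the paper.

    # Illustrative sketch: upload the current frame and send an alert message.
    import cv2
    import requests

    SERVER_URL = "http://monitor.example.com/upload"    # hypothetical server

    def notify_emergency(frame, send_sms):
        """frame: BGR image from the CCD camera.
        send_sms: callable that delivers the alert through an SMS gateway."""
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            raise RuntimeError("failed to encode the captured frame")
        requests.post(SERVER_URL,
                      files={"image": ("alert.jpg", jpeg.tobytes(), "image/jpeg")},
                      timeout=10)
        send_sms("Emergency detected - see the monitoring page for the image.")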

Gaze Detection in Head Mounted Camera environment (Head Mounted Camera 환경에서 응시위치 추적)

  • 이철한;이정준;김재희
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.25-28
    • /
    • 2000
  • Gaze detection is finding the position on a monitor screen where a user is looking, using computer vision. Such a system can help the handicapped use a computer, substitute for an expensive touch screen, and support navigation in virtual reality. There are two main approaches to gaze detection: the first finds the gaze position from face movement, and the second from eye movement. In gaze detection by eye movement, the position is found either with special devices or by image processing. In this paper, we detect the pupil rather than the iris from the image captured by a head-mounted camera with infrared illumination, and accurately locate the position the user is looking at by an affine transform (a sketch of this mapping step follows below).

  • PDF
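
A minimal Python sketch of the final mapping step referenced above: an affine transform, estimated from a few calibration fixations, maps the detected pupil-center coordinates to monitor coordinates. The calibration numbers are invented for illustration.

    # Illustrative sketch: least-squares affine mapping from pupil to screen.
    import numpy as np

    def fit_affine(pupil_pts, screen_pts):
        """pupil_pts, screen_pts: (n, 2) corresponding points collected while the
        user fixates known calibration targets (n >= 3).
        Returns a 2x3 matrix A with screen ~= A @ [x, y, 1]."""
        P = np.hstack([np.asarray(pupil_pts, dtype=float),
                       np.ones((len(pupil_pts), 1))])
        S = np.asarray(screen_pts, dtype=float)
        A, *_ = np.linalg.lstsq(P, S, rcond=None)
        return A.T

    def gaze_point(A, pupil_xy):
        """Estimated on-screen gaze position for one pupil-center detection."""
        x, y = pupil_xy
        return A @ np.array([x, y, 1.0])

    # Example calibration with four fixation targets (invented numbers):
    A = fit_affine([(210, 145), (390, 150), (215, 300), (395, 305)],
                   [(0, 0), (1279, 0), (0, 1023), (1279, 1023)])
    print(gaze_point(A, (300, 225)))   # roughly the middle of the screen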

Modal Parameter Extraction Using a Digital Camera (디지털 카메라를 이용한 구조물의 동특성 추출)

  • Kim, Byeong-Hwa
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2008.11a
    • /
    • pp.61-68
    • /
    • 2008
  • A set of modal parameters of a stay cable has been extracted from a moving picture captured by a hand-held digital camera subject to hand shake. It is hard to identify the centers of the targets attached to the cable surface in the blurred motion images because of the high-speed motion of the cable, the low sampling frequency of the camera, and camera shake. This study proposes a multi-template matching algorithm to resolve these difficulties. In addition, a sensitivity-based system identification algorithm is introduced to extract the natural frequencies and damping ratios from the ambient cable vibration data (a simplified sketch of this estimation step follows below). Three sets of vibration tests were conducted to examine the validity of the proposed algorithms. The results show that the proposed technique is feasible for extracting modal parameters from severely shaking motion pictures.

  • PDF
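
A minimal Python sketch of the modal-estimation step, once a displacement history has been extracted from the tracked targets: natural frequency from the spectral peak and damping ratio from the half-power bandwidth. This is a simple textbook stand-in, not the sensitivity-based identification used in the paper.

    # Illustrative sketch: estimate one natural frequency and damping ratio
    # from a displacement time history sampled at the camera frame rate.
    import numpy as np

    def modal_estimate(displacement, fs):
        """displacement: 1-D array of target displacement (pixels or mm);
        fs: camera sampling frequency in Hz."""
        x = np.asarray(displacement, dtype=float)
        x = x - x.mean()
        n = len(x)
        spec = np.abs(np.fft.rfft(x * np.hanning(n)))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = int(np.argmax(spec[1:])) + 1               # skip the DC bin
        fn = freqs[k]                                  # natural frequency [Hz]
        half = spec[k] / np.sqrt(2.0)                  # half-power level
        lo = k
        while lo > 0 and spec[lo] > half:
            lo -= 1
        hi = k
        while hi < len(spec) - 1 and spec[hi] > half:
            hi += 1
        zeta = (freqs[hi] - freqs[lo]) / (2.0 * fn)    # damping ratio estimate
        return fn, zeta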

Localization of a Mobile Robot Using the Information of a Moving Object (운동물체의 정보를 이용한 이동로봇의 자기 위치 추정)

  • Roh, Dong-Kyu;Kim, Il-Myung;Kim, Byung-Hwa;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.11
    • /
    • pp.933-938
    • /
    • 2001
  • In this paper, we describe a method for localizing a mobile robot using images of a moving object. The method combines the position observed from dead-reckoning sensors with the position estimated from images captured by a fixed camera. Using the a priori known path of the moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. Since the equations are based on the estimated position, a measurement error exists at all times. The proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filter scheme is applied (a sketch of the fusion step follows below). The effectiveness of the proposed method is demonstrated by simulation.

  • PDF
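
A minimal Python sketch of the fusion idea described above: a constant-velocity Kalman filter predicts the robot position from dead reckoning and corrects it with the position estimated from the fixed camera's images. The state layout, matrices, and noise levels are illustrative assumptions, not the paper's formulation.

    # Illustrative sketch: one Kalman predict/update cycle for robot localization.
    import numpy as np

    dt = 0.1                                    # control period [s]
    F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],                 # measurement: camera-derived [x, y]
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 1e-3                        # dead-reckoning process noise
    R = np.eye(2) * 5e-2                        # image-based measurement noise

    def kf_step(x, P, z):
        """x: state estimate, P: covariance, z: position estimated from the image."""
        x = F @ x                               # predict with the motion model
        P = F @ P @ F.T + Q
        y = z - H @ x                           # innovation (observed - estimated)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ y                           # correct with the camera estimate
        P = (np.eye(4) - K @ H) @ P
        return x, P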