• Title/Summary/Keyword: vision camera


Person Identification based on Clothing Feature (의상 특징 기반의 동일인 식별)

  • Choi, Yoo-Joo; Park, Sun-Mi; Cho, We-Duke; Kim, Ku-Jin
    • Journal of the Korea Computer Graphics Society, v.16 no.1, pp.1-7, 2010
  • With the widespread use of vision-based surveillance systems, person identification has become an essential capability. However, the CCTV cameras used in surveillance systems tend to produce relatively low-resolution images, making it difficult to apply face recognition techniques. Therefore, an algorithm is proposed for identifying a person in CCTV camera images based on their clothing. Whenever a person is authenticated at the main entrance of a building, the clothing features of that person are extracted and added to a database. Given an image, the clothing area is detected using background subtraction and skin color detection. A clothing feature vector is then composed of textural and color features of the clothing region: the textural feature is extracted from a local edge histogram, while the color feature is obtained by octree-based quantization of a color map. Given a query image, the person is identified by finding the most similar clothing feature in the database, with Euclidean distance as the similarity measure. Experimental results show an 80% success rate for person identification with the proposed algorithm, compared to only a 43% success rate with face recognition.
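A minimal Python sketch of this feature-plus-nearest-neighbor idea, assuming OpenCV and an already-segmented clothing region; the gradient-orientation histogram and coarse BGR histogram below are simplified stand-ins for the paper's local edge histogram and octree color quantization:

```python
import cv2
import numpy as np

def clothing_feature(clothing_region_bgr):
    """Build a feature vector from a segmented clothing region (simplified)."""
    gray = cv2.cvtColor(clothing_region_bgr, cv2.COLOR_BGR2GRAY)
    # Texture: histogram of gradient orientations as a rough edge histogram.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    tex_hist, _ = np.histogram(np.arctan2(gy, gx).ravel(),
                               bins=16, range=(-np.pi, np.pi))
    # Color: 4x4x4 BGR histogram standing in for octree quantization.
    col_hist = cv2.calcHist([clothing_region_bgr], [0, 1, 2], None,
                            [4, 4, 4], [0, 256] * 3).ravel()
    feat = np.concatenate([tex_hist, col_hist]).astype(np.float32)
    return feat / (np.linalg.norm(feat) + 1e-8)

def identify(query_feat, database):
    """Return the enrolled person whose feature is nearest in Euclidean distance."""
    names, feats = zip(*database.items())  # database: {name: feature vector}
    dists = np.linalg.norm(np.stack(feats) - query_feat, axis=1)
    return names[int(np.argmin(dists))]
```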

Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu; Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association, v.49 no.6, pp.469-480, 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone, which measures river surface velocity with its built-in sensors and processors. First, the SIV system determines the location of the site using the phone's GPS. It also measures the pitch and roll angles of the device with its orientation sensors to determine the coordinate transform from real-world coordinates to image coordinates. The only parameter to be entered manually is the height of the phone above the water surface. After setup, the phone's camera takes a series of images. With the help of OpenCV, an open-source computer vision library, the frames of the video are split and analyzed to obtain the water-surface velocity field. The image-processing algorithm, similar to the traditional STIV (Spatio-Temporal Image Velocimeter), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous velocity field (a 1-second-averaged velocity field) once every 11 seconds. By averaging these instantaneous measurements over a sufficient amount of time, an average velocity field is obtained. A series of tests in an experimental flume showed that the developed measurement system is effective and convenient. Compared with measurements from a traditional propeller velocimeter, the system showed a maximum error of 13.9% and an average error of less than 10%.
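A minimal Python sketch of the correlation idea behind this kind of analysis, assuming OpenCV for frame splitting; intensity profiles sampled on one image row at successive times are cross-correlated, and the peak displacement gives the velocity. A real SIV also needs the orientation-sensor-based orthorectification described in the abstract, which is omitted here:

```python
import cv2
import numpy as np

def surface_velocity(video_path, line_y, meters_per_pixel, frame_dt):
    """Estimate streamwise surface velocity along one image row (sketch)."""
    cap = cv2.VideoCapture(video_path)
    profiles = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        profiles.append(gray[line_y, :].astype(np.float32))
    cap.release()

    shifts = []
    for a, b in zip(profiles[:-1], profiles[1:]):
        a, b = a - a.mean(), b - b.mean()
        corr = np.correlate(b, a, mode="full")
        # Offset of the correlation peak = pixel displacement per frame.
        shifts.append(np.argmax(corr) - (len(a) - 1))
    return np.mean(shifts) * meters_per_pixel / frame_dt
```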

A Study on the Selection and Applicability Analysis of 3D Terrain Modeling Sensor for Intelligent Excavation Robot (지능형 굴삭 로봇의 개발을 위한 로컬영역 3차원 모델링 센서 선정 및 현장 적용성 분석에 관한 연구)

  • Yoo, Hyun-Seok; Kwon, Soon-Wook; Kim, Young-Suk
    • KSCE Journal of Civil and Environmental Engineering Research, v.33 no.6, pp.2551-2562, 2013
  • Since 2006, an Intelligent Excavation Robot that automatically performs earth-work without an operator has been under development in Korea. Technologies for automatically recognizing the terrain of the work environment and for detecting objects such as obstacles or dump trucks are essential to its work quality and safety. In several countries, terrestrial 3D laser scanners and stereo vision cameras have been used to model the local area around the workspace of automated construction equipment. However, these attempts suffer from the high cost of building the sensor system and the long processing time required to eliminate noise from the 3D model output. The objectives of this study are to analyze the advantages of existing 3D modeling sensors and to examine their applicability for practical use with the Analytic Hierarchy Process (AHP). In this study, the 3D modeling quality and accuracy of the sensors were tested in a real earth-work environment.
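A minimal Python sketch of the AHP weighting step, for readers unfamiliar with the method: the principal eigenvector of a reciprocal pairwise-comparison matrix gives the criterion weights, and the consistency ratio (CR < 0.1 by the usual rule of thumb) checks whether the judgments are usable. The example matrix values are illustrative, not the paper's:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights and consistency ratio from an AHP comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    # Consistency index against Saaty's random-index table.
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    ci = (eigvals[k].real - n) / (n - 1)
    return w, (ci / ri if ri else 0.0)

# Example: weighting three sensor criteria (e.g. cost, accuracy, speed);
# the judgments below are made up for illustration.
w, cr = ahp_weights([[1, 3, 5],
                     [1/3, 1, 2],
                     [1/5, 1/2, 1]])
```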

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon; Kim, Jong-Ho; Kim, Tae-Young
    • Journal of Korea Game Society, v.10 no.2, pp.49-56, 2010
  • In this paper, we propose a control method for a virtual hand based on recognition of the user's hand in a virtual reality game environment. We display the virtual hand on the game screen after obtaining the movement and direction of the user's hand from camera input images, so the movement of the user's hand can serve as an input interface for the virtual hand to select and move objects. As a vision-based hand recognition method, the proposed method transforms the input image from RGB color space to HSV color space, then segments the hand area using double thresholds on the H and S values together with connected-component analysis. Next, the center of gravity of the hand area is calculated from the zeroth- and first-order moments of the segmented area. Since the center of gravity lies at the center of the hand, the pixels in the segmented image farthest from the center of gravity can be recognized as fingertips. Finally, the axis of the hand is obtained as the vector from the center of gravity to the fingertips. A method using a history buffer and a bounding box is also presented to increase recognition stability and performance. Experiments on various input images show that our hand recognition method provides a high level of accuracy and relatively fast, stable results.
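A minimal Python sketch of this pipeline with OpenCV: HSV conversion, double thresholding on H and S, connected-component selection, centroid from image moments, and the fingertip as the segmented pixel farthest from the centroid. The H/S ranges are illustrative skin thresholds, not the paper's values:

```python
import cv2
import numpy as np

def find_hand(frame_bgr, h_range=(0, 20), s_range=(40, 255)):
    """Return ((cx, cy), fingertip) for the largest skin-colored region."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (h_range[0], s_range[0], 0),
                            (h_range[1], s_range[1], 255))
    # Keep the largest connected component as the hand region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    hand_label = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    hand = np.uint8(labels == hand_label) * 255
    # Centroid from the zeroth- and first-order moments.
    m = cv2.moments(hand, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Fingertip: hand pixel farthest from the centroid.
    ys, xs = np.nonzero(hand)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    tip = (int(xs[np.argmax(d2)]), int(ys[np.argmax(d2)]))
    return (cx, cy), tip  # the hand axis is the vector between these
```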

Comparison of 3D Space Perception for the Stereoscopic AR Holography (스테레오 증강현실 홀로그래피에서의 삼차원 공간감 비교)

  • Kim, Minju; Wohn, Kwangyun
    • Journal of the HCI Society of Korea, v.8 no.2, pp.21-27, 2013
  • Recently, the use of floating holograms has increased in many settings, such as exhibitions, education, and advertisements. In particular, floating holograms based on a half-mirror are widely used. Nevertheless, a half-mirror cannot give users a complete three-dimensional hologram experience: although it can make the image appear to float in the air, it cannot display the image in mid-air itself, which is the ultimate goal of holography. In addition, the scene looks inconsistent when a real object is placed behind the half-mirror in order to combine the object and the reflected image. In this paper, we conducted a comparative study of 3D space perception for stereoscopic AR holography. First, we applied stereoscopic display technology to a half-mirror hologram system to create an accurate and realistic AR environment, so that users perceive the real 3D object behind the half-mirror and the reflected virtual image as properly converged in 3D space. Furthermore, by using a depth camera, the location and orientation of the graphics are controlled to change with the user's point of view. This is an effective way to produce augmented stereoscopic images simply and accurately through a half-mirror film without any additional devices. The user test showed that applying stereoscopic images and user interaction leads users to perceive 3D space and realism more effectively and accurately.


Automatic Classification Algorithm for Raw Materials using Mean Shift Clustering and Stepwise Region Merging in Color (컬러 영상에서 평균 이동 클러스터링과 단계별 영역 병합을 이용한 자동 원료 분류 알고리즘)

  • Kim, SangJun; Kwak, JoonYoung; Ko, ByoungChul
    • Journal of Broadcast Engineering, v.21 no.3, pp.425-435, 2016
  • In this paper, we propose a classification model that analyzes raw-material images recorded with a color CCD camera to automatically separate good and defective agricultural products and raw materials such as rice, coffee, and green tea. The classification of agricultural products currently depends mainly on visual selection by skilled laborers, whose classification ability may drop under long periods of repetitive labor. To resolve the problems of existing human-dependent commercial products, we propose a vision-based automatic raw-material classification method combining mean shift clustering and a stepwise region-merging algorithm. First, the image is divided into N cluster regions by applying the mean shift clustering algorithm to the foreground map image. Second, representative regions among the N cluster regions are selected, and the stepwise region-merging method integrates similar cluster regions by comparing both color and positional proximity to neighboring regions. The merged raw-material objects are then expressed as 2D color distributions over RG, GB, and BR. Third, a threshold on the color-distribution ellipse of each merged object is used to separate good and defective products. Experiments with diverse raw-material images show that the proposed method requires less manual manipulation by the user than existing clustering and commercial methods and improves classification accuracy.
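A minimal Python sketch of the first (clustering) stage only, assuming OpenCV: pyramid mean-shift filtering groups pixels into smooth color clusters, and connected components over an Otsu-thresholded foreground give the candidate regions that the paper then merges stepwise by color and positional proximity. The radii are illustrative:

```python
import cv2

def segment_raw_material(image_bgr, spatial_radius=15, color_radius=30):
    """Mean-shift color clustering plus connected components (sketch)."""
    # Mean-shift filtering smooths the image into color-coherent clusters.
    shifted = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)
    # Otsu threshold on intensity as a simple foreground map.
    gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Each connected component is one candidate cluster region to merge later.
    n, labels = cv2.connectedComponents(fg)
    return shifted, labels, n
```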

The Analysis of Evergreen Tree Area Using UAV-based Vegetation Index (UAV 기반 식생지수를 활용한 상록수 분포면적 분석)

  • Lee, Geun-Sang
    • Journal of Cadastre & Land InformatiX, v.47 no.1, pp.15-26, 2017
  • The decrease of green space caused by urbanization has led to many environmental problems, such as habitat destruction, air pollution, and the heat-island effect. With growing interest in natural scenery, proper management of evergreen trees, which stay green even through the winter, has become increasingly important. This study analyzed the distribution area of evergreen trees using vegetation indices based on an unmanned aerial vehicle (UAV). First, RGB and NIR+RG cameras were mounted on a fixed-wing UAV, and image mosaics were produced from GCPs using the Pix4D software. The normalized difference vegetation index (NDVI) and soil-adjusted vegetation index (SAVI) were then calculated from the acquired orthomosaic images using band-math functions. Validation points were used to evaluate the accuracy of the evergreen-tree classification for each threshold range; the kappa coefficient was highest at 0.822 for "NDVI > 0.5" and 0.816 for "SAVI > 0.7". The evergreen-tree area was $11,824m^2$ for "NDVI > 0.5" and $15,648m^2$ for "SAVI > 0.7", corresponding to 4.8% and 6.3% of the total area, respectively. These results indicate that UAVs can supply up-to-date, high-resolution information for vegetation-related work on the urban environment, air pollution, climate change, and the heat-island effect.
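A short Python sketch of the two indices, assuming the red and NIR bands are available as numpy arrays of reflectance; the formulas are the standard NDVI and SAVI definitions, and the commented threshold matches the paper's best NDVI cutoff:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

def savi(nir, red, L=0.5):
    """SAVI = (1 + L) * (NIR - Red) / (NIR + Red + L), L = 0.5 soil factor."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Evergreen mask with the paper's best-performing NDVI threshold:
# evergreen = ndvi(nir_band, red_band) > 0.5
```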

Drone-based Vegetation Index Analysis Considering Vegetation Vitality (식생 활력도를 고려한 드론 기반의 식생지수 분석)

  • CHO, Sang-Ho; LEE, Geun-Sang; HWANG, Jee-Wook
    • Journal of the Korean Association of Geographic Information Studies, v.23 no.2, pp.21-35, 2020
  • Vegetation information is a very important factor in fields such as urban planning, landscaping, water resources, and the environment. Vegetation varies with canopy density and chlorophyll content, but previous studies did not consider vegetation vitality when classifying vegetation areas. In this study, to support a range of applications, threshold values of vegetation indices were set with vegetation vitality taken into account. First, an eBee fixed-wing drone was equipped with a multi-spectral camera to construct optical and near-infrared orthomosaic images. GIS raster calculations were then performed on each orthomosaic image to derive the NDVI, GNDVI, SAVI, and MSAVI vegetation indices. In addition, the vegetation positions at the target site were surveyed using VRS, and the accuracy of each vegetation index was evaluated against vegetation vitality. As a result, the scenario that counted only points with full vegetation vitality as vegetation showed higher classification accuracy than the scenario that also included points with slightly insufficient vitality. The Kappa coefficient of each vegetation index, computed by overlaying the index with the surveyed points, was then used to select the best vegetation-index threshold for classifying vegetation in each scenario. The evaluation of vegetation-index accuracy considering vegetation vitality suggested in this study is expected to provide useful information for decision-making support in fields such as city planning.
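A minimal Python sketch of the threshold-selection step, assuming index values and surveyed vegetation labels sampled at the same points as 1-D arrays: each candidate threshold turns the index into a binary vegetation map, which is scored against the survey with Cohen's kappa:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def best_threshold(index_values, ground_truth, thresholds):
    """Pick the vegetation-index threshold with the highest kappa."""
    best_t, best_k = None, -1.0
    for t in thresholds:
        pred = (index_values > t).astype(int)  # binary vegetation map
        k = cohen_kappa_score(ground_truth, pred)
        if k > best_k:
            best_t, best_k = t, k
    return best_t, best_k

# e.g. best_threshold(ndvi_at_points, vrs_labels, np.arange(0.2, 0.8, 0.05))
```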

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo; Lee, Hyunjoon; Kim, Junho
    • Journal of the Korea Computer Graphics Society, v.23 no.3, pp.31-38, 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collected about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and computed their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for new images it has not been trained on. Experimental results show that our method estimates Manhattan coordinate systems with a median error of $3.157^{\circ}$ on a test set of Google Street View images of non-trained scenes. In addition, compared to an existing calibration method [3], the proposed method shows lower intermediate errors on the test set.
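A minimal PyTorch sketch in the spirit of this setup: a GoogLeNet backbone with a small regression head. The quaternion output parameterizing the Manhattan-frame rotation is one plausible choice, not necessarily the paper's:

```python
import torch.nn as nn
from torchvision import models

class ManhattanNet(nn.Module):
    """GoogLeNet backbone + rotation-regression head (sketch)."""
    def __init__(self):
        super().__init__()
        backbone = models.googlenet(weights=None, aux_logits=False)
        backbone.fc = nn.Identity()     # expose the 1024-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(1024, 4)  # quaternion (w, x, y, z)

    def forward(self, x):
        q = self.head(self.backbone(x))
        return q / q.norm(dim=1, keepdim=True)  # normalize to a unit rotation

# Training would minimize a rotation distance to the calibration-derived
# Manhattan frames of the Street View training images.
```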

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun; Kim, Tae Gyun; Lee, Jihong; Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.3, pp.164-173, 2014
  • In underwater robotics, vision is a key element for recognition in underwater environments. However, due to turbidity, an underwater optical camera is rarely usable. An underwater imaging sonar, as an alternative, delivers low-quality images that are not stable or accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, together with a recognition method using a shape-matrix transformation, were proposed and validated in Part 1. However, that approach does not work properly on undulating, dynamically noisy sea bottoms. To solve this, we propose a framework that applies four phases to image sequences: selection of likely candidates, selection of final candidates, recognition, and tracking. A particle-filter-based selection mechanism eliminates false candidates, and a mean-shift-based tracking algorithm follows the recognized object. All four phases run in parallel and in real time, and the framework is flexible enough for internal algorithms to be added or modified. A pool test and a sea trial were carried out to verify the performance, and the experimental results are analyzed in detail. Information obtained from the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
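A minimal Python sketch of the tracking phase only, using OpenCV's mean shift on single-channel intensity frames; the candidate-selection and recognition phases described above are assumed to have already produced the initial (x, y, w, h) window of the recognized object:

```python
import cv2

def track_object(frames, init_window):
    """Follow a recognized object across frames with mean shift (sketch)."""
    x, y, w, h = init_window
    roi = frames[0][y:y + h, x:x + w]
    # Intensity histogram of the initial window, used for back-projection.
    roi_hist = cv2.calcHist([roi], [0], None, [32], [0, 256])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    track, window = [], init_window
    for frame in frames[1:]:
        backproj = cv2.calcBackProject([frame], [0], roi_hist, [0, 256], 1)
        _, window = cv2.meanShift(backproj, window, term)
        track.append(window)  # window centers give relative position cues
    return track
```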