• Title/Summary/Keyword: camera image

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul; Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.4 / pp.437-444 / 2020
  • A lunar rover's optical camera provides navigation and terrain information in an exploration zone. However, because of the scant atmosphere, the Moon has homogeneous terrain with dark soil, and in this extreme environment the rover has limited data storage and low computational capability. For successful exploration, it is therefore necessary to examine feature detection and matching methods that are robust to the lunar terrain and environmental characteristics. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed on lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to the lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but they are detected and matched with high precision and at the lowest computational cost, making AKAZE suitable for fast and accurate navigation. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. Because the rover periodically sends terrain images to Earth, SIFT is suitable for global 3D terrain map construction, where a large amount of terrain imagery can be processed on Earth. The study results are expected to provide a guideline for using feature detection and matching methods in future lunar exploration rovers.
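
A rough sketch of this kind of comparison (not the authors' code): it runs the OpenCV implementations of SIFT, ORB, BRISK, and AKAZE on a pair of grayscale images and reports keypoint counts, ratio-test matches, and runtime. SURF is omitted because it requires the non-free opencv-contrib build; the file names and the 0.75 ratio threshold are assumptions.

```python
import time
import cv2

# Hypothetical consecutive terrain frames from the rover camera.
img1 = cv2.imread("terrain_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("terrain_t1.png", cv2.IMREAD_GRAYSCALE)

detectors = {
    "SIFT":  (cv2.SIFT_create(),  cv2.NORM_L2),       # float descriptors
    "ORB":   (cv2.ORB_create(),   cv2.NORM_HAMMING),  # binary descriptors
    "BRISK": (cv2.BRISK_create(), cv2.NORM_HAMMING),
    "AKAZE": (cv2.AKAZE_create(), cv2.NORM_HAMMING),
}

for name, (det, norm) in detectors.items():
    t0 = time.perf_counter()
    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)
    # Lowe's ratio test on 2-nearest-neighbour matches.
    matches = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    elapsed = time.perf_counter() - t0
    print(f"{name}: {len(kp1)} keypoints, {len(good)} good matches, {elapsed:.3f} s")
```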

Automatic Classification Algorithm for Raw Materials using Mean Shift Clustering and Stepwise Region Merging in Color (컬러 영상에서 평균 이동 클러스터링과 단계별 영역 병합을 이용한 자동 원료 분류 알고리즘)

  • Kim, SangJun; Kwak, JoonYoung; Ko, ByoungChul
    • Journal of Broadcast Engineering / v.21 no.3 / pp.425-435 / 2016
  • In this paper, we propose a classification model that analyzes raw material images recorded with a color CCD camera in order to automatically classify good and defective agricultural products and raw materials such as rice, coffee, and green tea. Classification of agricultural products currently depends mainly on visual selection by skilled workers, whose classification ability may drop under repeated labor over long periods. To resolve the problems of existing human-dependent commercial products, we propose a vision-based automatic raw material classification method that combines mean shift clustering with a stepwise region merging algorithm. First, the image is divided into N cluster regions by applying the mean shift clustering algorithm to the foreground map image. Second, representative regions among the N cluster regions are selected, and the stepwise region merging method integrates similar cluster regions by comparing both color and positional proximity to neighboring regions. The merged raw material objects are then expressed as 2D color distributions over RG, GB, and BR. Third, a threshold based on the color distribution ellipse of the merged material objects is used to detect good and defective products. Experiments with diverse raw material images show that the proposed method requires less manual intervention by the user than existing clustering and commercial methods, and improves classification accuracy on raw materials.
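
A minimal sketch of the clustering-and-merging idea, under stated assumptions rather than the paper's exact pipeline: mean shift clustering of pixel colors in CIELAB, followed by a simple merge of clusters whose mean colors are close. The input file name, bandwidth, and merge threshold are placeholders, and positional proximity is ignored here for brevity.

```python
import cv2
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical foreground image, downsampled so mean shift stays tractable.
img = cv2.resize(cv2.imread("raw_material.png"), (160, 120))
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(float)

ms = MeanShift(bandwidth=12, bin_seeding=True)        # bandwidth chosen arbitrarily
labels = ms.fit_predict(lab)
centres = ms.cluster_centers_

# Stepwise merge: relabel clusters whose centres are closer than a threshold.
merged = {i: i for i in range(len(centres))}
for i in range(len(centres)):
    for j in range(i + 1, len(centres)):
        if np.linalg.norm(centres[i] - centres[j]) < 10.0:   # colour proximity only
            merged[j] = merged[i]
labels = np.array([merged[int(l)] for l in labels])
print("regions after merging:", len(set(labels.tolist())))
```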

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol; Lee, Ho-Young; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.200-207 / 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in various fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models increase for most applications, it is hard to deploy neural networks in IoT and mobile environments. Therefore, neural network compression algorithms that reduce the model size while preserving performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. The classification performance and inference time of the original and compressed CNN models on camera-input images are evaluated on an embedded board equipped with a QCS605 AI chip. Three CNN models, MobileNetV2, ResNet50, and VGG-16, are compressed by applying pruning and matrix decomposition. The experimental results show that, compared to the original models, the compressed models reduce the model size by a factor of 1.3~11.2 with a classification performance loss of less than 2%, while also reducing inference time by a factor of 1.2~2.21 and memory usage by a factor of 1.2~3.8 on the embedded board.
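
A small sketch of the pruning side of such compression, assuming a PyTorch/torchvision setup rather than the authors' QCS605 toolchain: global L1 unstructured pruning of the convolution weights of MobileNetV2, followed by a rough count of surviving parameters. The 50% pruning amount is an arbitrary placeholder.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None)   # pretrained weights would normally be loaded here
conv_params = [(m, "weight") for m in model.modules()
               if isinstance(m, torch.nn.Conv2d)]

# Zero out the 50% of convolution weights with the smallest L1 magnitude, globally.
prune.global_unstructured(conv_params, pruning_method=prune.L1Unstructured, amount=0.5)

total = sum(m.weight.nelement() for m, _ in conv_params)
nonzero = sum(int(m.weight.count_nonzero()) for m, _ in conv_params)
print(f"conv weights kept: {nonzero}/{total} ({100 * nonzero / total:.1f}%)")
```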

The Comparison of Motion Correction Methods in Myocardial Perfusion SPECT (심근관류 SPECT에서 움직임 보정 방법들의 비교)

  • Park, Jang-Won; Nam, Ki-Pyo; Lee, Hoon-Dong; Kim, Sung-Hwan
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.2 / pp.28-32 / 2014
  • Purpose: Patient motion during myocardial perfusion SPECT can produce images with visual artifacts and perfusion defects. These artifacts and defects remain a significant source of unsatisfactory myocardial perfusion SPECT. Motion correction methods have been developed to detect and correct patient motion and thereby reduce artifacts and defects, and each method uses a different algorithm. We corrected simulated motion patterns with several motion correction methods and compared the resulting images. Materials and Methods: A phantom study was performed. An anthropomorphic torso phantom was prepared to give counts equivalent to a patient's body, and a simulated defect was added to the myocardium phantom to observe changes in the defect. Vertical motion was intentionally generated by moving the phantom downward in returning and non-returning patterns throughout the acquisition, and lateral motion was generated by moving the phantom upward in returning and non-returning patterns. The simulated motion patterns were detected and corrected, and the corrected images and QPS scores were compared with the no-motion images after the Motion Detection and Correction (MDC), Stasis, and Hopkins methods were applied. Results: In the phantom study, the simulated motions produced changes in the perfusion defect in the anterior wall, and an inferior wall defect appeared in some situations. The changes caused by motion were corrected by the motion correction methods, but the Hopkins and Stasis methods showed a visual artifact; this artifact did not affect the perfusion score. Conclusion: The phantom tests confirmed that motion correction can reduce motion artifacts and artifactual perfusion defects. The Motion Detection and Correction (MDC) method performed better than the other methods in terms of polar map images and perfusion scores.

Development of a Viewing-Environment-Independent Color Reproduction System Based on Chromatic Adaptation (색순응을 기반하여 관촬환경에 독립한 색재현 시스템 개발)

  • An, Seong-A; Kim, Jong-Pil; An, Seok-Chul
    • Journal of the Korean Graphic Arts Communication Society / v.21 no.2 / pp.43-53 / 2003
  • As information and communication networks have developed rapidly, Internet users' environments have also improved, so more information can be delivered than before. However, there are many differences between a real color and the color reproduced on a CRT. When we observe a physical object, our eyes perceive the product of the light source and the object's spectral reflectance. When the photographed signal is reproduced, however, the illumination at the time of photographing and the spectral reflectance of the object are converted into an RGB signal, and this signal is observed on a CRT under a different illumination. Because the RGB signal reflects the illumination at the time of photographing, it is also influenced by the current illumination and can therefore be perceived as a different color. To match color perception under different light sources, a color reproduction system using a neural network was reported by S. C. Ahn [1], and a color reproduction method that remains independent of the viewing environment, keeping reproduced colors consistent even when the observation environment changes, was reported by Y. Miyake [2]. A color reproduction system that estimates the light source of the observation environment using a CCD sensor was also studied by S. C. Ahn [3]. In these studies, a population is first fixed on the a*b* coordinates of CIE L*a*b*, and color reproduction becomes possible using this population and an ordinary digital camera. However, color changes along a curve, not a straight line, as the values on the a*b* coordinates of CIE L*a*b* change. To solve this problem, this study first introduces a labeling technique. Next, a basis color set based on the Munsell color system is divided into 10 color fields, and 4 special colors (skin, grass, sky, and gray) are added, giving 14 color fields in total. After analyzing the principal components of the populations of the newly defined color fields, their utility and validity will be examined in a 3-band system in future work.
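
A minimal sketch related to the color-field idea, not the paper's method: camera RGB is converted to CIE L*a*b* and each pixel is assigned to the nearest of a few field centers on the a*b* plane. The centers below are hypothetical placeholders for the 14 Munsell-based fields defined in the paper.

```python
import numpy as np
from skimage import io, color

rgb = io.imread("scene.png")[..., :3] / 255.0        # hypothetical camera image
lab = color.rgb2lab(rgb)

# Hypothetical a*b* centres standing in for the Munsell-based colour fields.
field_centres = np.array([[  0.0,   0.0],   # gray
                          [ 20.0,  25.0],   # skin-like
                          [-35.0,  35.0],   # grass-like
                          [ -5.0, -35.0]])  # sky-like
ab = lab[..., 1:].reshape(-1, 2)
dists = np.linalg.norm(ab[:, None, :] - field_centres[None, :, :], axis=2)
field = dists.argmin(axis=1).reshape(lab.shape[:2])
print("pixels per field:", np.bincount(field.ravel(), minlength=len(field_centres)))
```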

Face Detection Using Adaboost and Template Matching of Depth Map based Block Rank Patterns (Adaboost와 깊이 맵 기반의 블록 순위 패턴의 템플릿 매칭을 이용한 얼굴검출)

  • Kim, Young-Gon; Park, Rae-Hong; Mun, Seong-Su
    • Journal of Broadcast Engineering / v.17 no.3 / pp.437-446 / 2012
  • Face detection algorithms using two-dimensional (2-D) intensity or color images have been studied for decades. Recently, with the development of low-cost range sensors, three-dimensional (3-D) information (i.e., a depth image representing the distance between the camera and objects) can easily be used to reliably extract facial features, and most people share a similar 3-D facial structure. This paper proposes a face detection method that uses both intensity and depth images. First, an AdaBoost algorithm applied to the intensity image classifies face and non-face candidate regions. Each candidate region is divided into 5×5 blocks, and the depth values are averaged within each block. A 5×5 block rank pattern is then constructed by sorting the block averages of the depth values. Finally, candidate regions are classified as face or non-face regions by matching the constructed depth-map-based block rank pattern against a template pattern generated from a training data set; for template matching, the 5×5 template block rank pattern is constructed in advance by averaging the block ranks over the training data set. The proposed algorithm is tested on real images obtained with a Kinect range sensor. Experimental results show that the proposed algorithm effectively eliminates most false positives while preserving true positives.
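
A minimal sketch of the block rank pattern idea, with assumed details: a depth patch of a candidate region is split into a 5×5 grid, the block means are ranked, and the rank pattern is compared with a template by mean absolute rank difference. The synthetic depth values and the similarity score are illustrative, not the paper's exact matching measure.

```python
import numpy as np

def block_rank_pattern(depth_patch, grid=5):
    """Average depth per block, then rank the grid x grid block means."""
    h, w = depth_patch.shape
    bh, bw = h // grid, w // grid
    blocks = depth_patch[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # argsort of argsort turns the block means into ranks 0..grid*grid-1
    return blocks.ravel().argsort().argsort().reshape(grid, grid)

rng = np.random.default_rng(0)
template = block_rank_pattern(rng.normal(1000, 50, size=(60, 60)))   # stand-in for the trained template
candidate = block_rank_pattern(rng.normal(1000, 50, size=(60, 60)))  # stand-in for a face candidate

score = np.abs(candidate - template).mean()   # lower = more face-like under this toy measure
print("mean absolute rank difference:", score)
```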

Vehicle Detection and Tracking using Billboard Sweep Stereo Matching Algorithm (빌보드 스윕 스테레오 시차정합 알고리즘을 이용한 차량 검출 및 추적)

  • Park, Min Woo; Won, Kwang Hee; Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.16 no.6 / pp.764-781 / 2013
  • In this paper, we propose a highly precise vehicle detection method with a low false alarm rate, based on billboard sweep stereo matching and multi-stage hypothesis generation. First, we capture stereo images from cameras mounted at the front of the vehicle and obtain a disparity map in which the ground plane and background regions are removed by the billboard sweep stereo matching algorithm. We then perform vehicle detection and tracking on the labeled disparity map in three steps. In the learning step, an SVM (support vector machine) classifier is trained on features extracted with Gabor filters. The second step is vehicle detection, which performs Sobel edge detection on the left camera image and extracts vehicle candidates using the edge image and the billboard sweep stereo disparity map. The final step is vehicle tracking using template matching in the next frame. Removing the tracked regions from the vehicle candidate regions in succeeding frames improves the system performance.
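
A minimal sketch of the learning step described above, under assumed parameters: responses of a small Gabor filter bank are summarized into a feature vector and used to train an SVM vehicle/non-vehicle classifier. The filter parameters and the random training patches are placeholders, not the paper's data.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(patch, orientations=(0, 45, 90, 135)):
    feats = []
    for theta in orientations:
        kern = cv2.getGaborKernel((21, 21), 4.0, np.deg2rad(theta), 10.0, 0.5)
        resp = cv2.filter2D(patch, cv2.CV_32F, kern)
        feats += [resp.mean(), resp.std()]          # simple per-orientation statistics
    return np.array(feats)

# Hypothetical 64x64 grayscale training patches with vehicle/non-vehicle labels.
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.stack([gabor_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("predictions on first five patches:", clf.predict(X[:5]))
```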

Vehicle Area Segmentation from Road Scenes Using Grid-Based Feature Values (격자 단위 특징값을 이용한 도로 영상의 차량 영역 분할)

  • Kim Ku-Jin; Baek Nakhoon
    • Journal of Korea Multimedia Society / v.8 no.10 / pp.1369-1382 / 2005
  • Vehicle segmentation, which extracts vehicle areas from road scenes, is one of the fundamental operations in many application areas, including Intelligent Transportation Systems. We present a vehicle segmentation approach for still images captured by outdoor CCD cameras mounted on supporting poles. We first divide the input image into a set of two-dimensional grids and then calculate an edge feature value for each grid. By analyzing the feature values statistically, we can find the optimal rectangular grid area of the vehicle. A preprocessing step computes statistics of the feature values from background images captured under various circumstances. For a car image, we compare its feature values with the background statistics to decide whether each grid belongs to the vehicle area, and we use a dynamic programming technique to find the optimal rectangular grid area from these candidate grids. Based on statistical analysis and global search, our method is more systematic than previous methods, which usually rely on heuristics, and the statistical analysis provides high reliability against noise and errors caused by brightness changes, camera tremor, and so on. Our prototype implementation performs vehicle segmentation in 0.150 seconds on average for each 1280×960 car image, and achieves strictly successful segmentation for 97.03% of 270 images containing various kinds of noise.
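
A minimal sketch of the grid-feature idea, with an assumed edge-energy feature: per-grid statistics are gathered from background images, and grids in a test image that deviate strongly from those statistics are flagged as vehicle candidates. The synthetic images and the z-score threshold are placeholders, and the paper's dynamic programming search for the optimal rectangle is omitted.

```python
import cv2
import numpy as np

def grid_features(gray, grid=32):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    energy = gx ** 2 + gy ** 2
    h, w = gray.shape
    gh, gw = h // grid, w // grid
    return energy[:gh * grid, :gw * grid].reshape(gh, grid, gw, grid).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Synthetic stand-ins for background road images and one scene containing a "vehicle".
backgrounds = [rng.normal(120, 5, (960, 1280)).astype(np.float32) for _ in range(5)]
scene = rng.normal(120, 5, (960, 1280)).astype(np.float32)
scene[400:600, 500:800] = rng.normal(60, 40, (200, 300))    # strongly textured block

bg = np.stack([grid_features(b) for b in backgrounds])
mu, sigma = bg.mean(axis=0), bg.std(axis=0) + 1e-6
z = np.abs(grid_features(scene) - mu) / sigma
candidates = z > 3.0        # grids deviating strongly from the background statistics
print("candidate grids:", int(candidates.sum()), "of", candidates.size)
```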

The Analysis of Evergreen Tree Area Using UAV-based Vegetation Index (UAV 기반 식생지수를 활용한 상록수 분포면적 분석)

  • Lee, Geun-Sang
    • Journal of Cadastre & Land InformatiX / v.47 no.1 / pp.15-26 / 2017
  • The decrease of green space due to urbanization has caused many environmental problems, such as habitat destruction, air pollution, and the heat island effect. With growing interest in natural landscapes, proper management of evergreen trees, which stay green even in winter, has become increasingly important. This study analyzed the distribution area of evergreen trees using vegetation indices based on an unmanned aerial vehicle (UAV). First, RGB and NIR+RG cameras were mounted on a fixed-wing UAV, and image mosaics were produced using GCPs in Pix4D software. The normalized difference vegetation index (NDVI) and the soil adjusted vegetation index (SAVI) were then calculated from the acquired ortho-mosaic images using band math functions. Validation points were used to evaluate the accuracy of the evergreen tree distribution for each range of index values, and the analysis showed that the kappa coefficient was highest, at 0.822 and 0.816 respectively, for "NDVI > 0.5" and "SAVI > 0.7". The evergreen tree areas for "NDVI > 0.5" and "SAVI > 0.7" were 11,824 m² and 15,648 m², respectively, corresponding to 4.8% and 6.3% of the total area. These results suggest that UAVs can supply up-to-date, high-resolution information for vegetation-related work on the urban environment, air pollution, climate change, and the heat island effect.
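
For reference, the two indices used above have standard definitions: NDVI = (NIR - Red) / (NIR + Red) and SAVI = (NIR - Red)(1 + L) / (NIR + Red + L). A minimal sketch with synthetic reflectance bands follows; the soil adjustment factor L = 0.5 and the band arrays are assumptions, not values from the study.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def savi(nir, red, L=0.5):
    return (nir - red) * (1.0 + L) / (nir + red + L + 1e-9)

# Hypothetical co-registered reflectance bands from the ortho-mosaic, values in [0, 1].
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.40, size=(100, 100))
nir = rng.uniform(0.20, 0.80, size=(100, 100))

evergreen_ndvi = ndvi(nir, red) > 0.5   # thresholds reported in the abstract above
evergreen_savi = savi(nir, red) > 0.7
print("NDVI > 0.5 pixels:", int(evergreen_ndvi.sum()),
      "| SAVI > 0.7 pixels:", int(evergreen_savi.sum()))
```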

Accuracy Assessment of Stereoscopy-Based Digital Mapping Using Unmanned Aerial Vehicle Images (무인항공기 영상을 이용한 입체시기반 수치도화 정확도 평가)

  • Yun, Kong-Hyun; Kim, Deok-In; Song, Yeong Sun
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.111-121 / 2018
  • In this research, digital elevation models, a true-ortho image, and 3-dimensional digitally compiled data were generated and evaluated from unmanned aerial vehicle stereoscopic images by applying photogrammetric principles. To implement stereoscopic viewing, a digital photogrammetric workstation must be used; in this study, GEOMAPPER 1.0, developed by the Ministry of Trade, Industry and Energy, was used. To realize stereoscopic vision with two overlapping images from the unmanned aerial vehicle, the interior and exterior orientation parameters must be calculated, and in particular the lens distortion of the non-metric camera must be accurately compensated for stereoscopic viewing. In this work, the photogrammetric orientation process was conducted with the commercial software PhotoScan 1.4. A fixed-wing KRobotics KD-2 was used to acquire the UAV images. A true-ortho photo was generated, and a digital topographic map was partially produced. Finally, we present an error analysis of the generated digitally compiled map. The results confirm that digital terrain maps at scales of 1:2,500 to 1:3,000 can be produced using the stereoscopic method.
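
A small sketch of the lens-distortion compensation step mentioned above, not the authors' PhotoScan workflow: a frame is undistorted with OpenCV using hypothetical intrinsics and Brown distortion coefficients (k1, k2, p1, p2, k3), such as those a self-calibrating bundle adjustment might estimate.

```python
import cv2
import numpy as np

img = cv2.imread("uav_frame.png")                      # hypothetical UAV frame
h, w = img.shape[:2]

# Hypothetical interior orientation: focal length and principal point in pixels.
K = np.array([[3500.0,    0.0, w / 2.0],
              [   0.0, 3500.0, h / 2.0],
              [   0.0,    0.0,     1.0]])
dist = np.array([-0.12, 0.05, 0.001, -0.0005, 0.0])    # hypothetical k1, k2, p1, p2, k3

new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
undistorted = cv2.undistort(img, K, dist, None, new_K)
cv2.imwrite("uav_frame_undistorted.png", undistorted)
```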