• Title/Summary/Keyword: Pixel-Based

Real-Time Vehicle License Plate Recognition System Using Adaptive Heuristic Segmentation Algorithm (적응 휴리스틱 분할 알고리즘을 이용한 실시간 차량 번호판 인식 시스템)

  • Jin, Moon Yong;Park, Jong Bin;Lee, Dong Suk;Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering / v.3 no.9 / pp.361-368 / 2014
  • The LPR (license plate recognition) system has been developed for efficient control of complex traffic environments and is currently used in many places. However, because of light, noise, background changes, environmental changes, and damaged plates, it works only in limited environments, which makes real-time use difficult. This paper presents a heuristic segmentation algorithm that is robust to noise and illumination changes and introduces a real-time license plate recognition system based on it. In the first step, we detect the plate using Haar-like features and AdaBoost; this enables rapid detection through integral images and a cascade structure. In the second step, we determine the type of license plate using adaptive histogram equalization and bilateral filtering for denoising, and segment the characters accurately based on adaptive thresholding, pixel projection, and prior knowledge. The last step is character recognition, which uses histogram of oriented gradients (HOG) features with a multi-layer perceptron (MLP) for numbers and a support vector machine (SVM) for Korean characters, respectively. The experimental results show a license plate detection rate of 94.29% and a license plate false alarm rate of 2.94%. In the character segmentation stage, the character hit rate is 97.23% and the character false alarm rate is 1.37%. In character recognition, the average character recognition rate is 98.38%. The total average running time of the proposed method is 140 ms, demonstrating an efficient and robust real-time system.
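
As a rough illustration of the recognition stage described in this abstract, the following Python sketch pairs HOG features with an SVM classifier using OpenCV and scikit-learn. The window size, HOG parameters, and placeholder training patches are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

# HOG descriptor for 20x40-pixel character patches (parameters are assumptions).
hog = cv2.HOGDescriptor((20, 40), (10, 10), (5, 5), (5, 5), 9)

def hog_features(char_img):
    """Resize a character patch to the HOG window and compute its feature vector."""
    patch = cv2.resize(char_img, (20, 40))  # (width, height)
    return hog.compute(patch).ravel()

# Placeholder training patches and labels; in the paper these would come from
# the segmentation stage, not from synthetic images like these.
train_imgs = [np.zeros((40, 20), np.uint8), np.full((40, 20), 255, np.uint8)]
train_labels = [0, 1]

X = np.array([hog_features(im) for im in train_imgs])
clf = SVC(kernel="rbf").fit(X, train_labels)  # classifier over HOG vectors

def recognize(char_img):
    """Predict the class id of a segmented character image."""
    return clf.predict([hog_features(char_img)])[0]
```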

Relative RPCs Bias-compensation for Satellite Stereo Images Processing (고해상도 입체 위성영상 처리를 위한 무기준점 기반 상호표정)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.4 / pp.287-293 / 2018
  • Generating epipolar-resampled images by reducing the y-parallax is a prerequisite for accurate and efficient processing of satellite stereo images. Minimizing the y-parallax requires accurate sensor modeling, which is carried out with ground control points. However, this approach is not feasible over inaccessible areas where control points cannot easily be acquired. In such cases, a relative orientation can be carried out using only conjugate points, but its accuracy for satellite sensors needs to be studied because their geometry differs from that of well-known frame-type cameras. Therefore, we carried out bias compensation of RPCs (rational polynomial coefficients) without any ground control points to study its precision and its effect on the y-parallax in epipolar-resampled images. The conjugate points were generated by stereo image matching with outlier removal. RPC compensation was performed based on affine and polynomial models. We analyzed the reprojection error of the compensated RPCs and the y-parallax in the resampled images. The experimental results showed y-parallax at the one-pixel level for Kompsat-3 stereo data.
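
Image-space bias compensation of RPCs from conjugate points is often posed as a small least-squares fit. The sketch below shows a generic affine version of that idea; the variable names and model form are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fit_affine_bias(samp, line, d_samp, d_line):
    """Least-squares affine correction fitted to image-space offsets
    observed at conjugate points (all inputs are 1-D arrays)."""
    A = np.column_stack([np.ones_like(samp), samp, line])
    coef_s, *_ = np.linalg.lstsq(A, d_samp, rcond=None)
    coef_l, *_ = np.linalg.lstsq(A, d_line, rcond=None)
    return coef_s, coef_l

def apply_bias(samp, line, coef_s, coef_l):
    """Apply the fitted affine correction to RPC-projected image coordinates."""
    A = np.column_stack([np.ones_like(samp), samp, line])
    return samp + A @ coef_s, line + A @ coef_l
```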

Characteristics of Speckle Errors of SeaWiFS Chlorophyll-α Concentration in the East Sea (동해 SeaWiFS 클로로필-α 농도의 스펙클 오차 특성)

  • Chae, Hwa-Jeong;Park, Kyung-Ae
    • Journal of the Korean Earth Science Society / v.30 no.2 / pp.234-246 / 2009
  • Characteristics of speckle errors in Sea-viewing Wide Field-of-view Sensor (SeaWiFS) chlorophyll-a concentration were analyzed, and their causes were investigated using SeaWiFS data over the East Sea from September 1997 to December 2007. The speckles, with anomalously high concentrations, were randomly distributed and showed a remarkably high bias of more than 10 mg/m³ compared with their neighboring pixels. The speckles tended to appear frequently in winter, which might be related to cloud distribution. Ten-year averaged winter cloudiness was much higher over the southeastern part of the East Sea, where speckles were frequent, than over the northwestern part. Statistical analysis showed that the number of speckles increased as cloudiness increased. The normalized water-leaving radiance of a speckle pixel was considerably low at the short wavelengths (443, 490, and 510 nm), whereas the radiance in the 555 nm band was normal. These low measurements produced extraordinarily high concentrations from the chlorophyll-a estimation formula. This study presents the speckle errors of SeaWiFS chlorophyll-a concentration in the East Sea and suggests that more reliable chlorophyll-a data based on appropriate ocean color remote sensing techniques should be used for oceanic application research.
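
To illustrate how depressed blue-band radiances inflate the chlorophyll estimate, the sketch below evaluates a standard SeaWiFS-style maximum band ratio formula. The coefficients are the commonly cited OC4v4 values and are an assumption here; they are not necessarily the exact formula used in the paper.

```python
import numpy as np

def oc4_chlorophyll(rrs443, rrs490, rrs510, rrs555):
    """Maximum-band-ratio chlorophyll-a estimate (mg/m^3), OC4v4-style."""
    r = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    return 10.0 ** (0.366 - 3.067 * r + 1.930 * r**2 + 0.649 * r**3 - 1.532 * r**4)

# A normal pixel versus a speckle-like pixel whose blue bands are depressed
# while the 555 nm band stays normal (reflectance values are placeholders):
print(oc4_chlorophyll(0.004, 0.004, 0.003, 0.002))      # moderate concentration
print(oc4_chlorophyll(0.0005, 0.0005, 0.0005, 0.002))   # spuriously high value
```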

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference / 2002.10a / pp.65-65 / 2002
  • In remote sensing, images are acquired over the same area by sensors with different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data as a pixel-level data fusion scheme is to concatenate the data into one vector as if they were measurements from a single sensor. However, multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. To model and eliminate the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the certainty of an individual band, while the conditional information variation reflects the degree of dependence between different bands. One data set may be much less reliable than the others and may even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is used to measure the degree of reliability. A group of positively dependent bands can jointly gather more information than a group of independent bands, but when bands are negatively dependent, their combined analysis may give worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between bands. Each subset is classified separately, and a decision-level data fusion scheme is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates being considered for merging. In the first level, the image is partitioned into a number of regions, which are sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
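
The second level of the described algorithm groups region statistics into a small number of classes. The sketch below uses an off-the-shelf agglomerative clustering (Ward linkage in SciPy) as a generic stand-in; the paper's own similarity and information variation measures are not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# region_stats: one row per region, columns = mean band values (placeholder data).
rng = np.random.default_rng(0)
region_stats = rng.normal(size=(200, 4))

Z = linkage(region_stats, method="ward")         # pairwise merge tree over regions
labels = fcluster(Z, t=6, criterion="maxclust")  # cut into ~6 spectral groups
print(labels[:10])
```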

Image Segmentation Algorithm Based on Geometric Information of Circular Shape Object (원형객체의 기하학적 정보를 이용한 영상분할 알고리즘)

  • Eun, Sung-Jong;WhangBo, Taeg-Keun
    • Journal of Internet Computing and Services / v.10 no.6 / pp.99-111 / 2009
  • The result of image segmentation, an indispensable process in image processing, significantly affects the analysis of an image. Despite its significance, image segmentation runs into problems when the variation of pixel values is large or when the boundary between the background and an object is not clear; these problems occur frequently when many objects in an image are placed very close together. In this paper, for the case where the objects in an image are circular, we propose an algorithm that segments each object using the geometric characteristics of circular shapes. The proposed algorithm consists of four steps. The first is extraction of the boundary edges of the whole object. The second step is to find candidate points for further segmentation using the boundary edges from the first step. The third step is to calculate representative circles using the candidate points. The final step is to draw the lines connecting the overlapping points produced by several erosions and dilations of the representative circles. To verify its efficiency, the proposed algorithm is compared with three well-known cell segmentation algorithms in terms of the number of segmented regions and the correctness of the inner segmentation line. As a result, the proposed algorithm outperforms the well-known algorithms on both measures, by 16.7% and 21.8%, respectively.
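
Two of the ingredients named above, boundary edge extraction and a representative circle per object, plus the erosion/dilation step, can be sketched with OpenCV as follows. The synthetic image and parameters are placeholders; the candidate-point and overlap-line logic of the paper is not reproduced.

```python
import numpy as np
import cv2

img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 60, 255, -1)            # placeholder circular object

edges = cv2.Canny(img, 50, 150)                     # step 1: boundary edges
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    (cx, cy), radius = cv2.minEnclosingCircle(cnt)  # a representative circle
    print(f"center=({cx:.1f}, {cy:.1f}) radius={radius:.1f}")

# Morphological erosion/dilation of the object mask, as used in the final step.
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(img, kernel, iterations=3)
dilated = cv2.dilate(img, kernel, iterations=3)
```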

LiDAR Chip for Automated Geo-referencing of High-Resolution Satellite Imagery (라이다 칩을 이용한 고해상도 위성영상의 자동좌표등록)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.4_1 / pp.319-326 / 2014
  • An accurate geo-referencing process using ground control points is a prerequisite for effective end use of HRSI (high-resolution satellite imagery). Since conventional control point acquisition by a human operator takes a long time, automated matching to existing reference data has been gaining popularity. Among the many options for reference data, airborne LiDAR (Light Detection And Ranging) data shows high potential due to its high spatial resolution and vertical accuracy; additionally, it is a 3-dimensional point cloud free from relief displacement. Recently, a new matching method between LiDAR data and HRSI was proposed that is based on projecting the whole LiDAR data set into the HRSI domain, but importing and processing the large amount of LiDAR data is time-consuming. Therefore, we propose local LiDAR chip generation for HRSI geo-referencing. In this procedure, a LiDAR point cloud is rasterized into an ortho image using the digital elevation model. We then select local areas that contain a meaningful amount of edge information to create LiDAR chips of small data size. We tested the LiDAR chips for fully automated geo-referencing with Kompsat-2 and Kompsat-3 data, and the experimental results showed mean accuracy at the one-pixel level.
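
The chip-selection idea, keeping only local areas of the rasterized LiDAR ortho image that carry enough edge information, can be sketched as below. The tile size, edge detector, and density threshold are illustrative assumptions.

```python
import numpy as np
import cv2

def select_lidar_chips(ortho, tile=256, min_edge_ratio=0.02):
    """Return (row, col) origins of tiles whose Canny edge density is high enough."""
    chips = []
    edges = cv2.Canny(ortho, 50, 150)
    for r in range(0, ortho.shape[0] - tile + 1, tile):
        for c in range(0, ortho.shape[1] - tile + 1, tile):
            density = np.count_nonzero(edges[r:r+tile, c:c+tile]) / tile**2
            if density >= min_edge_ratio:
                chips.append((r, c))
    return chips

# ortho would be the rasterized LiDAR intensity/height image; a placeholder is used here.
ortho = (np.random.default_rng(1).random((1024, 1024)) * 255).astype(np.uint8)
print(len(select_lidar_chips(ortho)))
```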

Estimation of Classification Accuracy of JERS-1 Satellite Imagery according to the Acquisition Method and Size of Training Reference Data (훈련지역의 취득방법 및 규모에 따른 JERS-1위성영상의 토지피복분류 정확도 평가)

  • Ha, Sung-Ryong;Kyoung, Chon-Ku;Park, Sang-Young;Park, Dae-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.1 / pp.27-37 / 2002
  • Land cover classification accuracy has been considered one of the major issues in estimating pollution loads generated from diffuse land-use patterns in a watershed. This research assesses the effects of the acquisition method and sample size of training reference data on the land cover classification accuracy of imagery acquired by the optical sensor (OPS) on JERS-1. Two data acquisition methods were considered for preparing the training data: the first assigns a land cover type to a specific pixel based on the researcher's subjective judgment of current land use, while the second relies on an aerial photograph combined with digital maps in a GIS. Three sample sizes, 0.3%, 0.5%, and 1.0% of all pixels, were applied to examine the consistency of the classified land cover with the training data of the corresponding pixels. A maximum likelihood scheme was applied to classify the land use patterns in the JERS-1 imagery. The classification run using the aerial photograph achieved 18% higher consistency with the training data than the run based on the researcher's subjective judgment. Regarding sample size, it is proposed that the training area cover at least 1% of all pixels in the study area in order to achieve 95% accuracy with JERS-1 satellite imagery over a typical small-to-medium-sized urbanized area.
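
The abstract names a maximum likelihood scheme for the classification runs. A minimal per-pixel Gaussian maximum likelihood sketch is shown below with placeholder training pixels; it is a textbook version, not the exact software used in the study.

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_ml(train_pixels, train_labels):
    """Estimate a Gaussian (mean, covariance) per land-cover class."""
    models = {}
    for cls in np.unique(train_labels):
        samples = train_pixels[train_labels == cls]
        models[cls] = multivariate_normal(samples.mean(axis=0),
                                          np.cov(samples, rowvar=False))
    return models

def classify_ml(pixels, models):
    """Assign each pixel to the class with the highest log-likelihood."""
    scores = np.column_stack([m.logpdf(pixels) for m in models.values()])
    classes = np.array(list(models.keys()))
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(2)
train_pixels = rng.normal(size=(300, 4))     # 4-band placeholder spectra
train_labels = rng.integers(0, 3, size=300)  # 3 placeholder classes
models = train_ml(train_pixels, train_labels)
print(classify_ml(rng.normal(size=(10, 4)), models))
```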

Spatial Integration of Multiple Data Sets regarding Geological Lineaments using Fuzzy Set Operation (퍼지집합연산을 통한 다중 지질학적 선구조 관련자료의 공간통합)

  • 이기원;지광훈
    • Korean Journal of Remote Sensing / v.11 no.3 / pp.49-60 / 1995
  • Features of geological lineaments generally play an important role in data interpretation concerning geological processes, mineral exploration, or natural hazard risk estimation. However, there are intrinsic discordances between lineament-related features extracted from surficial geological surveys and those extracted from satellite imagery; nevertheless, no data set containing such information should be considered less meaningful for its own task. For effective utilization of extracted lineaments, a mathematical scheme based on fuzzy set theory for the practical integration of various types of rasterized data sets is studied. As a real application, the geological map named the Homyeong sheet (1:50,000) and Landsat TM imagery covering the same area were used, and lineament-related data sets such as lineaments on the geological map, lineaments extracted from a false-color satellite image composite, and major drainage patterns were utilized. For the data fusion process, fuzzy membership values of pixels in each data set were experimentally assigned by percentile, and the fuzzy algebraic sum operator was then tested. As a result, the lineaments integrated by this well-known operator are regarded as reasonable newly generated ones. In conclusion, an implementation within available GISs, or a stand-alone module for general applications of this simple scheme, can serve as an effective tool for further spatial integration studies providing decision-supporting information, or as a kind of spatial reasoning scheme.
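
The fuzzy algebraic sum used for the integration step is 1 minus the product of (1 - membership) over the co-registered layers. The sketch below implements that operator together with a simple percentile-based membership assignment, which is an assumption standing in for the paper's experimentally assigned memberships.

```python
import numpy as np

def percentile_membership(raster):
    """Map pixel values to [0, 1] memberships by their percentile rank."""
    ranks = raster.argsort(axis=None).argsort(axis=None).reshape(raster.shape)
    return ranks / (raster.size - 1)

def fuzzy_algebraic_sum(memberships):
    """Combine a list of membership rasters: 1 - prod(1 - mu_i)."""
    stack = np.stack(memberships)
    return 1.0 - np.prod(1.0 - stack, axis=0)

rng = np.random.default_rng(3)
layers = [percentile_membership(rng.random((100, 100))) for _ in range(3)]
integrated = fuzzy_algebraic_sum(layers)
print(integrated.min(), integrated.max())
```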

Development of Crack Detection System for Highway Tunnels using Imaging Device and Deep Learning (영상장비와 딥러닝을 이용한 고속도로 터널 균열 탐지 시스템 개발)

  • Kim, Byung-Hyun;Cho, Soo-Jin;Chae, Hong-Je;Kim, Hong-Ki;Kang, Jong-Ha
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.25 no.4 / pp.65-74 / 2021
  • In order to efficiently inspect the rapidly increasing number of aging tunnels in many developed countries, many inspection methodologies using imaging equipment and image processing have been proposed. However, most existing methodologies evaluated their performance on clean concrete surfaces over limited areas where no other objects exist. Therefore, this paper proposes a six-step framework for developing a deep learning model for tunnel crack detection. The proposed method is mainly based on negative-sample (non-crack object) training and Cascade Mask R-CNN. The framework consists of six steps: searching for cracks in images captured from real tunnels, labeling cracks at the pixel level, training a deep learning model, collecting non-crack objects, retraining the deep learning model with the collected non-crack objects, and constructing the final training dataset. To implement the proposed framework, Cascade Mask R-CNN, an instance segmentation model, was trained with 1,561 general crack images and 206 non-crack images. To examine the applicability of the trained model to real-world tunnel crack detection, field testing was conducted on tunnel spans about 200 m long where electric wires and lights are prevalent. In the experiments, the trained model showed 99% precision and 92% recall, demonstrating the excellent field applicability of the proposed framework.
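
The reported precision and recall can be computed at the instance level by matching predicted and ground-truth crack masks by IoU. The sketch below is a generic scoring routine under assumed thresholds and matching rules, not the paper's exact evaluation protocol.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def precision_recall(pred_masks, gt_masks, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truth by IoU."""
    matched_gt, tp = set(), 0
    for p in pred_masks:
        ious = [mask_iou(p, g) for g in gt_masks]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thresh and best not in matched_gt:
            tp += 1
            matched_gt.add(best)
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```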

The Design of the Obstacle Avoidances System for Unmanned Vehicle Using a Depth Camera (깊이 카메라를 이용한 무인이동체의 장애물 회피 시스템 설계)

  • Kim, Min-Joon;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.224-226 / 2016
  • With technical development and the rapid increase in private demand, the new market for unmanned vehicles, which combine the characteristics of 'unmanned automation' and 'vehicles', is growing rapidly. Even though pilot driving is currently allowed in some countries, no country has institutionalized the formal operation of self-driving cars. With existing vehicles, safety incidents frequently occur due to malfunctions of the rear sensor, blind spots of the rear camera, or drivers' carelessness. Once such flaws are addressed, the relevant regulations for the commercialization of self-driving cars and small drones could be relaxed. In contrast to the ultrasonic and laser sensors used in existing vehicles, this paper attempts distance measurement using a depth sensor. A depth camera calculates distance data based on the TOF (time-of-flight) method, which measures the time difference between emitting laser or infrared light onto an object or area and receiving the reflected beam. As this camera obtains depth data for each pixel of the CCD sensor, it can collect depth data in real time. This paper proposes to solve the problems mentioned above by using real-time depth data and to design an obstacle avoidance system based on distance measurement.
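
The TOF relation behind the depth camera is distance = speed of light × time difference / 2, and per-pixel depths can feed a simple proximity check. The sketch below illustrates both; the camera interface, stop distance, and pixel-count threshold are illustrative assumptions.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    """Distance from the emit-to-receive time difference (one-way range)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def obstacle_ahead(depth_map_m, stop_distance_m=1.0, min_pixels=500):
    """Flag an obstacle if enough valid pixels are closer than the stop distance."""
    close = np.logical_and(depth_map_m > 0, depth_map_m < stop_distance_m)
    return np.count_nonzero(close) >= min_pixels

print(tof_distance(10e-9))        # ~1.5 m for a 10 ns round trip
depth = np.full((480, 640), 3.0)  # placeholder depth frame, 3 m everywhere
depth[200:280, 300:380] = 0.8     # inject a close region
print(obstacle_ahead(depth))      # True: the injected region is within 1 m
```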
