• Title/Summary/Keyword: Image Edge


Vector Quantization Codebook Design Using Unbalanced Binary Tree and DCT Coefficients (불균형 이진트리와 DCT 계수를 이용한 벡터양자화 코드북)

  • 이경환;최정현;이법기;정원식;김경규;김덕규
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.12B / pp.2342-2348 / 1999
  • DCT-based codebook design using a binary tree was previously proposed to reduce computation time and to solve the initial-codebook problem. In that method, the DCT coefficient of the training vectors with the maximum variance serves as the split key, the mean of the coefficients at that location is used as the split threshold, and a balanced binary tree is formed for the final codebook. However, edge degradation appears in the reconstructed image, since blocks from shaded regions are frequently selected as codevectors. In this paper, we propose a DCT-based vector quantization codebook design using an unbalanced binary tree, in which the node with the largest split key is split first, so that the number of edge codevectors is increased. Simulation results show that this method reconstructs edge regions faithfully and yields a higher PSNR than previous methods.
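
The split rule described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python version (not the authors' code): training blocks are DCT-transformed, and at each step the leaf whose dominant coefficient has the largest variance (the split key) is divided at the mean of that coefficient. Block size, codebook size, and helper names are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(blocks):
    """2-D DCT of each training block, flattened into a feature vector."""
    return np.array([dctn(b, norm='ortho').ravel() for b in blocks])

def unbalanced_tree_codebook(blocks, codebook_size=16):
    # blocks: array-like of shape (N, 4, 4), e.g. 4x4 tiles of a training image
    feats = dct_features(blocks)
    leaves = [np.arange(len(blocks))]              # each leaf holds indices of its training vectors
    while len(leaves) < codebook_size:
        # split key of a leaf = largest per-coefficient variance inside the leaf
        keys = [feats[idx].var(axis=0).max() if len(idx) > 1 else -1.0 for idx in leaves]
        i = int(np.argmax(keys))
        if keys[i] <= 0:                           # nothing left worth splitting
            break
        idx = leaves.pop(i)
        coef = feats[idx].var(axis=0).argmax()     # coefficient location with maximum variance
        thr = feats[idx, coef].mean()              # split threshold = mean at that location
        left, right = idx[feats[idx, coef] <= thr], idx[feats[idx, coef] > thr]
        leaves.extend([left, right])
    # codevectors = mean block of each leaf, back in the spatial domain
    return np.array([np.asarray(blocks)[leaf].mean(axis=0) for leaf in leaves])
```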


Edge detection method using unbalanced mutation operator in noise image (잡음 영상에서 불균등 돌연변이 연산자를 이용한 효율적 에지 검출)

  • Kim, Su-Jung;Lim, Hee-Kyoung;Seo, Yo-Han;Jung, Chai-Yeoung
    • The KIPS Transactions:PartB / v.9B no.5 / pp.673-680 / 2002
  • This paper proposes a method for detecting edges using evolutionary programming and a momentum back-propagation algorithm. The evolutionary programming omits the crossover operation, in view of its limited contribution and its computational cost, and uses only selection and mutation operators. The momentum back-propagation algorithm adds a momentum term to the weight change at each learning step. Because the learning rate is set small in the conventional back-propagation algorithm, the weight change per step is small and learning is slow; the momentum term removes this problem. The proposed EP-MBP method outperforms the GA-BP method in both learning time and detection rate, showing reduced learning time and effective edge detection.
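
As context for the learning-rate discussion above, here is a minimal sketch of the momentum weight update that distinguishes momentum back-propagation from the plain algorithm; the layer size, learning rate, and momentum constant are illustrative assumptions.

```python
import numpy as np

def momentum_update(w, grad, velocity, lr=0.1, momentum=0.9):
    """delta_w(t) = -lr * dE/dw + momentum * delta_w(t-1)"""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# usage: keep one velocity array per weight matrix across training steps
w = np.random.randn(9, 1) * 0.1        # e.g. weights from a 3x3 pixel window to one output
v = np.zeros_like(w)
for step in range(100):
    grad = np.random.randn(*w.shape)    # stand-in for the back-propagated gradient
    w, v = momentum_update(w, grad, v)
```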

Building Roof Reconstruction in Remote Sensing Image using Line Segment Extraction and Grouping (선소의 추출과 그룹화를 이용한 원격탐사영상에서 건물 지붕의 복원)

  • 예철수;전승헌;이호영;이쾌희
    • Korean Journal of Remote Sensing / v.19 no.2 / pp.159-169 / 2003
  • This paper presents a method for automatic 3-D building reconstruction from high-resolution aerial imagery. First, noise is eliminated by edge-preserving filtering, and the images are then segmented by a watershed algorithm, which preserves the locations of edge pixels. To extract line segments between control points on the boundary of each region, the curvature of each boundary pixel is calculated and the control points are found. Line-segment linking is performed according to the direction and length of the line segments, and the position of each line segment is adjusted using the gradient magnitudes of all of its pixels. Coplanar grouping and polygonal-patch formation are performed per region by selecting 3-D line segments that are matched using epipolar geometry and flight information. The algorithm has been applied to high-resolution aerial images, and the results show accurate 3-D building reconstruction.
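
A rough sketch of the early stages described above (edge-preserving smoothing, gradient watershed segmentation, and curvature-based control points on a region boundary) might look like the following; the file name, thresholds, and seeding strategy are assumptions, not the authors' settings.

```python
import cv2
import numpy as np
from scipy.ndimage import label
from skimage.segmentation import watershed

img = cv2.imread('aerial.png', cv2.IMREAD_GRAYSCALE)    # hypothetical input image
smooth = cv2.bilateralFilter(img, 9, 50, 50)             # edge-preserving filtering

# watershed on the gradient magnitude, seeded from flat (low-gradient) areas,
# keeps region borders on edge pixels
grad = cv2.morphologyEx(smooth, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
markers, _ = label(grad < 10)
regions = watershed(grad, markers)

def control_points(boundary, k=5, angle_thresh=0.3):
    """Pick high-curvature points on a region boundary given as an (N, 2) array
    of pixel coordinates (e.g. one squeezed contour from cv2.findContours)."""
    v1 = boundary - np.roll(boundary, k, axis=0)
    v2 = np.roll(boundary, -k, axis=0) - boundary
    cos = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-9)
    return np.where(np.arccos(np.clip(cos, -1.0, 1.0)) > angle_thresh)[0]
```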

Performance Analysis of Hough Transform Using Extended Lookup Table (확장 참조표를 활용한 허프변환의 성능 분석)

  • Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1868-1873 / 2021
  • This paper proposes a Hough transform (HT) using an extended lookup table (LUT) to reduce the computational burden of the HT, a typical straight-line detection algorithm, and analyzes its performance. The conventional HT also uses a LUT for calculating the parameter ρ of all straight lines passing through an edge pixel of interest (ePel) in order to reduce the computational burden. The proposed HT, however, adopts an extended LUT that can be applied to the straight lines across not only the ePel but also its neighboring edge pixels, yielding a further reduction in computation. This paper proves the validity of the proposed algorithm mathematically and verifies it through simulation. The simulation results show that the proposed HT reduces the multiplications to between 49.6% and 16.1% of those of the conventional HT, depending on the image and the extended LUT applied.
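
For context, a conventional LUT-based Hough transform can be sketched as below: the cosine and sine values are computed once per angle and reused for every edge pixel. The paper's extended LUT, which additionally reuses work across an edge pixel's neighbours, is not reproduced here.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Accumulate rho = x*cos(theta) + y*sin(theta) for every edge pixel,
    with cos/sin taken from a table built once."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    cos_lut, sin_lut = np.cos(thetas), np.sin(thetas)    # the lookup table
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rho = np.round(x * cos_lut + y * sin_lut).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    return acc, thetas, diag

# a vertical line at x = 20 should peak near rho = 20, theta = 0
edges = np.zeros((64, 64), dtype=np.uint8)
edges[:, 20] = 1
acc, thetas, diag = hough_lines(edges)
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
print(rho_i - diag, np.degrees(thetas[th_i]))            # -> 20 0.0
```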

Iterative Generalized Hough Transform using Multiresolution Search (다중해상도 탐색을 이용한 반복 일반화 허프 변환)

  • ;W. Nick Street
    • Journal of KIISE:Software and Applications / v.30 no.10 / pp.973-982 / 2003
  • This paper presents an efficient method for automatically detecting objects in a given image. The generalized Hough transform (GHT) is a robust template-matching algorithm for automatic object detection; many different templates are applied in order to find objects of various shapes and sizes. Every boundary detected by the GHT can be used as an initial outline for more precise contour-finding techniques. The main weakness of the GHT is its excessive time and memory requirements. To overcome this drawback, the proposed algorithm uses a multiresolution search, scaling the original image down to half-sized and quarter-sized images. Using the information from the first iterative GHT on the quarter-sized image, the range of nucleus sizes is determined in order to limit the parameter space for the half-sized image. After the second iterative GHT on the half-sized image, nuclei are detected by a fine search and segmented with edge information, which helps determine the exact boundary. The experimental results show that this method reduces computation time and memory usage without loss of accuracy.
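
The coarse-to-fine idea can be sketched as follows. For brevity this uses OpenCV's circular Hough transform as a stand-in for the generalized Hough transform: the pass on the quarter-sized image only narrows the radius range searched on the half-sized image. The file name and all parameters are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread('nuclei.png', cv2.IMREAD_GRAYSCALE)     # hypothetical input image
quarter = cv2.resize(img, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)

# coarse pass: broad radius range on the quarter-sized image
coarse = cv2.HoughCircles(quarter, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                          param1=100, param2=20, minRadius=2, maxRadius=20)
if coarse is not None:
    radii = coarse[0, :, 2] * 2                           # rescale radii to half-size coordinates
    r_min, r_max = int(radii.min()) - 2, int(radii.max()) + 2

    # fine pass: half-sized image, parameter space limited by the coarse result
    half = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    fine = cv2.HoughCircles(half, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                            param1=100, param2=30,
                            minRadius=max(r_min, 1), maxRadius=r_max)
```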

Developing Image Processing Program for Automated Counting of Airborne Fibers (이미지 처리를 통한 공기 중 섬유의 자동계수 알고리즘 프로그램 개발)

  • Choi, Sungwon;Lee, Heekong;Lee, Jong Il;Kim, Hyunwook
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.24 no.4 / pp.484-491 / 2014
  • Objectives: An image processing program for asbestos fibers, analyzing gradient components and partial linearity, was developed in order to segment fibers accurately. The objectives were to increase counting accuracy through a formulation of fiber size and shape and to guarantee robust fiber detection against noisy backgrounds. Methods: We utilized samples mixed with sand and sepiolite, which has a structure similar to asbestos. Sample concentrations of 0.01%, 0.05%, 0.1%, 0.5%, 1%, 2%, and 3% (w/w) were prepared. The sand used was homogenized after being sieved to less than 180 μm. Airborne samples were collected on MCE filters using a personal pump at a 2 L/min flow rate for 30 minutes. We used the NIOSH 7400 method for pre-treating and counting the fibers on the filters, and its results were compared with those of the image processing program. Results: The performance of the developed algorithm, when compared with the target images acquired by PCM, showed an average detection rate of 88.67%. The main causes of non-detection were missed fibers with a low degree of contrast and overlapping of faint and thin fibers. Some duplicate counting also occurred for fibers broken in the middle by overlapping particles. Conclusions: An image detection algorithm that can increase the accuracy of fiber counting was developed by considering edge direction when extracting fiber images. It showed results comparable to PCM analysis and can count fibers through real-time tracking by modeling branch points as a graph. This algorithm could be used to measure asbestos concentrations in real time if a suitable optical design is developed.
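
The counting rule used for the comparison can be illustrated with a short, hypothetical sketch (not the published program): threshold the PCM image, remove small specks, and keep elongated objects that satisfy the NIOSH 7400 fiber definition (longer than 5 μm with an aspect ratio of at least 3:1). The pixel calibration, file name, and thresholds are assumptions.

```python
import numpy as np
from skimage import io, filters, measure, morphology

UM_PER_PIXEL = 0.25                                  # assumed calibration of the PCM image

img = io.imread('pcm_field.png', as_gray=True)       # hypothetical input image
binary = img < filters.threshold_otsu(img)           # fibers darker than background
binary = morphology.remove_small_objects(binary, min_size=20)

count = 0
for region in measure.regionprops(measure.label(binary)):
    length = region.major_axis_length * UM_PER_PIXEL
    width = max(region.minor_axis_length, 1) * UM_PER_PIXEL
    if length > 5.0 and length / width >= 3.0:        # NIOSH 7400 fiber definition
        count += 1
print('fibers counted:', count)
```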

Evaluation of the Spatial Resolution for Exposure Class in Computed Radiography by Using the Modulation Transfer Function (변조전달함수를 이용한 컴퓨터 방사선영상의 감도 노출 분류에 따른 공간분해능 평가)

  • Seoung, Youl-Hun
    • Journal of Digital Convergence / v.11 no.8 / pp.273-279 / 2013
  • The purpose of this study was to present basic data for evaluating the spatial resolution of exposure classes (EC) in computed radiography (CR) using the modulation transfer function (MTF). The MTF was measured by the edge method using an image plate (IP) with 100 μm pixels. A standard beam quality, RQA5, based on the International Electrotechnical Commission (IEC) standard was used for the X-ray imaging. Digital images were acquired with the sensitivity set to EC 50, 100, 200, 300, 400, 600, 800, and 1200 while the IP was irradiated with X-rays. The MTF 50% and 10% values of the final images were analyzed using the image analysis programs Origin 8.0 and ImageJ. As a result, EC 200 gave the best spatial resolution at MTF 50% (1.979 ± 0.114 lp/mm) and MTF 10% (3.932 ± 0.041). Therefore, EC 200 could be useful for the diagnosis of diseases that require high spatial resolution, such as fractures.
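
The edge method referred to above proceeds from an edge-spread function (ESF) to a line-spread function (LSF) to the MTF; a minimal sketch with a synthetic edge profile is shown below. Only the 100 μm pixel pitch is taken from the abstract; everything else is illustrative.

```python
import numpy as np

pixel_pitch = 0.1                                            # mm (100 um IP pixel)
esf = np.clip(np.linspace(-3, 3, 128), -1, 1) * 0.5 + 0.5    # synthetic edge profile
lsf = np.gradient(esf)                                       # LSF = d(ESF)/dx
lsf /= lsf.sum()

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                                # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch)             # cycles per mm (= lp/mm)

def freq_at(level):
    """First spatial frequency where the MTF drops below the given level."""
    return freqs[np.argmax(mtf < level)]

print('MTF50: %.2f lp/mm, MTF10: %.2f lp/mm' % (freq_at(0.5), freq_at(0.1)))
```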

Novel License Plate Detection Method Based on Heuristic Energy

  • Sarker, Md.Mostafa Kamal;Yoon, Sook;Lee, Jaehwan;Park, Dong Sun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.12 / pp.1114-1125 / 2013
  • License plate detection (LPD) is a key component of automatic license plate recognition systems. Despite the success of license plate recognition (LPR) methods over the past decades, the problem remains challenging due to the diversity of plate formats and the varied outdoor illumination conditions during image acquisition. This paper aims at the automatic detection of car license plates via image processing techniques. We propose a real-time, robust method for license plate detection using a Heuristic Energy Map (HEM). In a vehicle image, the license plate region contains many components and edges. We obtain the edge-energy values of the image using a box filter and search for the license plate among regions with high energy values. Using this energy information, the Heuristic Energy Map, we can detect the license plate region in a vehicle image with very high probability. The proposed method consists of two main steps: region of interest (ROI) detection and license plate detection. It performs better in speed and accuracy than most existing license plate detection methods, detecting a license plate within 130 milliseconds with a detection rate of 99.2% on a 3.10 GHz Intel Core i3-2100 personal computer with 4.00 GB of RAM.
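
A minimal sketch of the edge-energy idea (not the full HEM pipeline with ROI detection) might look like this: vertical edge strength is accumulated with a box filter and the highest-energy window is taken as a plate candidate. The window size and file name are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread('vehicle.jpg', cv2.IMREAD_GRAYSCALE)         # hypothetical input image
edges = np.abs(cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3))     # vertical edges dominate in plates
energy = cv2.boxFilter(edges, cv2.CV_32F, (101, 31))          # edge-energy map over a plate-shaped window

y, x = np.unravel_index(np.argmax(energy), energy.shape)      # highest-energy location
roi = img[max(y - 15, 0):y + 16, max(x - 50, 0):x + 51]       # candidate plate region
```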

A New Stereo Matching Method based on Reliability Space (신뢰도 공간에 기반한 스테레오 정합 기법)

  • Lee, Seung-Tae;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.82-90 / 2010
  • In this paper, a new stereo matching method based on a reliability space is proposed to acquire 3-D information from 2-D images. Conventional stereo matching methods sacrifice speed to achieve high accuracy; the proposed method increases matching speed while maintaining high accuracy. It first builds the disparity space image by comparing all pixels of the stereo pair, then produces the reliability space by analyzing these values, and generates the disparity map by comparing the reliability space across disparities. Moreover, regional boundary errors are corrected by classifying the boundary of each region with reference to color edges. The performance of the proposed stereo matching method is verified by various experiments: computation cost is reduced by 30.6%, while the image quality remains similar to that of the existing method.
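
A sketch of a disparity space image with a simple confidence measure is given below as a stand-in for the reliability space; the paper's exact reliability definition and boundary correction are not reproduced, and a rectified grayscale pair with a block-SAD cost is assumed.

```python
import cv2
import numpy as np

def disparity_space(left, right, max_disp=64, block=7):
    """Cost volume: cost[d, y, x] of matching left (y, x) at disparity d."""
    h, w = left.shape
    cost = np.zeros((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        shifted = np.zeros_like(right, dtype=np.float32)
        shifted[:, d:] = right[:, :w - d] if d else right
        diff = np.abs(left.astype(np.float32) - shifted)
        cost[d] = cv2.boxFilter(diff, cv2.CV_32F, (block, block))   # block-averaged absolute difference
    return cost

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)      # hypothetical rectified pair
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)
cost = disparity_space(left, right)
disparity = cost.argmin(axis=0)                           # winner-take-all disparity map

sorted_cost = np.sort(cost, axis=0)
reliability = sorted_cost[1] / (sorted_cost[0] + 1e-6)    # crude confidence: second-best / best cost
```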

Patch based Multi-Exposure Image Fusion using Unsharp Masking and Gamma Transformation (언샤프 마스킹과 감마 변환을 이용한 패치 기반의 다중 노출 영상 융합)

  • Kim, Jihwan;Choi, Hyunho;Jeong, Jechang
    • Journal of Broadcast Engineering / v.22 no.6 / pp.702-712 / 2017
  • In this paper, we propose an unsharp masking algorithm that uses the Laplacian as a weight map for signal structure, together with a gamma transformation algorithm that uses the image mean intensity as a weight map for mean intensity. Conventional patch-based weight maps have the disadvantage that the brightness in the image is shifted to one side in the signal-structure and mean-intensity terms, so detailed information is lost. We improve the detail using patch-wise unsharp masking and linearly combine gamma-transformed values using the average brightness of the global and local images. With the proposed algorithm, detail information such as edges is preserved and the subjective image quality is improved by adjusting the brightness. Experimental results show that the proposed algorithm performs better than conventional algorithms.
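
A simplified, pixel-wise sketch of the two cues described above is given below: a Laplacian-based detail weight and a gamma-transformed exposure weight. The paper works patch-wise and combines global and local means; the per-pixel form and constants here are assumptions for illustration.

```python
import cv2
import numpy as np

def fuse(exposures, eps=1e-6):
    weights, imgs = [], []
    for img in exposures:
        f = img.astype(np.float32) / 255.0
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        detail = np.abs(cv2.Laplacian(gray, cv2.CV_32F)) + eps     # signal-structure cue
        m = float(np.clip(gray.mean(), 0.05, 0.95))
        gamma = np.log(0.5) / np.log(m)                            # auto-gamma pulling the mean toward mid-gray
        adjusted = np.power(gray, gamma)                           # gamma-transformed intensity
        exposure_w = np.exp(-((adjusted - 0.5) ** 2) / 0.08)       # favour mid-tones after correction
        weights.append(detail * exposure_w)
        imgs.append(f)
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True) + eps
    return np.clip(sum(wi[..., None] * im for wi, im in zip(w, imgs)), 0, 1)

# usage: fused = fuse([cv2.imread(p) for p in ('under.jpg', 'mid.jpg', 'over.jpg')])
```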