• Title/Summary/Keyword: image partitioning


The Bi-level Image Mapping Using Density Information in Character Patterns (문자패턴에서의 밀도정보를 이용한 이진영상 매핑)

  • 김봉석;강선미;양정윤;양윤모;김덕진
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.8
    • /
    • pp.8-15
    • /
    • 1993
  • This paper describes a character normalization step used within the character recognition process. Line and dot densities are computed on the input character image, and the image is then mapped onto the destination plane accordingly. Recognition then proceeds using overlapped partitioning of the character image and extraction of four-directional feature primitives. The validity of the proposed nonlinear normalization algorithm is verified by the resulting increase in recognition rate.
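
The density-driven coordinate remapping described in the abstract can be sketched as follows; this is a minimal illustration under assumptions (the function name, output size, and the cumulative-density mapping are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def nonlinear_normalize(img, out_size=(64, 64)):
    """Remap a binary character image so that high-density regions are
    expanded, using the cumulative row/column stroke density as the
    nonlinear coordinate mapping."""
    h, w = img.shape
    # density per row/column (small constant so empty lines still map)
    row_d = img.sum(axis=1).astype(float) + 1e-3
    col_d = img.sum(axis=0).astype(float) + 1e-3
    # normalized cumulative density defines the destination coordinate
    row_map = np.cumsum(row_d) / row_d.sum()   # values in (0, 1]
    col_map = np.cumsum(col_d) / col_d.sum()
    out = np.zeros(out_size, dtype=img.dtype)
    oh, ow = out_size
    for y in range(h):
        for x in range(w):
            out[min(int(row_map[y] * oh), oh - 1),
                min(int(col_map[x] * ow), ow - 1)] |= img[y, x]
    return out
```

Dense stroke regions accumulate density faster and are therefore stretched over more output pixels, which is the intent of density-based nonlinear normalization.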

A study of image segmentation by the normalized cut (Normalized cut을 이용한 Image segmentation에 대한 연구)

  • Lee, Kyu-Han;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2243-2245
    • /
    • 1998
  • In this paper, we treat image segmentation as a graph partitioning problem and use the normalized cut to segment the graph. The normalized cut criterion measures both the total dissimilarity between the different groups and the total similarity within the groups. The minimization of this criterion can be formulated as a generalized eigenvalue problem, which can be computed efficiently. We have applied this approach to segment static images.
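
The generalized eigenvalue formulation referred to above can be sketched in a few lines; this is a toy bipartition under assumptions (symmetrized-Laplacian solve, sign thresholding of the second eigenvector), not the paper's implementation:

```python
import numpy as np

def normalized_cut(W):
    """Bipartition a weighted graph from its affinity matrix W using the
    normalized cut relaxation: the second smallest eigenvector of the
    generalized problem (D - W) y = lambda * D y defines the split."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # symmetrized form L_sym = D^{-1/2} (D - W) D^{-1/2}
    L_sym = (np.diag(d) - W) * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    y = d_inv_sqrt * vecs[:, 1]             # back-transform to the y of (D-W)y = lDy
    return y >= 0                           # sign of y assigns the two groups
```

For image segmentation, W would hold pixel affinities (e.g., based on intensity and distance); here it is any symmetric non-negative affinity matrix.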

Design of a Block Data Flow Architecture for 2-D DWT/IDWT (2차원 DWT/IDWT의 블록 데이터 플로우 구조 설계)

  • 정갑천;강준우
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1157-1160
    • /
    • 1998
  • This paper describes the design of a block data flow architecture (BDFA) which implements the 2-D discrete wavelet transform (DWT) and inverse discrete wavelet transform (IDWT) for real-time image processing applications. The BDFA uses 2-D product-separable filters for the DWT/IDWT. It consists of an input module, a processor array, and an output module, and it uses both data partitioning and algorithm partitioning to achieve high efficiency and high throughput. The 2-D DWT/IDWT algorithm for the 256×256 Lena image has been simulated using IDL (Interactive Data Language). The 2-D array structured BDFA for the 2-D filter has been modeled and simulated using VHDL.
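
The product-separable (row-then-column) filtering that the BDFA exploits can be illustrated with one level of a Haar 2-D DWT; the Haar filter pair is an assumption here, chosen only because it makes the separable structure obvious, and this sketch is software, not the paper's hardware architecture:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D DWT built from the separable row-then-column
    application of the 1-D Haar lowpass/highpass filter pair."""
    def haar1d(a):
        # filter + downsample along the last axis; length must be even
        lo = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2)
        hi = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2)
        return lo, hi
    lo, hi = haar1d(img)                      # filter the rows
    ll, lh = haar1d(lo.swapaxes(0, 1))        # filter columns of row-lowpass
    hl, hh = haar1d(hi.swapaxes(0, 1))        # filter columns of row-highpass
    return (ll.swapaxes(0, 1), lh.swapaxes(0, 1),
            hl.swapaxes(0, 1), hh.swapaxes(0, 1))
```

The IDWT simply reverses these steps (upsample and apply the synthesis filters column-then-row), which is why one processor array can serve both directions.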

Wavelet Packet Image Coder Using Coefficients Partitioning For Remote Sensing Images (위성 영상을 위한 계수분할 웨이블릿 패킷 영상 부호화 알고리즘에 관한 연구)

  • 한수영;조성윤
    • Korean Journal of Remote Sensing
    • /
    • v.18 no.6
    • /
    • pp.359-367
    • /
    • 2002
  • In this paper, a new embedded wavelet packet image coder is proposed that exploits the correlation between partitioned coefficients. The algorithm defines a parent-child relationship between individual frequency sub-bands to reduce image reconstruction error; through this relationship, every coefficient is partitioned and encoded into a zerotree data structure. The proposed coder achieves low bit rates with good rate-distortion behavior, higher PSNR at the same bit rate, and an improvement in image compression time, and its precise rate control is compared with the conventional method. These results show that the encoding and decoding processes of the proposed coder are simpler and more accurate than conventional ones for texture images containing many mid- and high-frequency elements, such as aerial and satellite photographs. The experimental results suggest that the proposed method can be applied to real-time vision systems, on-line image processing, and image fusion, which require smaller file sizes and better resolution.

Lossless Coding Scheme for Lattice Vector Quantizer Using Signal Set Partitioning Method (Signal Set Partitioning을 이용한 격자 양자화의 비 손실 부호화 기법)

  • Kim, Won-Ha
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.6
    • /
    • pp.93-105
    • /
    • 2001
  • In the lossless step of lattice vector quantization (LVQ), the lattice codewords produced in the quantization step are enumerated into a radius sequence and an index sequence. The radius sequence is run-length coded and then entropy coded, while the index sequence is represented by fixed-length binary bits. As the bit rate increases, the index bits increase linearly and degrade the coding performance. To reduce the index bits across a wide range of bit rates, we developed a novel lattice enumeration algorithm adopting the set partitioning method. The proposed enumeration method shifts large index values down to smaller ones and thus reduces the index bits. When the proposed lossless coding scheme is applied to wavelet-based image coding, it achieves a gain of more than 10% over the conventional lossless coding method at bit rates above 0.3 bits/pixel, and the improvement grows as the bit rate increases.
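
The radius/index decomposition described above can be sketched as follows; this toy version enumerates each codeword by its squared-radius shell and its position within a sorted shell, and does not reproduce the paper's set-partitioning trick for shrinking large indices:

```python
from collections import defaultdict

def enumerate_codewords(vectors):
    """Split lattice codewords into a radius sequence (entropy-coded in
    the scheme above) and an index sequence (position within the sorted
    shell of equal-radius codewords)."""
    shells = defaultdict(set)
    for v in vectors:
        shells[sum(x * x for x in v)].add(v)      # group by squared radius
    ordered = {r: sorted(s) for r, s in shells.items()}
    radius_seq = [sum(x * x for x in v) for v in vectors]
    index_seq = [ordered[r].index(v) for r, v in zip(radius_seq, vectors)]
    return radius_seq, index_seq
```

A real coder would enumerate the full lattice shell combinatorially rather than from the observed codewords; the point here is only the radius/index split whose index half the paper's method compresses.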

Multispectral Image Data Compression Using Classified Prediction and KLT in Wavelet Transform Domain (웨이블릿 영역에서 분류 예측과 KLT를 이용한 다분광 화상 데이터 압축)

  • 김태수;김승진;이석환;권기구;김영춘;이건일
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.533-540
    • /
    • 2004
  • This paper proposes a new multispectral image data compression algorithm that efficiently reduces spatial and spectral redundancies by applying classified prediction, a Karhunen-Loeve transform (KLT), and the three-dimensional set partitioning in hierarchical trees (3-D SPIHT) algorithm in the wavelet transform (WT) domain. The classification is performed in the WT domain to exploit the interband classified dependency, and the resulting class information is used for interband prediction. The residual image, i.e., the prediction error between the original and the predicted image data, is decorrelated by a KLT. Finally, the 3-D SPIHT algorithm encodes the transformed coefficients, which the WT and KLT leave ordered in descending significance spatially and spectrally. Simulation results showed that images reconstructed with the proposed algorithm exhibit better quality and a higher compression ratio than those of conventional algorithms.
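
The spectral-decorrelation step via KLT can be sketched as an eigendecomposition of the inter-band covariance; this is a minimal stand-alone illustration (applied directly to bands rather than to the paper's wavelet-domain residuals):

```python
import numpy as np

def klt_decorrelate(bands):
    """Decorrelate multispectral bands with a KLT: eigendecompose the
    inter-band covariance and project onto the eigenvectors.
    `bands` has shape (n_bands, H, W)."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # zero-mean per band
    cov = X @ X.T / X.shape[1]                  # (n, n) inter-band covariance
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(vals)[::-1]              # strongest component first
    Y = vecs[:, order].T @ X                    # decorrelated components
    return Y.reshape(n, h, w), vecs[:, order]
```

After this projection the component covariance is diagonal, so the spectral redundancy that the classified prediction leaves behind is concentrated into the leading components.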

Rate-Distortion Optimized Zerotree Image Coding using Wavelet Transform (웨이브렛 변환을 이용한 비트율-왜곡 최적화 제로트리 영상 부호화)

  • 이병기;호요성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.101-109
    • /
    • 2004
  • In this paper, we propose an efficient algorithm for wavelet-based still image coding that utilizes rate-distortion (R-D) theory. Since conventional tree-structured image coding schemes do not consider rate-distortion theory properly, their coding performance suffers. We apply a rate-distortion optimized embedding (RDE) operation within the set partitioning in hierarchical trees (SPIHT) algorithm, using the rate-distortion slope as the criterion for the coding order of wavelet coefficients in the SPIHT lists. We also describe modified set partitioning and a rate-distortion optimized list scan method. Experimental results demonstrate that the proposed method outperforms both the SPIHT algorithm and the rate-distortion optimized embedding algorithm in terms of PSNR (peak signal-to-noise ratio).
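
The slope criterion above amounts to coding, at each step, whichever candidate buys the most distortion reduction per bit; a minimal sketch of that greedy ordering (the tuple layout is an assumption for illustration):

```python
def rd_order(candidates):
    """Order coding candidates by rate-distortion slope. Each candidate is
    (name, distortion_reduction, bits); the steepest distortion reduction
    per bit is coded first, mirroring the RDE-style coefficient ordering."""
    return sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
```

In the actual coder the candidates are SPIHT list entries (significance and refinement bits) and the slopes are estimated rather than known exactly, but the greedy principle is the same.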

Improvement of Set Partitioning Sorting Algorithm for Image Compression in Embedded System (임베디드 시스템의 영상압축을 위한 분할정렬 알고리즘의 개선)

  • Kim, Jin-Man;Ju, Dong-Hyun;Kim, Doo-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.6 no.3
    • /
    • pp.107-111
    • /
    • 2005
  • With the increasing use of multimedia technologies, image compression requires higher performance as well as new functionality in the information society. In the specific area of still-image encoding on embedded systems, a new standard, JPEG2000, was developed to address various problems of JPEG. This paper proposes a method that reduces the quantity of data passed to the EBCOT (Embedded Block Coding with Optimized Truncation) process by using the SPIHT (Set Partitioning in Hierarchical Trees) algorithm to optimize the selection of thresholds from the characteristics of the wavelet transform coefficients and to remove the sign bit in the LL area, thereby increasing the compression efficiency of JPEG2000. The experimental results show that the proposed algorithm achieves an improved bit rate on an embedded system.

Water body extraction using block-based image partitioning and extension of water body boundaries (블록 기반의 영상 분할과 수계 경계의 확장을 이용한 수계 검출)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.5
    • /
    • pp.471-482
    • /
    • 2016
  • This paper presents a water body extraction method which uses block-based image partitioning and extension of water body boundaries to improve the performance of supervised classification for water body extraction. A Mahalanobis distance image is created by computing the spectral statistics of the Normalized Difference Water Index (NDWI) and Near-Infrared (NIR) band images over a training site within the water body in order to extract an initial water body area. To reduce the effect of noise contained in the Mahalanobis distance image, we apply mean curvature diffusion, which controls the diffusion coefficients based on the connectivity strength between adjacent pixels, and then extract the initial water body area. After partitioning the extracted water body image into non-overlapping blocks of the same size, we update the water body area using the information of the water body belonging to the water body boundaries. The update is repeated as long as the statistical distance between the water body area on the boundaries and the training site does not exceed a threshold value. The accuracy of the proposed algorithm was assessed using KOMPSAT-2 images for block sizes between 11×11 and 19×19. The overall accuracy and Kappa coefficient varied from 99.47% to 99.53% and from 95.07% to 95.80%, respectively.
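
The Mahalanobis distance image at the core of the method can be sketched as follows; this is a minimal version under assumptions (a two-feature NDWI/NIR space and a boolean training mask), without the diffusion and block-update stages:

```python
import numpy as np

def mahalanobis_image(ndwi, nir, train_mask):
    """Per-pixel Mahalanobis distance to the water training statistics in
    the (NDWI, NIR) feature space; small distances indicate likely water."""
    feats = np.stack([ndwi, nir], axis=-1)          # (H, W, 2) features
    train = feats[train_mask]                        # (N, 2) water samples
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = feats - mu
    # d^2(x) = (x - mu)^T  Sigma^{-1}  (x - mu), vectorized over pixels
    d2 = np.einsum('hwi,ij,hwj->hw', diff, cov_inv, diff)
    return np.sqrt(d2)
```

Thresholding this image would give the initial water body area; the paper then denoises it with mean curvature diffusion and grows it block by block along the boundaries.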

Semantic Segmentation using Convolutional Neural Network with Conditional Random Field (조건부 랜덤 필드와 컨볼루션 신경망을 이용한 의미론적인 객체 분할 방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.3
    • /
    • pp.451-456
    • /
    • 2017
  • Semantic segmentation, one of the most basic and complicated problems in computer vision, classifies each pixel of an image as belonging to a specific object and assigns it a label. MRF and CRF models have previously been studied as effective methods for improving the accuracy of pixel-level labeling. In this paper, we propose a semantic segmentation method that combines a CNN, a kind of deep learning model that has recently been in the spotlight, with a CRF, a probabilistic graphical model. The Pascal VOC 2012 image database was used for training and performance verification, and testing was performed on arbitrary images not used for training. The results show better segmentation performance than the existing semantic segmentation algorithm.