• Title/Summary/Keyword: weighted histogram

An Empirical Evaluation of Color Distribution Descriptor for Image Search (이미지 검색을 위한 칼라 분포 기술자의 성능 평가)

  • Lee, Choon-Sang;Lee, Yong-Hwan;Kim, Young-Seop;Rhee, Sang-Burm
    • Journal of the Semiconductor & Display Technology / v.5 no.2 s.15 / pp.27-31 / 2006
  • As more and more digital images are produced by various applications, image retrieval has become a primary concern in multimedia technology. This paper presents a color-based descriptor that uses the color distribution of an image, the most basic element for image search, and evaluates the performance of the proposed visual feature through simulation. In designing an image search descriptor based on a color histogram, the HSV color space with Daubechies 9/7 filters and two-level wavelet decomposition gives better results than other parameter choices in terms of computation time and retrieval performance. The histogram quadratic distance also outperforms the sum of absolute differences as a similarity measure, but requires more than 60 times the computation time.
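
The "histogram quadratic" similarity the abstract contrasts with the sum of absolute differences can be illustrated with a short Python sketch. The bin counts, the OpenCV-style HSV value ranges, and the similarity matrix A below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hsv_histogram(img_hsv, bins=(8, 4, 4)):
    """Flattened, normalized HSV color histogram (bin counts are assumed)."""
    hist, _ = np.histogramdd(img_hsv.reshape(-1, 3), bins=bins,
                             range=((0, 180), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / hist.sum()

def sad_distance(h1, h2):
    """Sum of absolute differences: a cheap bin-by-bin comparison, O(n)."""
    return float(np.abs(h1 - h2).sum())

def quadratic_distance(h1, h2, A):
    """Histogram quadratic-form distance d^T A d, where A[i, j] encodes
    cross-bin color similarity; its O(n^2) cost matches the abstract's
    note that it is far more expensive than SAD."""
    d = h1 - h2
    return float(d @ A @ d)
```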

Object Cataloging Using Heterogeneous Local Features for Image Retrieval

  • Islam, Mohammad Khairul;Jahan, Farah;Baek, Joong Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.11 / pp.4534-4555 / 2015
  • We propose a robust object cataloging method that uses multiple locally distinct heterogeneous features to aid image retrieval. Because of variations in object size, orientation, illumination, etc., object recognition is an extraordinarily challenging problem. Under these circumstances, we adopt a local interest point detection method that locates prototypical local components in object images. For each local component, we extract heterogeneous features such as a gradient-weighted orientation histogram, sums of wavelet responses, and histograms over different color spaces, and combine them to describe each component from several perspectives. A global signature is formed by adapting the bag-of-features model, which counts the frequencies of an image's local components with respect to the words in a dictionary. The proposed method demonstrates its strength in classifying objects against various complex backgrounds: our local feature achieves a classification accuracy of 98%, while SURF, SIFT, BRISK, and FREAK achieve 81%, 88%, 84%, and 87%, respectively.
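
A minimal bag-of-features sketch in Python, matching the global-signature step described above: local descriptors (from any detector) are quantized against a k-means dictionary and their word frequencies counted. The dictionary size and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(train_descriptors, n_words=200):
    """Cluster pooled local descriptors into a dictionary of visual words."""
    return KMeans(n_clusters=n_words, n_init=10).fit(train_descriptors)

def bof_signature(descriptors, dictionary):
    """Global signature: normalized frequency of each visual word among
    the image's local components."""
    words = dictionary.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```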

Adaptive WTHE Using Mean Brightness Value of Image (영상의 평균 밝기 값을 이용한 적응형 WTHE)

  • Kim, Ma-Ry;Chung, Min-Gyo
    • Annual Conference of KIPS / 2008.05a / pp.84-87 / 2008
  • This paper proposes a method that adaptively sets the enhancement parameters of WTHE (weighted and thresholded histogram equalization), proposed by Q. Wang & R. K. Ward, according to the histogram distribution of a given image. WTHE improves image quality by modifying the image histogram with a weight and a threshold and then performing histogram equalization (HE). By controlling these two parameters, it can reduce the excessive brightness changes and unnecessary artifacts that are drawbacks of conventional histogram equalization. To make WTHE simpler to apply across various fields, this paper proposes an adaptive WTHE (AWTHE) method that automatically supplies the parameter values, which vary with the input image, and demonstrates its performance experimentally.
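
For reference, a minimal sketch of the underlying WTHE transform on an 8-bit grayscale image; the parameter values and the small lower threshold are assumptions, and the adaptive rule the paper proposes (deriving the parameters from the image's histogram) is not reproduced here.

```python
import numpy as np

def wthe(img, r=0.5, v=0.5):
    """Weighted, thresholded histogram equalization (after Wang & Ward).
    Probabilities above the upper threshold are clamped, those below the
    lower threshold are zeroed, and the rest are compressed by power r
    before ordinary equalization. Parameter values are assumptions."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p_u = v * p.max()             # upper threshold: fraction of the peak
    p_l = 1e-4                    # small lower threshold (assumed)
    w = np.clip((p - p_l) / (p_u - p_l), 0.0, 1.0) ** r * p_u
    cdf = np.cumsum(w) / w.sum()  # equalize with the reweighted histogram
    return (cdf[img] * 255).astype(np.uint8)
```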

Contrast Enhancement Using a Density based Sub-histogram Equalization Technique (밀도기반의 분할된 히스토그램 평활화를 통한 대비 향상 기법)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.1 / pp.10-21 / 2009
  • To enhance contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize such regions, producing overly bright or dark pixels, while local equalization schemes introduce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms by brightness level and equalizes each sub-histogram within a limited extent determined by its mean and variance. The final image is the weighted sum of the equalized images obtained from the sub-histogram equalizations. Limiting the maximum and minimum ranges of the equalization operation on each sub-histogram eliminates the over-equalization effect, and the result preserves feature information in low-density histogram regions, since those regions are equalized separately. The paper describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with varying contrast, and comparisons with conventional approaches demonstrate its superiority.
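
A bare-bones version of per-range sub-histogram equalization in Python; fixed split points stand in for the density-based segmentation, and the mean/variance-limited extents and weighted blending of the paper are omitted.

```python
import numpy as np

def sub_histogram_equalize(img, split_points=(85, 170)):
    """Equalize each brightness sub-range independently and map it back
    into its own range, so one dense region cannot wash out the others.
    Fixed split points are an assumption; the paper derives them from
    the histogram's density."""
    out = img.copy()
    edges = [0] + list(split_points) + [256]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist = np.bincount(img[mask], minlength=256)[lo:hi]
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        out[mask] = lo + (cdf[img[mask] - lo] * (hi - 1 - lo)).astype(np.uint8)
    return out
```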

Block based Normalized Numeric Image Descriptor (블록기반 정규화 된 이미지 수 표현자)

  • Park, Yu-Yung;Cho, Sang-Bock;Lee, Jong-Hwa
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.61-68 / 2012
  • This paper describes a normalized numeric image descriptor used to assess the luminance and contrast of an image. The proposed descriptor weights the probability density function (PDF) of the histogram by each pixel value and normalizes the result so that it can be represented objectively. Because it provides an objective basis for selecting a gamma value, the descriptor can be used in adaptive gamma processing.
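
Read literally, the descriptor weights the histogram PDF by pixel value and normalizes the sum; one possible interpretation in Python (an assumption based on the abstract, ignoring the block-wise computation named in the title) is:

```python
import numpy as np

def numeric_descriptor(img):
    """One normalized number summarizing luminance: gray levels weight
    the histogram PDF and the expectation is scaled into [0, 1]. This
    reading of the descriptor is an assumption from the abstract; the
    paper computes it block by block before combining."""
    pdf = np.bincount(img.ravel(), minlength=256) / img.size
    return float((np.arange(256) * pdf).sum() / 255.0)
```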

Segmentation of Multispectral MRI Using Fuzzy Clustering (퍼지 클러스터링을 이용한 다중 스펙트럼 자기공명영상의 분할)

  • 윤옥경;김현순;곽동민;김범수;김동휘;변우목;박길흠
    • Journal of Biomedical Engineering Research / v.21 no.4 / pp.333-338 / 2000
  • In this paper, an automated segmentation algorithm is proposed for MR brain images, using T1-weighted, T2-weighted, and PD images complementarily. The proposed segmentation algorithm is composed of three steps. In the first step, the cerebrum is extracted by applying a cerebrum mask to the three input images. In the second step, outstanding clusters that represent the inner tissues of the cerebrum are chosen among three-dimensional (3D) clusters, which are determined by intersecting the densely distributed parts of the 2D histograms in the 3D space formed by three optimal-scale images. An optimal-scale image is obtained by applying scale-space filtering to each 2D histogram and searching its graph structure; it best describes the shape of the densely distributed parts of the pixels in the 2D histogram. In the final step, the cerebrum is segmented with the FCM algorithm, initialized with the centroids of the outstanding clusters. The proposed algorithm determines the cluster centroids accurately, and its multispectral analysis yields better segmentation results than single-spectrum methods.
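
The final step's fuzzy c-means can be sketched in a few lines of Python; the random initial memberships below are an assumption, whereas the paper seeds the centroids with its outstanding clusters.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n_samples,
    n_features), e.g. (T1, T2, PD) triples per voxel."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)           # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))        # membership update
        U /= U.sum(axis=1, keepdims=True)
    return C, U
```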

Percentile-Based Analysis of Non-Gaussian Diffusion Parameters for Improved Glioma Grading

  • Karaman, M. Muge;Zhou, Christopher Y.;Zhang, Jiaxuan;Zhong, Zheng;Wang, Kezhou;Zhu, Wenzhen
    • Investigative Magnetic Resonance Imaging / v.26 no.2 / pp.104-116 / 2022
  • The purpose of this study is to systematically determine an optimal percentile cut-off in histogram analysis for calculating the mean parameters obtained from a non-Gaussian continuous-time random-walk (CTRW) diffusion model for differentiating individual glioma grades. This retrospective study included 90 patients with histopathologically proven gliomas (42 grade II, 19 grade III, and 29 grade IV). We performed diffusion-weighted imaging using 17 b-values (0-4000 s/mm2) at 3T, and analyzed the images with the CTRW model to produce an anomalous diffusion coefficient (Dm) along with temporal (𝛼) and spatial (𝛽) diffusion heterogeneity parameters. Given the tumor ROIs, we created a histogram of each parameter; computed the P-values (using a Student's t-test) for the statistical differences in the mean Dm, 𝛼, or 𝛽 for differentiating grade II vs. grade III gliomas and grade III vs. grade IV gliomas at different percentiles (1% to 100%); and selected the highest percentile with P < 0.05 as the optimal percentile. We used the mean parameter values calculated from the optimal percentile cut-offs to do a receiver operating characteristic (ROC) analysis based on individual parameters or their combinations. We compared the results with those obtained by averaging data over the entire region of interest (i.e., 100th percentile). We found the optimal percentiles for Dm, 𝛼, and 𝛽 to be 68%, 75%, and 100% for differentiating grade II vs. III and 58%, 19%, and 100% for differentiating grade III vs. IV gliomas, respectively. The optimal percentile cut-offs outperformed the entire-ROI-based analysis in sensitivity (0.761 vs. 0.690), specificity (0.578 vs. 0.526), accuracy (0.704 vs. 0.639), and AUC (0.671 vs. 0.599) for grade II vs. III differentiations and in sensitivity (0.789 vs. 0.578) and AUC (0.637 vs. 0.620) for grade III vs. IV differentiations, respectively. Percentile-based histogram analysis, coupled with the multi-parametric approach enabled by the CTRW diffusion model using high b-values, can improve glioma grading.
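
The percentile-selection rule lends itself to a direct Python sketch; the variable names and the use of scipy's two-sample t-test are assumptions about the implementation, and group_a/group_b are taken to hold one ROI parameter array per patient.

```python
import numpy as np
from scipy import stats

def percentile_mean(values, pct):
    """Mean of the ROI values at or below the given percentile cut-off."""
    return values[values <= np.percentile(values, pct)].mean()

def best_percentile(group_a, group_b, alpha=0.05):
    """Highest percentile whose cut-off means still separate the two
    grades significantly (Student's t-test, P < alpha), mirroring the
    selection rule described in the abstract."""
    best = None
    for pct in range(1, 101):
        a = [percentile_mean(v, pct) for v in group_a]
        b = [percentile_mean(v, pct) for v in group_b]
        if stats.ttest_ind(a, b).pvalue < alpha:
            best = pct
    return best
```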

Bidirectional LSTM based light-weighted malware detection model using Windows PE format binary data (윈도우 PE 포맷 바이너리 데이터를 활용한 Bidirectional LSTM 기반 경량 악성코드 탐지모델)

  • PARK, Kwang-Yun;LEE, Soo-Jin
    • Journal of Internet Computing and Services / v.23 no.1 / pp.87-93 / 2022
  • Since 99% of the PCs operating in the defense domain run the Windows operating system, detecting and responding to Windows-based malware is critical to keeping the defense cyberspace safe. This paper proposes a model capable of detecting malware in the Windows PE (Portable Executable) format. The detection model was designed with an emphasis on rapid retraining, to cope efficiently with rapidly increasing malware, rather than on detection accuracy alone. To improve training speed, the model is based on a Bidirectional LSTM (Long Short-Term Memory) network that can detect malware from minimal sequence data without complicated pre-processing. Experiments were conducted on the EMBER2018 dataset. Training the model on feature sets consisting of three types of sequence data (byte-entropy histogram, byte histogram, and string distribution) achieved an accuracy of 90.79%, while the training time was reduced to a quarter of that of an existing detection model, enabling rapid updates of the detection model in response to surging new malware.
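
A minimal PyTorch sketch of the kind of light-weight bidirectional LSTM classifier described; the layer sizes and the last-step pooling are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MalwareBiLSTM(nn.Module):
    """Bidirectional LSTM over an EMBER-style feature sequence (e.g. the
    concatenated byte-entropy histogram, byte histogram, and string
    distribution treated as a 1-D sequence)."""
    def __init__(self, feat_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)          # benign vs. malicious

    def forward(self, x):                             # x: (batch, seq, feat)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # score in (0, 1)
```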

Finger Vein Recognition based on Matching Score-Level Fusion of Gabor Features

  • Lu, Yu;Yoon, Sook;Park, Dong Sun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.2 / pp.174-182 / 2013
  • Most fusion-based finger vein recognition methods fuse different features or matching scores from more than one trait to improve performance. To overcome the curse of dimensionality and the additional running time of feature extraction, this paper proposes a finger vein recognition technique based on matching score-level fusion of a single trait. To enhance finger vein image quality, contrast-limited adaptive histogram equalization (CLAHE) is applied to improve the local contrast of the normalized image after ROI detection. Gabor features are then extracted from eight channels of a Gabor filter bank. Instead of using these features for recognition directly, we analyze the contribution of the Gabor feature from each channel and apply a weighted matching score-level fusion rule to obtain the final matching score used for recognition. Experimental results demonstrate that CLAHE effectively enhances finger vein image quality and that the proposed matching score-level fusion yields better recognition performance.
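
The enhancement-plus-fusion pipeline maps naturally onto OpenCV; the CLAHE and Gabor parameters and the cosine-similarity matching below are assumptions for the sketch, and probe_feats is taken to hold the enrolled template's eight per-channel feature vectors.

```python
import cv2
import numpy as np

def fused_score(img, probe_feats, weights):
    """CLAHE enhancement, Gabor features from eight orientations, then a
    weighted sum of per-channel matching scores (score-level fusion)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                      # img: uint8 grayscale ROI
    scores = []
    for i, theta in enumerate(np.linspace(0, np.pi, 8, endpoint=False)):
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        feat = cv2.filter2D(img, cv2.CV_32F, kern).ravel()
        feat /= np.linalg.norm(feat) + 1e-12
        scores.append(float(feat @ probe_feats[i]))  # cosine similarity
    return float(np.dot(weights, scores))            # weighted fusion
```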

Salt & Pepper Noise Removal Using Histogram and Spline Interpolation (히스토그램 및 Spline 보간법을 이용한 Salt & Pepper 잡음 제거)

  • Ko, You-Hak;Kwon, Se-Ik;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.691-693 / 2017
  • As modern society develops into the digital information age, image applications are expanding into increasingly important fields. Image data is degraded by various causes during transmission, with salt & pepper noise being a typical example, and conventional methods for removing it have somewhat weak noise-removal characteristics. In this paper, we propose a weighted filter that uses the histogram of an image corrupted by salt & pepper noise, together with a spline interpolation method applied along the direction of the local mask.
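
A simplified sketch of the noise-aware filtering idea: salt & pepper pixels sit in the histogram's extreme bins, so only uncorrupted neighbors vote for the replacement value. The directional spline interpolation of the paper is reduced here to a plain neighborhood mean, which is an assumption.

```python
import numpy as np

def remove_salt_pepper(img, win=1):
    """Replace extreme-valued pixels (0 or 255) with the mean of the
    uncorrupted pixels inside a (2*win+1)^2 local mask."""
    noisy = (img == 0) | (img == 255)
    out = img.copy()
    for y, x in zip(*np.nonzero(noisy)):
        patch = img[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1]
        good = patch[(patch != 0) & (patch != 255)]
        if good.size:                # keep the original if no clean neighbor
            out[y, x] = int(good.mean())
    return out
```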
