• Title/Summary/Keyword: Data quantization


A Steganography Method Improving Image Quality and Minimizing Image Degradation (영상의 화질 개선과 열화측정 시간을 최소화하는 스테가노그라피 방법)

  • Choi, YongSoo;Kim, JangHwan
    • Journal of Digital Contents Society
    • /
    • v.17 no.5
    • /
    • pp.433-439
    • /
    • 2016
  • In this paper, we propose an optimized steganography method that improves on the image degradation of existing data hiding techniques. The method operates in the compressed (JPEG) domain of an image. Most current information concealment methods hide information by changing coefficients, and several methods, such as F5 with matrix encoding, have tried to improve on typical steganography; those works reduced the distortion generated when hiding data in the coefficients of the compressed domain. In this paper we analyze the effect of the quantization table on data hiding in the compressed domain. As a result, we find that the distortion caused by applying steganography techniques can be decreased. The paper provides slightly (at most approximately 6.5%) improved results in terms of image quality for data hiding in the compressed domain. The developed algorithm also helps improve the data hiding performance of compressed images other than JPEG.
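The coefficient-based hiding that methods like F5 improve on can be illustrated by naive LSB embedding in quantized DCT coefficients. This is a generic sketch for illustration, not the paper's method; the coefficient values are made up.

```python
def embed_bits(coeffs, bits):
    """Naive LSB embedding: set the least significant bit of each
    nonzero quantized DCT coefficient to the next message bit."""
    out, i = [], 0
    for c in coeffs:
        if c != 0 and i < len(bits):
            # preserve the sign, overwrite the LSB of the magnitude
            c = (c & ~1) | bits[i] if c > 0 else -(((-c) & ~1) | bits[i])
            i += 1
        out.append(c)
    return out

coeffs = [12, -7, 0, 3, 0, -2]   # illustrative quantized AC coefficients
stego = embed_bits(coeffs, [1, 0, 1, 1])
print(stego)  # [13, -6, 0, 3, 0, -3]
```

Each changed coefficient moves by at most 1, which is why distortion grows with the number of embedded bits; schemes like F5 additionally have to handle the case where a ±1 coefficient embedding a 0 collapses to zero ("shrinkage").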

A Study on Performance Evaluation of Clustering Algorithms using Neural and Statistical Method (클러스터링 성능평가: 신경망 및 통계적 방법)

  • 윤석환;신용백
    • Journal of the Korean Professional Engineers Association
    • /
    • v.29 no.2
    • /
    • pp.71-79
    • /
    • 1996
  • This paper evaluates the clustering performance of a neural network and a statistical method. The algorithms used are GLVQ (Generalized Learning Vector Quantization) as the neural method and the k-means algorithm as the statistical clustering method. To compare the two methods, we calculate Rand's c statistic. As a result, the mean of the c values obtained with GLVQ is higher than that obtained with the k-means algorithm, while the standard deviation of the c values is lower. The experimental data sets were Fisher's IRIS data and patterns extracted from handwritten numerals.

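The Rand statistic used above to compare two clusterings can be sketched as follows. This is a minimal plain Rand index over all point pairs; whether the paper uses this form or a corrected-for-chance variant is an assumption.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand statistic: fraction of point pairs on which two clusterings
    agree (same cluster in both, or different clusters in both)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Two clusterings of five points, e.g. GLVQ vs. k-means assignments
a = [0, 0, 1, 1, 2]
b = [0, 0, 1, 2, 2]
print(rand_index(a, b))  # 0.8
```

A value of 1.0 means the two partitions agree on every pair, so a higher mean c over repeated runs indicates more consistent clustering.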

Sample-Adaptive Product Quantization and Design Algorithm (표본 적응 프러덕트 양자화와 설계 알고리즘)

  • 김동식;박섭형
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2391-2400
    • /
    • 1999
  • The vector quantizer (VQ) is an efficient data compression technique for low bit rate applications. However, its major disadvantage is encoding complexity, which increases dramatically as the vector dimension and bit rate increase. Even though one can use a modified VQ to reduce the encoding complexity, it is nearly impossible to implement such a VQ at a high bit rate or for a large vector dimension because of the enormously large memory requirement for the codebook and the very large training sequence (TS) size. To overcome this difficulty, in this paper we propose a novel structurally constrained VQ for the high bit rate and large vector dimension cases in order to obtain VQ-level performance. Furthermore, this VQ can be extended to low bit rate applications. The proposed scheme has the form of a feed-forward adaptive quantizer with a short adaptation period; hence, we call it the sample-adaptive product quantizer (SAPQ). SAPQ can provide a 2~3 dB improvement over Lloyd-Max scalar quantizers.

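The Lloyd-Max scalar quantizer that SAPQ is benchmarked against can be trained with Lloyd's iteration: alternate between assigning samples to their nearest reconstruction level and moving each level to the centroid of its cell. A minimal sketch, with illustrative Gaussian training data rather than anything from the paper:

```python
import random

def lloyd_max(samples, levels, iters=50):
    """Lloyd's algorithm for a scalar quantizer: alternate nearest-level
    assignment and centroid update until the codebook settles."""
    codebook = sorted(random.sample(samples, levels))
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for x in samples:
            k = min(range(levels), key=lambda i: (x - codebook[i]) ** 2)
            cells[k].append(x)
        # each level moves to the mean of the samples it quantizes
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
cb = lloyd_max(data, levels=4)
print(sorted(cb))  # four reconstruction levels, roughly symmetric about 0
```

A product quantizer splits the vector into sub-vectors quantized independently; SAPQ's contribution is adapting that structure per sample, which this sketch does not attempt.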

HMM-based Speech Recognition using FSVQ and Fuzzy Concept (FSVQ와 퍼지 개념을 이용한 HMM에 기초를 둔 음성 인식)

  • 안태옥
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.90-97
    • /
    • 2003
  • This paper proposes speech recognition based on an HMM (Hidden Markov Model) using FSVQ (First Section Vector Quantization) and a fuzzy concept. We generate a codebook for the first section and then obtain multiple observation sequences, ordered by decreasing probabilistic value, from the first-section codebook according to a fuzzy rule. These first-section observation sequences are then trained, and at recognition time the word with the highest first-section probability is selected as the recognized word by the same concept. Train station names are used as the target recognition vocabulary, and LPC cepstrum coefficients are used as the feature parameters. Besides the experiments with the proposed method, the other methods are evaluated under the same conditions and data. The experimental results show that the proposed HMM-based method using FSVQ and the fuzzy concept is superior to the others in recognition rate.

Performance Evaluation of VLBI Correlation Subsystem Main Product (VLBI 상관 서브시스템 본제품의 제작현장 성능시험)

  • Oh, Se-Jin;Roh, Duk-Gyoo;Yeom, Jae-Hwan;Oyama, Tomoaki;Park, Sun-Youp;Kang, Yong-Woo;Kawaguchi, Noriyuki;Kobayashi, Hideyuki;Kawakami, Kazuyuki
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.322-332
    • /
    • 2011
  • In this paper, we introduce the first performance evaluation of the VLBI Correlation Subsystem (VCS) main product, the core system of the Korea-Japan Joint VLBI Correlator (KJJVC). The main goal of this first evaluation is to enhance the completeness of the overall system by checking unresolved issues through experiments on various test items at the manufacturer before installation in the field. The functional test covered the overflow problem that had occurred in the FFT re-quantization module of the VCS trial product due to insufficient effective bits. Through the factory performance test of the VCS main product, problems such as the FFT re-quantization overflow discovered in the 2008 trial-product test were clearly solved, and important functions such as delay tracking, delay compensation, and frequency binning were added to the main product. We also confirmed that the predicted correlation results (fringes) were obtained in the correlation test using real astronomical observation data (wideband/narrowband).

Fuzzy Neural Network Model Using Asymmetric Fuzzy Learning Rates (비대칭 퍼지 학습률을 이용한 퍼지 신경회로망 모델)

  • Kim Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.7
    • /
    • pp.800-804
    • /
    • 2005
  • This paper presents a fuzzy learning rule that is a fuzzified version of LVQ (Learning Vector Quantization). This fuzzy learning rule uses fuzzy learning rates instead of the traditional learning rates. LVQ uses the same learning rate regardless of the correctness of classification, whereas the new fuzzy learning rule uses different learning rates depending on whether the classification is correct or not. The new rule is integrated into the improved IAFC (Integrated Adaptive Fuzzy Clustering) neural network, which is both stable and plastic. The iris data set is used to compare the performance of the supervised IAFC neural network 3 with that of a backpropagation neural network. The results show that the supervised IAFC neural network 3 outperforms the backpropagation neural network.
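The asymmetry described above (different learning rates for correct and incorrect classifications) can be sketched on top of plain LVQ1: attract the winning prototype toward a correctly classified sample, repel it from a misclassified one. This is a minimal crisp sketch with made-up rates, not the paper's fuzzy IAFC rule.

```python
def lvq_update(prototypes, proto_labels, x, y,
               rate_correct=0.05, rate_wrong=0.1):
    """One LVQ1 step with asymmetric rates: attract the winner on a
    correct classification, repel it (with a larger rate) on a wrong one."""
    # winning prototype = nearest in squared Euclidean distance
    w = min(range(len(prototypes)),
            key=lambda i: sum((p - xi) ** 2
                              for p, xi in zip(prototypes[i], x)))
    rate = rate_correct if proto_labels[w] == y else -rate_wrong
    prototypes[w] = [p + rate * (xi - p)
                     for p, xi in zip(prototypes[w], x)]
    return w

protos = [[0.0, 0.0], [1.0, 1.0]]
labels = [0, 1]
lvq_update(protos, labels, x=[0.2, 0.1], y=0)  # correct: winner moves toward x
print([round(v, 3) for v in protos[0]])  # [0.01, 0.005]
```

A fuzzified version would replace the fixed `rate_correct`/`rate_wrong` with rates derived from fuzzy membership values, softening the update near cluster boundaries.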

An Efficient Architecture of Transform & Quantization Module in MPEG-4 Video Code (MPEG-4 영상코덱에서 DCTQ module의 효율적인 구조)

  • 서기범;윤동원
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.11
    • /
    • pp.29-36
    • /
    • 2003
  • In this paper, an efficient VLSI architecture for a DCTQ module, consisting of 2D-DCT, quantization, AC/DC prediction, scan conversion, inverse quantization, and 2D-IDCT, is presented. The module handles a macroblock within 1064 cycles and is suitable for an MPEG-4 video codec processing 30-frame CIF images for both encoder and decoder simultaneously. Single 1-D DCT and IDCT cores are used instead of 2-D DCT and IDCT blocks, respectively, and a 1-bit serial distributed-arithmetic architecture is adopted for the 1-D DCT/IDCT to reduce hardware area. To reduce the power consumption of the DCTQ module, we propose a method that skips operating the module by exploiting the SAE (sum of absolute error) value from motion estimation and the cbp (coded block pattern). To reduce the AC/DC prediction memory size, a memory architecture and memory access method for the AC/DC prediction block are proposed. As a result, maximum hardware utilization is achieved and power consumption is minimized. The proposed design operates on a 27 MHz clock, and the experimental results show that the accuracy of the DCT and IDCT meets the IEEE specification.

Abnormal sonar signal detection using recurrent neural network and vector quantization (순환신경망과 벡터 양자화를 이용한 비정상 소나 신호 탐지)

  • Kibae Lee;Guhn Hyeok Ko;Chong Hyun Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.500-510
    • /
    • 2023
  • Passive sonar signals contain both normal and abnormal signals. Abnormal signals mixed with normal signals are typically detected using an AutoEncoder (AE) that learns only normal signals. However, existing AEs may detect inaccurately by reconstructing distorted normal signals from the mixed signal. To address this limitation, we propose an abnormal-signal detection model based on a Recurrent Neural Network (RNN) and vector quantization. The proposed model generates a codebook representing the learned latent vectors and detects abnormal signals more accurately through the proposed code-vector search process. In experiments on publicly available underwater acoustic data, the AE and Variational AutoEncoder (VAE) using the proposed method showed at least a 2.4 % improvement in detection performance and at least a 9.2 % improvement in abnormal-signal extraction performance over the existing models.
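The vector-quantization step at the heart of such a detector maps a latent vector to its nearest code vector; latents far from every code (i.e., unlike anything seen in normal training data) incur a large quantization error, which can serve as an anomaly score. A minimal sketch with an illustrative codebook and latents, not the paper's learned model:

```python
def quantize(z, codebook):
    """Return the nearest code vector and its squared quantization error."""
    best = min(codebook,
               key=lambda c: sum((zi - ci) ** 2 for zi, ci in zip(z, c)))
    err = sum((zi - ci) ** 2 for zi, ci in zip(z, best))
    return best, err

# Codebook assumed to be learned from normal signals (illustrative values)
codebook = [[0.0, 1.0], [1.0, 0.0]]
_, err_normal = quantize([0.1, 0.9], codebook)    # near a learned code
_, err_abnormal = quantize([3.0, 3.0], codebook)  # far from all codes
print(err_normal < err_abnormal)  # True: abnormal latents quantize poorly
```

Thresholding this error (or combining it with reconstruction error from the AE/VAE decoder) yields the detection decision.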

A Semi-fragile Image Watermarking Scheme Exploiting BTC Quantization Data

  • Zhao, Dongning;Xie, Weixin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.4
    • /
    • pp.1499-1513
    • /
    • 2014
  • This paper proposes a novel blind image watermarking scheme exploiting Block Truncation Coding (BTC). Most existing BTC-based watermarking or data hiding methods embed information in BTC-compressed images by modifying the BTC encoding stage or the BTC-compressed data, resulting in watermarked images of poor quality. Unlike existing BTC-based watermarking schemes, ours does not actually perform BTC compression on images during embedding but uses the parity of the BTC quantization data to guide the watermark embedding and extraction processes. In our scheme, a binary image serves as the original watermark. During embedding, the original cover image is first partitioned into non-overlapping $4{\times}4$ blocks, and BTC is performed on each block to obtain its BTC-quantized high mean and low mean. Two watermark bits are embedded in each block by modifying the pixel values in the block so that the parities of the high mean and the low mean of the modified block equal the two watermark bits. During extraction, BTC is performed on each block to obtain its high mean and low mean, and the two watermark bits are read from their parities. The experimental results show that the proposed method is fragile to most image processing operations and various kinds of attacks while preserving invisibility very well; thus the proposed scheme can be used for image authentication.
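The BTC quantization data the scheme relies on, the high mean and low mean of a block, can be computed as below, and the extraction side then reads one watermark bit from each parity. This is a sketch of the quantization-data/extraction step only, with an illustrative 4x4 block; the embedding-side pixel adjustment is not shown.

```python
def btc_means(block):
    """BTC quantization data: the mean of pixels at or above the block
    mean (high mean) and the mean of those below it (low mean)."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    high = [p for p in flat if p >= mean]
    low = [p for p in flat if p < mean]
    high_mean = round(sum(high) / len(high))
    low_mean = round(sum(low) / len(low)) if low else high_mean
    return high_mean, low_mean

block = [[200, 200, 60, 60],
         [200, 210, 50, 60],
         [210, 200, 60, 50],
         [200, 200, 50, 60]]
h, l = btc_means(block)
bits = (h % 2, l % 2)  # two watermark bits from the parities
print(h, l, bits)
```

Embedding works the other way around: pixels in the block are nudged until the two parities match the desired watermark bits, which is why the scheme never needs to store BTC-compressed output.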

A New ROM Compression Method for Continuous Data (연속된 데이터를 위한 새로운 롬 압축 방식)

  • 양병도;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.5
    • /
    • pp.354-360
    • /
    • 2003
  • A new ROM compression method for continuous data is proposed, based on two compression algorithms. The first is a region-select ROM compression algorithm that divides the data into many small regions by magnitude and address and stores only the regions containing data. The second is a quantization-ROM and error-ROM compression algorithm that divides the data into quantized values and their errors. Using these algorithms, ROM size reductions of 40~60% are achieved for various continuous data.
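The second algorithm above, splitting each sample into a coarsely quantized value plus a small residual error so that each part fits a narrower ROM, can be sketched as follows. The step size and data are illustrative, not from the paper.

```python
def split_quant_error(data, step=16):
    """Split each sample into a quantized value (for a coarse ROM) and a
    small error (for a narrow error ROM); a sample is reconstructed as
    quantized * step + error."""
    quantized = [x // step for x in data]
    errors = [x - q * step for x, q in zip(data, quantized)]
    return quantized, errors

data = [130, 133, 137, 140, 144]  # slowly varying continuous samples
q, e = split_quant_error(data)
print(q)  # [8, 8, 8, 8, 9]
print(e)  # [2, 5, 9, 12, 0]
restored = [qi * 16 + ei for qi, ei in zip(q, e)]
print(restored == data)  # True: reconstruction is lossless
```

For continuous data the quantized stream varies slowly and the errors stay small, so the quantized values need few distinct entries and the errors need few bits each, which is where the ROM size saving comes from.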