• Title/Summary/Keyword: Data Codebook


A zeroblock coding algorithm for subband image compression

  • Park, Sahng-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.11 / pp.2375-2380 / 1997
  • The need for effective coding techniques for various multimedia services is increasing in order to meet the demand for image data. In this paper, a zeroblock coding algorithm is proposed for progressive transmission of images. The algorithm is constructed as an embedded coding, so the encoding and decoding processes can be terminated at any point while still yielding reasonable image quality. The main features of the zeroblock coding algorithm are 1) coding of subband images by predicting the insignificance of blocks across subband levels, 2) a set of state transition rules for representing the significance map of blocks, and 3) block coding by vector quantization using a multiband codebook consisting of several subcodebooks, each dedicated to one subband at a given threshold.
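The third feature above, block coding by vector quantization with per-subband subcodebooks, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subband names, codebook sizes, and block dimension are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiband codebook: one small subcodebook per subband.
# Codeword count (8) and block length (4) are illustrative choices.
subcodebooks = {band: rng.standard_normal((8, 4))
                for band in ("LL", "LH", "HL", "HH")}

def quantize_block(block, band):
    """Encode a block with the subcodebook dedicated to its subband."""
    cb = subcodebooks[band]
    # Nearest codeword under squared Euclidean distance.
    idx = int(np.argmin(((cb - block) ** 2).sum(axis=1)))
    return idx, cb[idx]

block = rng.standard_normal(4)
idx, recon = quantize_block(block, "LH")
```

Only the codeword index is transmitted; the decoder looks up the reconstruction in the same subcodebook.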

Speech Recognition Using HMM Based on Fuzzy (피지에 기초를 둔 HMM을 이용한 음성 인식)

  • 안태옥;김순협
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.12 / pp.68-74 / 1991
  • This paper proposes a fuzzy-based HMM model as a method for speaker-independent speech recognition. In this recognition method, multi-observation sequences are obtained by assigning probabilities via a fuzzy rule according to the rank of the distances to the VQ codebook. An HMM model is then generated from these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. The vocabulary for the recognition experiment consists of 146 DDD area names, and the feature parameters are 10th-order LPC cepstrum coefficients. Besides the recognition experiments with the proposed model, for comparison we perform experiments with DP, MSVQ, and a conventional HMM under the same conditions and data. The experimental results show that the fuzzy-based HMM model proposed in this paper is superior to the DP method, MSVQ, and the conventional HMM model in both recognition rate and computation time.
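The multi-observation step above can be sketched as follows: instead of emitting only the nearest codeword, each frame yields its k nearest codewords with fuzzy weights. The inverse-distance weighting and k=3 are plausible assumptions, not the paper's exact fuzzy rule.

```python
import numpy as np

def fuzzy_observations(frame, codebook, k=3):
    """Return the k nearest codeword indices with fuzzy weights.

    Weights are inversely proportional to distance and sum to 1 --
    one plausible reading of the fuzzy rule, not its exact form.
    """
    d = np.linalg.norm(codebook - frame, axis=1)
    order = np.argsort(d)[:k]          # k nearest codewords
    inv = 1.0 / (d[order] + 1e-9)      # avoid division by zero
    return order, inv / inv.sum()

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 10))  # 16 codewords, 10 cepstrum dims
frame = rng.standard_normal(10)
idx, w = fuzzy_observations(frame, codebook)
```

The HMM then trains on these weighted multi-observation sequences rather than a single hard-quantized symbol per frame.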

Data Cleaning and Integration of Multi-year Dietary Survey in the Korea National Health and Nutrition Examination Survey (KNHANES) using Database Normalization Theory (데이터베이스 정규화 이론을 이용한 국민건강영양조사 중 다년도 식이조사 자료 정제 및 통합)

  • Kwon, Namji;Suh, Jihye;Lee, Hunjoo
    • Journal of Environmental Health Sciences / v.43 no.4 / pp.298-306 / 2017
  • Objectives: Since 1998, the Korea National Health and Nutrition Examination Survey (KNHANES) has been conducted in order to investigate the health and nutritional status of Koreans. The food intake data of individuals in the KNHANES has also been utilized as a source dataset for risk assessment of chemicals via food. To improve the reliability of intake estimation and prevent missing data for less-reported foods, the structure of the integrated long-standing datasets is significant. However, it is difficult to merge multi-year survey datasets due to ineffective cleaning processes for handling the extensive numbers of codes for each food item, along with changes in dietary habits over time. Therefore, this study aims at 1) cleaning abnormal data, 2) generating integrated long-standing raw data, and 3) contributing to the production of consistent dietary exposure factors. Methods: Codebooks, the guideline book, and raw intake data from KNHANES V and VI were used for analysis. Violations of the primary key constraint and of the first through third normal forms of relational database theory were tested for the codebook and the structure of the raw data, respectively. Afterwards, the cleaning process was executed on the raw data using these integrated codes. Results: Duplicated key records and abnormalities in the table structures were observed. After adjusting by the suggested method above, the codes were corrected and new integrated codes were created. Finally, we were able to clean the raw data provided by respondents to the KNHANES survey. Conclusion: The results of this study will contribute to the integration of the multi-year datasets and help improve the data production system by clarifying, testing, and verifying the primary key, the integrity of the codes, and the raw data structure according to database normalization theory in the national health data.
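The primary-key test described in the Methods can be sketched on toy data: find codebook keys that appear more than once. The food codes and names below are invented for illustration, not from the KNHANES codebooks.

```python
from collections import Counter

# Toy (food_code, food_name) rows where food_code should be a primary key.
rows = [
    ("F001", "rice"),
    ("F002", "kimchi"),
    ("F002", "kimchi, cabbage"),  # duplicate key: violates the constraint
    ("F003", "seaweed soup"),
]

def primary_key_violations(rows, key=lambda r: r[0]):
    """Return the key values that appear more than once."""
    counts = Counter(key(r) for r in rows)
    return sorted(k for k, n in counts.items() if n > 1)

dups = primary_key_violations(rows)
# dups == ["F002"]
```

Flagged keys can then be merged into a single integrated code before cleaning the raw intake records.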

A Single Feedback Based Interference Alignment for Three-User MIMO Interference Channels with Limited Feedback

  • Chae, Hyukjin;Kim, Kiyeon;Ran, Rong;Kim, Dong Ku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.692-710 / 2013
  • Conventional interference alignment (IA) for a MIMO interference channel (IFC) requires global and perfect channel state information at the transmitter (CSIT) to achieve the optimal degrees of freedom (DoF), which prohibits practical implementation. In order to alleviate the global CSIT requirement caused by the coupled relation among all of the IA equations, we propose an IA scheme in which each receiver uses a single feedback link in a limited feedback environment for a three-user MIMO IFC. The main feature of the proposed scheme is that one of the users gives up a fraction of its maximum number of data streams to decouple the IA equations for the three-user MIMO IFC, which results in a single-link feedback structure at each receiver, whereas with conventional IA each receiver has to feed back to all transmitters to transmit the maximum number of data streams. Under the assumption of a random codebook, we analyze the upper bound of the average throughput loss caused by quantized channel knowledge as a function of the number of feedback bits. Analytical results show that the proposed scheme outperforms the conventional IA scheme in terms of both feedback overhead and sum rate.
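The random-codebook quantization assumed in the analysis can be sketched as follows: the receiver picks, from 2^B random unit-norm codewords, the one best aligned with its channel direction and feeds back only the B-bit index. The antenna count and B here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
B = 4                                  # feedback bits -> 2**B codewords
codebook = (rng.standard_normal((2 ** B, 4))
            + 1j * rng.standard_normal((2 ** B, 4)))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
h_dir = h / np.linalg.norm(h)          # channel direction to quantize

# Feed back the index of the codeword best aligned with the channel.
corr = np.abs(codebook.conj() @ h_dir)
idx = int(np.argmax(corr))
```

The throughput loss analyzed in the paper stems from the gap between `h_dir` and the chosen codeword, which shrinks as B grows.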

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3806-3825 / 2016
  • This paper presents a new framework for visual-semantic 3D video search and retrieval applications. Existing 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than on video retrieval as a whole. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose we combine the bag of visual words (BOVW) model and MapReduce in a 3D framework, coalescing shape, color, and texture for feature extraction: a combination of geometric and topological features is used for shape, and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index values. To handle the large amount of data and enable efficient retrieval, we incorporate HDFS into our system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it produces accurate results while also reducing time complexity.
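The soft-weighted codebook matching can be sketched as a BOVW histogram in which each descriptor contributes to its k nearest visual words. The Gaussian-of-L2 weighting, k, and dimensions below are generic assumptions; the paper's exact weighting scheme may differ.

```python
import numpy as np

def soft_bovw_histogram(descriptors, codebook, k=3, sigma=1.0):
    """Soft-assign each local descriptor to its k nearest visual words."""
    hist = np.zeros(len(codebook))
    for d in descriptors:
        dist = np.linalg.norm(codebook - d, axis=1)  # L2 to each word
        near = np.argsort(dist)[:k]
        w = np.exp(-dist[near] ** 2 / (2 * sigma ** 2))
        hist[near] += w / w.sum()                    # soft weights
    return hist / hist.sum()

rng = np.random.default_rng(3)
codebook = rng.standard_normal((32, 8))     # 32 visual words
descriptors = rng.standard_normal((50, 8))  # 50 local descriptors
h = soft_bovw_histogram(descriptors, codebook)
```

Queries and database videos are then compared via their histograms, which is what makes the representation amenable to distributed storage in HDFS.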

Fast Fractal Image Compression Using DCT Coefficients and Its Applications into Video Steganography (DCT계수를 이용한 고속 프랙탈 압축 기법과 화상 심층암호에의 응용)

  • Lee, Hye-Joo;Park, Ji-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.1 / pp.11-22 / 1997
  • Fractal image compression partitions an original image into blocks of equal size and searches for a domain block having self-similarity. This compression method achieves a high compression ratio, because it is unnecessary to transmit an additional codebook to the receiver, and it provides good reconstructed image quality. In spite of these advantages, the method has a drawback: encoding time increases due to the complicated linear transformations needed to determine a similar domain block. In this paper, a fast fractal image compression method is proposed that decreases the number of transformations by using the AC (alternating current) coefficients of each block. The proposed method also provides good quality compared with well-known fractal codings. Furthermore, the method can be applied to video steganography, which can conceal important secret data.
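The costly search step the paper accelerates can be sketched in one dimension: for each range block, find the domain block whose affine map s·D + o best fits it. Real coders use 2D blocks and isometries, and the paper prunes this search with AC coefficients; this is only the baseline search.

```python
import numpy as np

def best_domain_match(range_block, domain_blocks):
    """Find the domain block whose affine map s*D + o best fits the range block."""
    best = (None, None, None, np.inf)
    for i, d in enumerate(domain_blocks):
        # Least-squares scale s and offset o for range_block ~ s*d + o.
        s, o = np.polyfit(d, range_block, 1)
        err = np.sum((s * d + o - range_block) ** 2)
        if err < best[3]:
            best = (i, s, o, err)
    return best  # (index, scale, offset, squared error)

rng = np.random.default_rng(5)
d0 = rng.standard_normal(8)
pool = [rng.standard_normal(8) for _ in range(5)] + [d0]
range_block = 2.0 * d0 + 1.0          # exactly an affine map of d0
i, s, o, err = best_domain_match(range_block, pool)
```

Only (i, s, o) per range block is transmitted, which is why no codebook needs to be sent.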

Image Compression with Edge Directions based on DCT-VQ (DCT-VQ를 기반으로 한 에지의 방향성을 갖는 영상압축)

  • 김진태;김동욱;임한규
    • Journal of Korea Multimedia Society / v.1 no.2 / pp.194-203 / 1998
  • In this paper, a new DCT-VQ method is proposed that solves the problems of VQ, such as the degradation of edges and the enormous amount of computation. VQ is carried out in the DCT domain rather than the spatial domain in order to prevent edge degradation. The DCT decorrelates highly correlated image data and concentrates the energy in a few coefficients. In the DCT domain, the DC coefficient is quantized with an 8-bit uniform scalar quantizer, and the AC coefficients are divided into three regions and coded with a vector quantizer in order to account for edge components. To reduce computation and memory, the vectors for the three regions have a small dimension of 1×7 and share the same codebook. Thus, the proposed method can fully express edge components by considering the AC coefficients in the DCT domain, and it decreases computation and memory by reducing the dimension of the vectors.
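The DC path above is a plain 8-bit uniform scalar quantizer, which can be sketched directly. The [0, 2040] input range is an illustrative assumption; the abstract fixes only the 8-bit uniform quantization of the DC term.

```python
def quantize_dc(dc, dc_min=0.0, dc_max=2040.0, bits=8):
    """8-bit uniform scalar quantizer for the DC coefficient.

    Returns the transmitted index and the midpoint reconstruction.
    """
    levels = 2 ** bits
    step = (dc_max - dc_min) / levels
    idx = min(int((dc - dc_min) / step), levels - 1)  # clip top level
    return idx, dc_min + (idx + 0.5) * step

idx, recon = quantize_dc(1000.0)
```

The AC regions, by contrast, go through the shared 1×7 vector quantizer, so only the DC term needs this scalar path.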

A Study on an Image Classifier using Multi-Neural Networks (다중 신경망을 이용한 영상 분류기에 관한 연구)

  • Park, Soo-Bong;Park, Jong-An
    • The Journal of the Acoustical Society of Korea / v.14 no.1 / pp.13-21 / 1995
  • In this paper, we improve an image classifier algorithm based on neural network learning. It consists of two steps: the first is input pattern generation, and the second is the implementation of a global neural network using an improved back-propagation algorithm. The feature vector for pattern recognition consists of the codebook data obtained from self-organizing feature map learning, which decreases the number of input neurons as well as the computational cost. The global neural network algorithm used in the classifier adds a control part and an address memory part to the back-propagation algorithm to control the weights and unit offsets. The simulation results show that it does not fall into local minima, can easily implement large-scale neural networks, and greatly decreases the learning time.

On a Multiband Nonuniform Sampling Technique with a Gaussian Noise Codebook for Speech Coding (가우시안 코드북을 갖는 다중대역 비균일 음성 표본화법)

  • Chung, Hyung-Goue;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.16 no.6 / pp.110-114 / 1997
  • When nonuniform sampling is applied to a noisy speech signal, the required data rate increases to be comparable to or greater than that of uniform sampling such as PCM. To solve this problem, we previously proposed a waveform coding method, multiband nonuniform waveform coding (MNWC), which applies nonuniform sampling to the band-separated speech signal [7]. However, its speech quality is degraded compared to the uniform sampling method, since the high band is simply modeled as Gaussian noise at an average level. In this paper, to overcome this drawback, the high band is modeled as one of 16 codewords having different center frequencies. By doing this, while maintaining high speech quality with an average MOS score of 3.16, the proposed method achieves a compression ratio 1.5 times higher than that of the conventional nonuniform sampling method (CNSM).
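One plausible way to select among the 16 codewords is to match the codeword's center frequency to the dominant frequency of the high-band signal; the abstract does not give the actual matching criterion, and the sampling rate and center frequencies below are assumptions.

```python
import numpy as np

def pick_noise_codeword(high_band, fs, centers):
    """Choose the noise codeword whose center frequency is closest to
    the dominant frequency of the high-band signal (assumed rule)."""
    spec = np.abs(np.fft.rfft(high_band))
    freqs = np.fft.rfftfreq(len(high_band), d=1.0 / fs)
    dominant = freqs[np.argmax(spec)]
    return int(np.argmin(np.abs(np.asarray(centers) - dominant)))

fs = 8000
t = np.arange(256) / fs
high = np.sin(2 * np.pi * 3000 * t)      # high-band tone near 3 kHz
centers = np.linspace(2000, 3875, 16)    # 16 hypothetical center freqs
idx = pick_noise_codeword(high, fs, centers)
```

Transmitting a 4-bit codeword index instead of the high-band samples is what buys the extra compression over the earlier MNWC scheme.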

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • Journal of IKEEE / v.17 no.3 / pp.241-247 / 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for speech feature extraction and Vector Quantization (VQ) for recognition. The speech recognition system proceeds through the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data is transformed into spectral components and the main features are extracted using the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components; vector quantization is applied in this step. In our experiments, Altera's DE2 board with a Cyclone II FPGA is used to implement the recognition system, which can recognize 64 words. The execution speed of the blocks in the speech recognition system is measured by counting the number of clock cycles spent executing each block. The recognition accuracy is also measured for different parameters of the system. These results on execution speed and recognition accuracy can help designers choose the best configurations for speech recognition on an SoPC.
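The VQ recognition step can be sketched in software before committing it to hardware: each word has a trained codebook, and the test utterance is assigned to the word whose codebook gives the least average distortion. The codebook sizes, MFCC dimension, and word set below are illustrative assumptions.

```python
import numpy as np

def vq_distortion(frames, codebook):
    """Average distortion of MFCC frames against one word's codebook."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()  # nearest codeword per frame, then average

def recognize(frames, codebooks):
    """Pick the word whose trained codebook gives the least distortion."""
    return min(codebooks, key=lambda w: vq_distortion(frames, codebooks[w]))

rng = np.random.default_rng(4)
# Hypothetical trained codebooks for two words (16 codewords, 12 MFCCs).
codebooks = {
    "yes": rng.standard_normal((16, 12)),
    "no": rng.standard_normal((16, 12)) + 3.0,  # shifted so classes differ
}
frames = rng.standard_normal((30, 12))          # test utterance near "yes"
word = recognize(frames, codebooks)
```

On the FPGA, the distance computation inside `vq_distortion` is the block whose clock-cycle count dominates the surveyed execution speed.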