• Title/Summary/Keyword: Embedding Techniques

Search Results: 144

The Performance Analysis of Digital Watermarking based on Merging Techniques

  • Ariunzaya, Batgerel; Chu, Hyung-Suk; An, Chong-Koo
    • Journal of the Institute of Convergence Signal Processing / v.12 no.3 / pp.176-180 / 2011
  • Although the embedding and extraction algorithms are central to digital watermarking, watermark selection and post-processing also offer opportunities to improve performance. For this reason, this paper examines possible improvements to digital watermarking through watermark merging techniques rather than through the embedding and extraction algorithms themselves. We chose Cox's function as the main embedding and extraction algorithm and multiple barcode watermarks as the watermark. Each bit of the multiple barcode watermark copies was embedded into a gray-scale image with Cox's embedding function. After extracting the watermark copies, we applied several watermark merging techniques: simple merging, N-step iterated merging, recover merging, and a combination of iterated and recover merging. The main finding is that multiple barcode watermarks combined with merging techniques can improve the performance of the algorithm.
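
As a rough illustration of the ideas above, the sketch below shows Cox-style multiplicative embedding and extraction of watermark bits in a vector of transform coefficients, followed by "simple merging" of several extracted copies, interpreted here as a bitwise majority vote. The embedding strength, the non-blind extraction, and the majority-vote interpretation of simple merging are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

ALPHA = 0.1  # embedding strength (assumed value)

def cox_embed(coeffs, bits):
    """Cox-style multiplicative embedding: v' = v * (1 + alpha * w),
    with w in {-1, +1} derived from each watermark bit."""
    w = 2 * np.asarray(bits, dtype=float) - 1
    marked = coeffs.copy()
    marked[: len(w)] *= 1 + ALPHA * w
    return marked

def cox_extract(marked, original, n_bits):
    """Non-blind extraction: the sign of (marked/original - 1) recovers each bit."""
    ratio = marked[:n_bits] / original[:n_bits] - 1
    return (ratio > 0).astype(int)

def simple_merge(copies):
    """'Simple merging' interpreted as a bitwise majority vote over the
    extracted watermark copies (an assumption in this sketch)."""
    return (np.mean(copies, axis=0) >= 0.5).astype(int)

# Toy run: embed three copies of a 4-bit barcode row into random coefficients,
# add noise, extract each copy, then merge.
rng = np.random.default_rng(0)
barcode = np.array([1, 0, 1, 1])
coeffs = rng.normal(10, 2, size=12)
marked = cox_embed(coeffs, np.tile(barcode, 3))
noisy = marked + rng.normal(0, 0.1, size=12)
copies = [cox_extract(noisy[i*4:(i+1)*4], coeffs[i*4:(i+1)*4], 4) for i in range(3)]
print(simple_merge(np.array(copies)))   # ideally recovers [1 0 1 1]
```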

Enhanced robust data embedding techniques

  • 정인식; 권오진
    • Proceedings of the IEEK Conference / 2002.06d / pp.247-250 / 2002
  • Data embedding has recently become important for protecting ownership. In this paper, we propose a robust data embedding technique for images. Our technique is based on the convolution between a message image and a random phase carrier. We add extra bits to the carrier image to improve the detection rate, and we use block-by-block cyclic correlation to compensate for distortion. Experiments show that the proposed algorithm is robust against StirMark 3.1 attacks (a minimal sketch of the underlying correlation-based detection idea follows below).

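The sketch below shows the correlation-based detection idea that schemes like this build on: a pseudo-random carrier is added to the image with a sign given by the message bit, and the bit is recovered by correlating the residual with the carrier. The single-bit message, the ±1 carrier, and the availability of the original image at the detector are simplifying assumptions; the paper's actual method spreads a message image by convolution with a random phase carrier and uses block-by-block cyclic correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image, bit, carrier, gain=2.0):
    """Add the pseudo-random carrier with a sign given by the message bit.
    A real system spreads a whole message image via convolution; this
    sketch embeds a single bit for brevity."""
    sign = 1 if bit else -1
    return image + gain * sign * carrier

def detect(stego, original, carrier):
    """Correlate the embedding residual with the carrier; the sign of the
    correlation recovers the bit."""
    residual = stego - original
    return int(np.sum(residual * carrier) > 0)

image = rng.normal(128, 20, size=(64, 64))
carrier = rng.choice([-1.0, 1.0], size=(64, 64))   # random-phase-like carrier (assumption)
stego = embed(image, 1, carrier)
print(detect(stego, image, carrier))   # -> 1
```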

A Graph Embedding Technique for Weighted Graphs Based on LSTM Autoencoders

  • Seo, Minji; Lee, Ki Yong
    • Journal of Information Processing Systems / v.16 no.6 / pp.1407-1423 / 2020
  • A graph is a data structure consisting of nodes and edges between these nodes. Graph embedding generates a low-dimensional vector for a given graph that best represents the characteristics of the graph. Recently, there have been studies on graph embedding, especially using deep learning techniques. However, most deep learning-based graph embedding techniques to date have focused on unweighted graphs. Therefore, in this paper, we propose a graph embedding technique for weighted graphs based on long short-term memory (LSTM) autoencoders. Given weighted graphs, we traverse each graph to extract node-weight sequences from it. Each node-weight sequence represents a path in the graph consisting of nodes and the weights between these nodes. We then train an LSTM autoencoder on the extracted node-weight sequences and encode each node-weight sequence into a fixed-length vector using the trained LSTM autoencoder. Finally, for each graph, we collect the encoding vectors obtained from the graph and combine them to generate the final embedding vector for the graph. These embedding vectors can be used to classify weighted graphs or to search for similar weighted graphs. Experiments on synthetic and real datasets show that the proposed method is effective in measuring the similarity between weighted graphs.
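
The PyTorch sketch below follows the pipeline described in the abstract: extract node-weight sequences from a small weighted graph, train an LSTM autoencoder on them, and combine the resulting sequence encodings into one graph vector. The random-walk traversal, the (node id, weight) input features, the network sizes, and the mean-pooling step are illustrative assumptions rather than the paper's exact design.

```python
import random
import torch
import torch.nn as nn

# Toy weighted graph as an adjacency dict: node -> [(neighbor, weight), ...]
graph = {0: [(1, 0.5), (2, 2.0)],
         1: [(0, 0.5), (2, 1.0)],
         2: [(0, 2.0), (1, 1.0)]}

def node_weight_walk(graph, start, length=4):
    """Extract one node-weight sequence (a random walk) as [(node, weight), ...].
    The paper's exact traversal strategy may differ; a random walk is an assumption."""
    seq, node = [], start
    for _ in range(length):
        nxt, w = random.choice(graph[node])
        seq.append((node, w))
        node = nxt
    return seq

class LSTMAutoencoder(nn.Module):
    """Minimal sequence autoencoder: encode a node-weight sequence to a
    fixed-length vector, then reconstruct the input sequence from it."""
    def __init__(self, in_dim=2, hidden=16):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, in_dim)

    def forward(self, x):                        # x: (batch, seq_len, 2)
        _, (h, _) = self.encoder(x)
        z = h[-1]                                # fixed-length sequence embedding
        rep = z.unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(rep)
        return self.out(dec), z

walks = [node_weight_walk(graph, n) for n in graph for _ in range(5)]
x = torch.tensor(walks, dtype=torch.float32)     # (15, 4, 2)
model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, z = model(x)
graph_vec = z.mean(dim=0)    # combine sequence vectors (mean pooling as an assumption)
print(graph_vec.shape)       # -> torch.Size([16])
```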

Preliminary Studies on Embedding Qualitative Reasoning into Qualitative Analysis and Laboratory Simulation

  • Pang, Jen-Sen; Syed Mustapha, S.M.F.D; Mohd. Zain, Sharifuddin
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.230-236 / 2001
  • In this paper, we explore the possibilities of embedding a Qualitative Reasoning technique, Qualitative Process Theory (QPT), and its implementation in the field of inorganic chemistry. The target field of implementation is qualitative chemical analysis and laboratory simulation. By embedding this technique in educational software, we aim to combine theory and practice in a single package. The system is able to generate reasoning and explanations based on chemical theories, helping students master basic chemistry knowledge as well as practical skills. We also review the suitability of embedding QPT techniques into chemistry in general by comparing examples from both fields.


Impact of Word Embedding Methods on Performance of Sentiment Analysis with Machine Learning Techniques

  • Park, Hoyeon; Kim, Kyoung-jae
    • Journal of the Korea Society of Computer and Information / v.25 no.8 / pp.181-188 / 2020
  • In this study, we present a comparative study to confirm the impact of various word embedding techniques on the performance of sentiment analysis. Sentiment analysis is an opinion mining technique that identifies and extracts subjective information from text using natural language processing and can be used to classify the sentiment of product reviews or comments. Since sentiment can be classified as either positive or negative, it can be treated as a general classification problem. For sentiment analysis, text must first be converted into a representation that a computer can process; in natural language processing, a word or document is therefore transformed into a vector, a step called word embedding. Techniques such as Bag of Words, TF-IDF, and Word2Vec are used for word embedding. Until now, there have not been many studies on word embedding techniques suited to sentiment analysis. In this study, Bag of Words, TF-IDF, and Word2Vec are used to compare and analyze the performance of movie review sentiment analysis. The dataset for this study is the IMDB dataset, which is widely used in text mining. As a result, the performance of TF-IDF and Bag of Words was superior to that of Word2Vec, and TF-IDF performed better than Bag of Words, although the difference was not very significant.
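
A minimal sketch of this kind of comparison with scikit-learn is given below, using a tiny toy corpus in place of the IMDB reviews. The classifier choice (logistic regression) and the 3-fold evaluation are assumptions, and the Word2Vec branch (document vectors obtained by averaging word vectors, e.g. with gensim) is omitted for brevity.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny toy corpus standing in for the IMDB reviews used in the paper.
docs = ["great movie loved it", "terrible plot and bad acting",
        "wonderful performance", "boring and dull film",
        "loved the acting", "bad boring movie"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

for name, vec in [("Bag of Words", CountVectorizer()),
                  ("TF-IDF", TfidfVectorizer())]:
    clf = make_pipeline(vec, LogisticRegression(max_iter=1000))
    score = cross_val_score(clf, docs, labels, cv=3).mean()
    print(f"{name}: {score:.2f}")
```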

A Study on the Application of Natural Language Processing in Health Care Big Data: Focusing on Word Embedding Methods

  • Kim, Hansang; Chung, Yeojin
    • Health Policy and Management / v.30 no.1 / pp.15-25 / 2020
  • While healthcare datasets include extensive information about patients, many researchers face limitations in analyzing them due to intrinsic characteristics such as heterogeneity, longitudinal irregularity, and noise. In particular, since the majority of medical history information is recorded as text codes, the use of such information has been limited by the high dimensionality of the explanatory variables. To address this problem, recent studies have applied word embedding techniques originally developed for natural language processing and obtained positive results in terms of dimensionality reduction and the accuracy of prediction models. This paper reviews deep learning-based natural language processing techniques (word embedding) and summarizes research that has used these techniques in the health care field. We then propose a research framework for applying deep learning-based natural language processing to the analysis of domestic health insurance data.
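
To make the idea concrete, the sketch below treats each patient visit as a "sentence" of diagnosis codes and trains a skip-gram Word2Vec model on these sequences with gensim. The example codes and hyperparameters are hypothetical; the paper proposes a framework rather than this specific setup.

```python
from gensim.models import Word2Vec

# Toy visit sequences of diagnosis codes (hypothetical ICD-10-like codes),
# treated the same way sentences of words are treated in NLP.
patient_visits = [
    ["E11", "I10", "E78"],      # diabetes, hypertension, dyslipidemia
    ["I10", "E78", "I25"],
    ["J45", "J30"],
    ["E11", "E78", "N18"],
]

model = Word2Vec(patient_visits, vector_size=16, window=5,
                 min_count=1, sg=1, epochs=200)
# Codes that appear in similar contexts to E11 (type 2 diabetes).
print(model.wv.most_similar("E11", topn=2))
```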

Comparison between Word Embedding Techniques in Traditional Korean Medicine for Data Analysis: Implementation of a Natural Language Processing Method

  • Oh, Junho
    • Journal of Korean Medical Classics / v.32 no.1 / pp.61-74 / 2019
  • Objectives: The purpose of this study is to help select an appropriate word embedding method when analyzing East Asian traditional medicine texts as data. Methods: Based on prescription data that reflect traditional methods in East Asian medicine, we examined four count-based and two prediction-based word embedding methods. To compare these methods intuitively, we proposed a "prescription generating game" and compared its results with those from applying the six methods. Results: When the nearest vectors are extracted, the count-based methods derive the main herbs that are frequently used together. The prediction-based methods, on the other hand, derive synonyms of the herbs. Conclusions: Count-based word embedding methods seem more effective than prediction-based methods for analyzing how herbs are used in combination. Among the count-based methods, the TF vector tends to exaggerate the frequency effect, so the TF-IDF vector or co-word vector may be a more reasonable choice, and the t-score vector may be recommended when searching for unusual information that frequency alone cannot reveal. Prediction-based embedding, on the other hand, seems effective for deriving herbs that play similar roles in context.
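
As an illustration of the count-based side of this comparison, the sketch below builds co-word (co-occurrence) vectors from a few toy prescriptions and retrieves the herbs most similar to a query herb by cosine similarity. The herb names, the prescriptions, and the use of plain co-occurrence counts (rather than TF-IDF or t-score weighting) are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

# Toy prescriptions (lists of herbs); names are illustrative only.
prescriptions = [
    ["ginseng", "licorice", "ginger", "jujube"],
    ["ginseng", "licorice", "atractylodes"],
    ["ephedra", "cinnamon", "licorice", "apricot_kernel"],
    ["cinnamon", "licorice", "ginger", "jujube"],
]

herbs = sorted({h for p in prescriptions for h in p})
idx = {h: i for i, h in enumerate(herbs)}

# Symmetric co-occurrence (co-word) matrix: one row per herb is its embedding.
co = np.zeros((len(herbs), len(herbs)))
for p in prescriptions:
    for a, b in combinations(set(p), 2):
        co[idx[a], idx[b]] += 1
        co[idx[b], idx[a]] += 1

def nearest(herb, k=3):
    """Cosine similarity between co-word vectors (count-based embedding)."""
    v = co[idx[herb]]
    sims = co @ v / (np.linalg.norm(co, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[idx[herb]] = -1            # exclude the query herb itself
    return [herbs[i] for i in np.argsort(sims)[::-1][:k]]

print(nearest("licorice"))
```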

Text Classification Using Parallel Word-level and Character-level Embeddings in Convolutional Neural Networks

  • Geonu Kim; Jungyeon Jang; Juwon Lee; Kitae Kim; Woonyoung Yeo; Jong Woo Kim
    • Asia Pacific Journal of Information Systems / v.29 no.4 / pp.771-788 / 2019
  • Deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) show superior performance in text classification compared to traditional approaches such as Support Vector Machines (SVMs) and Naïve Bayes. When CNNs are used for text classification, word embedding or character embedding is the step that transforms words or characters into fixed-size vectors before feeding them into the convolutional layers. In this paper, we propose a parallel word-level and character-level embedding approach for CNN-based text classification. The proposed approach captures word-level and character-level patterns concurrently. To show its usefulness, we perform experiments on two English and three Korean text datasets. The results show that character-level embedding works better for Korean and word-level embedding performs well for English. They also reveal that the proposed approach outperforms traditional CNNs with word-level or character-level embedding alone on both Korean and English documents. A more detailed investigation shows that the proposed approach tends to perform better than the traditional embedding approaches when relatively little data is available.
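
A minimal Keras sketch of a parallel architecture of this kind is shown below: one embedding-plus-convolution branch over word indices and one over character indices, concatenated before the classifier. Vocabulary sizes, sequence lengths, filter counts, and kernel sizes are placeholder assumptions, not the configuration used in the paper.

```python
from tensorflow.keras import layers, Model

VOCAB_WORDS, VOCAB_CHARS = 10000, 100   # hypothetical vocabulary sizes
WORD_LEN, CHAR_LEN = 100, 500           # hypothetical sequence lengths

# Word-level branch
w_in = layers.Input(shape=(WORD_LEN,), name="word_ids")
w = layers.Embedding(VOCAB_WORDS, 128)(w_in)
w = layers.Conv1D(64, 5, activation="relu")(w)
w = layers.GlobalMaxPooling1D()(w)

# Character-level branch
c_in = layers.Input(shape=(CHAR_LEN,), name="char_ids")
c = layers.Embedding(VOCAB_CHARS, 32)(c_in)
c = layers.Conv1D(64, 7, activation="relu")(c)
c = layers.GlobalMaxPooling1D()(c)

# Concatenate the two branches so word- and character-level patterns are
# captured in parallel, then classify (binary classification assumed here).
x = layers.concatenate([w, c])
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs=[w_in, c_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```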

Sentence model based subword embeddings for a dialog system

  • Chung, Euisok; Kim, Hyun Woo; Song, Hwa Jeon
    • ETRI Journal / v.44 no.4 / pp.599-612 / 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
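
The paper's model (a subword-based skip-thought with self-attention and relative position encoding) is beyond a short sketch, but the subword idea it builds on can be illustrated with gensim's FastText, which composes each word vector from character n-grams so that unseen words still receive embeddings. The toy utterances and hyperparameters below are assumptions.

```python
from gensim.models import FastText

# Small toy corpus of dialog-like utterances.
sentences = [
    ["book", "a", "table", "for", "two"],
    ["cancel", "my", "booking"],
    ["reserve", "a", "table", "tomorrow"],
    ["what", "time", "is", "my", "reservation"],
]

# Character n-grams (min_n..max_n) give each word a subword-composed vector,
# so even an unseen word such as "bookings" receives an embedding.
model = FastText(sentences, vector_size=32, window=3, min_count=1,
                 min_n=3, max_n=5, epochs=100)
print(model.wv.most_similar("booking", topn=3))
print("bookings" in model.wv.key_to_index, model.wv["bookings"].shape)  # OOV still embeddable
```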

High capacity multi-bit data hiding based on modified histogram shifting technique

  • Sivasubramanian, Nandhini; Konganathan, Gunaseelan; Rao, Yeragudipati Venkata Ramana
    • ETRI Journal / v.40 no.5 / pp.677-686 / 2018
  • A novel data hiding technique based on modified histogram shifting that incorporates multi-bit secret data hiding is proposed. The proposed technique divides the image pixel values into embeddable and nonembeddable pixel values. Embeddable pixel values are those that are within a specified limit interval surrounding the peak value of an image. The limit interval is calculated from the number of secret bits to be embedded into each embeddable pixel value. The embedded secret bits can be perfectly extracted from the stego image at the receiver side without any overhead bits. From the simulation, it is found that the proposed technique produces a better quality stego image compared to other data hiding techniques, for the same embedding rate. Since the proposed technique only embeds the secret bits in a limited number of pixel values, the change in the visual quality of the stego image is negligible when compared to other data hiding techniques.
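
For context, the sketch below implements the classic single-bit histogram-shifting scheme (peak and zero points, shift by one, embed in peak-valued pixels) that modified multi-bit, limit-interval methods like this one build on; the paper's own variant is not reproduced here. The toy image and the assumption that the zero point lies above the peak are simplifications.

```python
import numpy as np

def hs_embed(img, bits):
    """Classic single-bit histogram shifting: shift the histogram bins between
    the peak and zero points up by one, then embed one bit into every pixel
    that equals the peak value (0 -> stays at peak, 1 -> peak + 1)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    zero = int(np.argmin(hist[peak + 1:]) + peak + 1)   # assumes an empty bin above the peak
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1               # empty the bin next to the peak
    carriers = np.flatnonzero(img.ravel() == peak)      # capacity = number of peak pixels
    assert len(carriers) >= len(bits), "message longer than embedding capacity"
    flat = out.ravel()
    for pos, bit in zip(carriers, bits):
        flat[pos] += bit
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    """Read the bits from the peak / peak+1 bins, then shift back to restore the cover."""
    flat = stego.astype(np.int32).ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[carriers] == peak + 1).astype(int)
    flat[(flat > peak) & (flat <= zero)] -= 1            # undo the shift, restoring the cover
    return bits, flat.reshape(stego.shape).astype(np.uint8)

rng = np.random.default_rng(1)
cover = rng.integers(90, 160, size=(64, 64), dtype=np.uint8)
message = rng.integers(0, 2, size=50)
stego, peak, zero = hs_embed(cover, message)
recovered, restored = hs_extract(stego, peak, zero, len(message))
print((recovered == message).all(), (restored == cover).all())   # -> True True
```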