• Title/Abstract/Keyword: Embedding Techniques

Search results: 144 items

The Performance Analysis of Digital Watermarking based on Merging Techniques

  • Ariunzaya, Batgerel;Chu, Hyung-Suk;An, Chong-Koo
    • 융합신호처리학회논문지 / Vol. 12 No. 3 / pp.176-180 / 2011
  • Although the embedding and extraction algorithms are the central issue in digital watermarking, watermark selection and post-processing also offer opportunities to improve those algorithms and achieve higher performance. For this reason, this paper summarizes possible improvements to digital watermarking obtained from watermark merging techniques rather than from the embedding and extraction algorithms themselves. We chose Cox's function as the main embedding and extraction algorithm and multiple barcode watermarks as the watermark. Each bit of the multiple copies of the barcode watermark was embedded into a gray-scale image with Cox's embedding function. After extracting the watermark copies, we applied the merging techniques, including simple merging, N-step iterated merging, recover merging, and a combination of iterated and recover merging. The main result of the paper is to show how multiple barcode watermarks and merging techniques can improve the performance of the algorithm.
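
The abstract above centers on merging the extracted watermark copies rather than on the embedding rule itself. As a rough illustration, the sketch below implements a simple majority-vote merge over several extracted bit arrays; the paper does not spell out its simple, iterated, or recover merging rules, so the voting logic, array shapes, and toy data here are assumptions.

```python
import numpy as np

def simple_merge(copies):
    """Majority-vote merge of several extracted watermark bit arrays.

    `copies` is a list of equal-length 0/1 arrays, each a possibly
    corrupted extraction of the same barcode watermark.
    """
    stacked = np.stack(copies)                    # (num_copies, num_bits)
    votes = stacked.sum(axis=0)                   # number of 1-votes per bit
    return (2 * votes >= len(copies)).astype(np.uint8)

# toy usage: three noisy extractions of an 8-bit watermark
true_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = [np.where(rng.random(8) < 0.2, 1 - true_bits, true_bits) for _ in range(3)]
print(simple_merge(noisy))
```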

내성을 강화한 data embedding기법 (Enhanced robust data embedding techniques)

  • 정인식;권오진
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(4) / pp.247-250 / 2002
  • Data embedding has recently become important for protecting authorship. In this paper, we propose a robust data embedding technique for images. Our technique is based on the convolution between a message image and a random phase carrier. We add extra bits to the carrier image to improve the precision of the detection rate, and we use block-by-block cyclic correlation to compensate for distortion. Experiments show that the proposed algorithm is robust to StirMark 3.1 attacks.
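
As a rough sketch of the carrier-based spreading idea this abstract describes, the code below hides one bit per 8x8 block by adding a bit-modulated pseudo-random carrier and detects each bit from the sign of the block-wise correlation with that carrier. This is a simplification under assumed parameters (block size, strength, and a non-blind detector); the paper's convolution with a random phase carrier and its distortion-compensating cyclic correlation are not reproduced.

```python
import numpy as np

BLOCK = 8

def embed_blocks(cover, bits, strength=3.0, seed=7):
    """Hide one bit per BLOCKxBLOCK tile by adding a +/- pseudo-random carrier."""
    carrier = np.random.default_rng(seed).choice([-1.0, 1.0], size=(BLOCK, BLOCK))
    stego = cover.astype(float).copy()
    per_row = cover.shape[1] // BLOCK
    for i, b in enumerate(bits):
        r, c = (i // per_row) * BLOCK, (i % per_row) * BLOCK
        stego[r:r + BLOCK, c:c + BLOCK] += strength * (1.0 if b else -1.0) * carrier
    return stego, carrier

def detect_blocks(stego, cover, carrier, n_bits):
    """Decide each bit from the sign of the block-wise correlation with the carrier."""
    residual = stego - cover.astype(float)
    per_row = cover.shape[1] // BLOCK
    bits = []
    for i in range(n_bits):
        r, c = (i // per_row) * BLOCK, (i % per_row) * BLOCK
        bits.append(int(np.sum(residual[r:r + BLOCK, c:c + BLOCK] * carrier) > 0))
    return bits

# toy usage on a random gray-scale cover
cover = np.random.default_rng(1).integers(0, 256, size=(64, 64))
stego, carrier = embed_blocks(cover, [1, 0, 1, 1])
print(detect_blocks(stego, cover, carrier, 4))   # -> [1, 0, 1, 1]
```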

A Graph Embedding Technique for Weighted Graphs Based on LSTM Autoencoders

  • Seo, Minji;Lee, Ki Yong
    • Journal of Information Processing Systems / Vol. 16 No. 6 / pp.1407-1423 / 2020
  • A graph is a data structure consisting of nodes and edges between these nodes. Graph embedding generates a low-dimensional vector for a given graph that best represents the characteristics of the graph. Recently, there have been studies on graph embedding, especially using deep learning techniques. However, until now, most deep learning-based graph embedding techniques have focused on unweighted graphs. Therefore, in this paper, we propose a graph embedding technique for weighted graphs based on long short-term memory (LSTM) autoencoders. Given weighted graphs, we traverse each graph to extract node-weight sequences from it. Each node-weight sequence represents a path in the graph consisting of nodes and the weights between these nodes. We then train an LSTM autoencoder on the extracted node-weight sequences and encode each node-weight sequence into a fixed-length vector using the trained LSTM autoencoder. Finally, for each graph, we collect the encoding vectors obtained from the graph and combine them to generate the final embedding vector for the graph. These embedding vectors can be used to classify weighted graphs or to search for similar weighted graphs. Experiments on synthetic and real datasets show that the proposed method is effective in measuring the similarity between weighted graphs.
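
To make the sequence-extraction step concrete, here is a minimal sketch that produces node-weight sequences from a weighted graph via short random walks. The traversal strategy, walk length, and the toy adjacency structure are assumptions; the LSTM autoencoder that the paper trains on such sequences is not shown.

```python
import random

def node_weight_sequences(adj, walks_per_node=2, walk_len=4, seed=0):
    """Extract node-weight sequences by random walks over a weighted graph.

    `adj` maps node -> list of (neighbor, weight). Each returned sequence
    interleaves nodes and edge weights, e.g. [a, w_ab, b, w_bc, c].
    """
    rng = random.Random(seed)
    sequences = []
    for start in adj:
        for _ in range(walks_per_node):
            seq, node = [start], start
            for _ in range(walk_len - 1):
                if not adj[node]:
                    break
                nxt, w = rng.choice(adj[node])
                seq.extend([w, nxt])          # record the weight, then the next node
                node = nxt
            sequences.append(seq)
    return sequences

# toy weighted graph
graph = {"A": [("B", 0.5), ("C", 2.0)], "B": [("A", 0.5)], "C": [("A", 2.0)]}
print(node_weight_sequences(graph))
```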

Preliminary Studies on Embedding Qualitative Reasoning into Qualitative Analysis and Laboratory Simulation

  • Pang, Jen-Sen;Syed Mustapha, S.M.F.D;Mohd.Zain, Sharifuddin
    • 한국지능정보시스템학회:학술대회논문집 / 한국지능정보시스템학회 2001년도 The Pacific Asian Conference on Intelligent Systems 2001 / pp.230-236 / 2001
  • In this paper, we explore the possibility of embedding qualitative reasoning techniques, specifically the Qualitative Process Theory (QPT), in the field of inorganic chemistry, with qualitative chemical analysis and laboratory simulation as the target applications. By embedding such techniques in this educational software, we aim to combine theory and practice in a single package. The system is able to generate reasoning and explanations based on chemical theories, helping students master basic chemistry knowledge as well as practical skills. We also review the suitability of embedding QPT techniques in chemistry in general by comparing examples from both fields.

Impact of Word Embedding Methods on Performance of Sentiment Analysis with Machine Learning Techniques

  • Park, Hoyeon;Kim, Kyoung-jae
    • 한국컴퓨터정보학회논문지 / Vol. 25 No. 8 / pp.181-188 / 2020
  • This study presents a comparative analysis of how different word embedding techniques affect the performance of sentiment analysis. Sentiment analysis is an opinion-mining technique that uses natural language processing to identify and extract subjective information from text documents, and it can be used to classify the sentiment of product reviews or comments. Because sentiment can be classified as positive or negative, it can be treated as an ordinary classification problem, which requires converting the text into a representation a computer can process. Transforming text units such as words or documents into vectors for natural language processing is called word embedding. Various word embedding techniques are in use, including Bag of Words, TF-IDF, and Word2Vec, but little research has examined which word embedding techniques are best suited to sentiment analysis. In this study, we compare the performance of Bag of Words, TF-IDF, and Word2Vec for sentiment analysis of movie reviews, using the IMDB dataset, which is widely used in text mining. The results show that TF-IDF and Bag of Words outperform Word2Vec, and that TF-IDF performs better than Bag of Words, although the difference is not large.
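
A minimal sketch of the kind of comparison the study performs, using scikit-learn's Bag of Words and TF-IDF vectorizers with a logistic regression classifier on toy reviews; the actual study uses the IMDB dataset and also evaluates Word2Vec, and the classifier choice here is an assumption.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# toy stand-in for the IMDB reviews used in the paper
docs = ["great movie, loved it", "terrible plot and bad acting",
        "wonderful performance", "boring and too long",
        "an instant classic", "worst film of the year"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

for name, vec in [("Bag of Words", CountVectorizer()),
                  ("TF-IDF", TfidfVectorizer())]:
    pipe = make_pipeline(vec, LogisticRegression(max_iter=1000))
    acc = cross_val_score(pipe, docs, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```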

보건의료 빅데이터에서의 자연어처리기법 적용방안 연구: 단어임베딩 방법을 중심으로 (A Study on the Application of Natural Language Processing in Health Care Big Data: Focusing on Word Embedding Methods)

  • 김한상;정여진
    • 보건행정학회지 / Vol. 30 No. 1 / pp.15-25 / 2020
  • While healthcare data sets include extensive information about patients, many researchers face limitations in analyzing them due to their intrinsic characteristics such as heterogeneity, longitudinal irregularity, and noise. In particular, since the majority of medical history information is recorded as text codes, the use of such information has been limited by the high dimensionality of the explanatory variables. To address this problem, recent studies applied word embedding techniques, originally developed for natural language processing, and obtained positive results in terms of dimensionality reduction and the accuracy of prediction models. This paper reviews deep learning-based natural language processing techniques (word embedding) and summarizes research cases that have used those techniques in the health care field. We then propose a research framework for applying deep learning-based natural language processing to the analysis of domestic health insurance data.
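
As an illustration of the core idea behind such frameworks (treating a patient's sequence of medical codes like a sentence and embedding the codes), the sketch below trains a skip-gram Word2Vec model on hypothetical visit histories with gensim; the codes, sequences, and hyperparameters are toy assumptions, not data or settings from the paper.

```python
from gensim.models import Word2Vec

# hypothetical patient histories expressed as sequences of diagnosis codes
patients = [
    ["E11", "I10", "E78"],      # diabetes, hypertension, dyslipidemia
    ["I10", "I25", "E78"],
    ["J45", "J30"],
    ["E11", "E78", "I25"],
]

model = Word2Vec(sentences=patients, vector_size=16, window=3,
                 min_count=1, sg=1, epochs=200, seed=1)
print(model.wv.most_similar("E11", topn=2))   # codes that co-occur with diabetes
```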

한의학 고문헌 데이터 분석을 위한 단어 임베딩 기법 비교: 자연어처리 방법을 적용하여 (Comparison between Word Embedding Techniques in Traditional Korean Medicine for Data Analysis: Implementation of a Natural Language Processing Method)

  • 오준호
    • 대한한의학원전학회지 / Vol. 32 No. 1 / pp.61-74 / 2019
  • Objectives: The purpose of this study is to help select an appropriate word embedding method when analyzing East Asian traditional medicine texts as data. Methods: Based on prescription data that reflect traditional practice in East Asian medicine, we examined four count-based and two prediction-based word embedding methods. To compare these word embedding methods intuitively, we proposed a "prescription generating game" and compared its results with those obtained by applying the six methods. Results: When adjacent vectors are extracted, the count-based word embedding methods derive the main herbs that are frequently used in conjunction with each other. In contrast, the prediction-based word embedding methods derive synonyms of the herbs. Conclusions: Count-based word embedding methods appear more effective than prediction-based methods for analyzing how herbs are used together. Among the count-based methods, the TF vector tends to exaggerate the frequency effect, so the TF-IDF vector or the co-word vector may be a more reasonable choice. The t-score vector may be recommended when searching for unusual information that frequency alone cannot reveal. On the other hand, prediction-based embedding appears effective for deriving terms with similar meanings in context.
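
To show what a count-based "co-word" vector looks like in practice, the sketch below builds herb-by-herb co-occurrence vectors from toy prescriptions and retrieves the nearest herbs by cosine similarity; the herb names and prescriptions are illustrative only, and the TF, TF-IDF, t-score, and prediction-based variants the paper compares are not reproduced.

```python
from itertools import combinations
import numpy as np

# toy prescriptions: each is a set of herb names (romanized, illustrative only)
prescriptions = [
    ["gamcho", "insam", "baekchul", "bokryeong"],
    ["gamcho", "insam", "daechu", "saenggang"],
    ["gamcho", "baekchul", "bokryeong", "gyeji"],
    ["insam", "daechu", "saenggang"],
]

herbs = sorted({h for p in prescriptions for h in p})
index = {h: i for i, h in enumerate(herbs)}

# count-based "co-word" vectors: herb x herb co-occurrence counts
co = np.zeros((len(herbs), len(herbs)))
for p in prescriptions:
    for a, b in combinations(set(p), 2):
        co[index[a], index[b]] += 1
        co[index[b], index[a]] += 1

def nearest(herb, k=2):
    """Herbs whose co-occurrence vectors are most similar (cosine) to `herb`."""
    v = co[index[herb]]
    sims = co @ v / (np.linalg.norm(co, axis=1) * np.linalg.norm(v) + 1e-9)
    order = np.argsort(-sims)
    return [herbs[i] for i in order if herbs[i] != herb][:k]

print(nearest("gamcho"))
```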

Text Classification Using Parallel Word-level and Character-level Embeddings in Convolutional Neural Networks

  • Geonu Kim;Jungyeon Jang;Juwon Lee;Kitae Kim;Woonyoung Yeo;Jong Woo Kim
    • Asia Pacific Journal of Information Systems / Vol. 29 No. 4 / pp.771-788 / 2019
  • Deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) show superior performance in text classification compared to traditional approaches such as Support Vector Machines (SVMs) and Naïve Bayesian approaches. When using CNNs for text classification, word embedding or character embedding is the step that transforms words or characters into fixed-size vectors before feeding them into the convolutional layers. In this paper, we propose a parallel word-level and character-level embedding approach for CNNs in text classification. The proposed approach can capture word-level and character-level patterns concurrently. To show the usefulness of the proposed approach, we perform experiments with two English and three Korean text datasets. The experimental results show that character-level embedding works better for Korean and word-level embedding performs well for English. The results also reveal that the proposed approach outperforms traditional CNNs with word-level or character-level embedding alone on both Korean and English documents. From a more detailed investigation, we find that the proposed approach tends to perform better than the traditional embedding approaches when there is a relatively small amount of data.
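
A minimal sketch of the parallel architecture described above, written with tf.keras: two embedding-plus-convolution branches, one over word indices and one over character indices, concatenated before the classifier. Vocabulary sizes, sequence lengths, filter counts, and kernel sizes are assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, Model

# illustrative sizes; the paper's vocabulary and filter settings are not given
WORD_VOCAB, CHAR_VOCAB = 20000, 100
WORD_LEN, CHAR_LEN = 100, 500

word_in = layers.Input(shape=(WORD_LEN,), name="word_ids")
char_in = layers.Input(shape=(CHAR_LEN,), name="char_ids")

# parallel embedding branches: word-level and character-level
w = layers.Embedding(WORD_VOCAB, 128)(word_in)
w = layers.Conv1D(64, 5, activation="relu")(w)
w = layers.GlobalMaxPooling1D()(w)

c = layers.Embedding(CHAR_VOCAB, 32)(char_in)
c = layers.Conv1D(64, 7, activation="relu")(c)
c = layers.GlobalMaxPooling1D()(c)

merged = layers.Concatenate()([w, c])        # combine both views of the text
out = layers.Dense(2, activation="softmax")(merged)

model = Model([word_in, char_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```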

Sentence model based subword embeddings for a dialog system

  • Chung, Euisok;Kim, Hyun Woo;Song, Hwa Jeon
    • ETRI Journal / Vol. 44 No. 4 / pp.599-612 / 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
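
To ground the term "subword embedding", here is a small sketch of the FastText-style idea of composing a word vector from character n-gram vectors; the random vector table, n-gram range, and averaging rule are assumptions, and the paper's skip-thought sentence model, self-attention, and relative position encoding are not shown.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, FastText-style."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

# hypothetical subword vector table (would be learned by skip-gram in practice)
rng = np.random.default_rng(0)
DIM = 8
table = {}

def subword_vector(word):
    """Word vector as the mean of its subword vectors, the core subword idea."""
    grams = char_ngrams(word)
    for g in grams:
        table.setdefault(g, rng.standard_normal(DIM))
    return np.mean([table[g] for g in grams], axis=0)

print(char_ngrams("dialog"))
print(subword_vector("dialog").shape)
```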

High capacity multi-bit data hiding based on modified histogram shifting technique

  • Sivasubramanian, Nandhini;Konganathan, Gunaseelan;Rao, Yeragudipati Venkata Ramana
    • ETRI Journal / Vol. 40 No. 5 / pp.677-686 / 2018
  • A novel data hiding technique based on modified histogram shifting that incorporates multi-bit secret data hiding is proposed. The proposed technique divides the image pixel values into embeddable and nonembeddable pixel values. Embeddable pixel values are those that are within a specified limit interval surrounding the peak value of an image. The limit interval is calculated from the number of secret bits to be embedded into each embeddable pixel value. The embedded secret bits can be perfectly extracted from the stego image at the receiver side without any overhead bits. From the simulation, it is found that the proposed technique produces a better quality stego image compared to other data hiding techniques, for the same embedding rate. Since the proposed technique only embeds the secret bits in a limited number of pixel values, the change in the visual quality of the stego image is negligible when compared to other data hiding techniques.
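
For context, the sketch below implements the classic single-bit histogram-shifting embedder that the paper modifies: find the histogram peak and a little-used level to its right, shift the in-between gray levels by one to free the slot next to the peak, and write one bit into each peak-valued pixel. The multi-bit limit-interval extension described in the abstract is not reproduced, and the toy image and variable names are assumptions.

```python
import numpy as np

def hs_embed(img, bits):
    """Classic single-bit histogram-shifting embedding (simplified).

    Bit 1 moves a peak-valued pixel to peak+1; bit 0 leaves it at the peak.
    """
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                          # most frequent gray level
    zero = peak + 1 + int(hist[peak + 1:].argmin())    # least used level right of the peak
    img[(img > peak) & (img < zero)] += 1              # shift to free the slot peak+1
    flat = img.ravel()
    k = 0
    for i in range(flat.size):
        if k == len(bits):
            break
        if flat[i] == peak:
            flat[i] += bits[k]
            k += 1
    return flat.reshape(img.shape), peak, zero

# toy usage: values kept below 200 so an empty level exists to the right of the peak
cover = np.random.default_rng(1).integers(0, 200, size=(64, 64))
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1])
print(peak, zero)
```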