• Title/Summary/Keyword: Embedding Techniques


Selective Shuffling for Hiding Hangul Messages in Steganography (스테가노그래피에서 한글 메시지 은닉을 위한 선택적 셔플링)

  • Ji, Seon-su
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.3
    • /
    • pp.211-216
    • /
    • 2022
  • Steganography conceals the very existence of hidden information by embedding a secret message at specific locations in a cover medium. Security and robustness are strengthened by applying hybrid methods that combine encryption with steganography, and techniques that increase randomness and unpredictability are particularly needed to improve security. Shuffling methods applied on top of the discrete cosine transform (DCT) and the least significant bit (LSB) remain an area that needs further study. This paper proposes a new approach that hides the bit information of Hangul messages by integrating a selective shuffling method, which adds complexity to message hiding, with a spatial-domain embedding technique; inverse shuffling is applied when extracting the message. The Hangul message to be embedded is decomposed into its choseong, jungseong, and jongseong (initial, medial, and final jamo), and a selective shuffling process based on this information improves security and randomness. The correlation coefficient and PSNR were used to evaluate the proposed method, and the PSNR values of the proposed method were confirmed to be appropriate compared with the reference value.
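
  A hypothetical Python sketch of the spatial-domain idea described above: a Hangul syllable is decomposed into choseong, jungseong, and jongseong indices, the indices are serialized to bits, and the bits are written into the LSB plane of a cover image in a keyed (shuffled) pixel order. The paper's selective shuffling rules, DCT variant, and parameters are not reproduced; names such as embed_lsb are illustrative.

      import random
      import numpy as np

      def decompose(syllable):
          # split a precomposed Hangul syllable (U+AC00..U+D7A3) into jamo indices
          code = ord(syllable) - 0xAC00
          cho, rest = divmod(code, 21 * 28)
          jung, jong = divmod(rest, 28)
          return cho, jung, jong                          # 19 x 21 x 28 combinations

      def to_bits(values, widths=(5, 5, 5)):
          # serialize the three jamo indices as fixed-width bit strings
          return [int(b) for v, w in zip(values, widths) for b in format(v, f"0{w}b")]

      def embed_lsb(cover, bits, key=42):
          # write bits into the LSBs of pixels visited in a keyed (shuffled) order
          stego = cover.copy().ravel()
          order = list(range(stego.size))
          random.Random(key).shuffle(order)               # keyed permutation stands in for shuffling
          for bit, pos in zip(bits, order):
              stego[pos] = (stego[pos] & 0xFE) | bit      # overwrite least significant bit
          return stego.reshape(cover.shape)

      cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy grayscale cover
      stego = embed_lsb(cover, to_bits(decompose("한")))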

Spam Image Detection Model based on Deep Learning for Improving Spam Filter

  • Seong-Guk Nam;Dong-Gun Lee;Yeong-Seok Seo
    • Journal of Information Processing Systems
    • /
    • v.19 no.3
    • /
    • pp.289-301
    • /
    • 2023
  • With the development and spread of modern technology, anyone can easily communicate through services such as social network services (SNS) on a personal computer (PC) or smartphone. These technologies have brought many benefits, but they have also introduced problems, one of which is spam. Spam refers to unwanted or unsolicited information delivered to unspecified users. Continuous exposure to such information inconveniences users, and if filtering is not performed correctly, the quality of the service deteriorates. Recently, spammers have been creating more malicious spam by distorting images of spam text so that optical character recognition (OCR)-based spam filters cannot easily detect it. Fortunately, the level of transformation of image spam circulating on social media is not yet serious. In mail systems, however, spammers (those who send spam) have applied various modifications to spam images to neutralize OCR, so the same situation can arise with spam images on social media. Spammers interfere with OCR through geometric transformations such as image distortion, noise addition, and blurring. Various techniques have been studied to filter image spam, but methods of evading image-spam identification with obfuscated images are also developing continuously. In this paper, we propose a deep learning-based spam image detection model to improve on existing OCR-based spam image detection and compensate for its vulnerabilities. The proposed model extracts text features and image features from the input image using four sub-models. First, an OCR-based text model extracts text-related features, a flag indicating whether the image contains spam words, and a word embedding vector. Then, a convolutional neural network-based image model extracts an image-obfuscation indicator and an image feature vector. A final classifier determines from the extracted features whether the image is spam. In an F1-score evaluation, the proposed model performed about 14 points higher than OCR-based spam image detection.
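
  A minimal PyTorch sketch of the late-fusion idea outlined above, assuming one branch for OCR-derived text features (a word-embedding vector plus a spam-word flag) and one CNN branch for image features, joined by a final classifier. Layer sizes and the two-branch simplification (the paper uses four sub-models) are illustrative assumptions, not the paper's architecture.

      import torch
      import torch.nn as nn

      class SpamImageClassifier(nn.Module):
          def __init__(self, text_dim=301, img_channels=3):
              super().__init__()
              # text branch: OCR word-embedding vector (300) + spam-word indicator (1)
              self.text_branch = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU())
              # image branch: small CNN standing in for the paper's image sub-models
              self.image_branch = nn.Sequential(
                  nn.Conv2d(img_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(),
              )
              # fusion classifier: spam (1) vs. non-spam (0)
              self.classifier = nn.Sequential(nn.Linear(64 + 32, 32), nn.ReLU(), nn.Linear(32, 2))

          def forward(self, text_feat, image):
              fused = torch.cat([self.text_branch(text_feat), self.image_branch(image)], dim=1)
              return self.classifier(fused)

      model = SpamImageClassifier()
      logits = model(torch.randn(4, 301), torch.randn(4, 3, 64, 64))   # batch of 4 images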

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information increases rapidly, efficient retrieval of multimedia data is becoming an issue of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaning of multimedia content. Existing retrieval methods for multimedia data are annotation-based, feature-based, or integrate annotation and features. These systems demand a great deal of effort and time from annotators and require complicated computation for feature extraction. In addition, the created data support only static searches that do not change, and user-friendly, semantic search techniques are not provided. This paper presents S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data, and it is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data is organized according to a multimedia description scheme defined with XML Schema that complies with the MPEG-7 standard. Consequently, the proposed scheme can be implemented easily on any multimedia platform that supports XML technology. It can be used to share semantic metadata efficiently between systems, and it contributes to improving retrieval correctness and user satisfaction in embedding-based multimedia retrieval.
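
  A hedged sketch of generating an MPEG-7-style annotation as XML with Python's ElementTree. The element names (Mpeg7, Description, MultimediaContent, Video, MediaLocator, MediaUri, TextAnnotation, FreeTextAnnotation) follow common MPEG-7 usage; the exact description scheme, attributes, and URIs used by S-MARS are assumptions.

      import xml.etree.ElementTree as ET

      NS = "urn:mpeg:mpeg7:schema:2001"          # standard MPEG-7 namespace
      ET.register_namespace("", NS)

      mpeg7 = ET.Element(f"{{{NS}}}Mpeg7")
      desc = ET.SubElement(mpeg7, f"{{{NS}}}Description")
      content = ET.SubElement(desc, f"{{{NS}}}MultimediaContent")
      video = ET.SubElement(content, f"{{{NS}}}Video")

      # point at the media resource (URI is a placeholder)
      locator = ET.SubElement(video, f"{{{NS}}}MediaLocator")
      ET.SubElement(locator, f"{{{NS}}}MediaUri").text = "http://example.org/videos/lecture01.mp4"

      # attach a free-text semantic annotation to the video
      annotation = ET.SubElement(video, f"{{{NS}}}TextAnnotation")
      ET.SubElement(annotation, f"{{{NS}}}FreeTextAnnotation").text = "A person enters the lecture room"

      print(ET.tostring(mpeg7, encoding="unicode"))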

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been used in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy, and there has been continuous demand in many fields for market information at the specific product level. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making it difficult to obtain specific and suitable information. In this regard, we propose a new methodology that estimates the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows: first, product information data is collected, refined, and restructured into a form suitable for the Word2Vec model; next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity; finally, the sales data of the extracted products is summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of the market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application because it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec, and the product group clustering method can be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
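
  A simplified gensim sketch of the bottom-up estimation described above: tokenized product names are embedded with Word2Vec, products similar to a KSIC index word are pulled in by cosine similarity, and their sales are summed as the market size. The corpus, sales figures, and threshold are toy assumptions; only the vector dimension (300) and window size (15) mirror the paper's optimized settings.

      from gensim.models import Word2Vec

      corpus = [["wireless", "earbuds"], ["bluetooth", "earbuds"], ["gaming", "mouse"],
                ["wireless", "mouse"], ["usb", "cable"]]            # tokenized product names
      sales = {"wireless earbuds": 120, "bluetooth earbuds": 80,     # sales per product (assumed units)
               "gaming mouse": 40, "wireless mouse": 30, "usb cable": 10}

      # vector_size=300 and window=15 follow the paper; the rest is a toy configuration
      model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, sg=1, epochs=50)

      def estimate_market_size(index_word, threshold=0.3):
          # sum the sales of products whose name tokens are similar to the KSIC index word
          total = 0.0
          for name, amount in sales.items():
              sim = max(model.wv.similarity(index_word, t) for t in name.split() if t in model.wv)
              if sim >= threshold:
                  total += amount
          return total

      print(estimate_market_size("earbuds"))

  Raising or lowering the threshold widens or narrows the product group, which is how the market-category level is adjusted in the approach described above.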

A Hybrid Collaborative Filtering-based Product Recommender System using Search Keywords (검색 키워드를 활용한 하이브리드 협업필터링 기반 상품 추천 시스템)

  • Lee, Yunju;Won, Haram;Shim, Jaeseung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.151-166
    • /
    • 2020
  • A recommender system recommends products or services that best match each customer's preferences using statistical or machine learning techniques. Collaborative filtering (CF) is the most commonly used algorithm for implementing recommender systems. However, in most cases it uses only purchase history or customer ratings, even though customers provide a great deal of other usable data. E-commerce customers frequently use the search function to find products of interest among the vast array of products offered, and such search keyword data can be a very useful source of information for modeling customer preferences; nevertheless, it is rarely used as an information source for recommender systems. In this paper, we propose a novel hybrid CF model based on the Doc2Vec algorithm that uses the search keywords and purchase histories of online shopping mall customers. To validate the applicability of the proposed model, we empirically tested its performance using real-world online shopping mall data from Korea. As the number of recommended products increases, the recommendation performance of the proposed CF (a hybrid CF based on customers' search keywords) improves, whereas the performance of conventional CF gradually decreases. We therefore find that search keyword data effectively represents customer preferences and can contribute to improving conventional CF recommender systems.
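
  A rough sketch of the search-keyword idea described above, assuming gensim's Doc2Vec: each customer's search keywords form one tagged document, customer similarity is taken from the learned document vectors, and products bought by the most similar customers are recommended. The data, sizes, and the neighborhood-based simplification are illustrative; the paper's full hybrid design is not reproduced.

      from gensim.models.doc2vec import Doc2Vec, TaggedDocument

      search_logs = {                      # customer_id -> search keywords (assumed data)
          "c1": ["running", "shoes", "socks"],
          "c2": ["running", "shoes", "watch"],
          "c3": ["laptop", "mouse", "keyboard"],
      }
      purchases = {"c1": {"trail shoes"}, "c2": {"sports watch"}, "c3": {"gaming mouse"}}

      docs = [TaggedDocument(words, [cid]) for cid, words in search_logs.items()]
      model = Doc2Vec(docs, vector_size=50, window=3, min_count=1, epochs=100)

      def recommend(target, k=1):
          # recommend products purchased by the k customers most similar to `target`
          neighbors = model.dv.most_similar(target, topn=k)       # cosine similarity on doc vectors
          items = set()
          for cid, _score in neighbors:
              items |= purchases[cid] - purchases[target]          # exclude already-bought items
          return items

      print(recommend("c1"))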

FPGA Mapping Incorporated with Multiplexer Tree Synthesis (멀티플렉서 트리 합성이 통합된 FPGA 매핑)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.4
    • /
    • pp.37-47
    • /
    • 2016
  • The practical constraints of commercial FPGAs, whose slices contain dedicated wide-function multiplexers, are incorporated into one of the most advanced FPGA mapping algorithms, which is based on the AIG (And-Inverter Graph), one of the best logic representations in academia. As the first step of the mapping process, cuts are enumerated as intermediate structures, and the cuts that can be mapped to the multiplexers are then recognized. Without any increase in complexity, the delay and area of multiplexers as well as LUTs are calculated after checking the requirements for tree construction, such as symmetry and depth limits, against the dynamically changing mapping of neighboring nodes. In addition, the root positions of multiplexer trees are identified from the RTL code and annotated on the AIG as AOs (Auxiliary Outputs). A new AIG embedding the multiplexer tree structures, intentionally synthesized by Shannon expansion at the AOs, is overlapped with the optimized AIG. A lossless synthesis technique employing FRAIGs (Functionally Reduced AIGs) is applied in this approach. The proposed approach and techniques are validated by implementing and applying them to two RISC processor examples, yielding a 13~30% area reduction and up to a 32% delay reduction. The research will be extended to take into account the constraints on the dedicated hardware for carry chains.
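
  An illustrative Python sketch (not the paper's mapping algorithm) of the Shannon expansion used to synthesize a multiplexer tree at an AO: f = x ? f|x=1 : f|x=0, applied recursively over a chosen variable order, with trivial multiplexers collapsed. The function is given as a truth-table callable and the result is a nested (select, high, low) tuple standing in for a MUX tree.

      def shannon_mux_tree(f, variables, assignment=None):
          # recursively expand f over `variables`, returning a nested MUX description
          assignment = assignment or {}
          if not variables:
              return f(assignment)                                 # leaf: constant 0 or 1
          x, rest = variables[0], variables[1:]
          high = shannon_mux_tree(f, rest, {**assignment, x: 1})   # cofactor f|x=1
          low = shannon_mux_tree(f, rest, {**assignment, x: 0})    # cofactor f|x=0
          if high == low:                                          # trivial MUX: both inputs equal
              return high
          return (x, high, low)                                    # 2:1 MUX selected by x

      # example: f(a, b, c) = a*b + c, expanded into a small MUX tree
      tree = shannon_mux_tree(lambda v: (v["a"] & v["b"]) | v["c"], ["a", "b", "c"])
      print(tree)   # ('a', ('b', 1, ('c', 1, 0)), ('c', 1, 0))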

Developing a Portable Intelligent Projection System (휴대형 지능형 프로젝션 시스템 개발)

  • Park, Han-Hoon;Seo, Byung-Kuk;Jin, Yoon-Jong;Oh, Ji-Hyun;Park, Jong-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.4 s.316
    • /
    • pp.26-34
    • /
    • 2007
  • An intelligent projection system is a system that displays desired images on an arbitrary screen in an arbitrary environment using a projector, without noticeable image distortion. In recent years, projectors have become widespread and ubiquitous due to their increasing capabilities and declining cost. Moreover, projectors are getting smaller, and handheld projectors are emerging. Thanks to these advances, demand for intelligent projection systems has increased significantly, and this demand has led to remarkable progress in the related techniques and technologies. However, there are still some environments (mainly dynamic ones) that intelligent projection systems cannot handle, and these have limited their application area. This paper identifies such environments (e.g., a specular screen and a dynamic screen) and proposes effective solutions for them (i.e., multiple overlapping projectors and complementary pattern embedding). The usefulness of the solutions is verified through experimental results and a user evaluation. Note that the environments are considered independently rather than simultaneously, because they cannot be handled simultaneously by simply combining the solutions for each; a totally different solution would be necessary to handle them at the same time. We therefore expect that the proposed methods will largely extend the application area of intelligent projection systems, except for severely arbitrary environments.
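
  A minimal numpy sketch of complementary pattern embedding, one of the solutions named above, under the common formulation that a calibration pattern added to one frame and subtracted from the next averages out for the viewer while a synchronized camera recovers it by differencing. The amplitude, pattern, and image are assumptions; the paper's actual embedding scheme may differ.

      import numpy as np

      def embed_complementary(image, pattern, amplitude=4):
          # return a (frame_plus, frame_minus) pair hiding `pattern` across two frames
          delta = amplitude * (2 * pattern.astype(np.int16) - 1)      # +/- amplitude per pixel
          frame_plus = np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)
          frame_minus = np.clip(image.astype(np.int16) - delta, 0, 255).astype(np.uint8)
          return frame_plus, frame_minus

      def recover_pattern(frame_plus, frame_minus):
          # difference the captured frame pair to expose the embedded pattern
          diff = frame_plus.astype(np.int16) - frame_minus.astype(np.int16)
          return (diff > 0).astype(np.uint8)

      image = np.full((4, 4), 128, dtype=np.uint8)                    # toy projected image
      pattern = np.random.randint(0, 2, (4, 4), dtype=np.uint8)       # toy binary pattern
      fp, fm = embed_complementary(image, pattern)
      assert np.array_equal(recover_pattern(fp, fm), pattern)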

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.441-448
    • /
    • 2019
  • Previous research on automatic word spacing of Korean sentences has corrected spacing errors by using n-gram-based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose end-to-end automatic word spacing using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the syllable level rather than the word level. For the contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing on Korean sentences, the fixed-length contextual vector produced by the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For the tag classification step, we construct three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare the models, we measure automatic word spacing performance with each of the three classification networks. Among them, the linear-chain CRF shows better performance than the other models. We used the KCC150 corpus as training and test data.
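
  A compact PyTorch sketch of the syllable-level B/I tagging setup described above: syllable embeddings feed a bidirectional LSTM, and a feedforward head classifies each syllable as B (a space is inserted in front of it) or I. The CRF head of the paper's best model is omitted, and the vocabulary and layer sizes are assumptions.

      import torch
      import torch.nn as nn

      class SpacingTagger(nn.Module):
          def __init__(self, vocab_size, emb_dim=64, hidden=128, num_tags=2):   # tags: B=0, I=1
              super().__init__()
              self.emb = nn.Embedding(vocab_size, emb_dim)
              self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
              self.head = nn.Linear(2 * hidden, num_tags)       # feedforward classification head

          def forward(self, syllable_ids):                      # (batch, seq_len)
              out, _ = self.bilstm(self.emb(syllable_ids))      # (batch, seq_len, 2*hidden)
              return self.head(out)                             # per-syllable tag logits

      model = SpacingTagger(vocab_size=2000)
      logits = model(torch.randint(0, 2000, (8, 20)))           # 8 sentences of 20 syllables
      tags = logits.argmax(-1)                                  # B/I decision per syllable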

Card Transaction Data-based Deep Tourism Recommendation Study (카드 데이터 기반 심층 관광 추천 연구)

  • Hong, Minsung;Kim, Taekyung;Chung, Namho
    • Knowledge Management Research
    • /
    • v.23 no.2
    • /
    • pp.277-299
    • /
    • 2022
  • The massive volume of card transaction data generated in the tourism industry has become an important resource that captures tourists' consumption behaviors and patterns. Developing a smart service system based on such transaction data has become a major goal for both tourism businesses and knowledge-management-system developers. However, the lack of rating scores, which are the basis of traditional recommendation techniques, makes it hard for system designers to evaluate the learning process. In addition, auxiliary factors such as temporal, spatial, and demographic information are needed to increase the performance of a recommender system, but gathering them is not easy in the card transaction context. In this paper, we introduce CTDDTR, a novel approach that uses card transaction data to recommend tourism services. It consists of two main components: i) temporal preference embedding (TE), which represents tourist groups and services as vectors through Doc2Vec; and ii) deep tourism recommendation (DR), which integrates the vectors and the auxiliary factors from a tourism RDF (Resource Description Framework) through an MLP (multi-layer perceptron) to recommend services to tourist groups. In addition, we adopt RFM analysis from the field of knowledge management to generate the explicit feedback (i.e., rating scores) used in the DR component. To evaluate CTDDTR, card transaction data collected over eight years on Jeju Island is used. Experimental results demonstrate the effectiveness and efficiency of the proposed method.
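
  A hedged pandas sketch of the RFM step mentioned above: recency, frequency, and monetary value are computed per tourist group and service, then combined into a pseudo rating that the DR component could use as explicit feedback. The column names, toy records, and rank-based scoring are illustrative assumptions, not the paper's scheme.

      import pandas as pd

      tx = pd.DataFrame({                                   # toy card-transaction records
          "group": ["g1", "g1", "g2", "g2", "g2"],
          "service": ["museum", "museum", "cafe", "museum", "cafe"],
          "date": pd.to_datetime(["2022-01-03", "2022-02-10", "2022-01-20",
                                  "2022-02-01", "2022-02-15"]),
          "amount": [30, 45, 12, 25, 18],
      })

      now = tx["date"].max() + pd.Timedelta(days=1)
      rfm = tx.groupby(["group", "service"]).agg(
          recency=("date", lambda d: (now - d.max()).days),   # days since last transaction
          frequency=("date", "count"),                        # number of transactions
          monetary=("amount", "sum"),                         # total spending
      ).reset_index()

      # rank-normalize each component (recency inverted) and map to a 1..5 pseudo rating
      score = pd.concat([1 - rfm["recency"].rank(pct=True),
                         rfm["frequency"].rank(pct=True),
                         rfm["monetary"].rank(pct=True)], axis=1).mean(axis=1)
      rfm["rating"] = 1 + 4 * score
      print(rfm)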

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong;Junseok Heo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.23 no.4
    • /
    • pp.73-89
    • /
    • 2023
  • This study aims to analyze the frequency of keywords used in Korean abstracts, the unstructured text data of the domestic records management research field, using text mining techniques, and to identify domestic research trends through distance analysis between keywords. To this end, 77,578 keywords extracted from 1,157 articles in the seven journal types (28 titles) retrieved by major category (interdisciplinary studies) and middle category (library and information science) from the institutional statistics (registered and candidate journals) of the Korean Citation Index (KCI) were visualized. Analyses with t-distributed stochastic neighbor embedding (t-SNE) and Scattertext based on Word2vec were performed. As a result, first, it was confirmed that keywords such as "record management" (889 occurrences), "analysis" (888), "archive" (742), "record" (562), and "utilization" (449) were treated as significant topics by researchers. Second, Word2vec generated vector representations of the keywords, and similarity distances were investigated and visualized using t-SNE and Scattertext. In the visualization results, the records management research area was divided into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occurred frequently in the first group (past), while keywords such as "community," "data," "record information service," "online," and "digital archives" garnered substantial focus in the second group (current).
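
  A brief sketch of the keyword-mapping step described above: Word2vec vectors for abstract keywords are projected to two dimensions with t-SNE so that distances between keywords can be inspected. The placeholder corpus, model sizes, and perplexity are assumptions, and the Scattertext visualization is omitted.

      from gensim.models import Word2Vec
      from sklearn.manifold import TSNE

      docs = [["record", "management", "archive"], ["digital", "archive", "community"],
              ["record", "information", "service"], ["standardization", "official", "document"]]
      model = Word2Vec(docs, vector_size=100, window=5, min_count=1, epochs=50)

      words = list(model.wv.index_to_key)
      coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(model.wv[words])
      for word, (x, y) in zip(words, coords):
          print(f"{word}\t{x:.2f}\t{y:.2f}")       # 2-D position of each keyword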