• Title/Summary/Keyword: intelligence embedding


A study on the Extraction of Similar Information using Knowledge Base Embedding for Battlefield Awareness

  • Kim, Sang-Min;Jin, So-Yeon;Lee, Woo-Sin
    • Journal of the Korea Society of Computer and Information / v.26 no.11 / pp.33-40 / 2021
  • As strategies become more advanced and complex, the amount and complexity of information that a commander must analyze keep increasing. An intelligent service that can analyze the battlefield is needed to support the commander's timely judgment. Such a service consists of extracting knowledge from battlefield information, building a knowledge base, and analyzing battlefield information using that knowledge base. This paper extracts information similar to an input query by embedding the knowledge base built in the second step. A transformation model based on the random-walk algorithm is needed to convert the knowledge base into a form suitable for embedding. The transformed information is embedded using Word2Vec, and similar information is extracted through cosine similarity. In the experiments, 980 sentences were generated from an open knowledge base and embedded as 100-dimensional vectors, and it was confirmed that similar entities could be extracted through cosine similarity.
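The sketch below is a rough, hypothetical illustration of the pipeline this abstract describes, not the authors' code: random walks over a toy knowledge graph generate entity "sentences", Word2Vec embeds each entity as a 100-dimensional vector, and cosine similarity retrieves entities similar to a query. The graph contents, walk parameters, and the use of networkx and gensim are all assumptions.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Hypothetical toy knowledge graph; the paper's input is a battlefield knowledge base.
kg = nx.Graph()
kg.add_edges_from([("tank", "armored_unit"), ("armored_unit", "ground_force"),
                   ("radar", "sensor"), ("sensor", "ground_force")])

def random_walks(graph, walks_per_node=10, walk_length=8):
    """Generate entity 'sentences' by walking the graph at random."""
    sentences = []
    for node in graph.nodes:
        for _ in range(walks_per_node):
            walk = [node]
            for _ in range(walk_length - 1):
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            sentences.append(walk)
    return sentences

sentences = random_walks(kg)                                  # analogous to the 980 generated sentences
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
print(model.wv.most_similar("tank", topn=3))                  # cosine-similar entities to the query
```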

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text mining research has focused on second-step applications such as document classification, clustering, and topic modeling. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly applied to a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form that a computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a fixed dimension while maintaining their algebraic properties, and it is used to structure text data. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using the entire corpus contained in the document. As a result, the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so it is difficult to accurately represent a complex document with multiple subjects as a single vector. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis methods; since keyword extraction is not the core subject of the proposed method, we describe the process of applying it to documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors for each document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly represent complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we confirmed that complex documents can be vectorized more accurately by eliminating the interference among subjects.
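As a concrete illustration of the five-step procedure described in the abstract, here is a minimal sketch under assumed inputs (the toy document, keyword list, cluster count, and the use of gensim and scikit-learn are not from the paper): keyword vectors are extracted from a word embedding model, clustered, and averaged per cluster to obtain one vector per subject.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical document: body tokens plus predefined keywords.
body_tokens = [["deep", "learning", "image", "classification", "market", "demand",
                "stock", "price", "neural", "network", "forecasting"]]
keywords = ["learning", "classification", "stock", "forecasting"]

# (2) Word embedding over the document's tokens.
w2v = Word2Vec(body_tokens, vector_size=50, window=3, min_count=1)

# (3) Keyword vector extraction.
kw_vectors = np.array([w2v.wv[k] for k in keywords])

# (4) Keyword clustering; the number of clusters (2 here) would in practice be
# chosen per document, e.g. with a cluster-validity index.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(kw_vectors)

# (5) Multiple-vector generation: one vector per identified subject.
doc_vectors = [kw_vectors[labels == c].mean(axis=0) for c in sorted(set(labels))]
print(len(doc_vectors), doc_vectors[0].shape)    # 2 subject vectors, each 50-dimensional
```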

Fine-grained Named Entity Recognition using Hierarchical Label Embedding (계층적 레이블 임베딩을 이용한 세부 분류 개체명 인식)

  • Kim, Hong-Jin;Kim, Hark-Soo
    • Annual Conference on Human and Language Technology / 2021.10a / pp.251-256 / 2021
  • Named entity recognition (NER) is a subtask of information extraction: a natural language processing technique that finds the words in a document corresponding to named entities and classifies them into the appropriate entity types. As interest in natural language processing tasks such as question answering and relation extraction has grown, the demand for fine-grained NER has increased. However, fine-grained NER performs worse than conventional NER, and the cause of this gap is the imbalance of the fine-grained entity data. To address this data imbalance, this paper proposes a method that performs fine-grained NER by exploiting coarse-grained entity information, together with a two-stage training scheme that mitigates the propagation of coarse-grained recognition errors. We also propose a label embedding scheme, built on a label attention network, in which labels share their common components, which is effective for fine-grained NER.

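The abstract above describes sharing common label components within a label attention architecture. The following PyTorch sketch illustrates that idea only, under heavy assumptions (the hierarchy, dimensions, and module design are hypothetical, not the paper's model): each fine-grained label embedding is the sum of its coarse-grained parent embedding and a label-specific embedding, and token states attend over these label embeddings.

```python
import torch
import torch.nn as nn

class HierarchicalLabelAttention(nn.Module):
    def __init__(self, hidden, coarse2fine, dim=128):
        super().__init__()
        # coarse2fine: list of fine-label name lists, one list per coarse label.
        fine_parents = [c for c, fines in enumerate(coarse2fine) for _ in fines]
        self.register_buffer("parent", torch.tensor(fine_parents))
        self.coarse_emb = nn.Embedding(len(coarse2fine), dim)     # shared (coarse) component
        self.fine_emb = nn.Embedding(len(fine_parents), dim)      # label-specific component
        self.proj = nn.Linear(hidden, dim)

    def forward(self, token_states):                              # (batch, seq, hidden)
        labels = self.coarse_emb(self.parent) + self.fine_emb.weight   # (n_fine, dim)
        return self.proj(token_states) @ labels.T                 # per-token fine-label scores

# Hypothetical hierarchy: 3 fine labels under the first coarse class, 2 under the second.
model = HierarchicalLabelAttention(
    hidden=256, coarse2fine=[["PS_NAME", "PS_TITLE", "PS_PET"], ["OG_COMPANY", "OG_SCHOOL"]])
print(model(torch.randn(4, 20, 256)).shape)                       # torch.Size([4, 20, 5])
```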

Ethereum Phishing Scam Detection Based on Graph Embedding (그래프 임베딩 기반의 이더리움 피싱 스캠 탐지 연구)

  • Cheong, Yoo-Young;Kim, Gyoung-Tae;Im, Dong-Hyuk
    • Annual Conference of KIPS / 2022.11a / pp.266-268 / 2022
  • With the recent rise of blockchain technology, the cryptocurrencies built on it have become targets of crime. In particular, phishing scams account for more than half of Ethereum cybercrime and are regarded as a major security threat, so an effective phishing scam detection method is urgently needed. However, because labeled phishing addresses are scarce relative to the total number of nodes, the resulting data imbalance makes it difficult to supply enough data for supervised learning. To address this, this paper proposes a phishing scam detection method that combines trans2vec, an efficient network embedding technique designed for the Ethereum transaction network, with the semi-supervised learning model tri-training, so that unlabeled as well as labeled data can be exploited as much as possible.
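A simplified sketch of the combination described above, with assumptions throughout: randomly generated vectors stand in for trans2vec node embeddings of the transaction graph, and a bare-bones tri-training loop lets three scikit-learn classifiers label unlabeled addresses for one another.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 64))            # labeled node embeddings (toy stand-in for trans2vec)
y_lab = np.array([0, 1] * 50)                 # 1 = phishing address, 0 = normal
X_unl = rng.normal(size=(500, 64))            # unlabeled node embeddings (toy stand-in)

# Real tri-training bootstraps the labeled set so the three classifiers differ;
# here different model families provide the diversity.
clfs = [RandomForestClassifier(random_state=0), LogisticRegression(max_iter=500), SVC()]
datasets = [(X_lab, y_lab)] * 3

for _ in range(3):                            # a few tri-training rounds
    for i, c in enumerate(clfs):
        c.fit(*datasets[i])
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        pj, pk = clfs[j].predict(X_unl), clfs[k].predict(X_unl)
        agree = pj == pk                      # pseudo-label where the other two agree
        datasets[i] = (np.vstack([X_lab, X_unl[agree]]),
                       np.concatenate([y_lab, pj[agree]]))

votes = np.stack([c.predict(X_unl) for c in clfs])
pred = (votes.sum(axis=0) >= 2).astype(int)   # majority vote over the three classifiers
```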

Graph Implicit Neural Representations Using Spatial Graph Embeddings (공간적 그래프 임베딩을 활용한 그래프 암시적 신경 표현)

  • Jinho Park;Dongwoo Kim
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.23-26 / 2024
  • This paper studies the prediction of each node's signal in graph-structured data. For the graph to be analyzed, each node is assigned coordinates in a non-Euclidean space based on its connectivity, yielding a spatial embedding of the graph, and we propose a graph implicit neural representation model that takes each node's spatial embedding as input and predicts that node's signal. To validate the proposed model, we conducted experiments on two kinds of graph data, network data and 3D mesh data, covering signal learning, signal prediction, and mesh super-resolution. Overall, the model performed comparably to or better than existing graph implicit neural representation models, with particularly large gains in the signal prediction experiments on network-type graph data.

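The following sketch illustrates the general idea under stated assumptions (spectral embedding stands in for the paper's non-Euclidean coordinate assignment, and the toy graph and signal are arbitrary): connectivity-based coordinates are computed for each node, and an MLP-style implicit neural representation is fitted to map coordinates to the node signal.

```python
import torch
import torch.nn as nn
import networkx as nx
from sklearn.manifold import SpectralEmbedding

g = nx.karate_club_graph()                                    # toy graph
adj = nx.to_numpy_array(g)
coords = SpectralEmbedding(n_components=8, affinity="precomputed").fit_transform(adj)
coords = torch.tensor(coords, dtype=torch.float32)            # per-node spatial embedding
signal = torch.tensor([d for _, d in g.degree()],             # toy node signal: node degree
                      dtype=torch.float32).unsqueeze(1)

# Implicit neural representation: node coordinates in, node signal out.
inr = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(1000):                                         # fit the INR to the node signal
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), signal)
    loss.backward()
    opt.step()
print(float(loss))
```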

Additive Manufacturing for Sensor Integrated Components (센서 융합형 지능형 부품 제조를 위한 적층 제조 기술 연구)

  • Jung, Im Doo;Lee, Min Sik;Woo, Young Jin;Kim, Kyung Tae;Yu, Ji-Hun
    • Journal of Powder Materials / v.27 no.2 / pp.111-118 / 2020
  • The convergence of artificial intelligence with smart factories and smart mechanical systems has been actively studied to maximize efficiency and safety. Despite substantial improvements in artificial neural networks, their application in the manufacturing industry has been difficult because of limitations in obtaining meaningful data from factories or mechanical systems. Accordingly, there have been active studies on manufacturing sensor-integrated components that can generate important data themselves. Additive manufacturing enables the fabrication of net-shaped products from various materials, including plastics, metals, and ceramics. Because material is bonded layer by layer, there has been active research on exploiting this multi-step process, for example by changing the material at a certain layer or by adding sensor components in the middle of the build. Particularly for smart-part manufacturing, researchers have attempted to embed sensors or integrated circuit boards within a three-dimensional component during the additive manufacturing process. While most sensor-embedding additive manufacturing has been based on polymer materials, there have also been studies on sensor integration within metal or ceramic materials. This study reviews additive manufacturing technologies for sensor integration into plastic, ceramic, and metal materials.

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.131-154 / 2022
  • Recently, research on unstructured data analysis has been actively conducted along with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics from massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the field matured, studies on how topics change over time began to be carried out, and interest in dynamic topic modeling, which handles changes in the keywords constituting each topic, has grown accordingly. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by utilizing topic information from the previous period to derive topics in subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to show how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships between or among topics. The detailed procedure for visualizing topics by period is as follows. In the first step, dynamic topic modeling is applied to the text data to derive the top keywords of each period and their weights. In the second step, we obtain vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors; we then formulate a semantic vector for each topic as the weighted sum of its keyword vectors, using each keyword's topic weight. In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among the topics based on the visualization. The change of a topic is interpreted as follows: from the dynamic topic modeling result, we identify the top five rising and top five falling keywords for each period. Many existing topic visualization studies visualize the keywords of each topic, but our approach differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and used a pre-trained Word2vec embedding model trained on the Internet encyclopedia 'Wikipedia'. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and fall of a keyword's topic weight can be usefully used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results, and they lay the foundation for follow-up studies that use various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
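A minimal sketch of the visualization procedure described above, with illustrative inputs rather than the study's data: each topic's semantic vector is the weighted sum of its top keywords' embedding vectors, reduced to two dimensions and plotted with matplotlib. The GloVe vectors used here stand in for the Wikipedia-trained Word2Vec model mentioned in the abstract, and PCA is one possible choice of dimension reduction.

```python
import numpy as np
import matplotlib.pyplot as plt
import gensim.downloader as api
from sklearn.decomposition import PCA

w2v = api.load("glove-wiki-gigaword-50")        # stand-in for the pre-trained embedding model

# Hypothetical dynamic-topic-modeling output: {period: {topic: [(keyword, weight), ...]}}
topics = {"2016-2017": {"T1": [("image", 0.4), ("network", 0.3), ("classification", 0.3)]},
          "2018-2019": {"T1": [("language", 0.5), ("model", 0.3), ("attention", 0.2)]}}

points, labels = [], []
for period, per_topic in topics.items():
    for topic, kws in per_topic.items():
        vecs = np.array([w2v[k] for k, _ in kws])
        weights = np.array([w for _, w in kws])
        points.append(weights @ vecs / weights.sum())          # semantic vector of the topic
        labels.append(f"{topic} ({period})")

xy = PCA(n_components=2).fit_transform(np.array(points))       # dimension reduction
plt.scatter(xy[:, 0], xy[:, 1])
for (x, y), lab in zip(xy, labels):
    plt.annotate(lab, (x, y))                                  # one point per topic per period
plt.show()
```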

AI-based system for automatically detecting food risk information from news data (뉴스 데이터로부터 식품위해정보 자동 추출을 위한 인공지능 기술)

  • Baek, Yujin;Lee, Jihyeon;Kim, Nam Hee;Lee, Hunjoo;Choo, Jaegul
    • Food Science and Industry / v.54 no.3 / pp.160-170 / 2021
  • Recent advances in communication technology accelerate the spread of food safety issues once they are reported by the news media. To respond to those safety issues and take action in a timely manner, it is important to automatically detect related information from news data. This work presents an AI-based system that detects risk information within food-related news articles. Experts in food safety participated in labeling risk information in the articles, yielding 43,527 articles in which food names and risk information are marked as labels. Given a news document, the system automatically detects food names and risk information by analyzing similarities between words in the text using learned word embedding vectors. The AI-based system shows higher detection accuracy than a non-AI rule-based system, achieving an absolute gain of +32.94% in F1 for the food-name category and +41.53% for the risk-information category.
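The abstract describes detecting risk information by comparing word embedding similarities. The snippet below is a hypothetical, greatly simplified illustration of that idea only (the seed terms, threshold, and GloVe vectors are assumptions; the actual system is trained on the 43,527 labeled articles).

```python
import gensim.downloader as api

w2v = api.load("glove-wiki-gigaword-50")                # stand-in for the learned embeddings
risk_seeds = ["salmonella", "contamination", "recall"]  # hypothetical seed risk terms

def flag_risk_words(tokens, threshold=0.55):
    """Flag tokens whose embedding is close to any known risk term."""
    flagged = []
    for tok in tokens:
        if tok not in w2v:
            continue
        sim = max(w2v.similarity(tok, s) for s in risk_seeds)   # cosine similarity
        if sim >= threshold:
            flagged.append((tok, round(float(sim), 2)))
    return flagged

print(flag_risk_words("the cheese was recalled after listeria contamination".split()))
```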

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • In sign language, the same gesture can have a completely different meaning depending on the direction of the hand or a change in facial expression, so it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement, so detailed information about each movement (facial expressions, gestures, etc.) that is important for translation is not emphasized. Accordingly, in this paper, we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are central to recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by applying a Bi-GRU to the extracted keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the Development (DEV) and Testing (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
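A minimal PyTorch sketch of the two embedding steps named in the abstract, with assumed dimensions and layer choices (this is not the authors' architecture): per-frame keypoint coordinates are projected to a spatial embedding, and a bidirectional GRU adds sequential, temporal information.

```python
import torch
import torch.nn as nn

class KeypointEmbedding(nn.Module):
    def __init__(self, n_keypoints=121, coord_dim=2, hidden=256):
        super().__init__()
        self.spatial = nn.Linear(n_keypoints * coord_dim, hidden)   # Spatial Keypoints Embedding
        self.temporal = nn.GRU(hidden, hidden // 2, batch_first=True,
                               bidirectional=True)                  # Temporal Keypoints Embedding

    def forward(self, keypoints):                    # (batch, frames, 121, 2)
        b, t = keypoints.shape[:2]
        x = self.spatial(keypoints.reshape(b, t, -1))
        out, _ = self.temporal(x)                    # (batch, frames, hidden)
        return out                                   # frame features for the gloss/translation stage

feats = KeypointEmbedding()(torch.randn(2, 30, 121, 2))   # e.g. 2 clips, 30 frames each
print(feats.shape)                                         # torch.Size([2, 30, 256])
```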

Web Attack Classification Model Based on Payload Embedding Pre-Training (페이로드 임베딩 사전학습 기반의 웹 공격 분류 모델)

  • Kim, Yeonsu;Ko, Younghun;Euom, Ieckchae;Kim, Kyungbaek
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.669-677 / 2020
  • As the number of Internet users has exploded, attacks on the web have increased, and attack patterns have diversified to bypass existing defense techniques. Traditional web firewalls have difficulty detecting attacks with unknown patterns, so methods that detect abnormal behavior with artificial intelligence have been studied as an alternative. In particular, attempts have been made to apply natural language processing techniques because the exploited scripts and queries consist of text; however, because scripts and queries contain many unknown words, natural language processing requires a different approach. In this paper, we propose a new classification model that uses byte pair encoding (BPE) to learn embedding vectors of the tokens frequently found in web attack payloads, and uses an attention-based Bi-GRU neural network to learn the order and importance of those tokens. For major web attacks such as SQL injection, cross-site scripting, and command injection, the accuracy of the proposed classification method is about 0.9990, outperforming the model suggested in a previous study.
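To make the described model family concrete, here is a hedged sketch (the toy payloads, hyperparameters, and layer sizes are assumptions, not the paper's configuration): payload strings are tokenized with byte pair encoding, embedded, passed through a Bi-GRU, and pooled with a simple attention layer to produce class logits.

```python
import torch
import torch.nn as nn
from tokenizers import ByteLevelBPETokenizer

payloads = ["id=1' OR '1'='1", "<script>alert(1)</script>", "q=cat /etc/passwd"]
bpe = ByteLevelBPETokenizer()
bpe.train_from_iterator(payloads, vocab_size=500, min_frequency=1)   # BPE over the payload corpus

class PayloadClassifier(nn.Module):
    def __init__(self, vocab, n_classes=4, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * dim, 1)                  # token importance scores
        self.out = nn.Linear(2 * dim, n_classes)

    def forward(self, ids):                               # (batch, seq)
        h, _ = self.gru(self.emb(ids))                    # (batch, seq, 2*dim)
        a = torch.softmax(self.att(h), dim=1)             # attention over tokens
        return self.out((a * h).sum(dim=1))               # class logits (e.g. SQLi, XSS, cmd-inj, normal)

ids = torch.tensor([bpe.encode(payloads[0]).ids])
print(PayloadClassifier(bpe.get_vocab_size())(ids).shape)  # torch.Size([1, 4])
```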