• Title/Summary/Keyword: Graph Feature Extraction


Feature extraction method using graph Laplacian for LCD panel defect classification (LCD 패널 상의 불량 검출을 위한 스펙트럴 그래프 이론에 기반한 특성 추출 방법)

  • Kim, Gyu-Dong;Yoo, Suk-I.
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.522-524 / 2012
  • For exact classification of defects, good feature selection and a good classifier are necessary. In this paper, various features such as brightness, shape, and statistical features are described, and a Bayes classifier using a Gaussian mixture model is used as the classifier. A feature extraction method based on spectral graph theory is also presented. Experimental results show that feature extraction using the graph Laplacian yields better performance than PCA.
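
A minimal sketch of the spectral-graph idea behind this abstract, assuming a generic Gaussian similarity graph over image patches (the paper's exact graph construction for LCD defect images is not given in the abstract): the eigenvectors of the graph Laplacian associated with the smallest non-trivial eigenvalues serve as features, analogous to how PCA uses leading principal components.

```python
# Spectral (graph-Laplacian) feature extraction sketch; the similarity graph
# and patch representation are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

def laplacian_features(X, sigma=1.0, k=5):
    """X: (n_samples, n_dims) patch vectors -> (n_samples, k) spectral features."""
    # Gaussian similarity (affinity) matrix between patches
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                        # unnormalized graph Laplacian
    # Eigenvectors of the smallest non-trivial eigenvalues give a smooth
    # low-dimensional embedding of the graph (cf. Laplacian eigenmaps).
    vals, vecs = eigh(L)
    return vecs[:, 1:k + 1]          # skip the constant eigenvector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(100, 64))   # toy stand-in for image patches
    feats = laplacian_features(patches)
    print(feats.shape)                     # (100, 5)
```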

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study how to improve answer extraction in a question-answering (QA) system by using sentence dependency parsing results. A QA system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be accurately reflected. Because Korean has free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improve answer extraction performance by adding features generated from dependency parsing to the inputs of the answer extraction model (bidirectional LSTM-CRF). We compare the performance of the model given only basic word features generated without dependency parsing against the performance when the Eojeol tag feature and the dependency graph embedding feature are added. Since dependency parsing is performed on the Eojeol, the basic sentence unit separated by spaces, the tag of each Eojeol is obtained as part of the parsing result; the Eojeol tag feature is this tag information. The dependency graph embedding is generated in two steps: building the dependency graph from the parsing result and learning an embedding of the graph. From the parsing result, a graph is built by mapping each Eojeol to a node, each dependency between Eojeol to an edge, and each Eojeol tag to a node label; depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated. To obtain the embedding of the graph, we use Graph2Vec, which learns the embedding of a graph from its constituent subgraphs. The maximum path length between nodes can be specified when extracting subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeol, and larger values include increasingly indirect dependencies. In the experiments, the maximum path length is varied from 1 to 3, with and without the direction of dependency, and answer extraction performance is measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction. In particular, the highest performance is obtained when the direction of the dependency relation is considered and the dependency graph embedding is extracted with a maximum path length of 1 in the Graph2Vec subgraph extraction step. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved answer extraction performance by adding features derived from dependency parsing results, taking into account the characteristics of Korean, namely its free word order and frequent omission of sentence components. Second, we generated the dependency-parse feature with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeol. Future research directions are as follows. In this study, the features generated from dependency parsing are applied only to the answer extraction model. If their effect is confirmed in the future on other natural language processing models such as sentiment analysis or named entity recognition, the validity of the features can be verified more accurately.
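
A minimal sketch, with a hypothetical toy parse, of the graph construction this abstract describes: each Eojeol becomes a node, each dependency an edge, and each Eojeol tag a node label; the subgraph loop illustrates the best-reported setting (directed edges, maximum path length 1). The Graph2Vec embedding step is only indicated in a comment, since the paper does not name an implementation.

```python
# Dependency parse -> graph sketch; the parse output below is a made-up example.
import networkx as nx

# Hypothetical parser output: (dependent Eojeol index, head index, Eojeol tag)
toy_parse = [
    (0, 2, "NP_SBJ"),   # subject Eojeol depends on the predicate
    (1, 2, "NP_OBJ"),   # object Eojeol depends on the predicate
    (2, -1, "VP"),      # predicate Eojeol is the root (head = -1)
]

def build_dependency_graph(parse, directed=True):
    G = nx.DiGraph() if directed else nx.Graph()
    for dep, head, tag in parse:
        G.add_node(dep, label=tag)          # Eojeol tag as node label
        if head >= 0:
            G.add_edge(dep, head)           # dependency relation as edge
    return G

G = build_dependency_graph(toy_parse, directed=True)

# Subgraphs with maximum path length 1 (direct dependencies only), the setting
# the abstract reports as best; a Graph2Vec-style model would learn a whole-graph
# embedding from such rooted subgraphs.
for n in G.nodes:
    sub = G.subgraph([n] + list(G.predecessors(n)) + list(G.successors(n)))
    print(n, G.nodes[n]["label"], sorted(sub.edges))
```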

A Dependency Graph-Based Keyphrase Extraction Method Using Anti-patterns

  • Batsuren, Khuyagbaatar;Batbaatar, Erdenebileg;Munkhdalai, Tsendsuren;Li, Meijing;Namsrai, Oyun-Erdene;Ryu, Keun Ho
    • Journal of Information Processing Systems / v.14 no.5 / pp.1254-1271 / 2018
  • Keyphrase extraction is one of the fundamental natural language processing (NLP) tools for improving many text-mining applications such as document summarization and clustering. In this paper, we propose two novel techniques on top of state-of-the-art keyphrase extraction methods. The first is anti-patterns, which aim to recognize non-keyphrase candidates. State-of-the-art methods often use a rich feature set to identify keyphrases, yet such feature sets cover only some keyphrases, because keyphrases share few common patterns and stylistic features while non-keyphrase candidates share many. The second is to use a dependency graph instead of a word co-occurrence graph: the co-occurrence graph cannot connect two words that are syntactically related but placed far apart in a sentence, whereas the dependency graph can. In the experiments, we compare performance under different graph settings (co-occurrence and dependency) and against existing methods. We find that the combination of the dependency graph and anti-patterns outperforms the state-of-the-art methods.
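
A minimal sketch of the two ideas named in this abstract, with toy dependency edges and made-up anti-pattern rules (the paper's actual rule set and ranking method are not given in the abstract): candidates are ranked on a dependency graph with PageRank, and candidates matching anti-patterns are discarded.

```python
# Dependency-graph ranking plus anti-pattern filtering sketch; edges, tags,
# and rules are illustrative only.
import re
import networkx as nx

# Toy dependency edges between content words of a document
dep_edges = [("keyphrase", "extraction"), ("extraction", "improves"),
             ("improves", "summarization"), ("extraction", "method")]
G = nx.Graph(dep_edges)

scores = nx.pagerank(G)                       # TextRank-style ranking on the graph

# Hypothetical anti-patterns: candidates unlikely to be keyphrases
anti_patterns = [re.compile(r"^(improves|is|are)$"),   # bare verbs
                 re.compile(r"^\d+$")]                 # bare numbers

def is_anti(candidate):
    return any(p.match(candidate) for p in anti_patterns)

keyphrases = sorted((c for c in scores if not is_anti(c)),
                    key=scores.get, reverse=True)
print(keyphrases[:3])
```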

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of scene graph generation (SGG) is often compromised when working with biased data in real-world situations. Many existing systems focus on a single stage of learning for both feature extraction and classification, while some employ class-balancing strategies such as re-weighting, data resampling, and transfer learning from head to tail classes. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification, which enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a distribution alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our distribution alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, we achieve significant improvements over previous state-of-the-art methods. Compared to the TDE model, our model improves mR@100 by 70.5% for PredCls, 84.0% for SGCls, and 97.6% for SGDet tasks.
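
A minimal sketch of distribution alignment for long-tailed predicate classification, assuming a standard logit-adjustment form (the paper's exact adaptive calibration function is not given in the abstract): classification scores are shifted by the log class prior so that tail predicates are no longer dominated by head predicates.

```python
# Logit adjustment by class frequency as a stand-in for the paper's
# calibration; counts and logits below are toy values.
import numpy as np

def align_logits(logits, class_counts, tau=1.0):
    """logits: (N, C); class_counts: (C,) training frequencies per predicate."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior + 1e-12)   # penalize head, boost tail classes

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))                  # 4 relation proposals, 5 predicate classes
counts = np.array([5000, 1200, 300, 40, 8])       # long-tailed predicate counts
print(align_logits(logits, counts).argmax(axis=1))
```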

Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.67-80 / 2021
  • Node embedding techniques for learning graph representations play an important role in obtaining good results in graph mining. Until now, representative node embedding techniques have been studied mainly for homogeneous graphs, which makes it difficult to learn knowledge graphs in which each edge carries its own meaning. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph in which each node corresponds to a node pair and an edge of the knowledge graph. However, the Triple2Vec embedding model has limited room for improvement because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts a neighborliness vector from the triple graph and learns, for each node, the relationships with its neighboring nodes. We show that the embedding model applying the proposed method is superior to the existing Triple2Vec model through category classification experiments using the DBLP, DBpedia, and IMDB datasets.
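
A minimal sketch of one graph-convolution step on a triple graph, assuming the standard normalized-adjacency propagation rule (the paper's exact architecture on top of Triple2Vec is not given in the abstract): each triple node aggregates the features of its neighboring triple nodes.

```python
# One GCN-style propagation step; adjacency and features are toy values.
import numpy as np

def gcn_layer(A, X, W):
    """A: (n, n) adjacency of the triple graph; X: (n, d) node features; W: (d, k)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)            # ReLU(D^-1/2 (A+I) D^-1/2 X W)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy triple-node graph
X = rng.normal(size=(3, 4))                              # initial triple embeddings
W = rng.normal(size=(4, 2))
print(gcn_layer(A, X, W))
```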

Manufacturing Feature Extraction for Sculptured Pocket Machining (Sculptured 포켓 가공을 위한 가공특징형상 추출)

  • 주재구;조현보
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.04a / pp.455-459 / 1997
  • A methodology that supports features used from design to manufacturing for sculptured pockets is newly developed and presented. The information contained in a feature can easily be conveyed from one application to another in the manufacturing domain. However, a feature generated in one application may not be directly suitable for another without being enriched with more information. The objective of this paper is to present a methodology for decomposing the bulky feature of a sculptured pocket to be removed into compact features that can be machined efficiently. In particular, the paper focuses on two tasks: 1) segmenting the bulky feature horizontally into intermediate features by determining an adequate depth of cut and cutter size, and generating the temporal precedence graph of the intermediate features; and 2) further decomposing each intermediate feature vertically into smaller manufacturing features and applying a variable feed rate to each small feature. The proposed method provides better efficiency in machining time and cost than the classical method, which uses a long string of NC codes to remove the bulky feature.
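
A minimal sketch of the horizontal segmentation step described above, with illustrative numbers only: a bulky pocket of a given total depth is split into intermediate layers no deeper than the chosen depth of cut, listed in top-to-bottom precedence order.

```python
# Horizontal segmentation of a pocket into intermediate features; the depths
# used here are made-up example values.
import math

def segment_pocket(total_depth, max_depth_of_cut):
    n_layers = math.ceil(total_depth / max_depth_of_cut)
    layer_depth = total_depth / n_layers
    # Each intermediate feature must be machined before the layer below it.
    return [(i, round(i * layer_depth, 3), round((i + 1) * layer_depth, 3))
            for i in range(n_layers)]

for layer, z_top, z_bottom in segment_pocket(total_depth=14.0, max_depth_of_cut=4.0):
    print(f"layer {layer}: cut from depth {z_top} to {z_bottom}")
```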

A Comparison of Global Feature Extraction Technologies and Their Performance for Image Identification (영상 식별을 위한 전역 특징 추출 기술과 그 성능 비교)

  • Yang, Won-Keun;Cho, A-Young;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society / v.14 no.1 / pp.1-14 / 2011
  • As the circulation of images becomes more active, various requirements arise for managing the growing databases. Content-based technology is one way to satisfy these requirements; in it, an image is represented by feature vectors extracted by various methods. Global feature methods ensure fast matching because the extracted feature vector has a standard, fixed form. Global feature extraction methods fall into two categories, spatial feature extraction and statistical feature extraction, and each group is further divided by the kind of information used: color features or gray-scale features. In this paper, we introduce various global feature extraction technologies and compare their performance in terms of accuracy, recall-precision graphs, ANMRR, feature vector size, and matching time. According to the experiments, spatial features perform well under non-geometric modifications, and the extraction technologies that use color and histogram features show the best performance.
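
A minimal sketch of a global statistical feature of the kind compared in this paper, assuming a normalized color histogram as the fixed-length vector and an L1 distance for matching (the bin count and distance measure are illustrative choices, not the paper's).

```python
# Global color-histogram feature and matching sketch on toy images.
import numpy as np

def color_histogram(img, bins=8):
    """img: (H, W, 3) uint8 array -> fixed-length normalized histogram."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def l1_distance(h1, h2):
    return np.abs(h1 - h2).sum()

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
img_b = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(l1_distance(color_histogram(img_a), color_histogram(img_b)))
```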

Finger Vein Recognition Based on Multi-Orientation Weighted Symmetric Local Graph Structure

  • Dong, Song;Yang, Jucheng;Chen, Yarui;Wang, Chao;Zhang, Xiaoyuan;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.10 / pp.4126-4142 / 2015
  • Finger vein recognition is a biometric technology that authenticates a person using finger veins; owing to its high degree of uniqueness, liveness, and safety, it is widely used. The traditional Symmetric Local Graph Structure (SLGS) method only considers the relationship between image pixels as a dominating set and uses the related theory to extract image features. To better extract finger vein features by taking into account the location and direction information between image pixels, this paper presents a novel finger vein feature extraction method, Multi-Orientation Weighted Symmetric Local Graph Structure (MOW-SLGS), which assigns a weight to each edge according to the positional relationship between the edge and the target pixel. In addition, we use the Extreme Learning Machine (ELM) classifier to train on and classify the vein features extracted by the MOW-SLGS method. Experiments show that the proposed method performs better than traditional methods.
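
A minimal sketch of the local-graph-structure idea underlying SLGS-style descriptors, with an illustrative neighbor layout and binary weights; this is not the exact MOW-SLGS definition, whose multi-orientation weighting is not given in the abstract. Each pixel is encoded by comparing it with neighbors along a fixed graph and combining the resulting bits with weights.

```python
# Basic local-graph-structure pixel code; neighbor offsets and weights are
# illustrative, not the paper's exact scheme.
import numpy as np

# Illustrative neighbor offsets (left and right "wings" of a symmetric graph)
OFFSETS = [(-1, -1), (0, -1), (1, -1), (-1, 1), (0, 1), (1, 1)]
WEIGHTS = 2 ** np.arange(len(OFFSETS))          # binary weight per edge

def slgs_code(img, y, x):
    c = img[y, x]
    bits = [int(img[y + dy, x + dx] > c) for dy, dx in OFFSETS]
    return int((np.array(bits) * WEIGHTS).sum())

rng = np.random.default_rng(0)
vein = rng.integers(0, 256, size=(8, 8))        # toy stand-in for a vein image
print(slgs_code(vein, 4, 4))                    # one pixel's local-graph code
```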

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System / v.9 no.3 / pp.177-182 / 2022
  • A deep neural network (DNN) includes variables whose values keep changing during training until the network reaches its final point of convergence. These variables are the coefficients of a polynomial expression related to the feature extraction process. In general, DNNs work in multiple 'dimensions' depending on the number of channels and batches used for training. However, after feature extraction and before the SoftMax or other classifier, the features are converted from N dimensions, where 'N' is the number of activation channels, into a single vector. This usually happens in a fully connected layer (FCL), or dense layer, and this reduced 2D feature is the subject of our analysis. For this we use the FCL, so the trained weights of this FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. Their weights are used directly for fine-tuning (with all trained weights initially transferred) and for training from scratch (with no weights transferred). The comparison is then made by plotting the feature distributions and the final FCL weights.
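
A minimal sketch of extracting the final fully-connected-layer weights of the three models named above and plotting their distributions, using torchvision model constructors with randomly initialized weights (no pretrained download); the paper compares fine-tuned versus scratch-trained weights instead.

```python
# Pull out the final FCL weights and plot their distributions.
import matplotlib.pyplot as plt
import torchvision.models as models

nets = {
    "ResNet-101": models.resnet101(),
    "VGG-19": models.vgg19(),
    "GoogleNet": models.googlenet(),
}

plt.figure()
for name, net in nets.items():
    fc = net.fc if hasattr(net, "fc") else net.classifier[-1]   # final fully connected layer
    w = fc.weight.detach().flatten().numpy()
    plt.hist(w, bins=100, histtype="step", label=name)
plt.legend()
plt.xlabel("FCL weight value")
plt.ylabel("count")
plt.show()
```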

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (degree-of-freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, parallel computation is performed with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud-based map.
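
A minimal sketch of the core pose-estimation step in feature-based RGB-D odometry: 3D points from one frame (image features back-projected with depth) and their matched 2D features in the next frame are fed to a RANSAC-based PnP solver. Synthetic correspondences and Kinect-like intrinsics stand in for real feature matches; the GPU feature extraction and graph-based SLAM back end from the paper are omitted.

```python
# RANSAC-based pose estimation from 3D-2D correspondences with OpenCV.
import cv2
import numpy as np

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])  # Kinect-like intrinsics

rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 1.5], [1, 1, 3.0], size=(50, 3))       # 3D points from frame A

# Ground-truth motion from frame A to frame B (small rotation + translation)
rvec_gt = np.array([0.02, -0.01, 0.03])
tvec_gt = np.array([0.05, 0.00, 0.10])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)      # "matched" 2D features in B

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
print(ok, rvec.ravel(), tvec.ravel(), len(inliers))                 # recovered 6-DOF motion
```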