• Title/Summary/Keyword: Graph Dataset


KG_VCR: A Visual Commonsense Reasoning Model Using Knowledge Graph

  • Lee, JaeYun; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.91-100 / 2020
  • Unlike existing Visual Question Answering (VQA) problems, the new Visual Commonsense Reasoning (VCR) problems require deep commonsense reasoning to answer questions: recognizing the specific relationship between two objects in the image and presenting the rationale for the answer. In this paper, we propose a novel deep neural network model, KG_VCR, for VCR problems. In addition to making use of the visual relations and contextual information between objects extracted from the input data (images, natural language questions, and response lists), KG_VCR also utilizes commonsense knowledge embeddings extracted from an external knowledge base called ConceptNet. Specifically, the proposed model employs a Graph Convolutional Network (GCN) module to obtain commonsense knowledge embeddings from the retrieved ConceptNet knowledge graph. Through a series of experiments on the VCR benchmark dataset, we show that the proposed KG_VCR model outperforms both the state-of-the-art (SOTA) VQA model and the R2C VCR model.
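For context, the GCN module the abstract refers to typically reduces to the standard propagation rule H' = ReLU(A_hat H W); a minimal pure-Python sketch, where the graph, features, and weights are illustrative stand-ins rather than the paper's ConceptNet subgraphs:

```python
import math

def normalize_adj(adj):
    # Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2} used by GCNs
    n = len(adj)
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    return [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]

def gcn_layer(a_hat, h, w):
    # One propagation step: ReLU(A_hat @ H @ W), written out with plain lists
    n, d_in, d_out = len(h), len(h[0]), len(w[0])
    ah = [[sum(a_hat[i][k] * h[k][j] for k in range(n)) for j in range(d_in)]
          for i in range(n)]
    return [[max(0.0, sum(ah[i][k] * w[k][j] for k in range(d_in)))
             for j in range(d_out)] for i in range(n)]
```

Stacking two such layers over the retrieved knowledge subgraph yields per-node embeddings of the kind the model consumes.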

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia; Meili Zhou; Wei WEI; Dong Wang; Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation (SGG) is often compromised when working with biased data in real-world situations. While many existing systems use a single stage of learning for both feature extraction and classification, some employ class-balancing strategies such as re-weighting, data resampling, and transfer learning from head to tail classes. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, we achieved significant improvements over the previous state-of-the-art methods. Compared to the TDE model, our model improves mR@100 by 70.5% for PredCls, 84.0% for SGCls, and 97.6% for SGDet.
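The abstract does not spell out the adaptive calibration function, so the sketch below shows a generic post-hoc score adjustment in the same spirit: shifting each predicate's logit by the log of its empirical class prior so head classes no longer dominate. The `tau` parameter and the exact formula are assumptions for illustration, not the paper's method:

```python
import math

def align_scores(logits, class_counts, tau=1.0):
    # Post-hoc logit adjustment: subtract tau * log(prior) per class, so a
    # rare (tail) predicate needs a smaller raw logit to win the argmax.
    total = sum(class_counts)
    priors = [c / total for c in class_counts]
    return [z - tau * math.log(p) for z, p in zip(logits, priors)]
```

With equal raw logits, the adjusted score of a tail class now exceeds that of a head class, which is the balancing effect such alignment aims for.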

Proposing the Methods for Accelerating Computational Time of Large-Scale Commute Time Embedding

  • Hahn, Hee-Il
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.162-170 / 2015
  • Commute time embedding involves computing the spectral decomposition of the graph Laplacian. This requires a computational burden proportional to $O(n^3)$, which is not suitable for large-scale datasets. Many methods have been proposed to accelerate the computation, usually employing the Nyström method to approximate the spectral decomposition of a reduced graph Laplacian. These methods suffer from a loss of information due to the sampling process. This paper proposes to reduce the errors by approximating the spectral decomposition of the graph Laplacian using that of the affinity matrix. However, this cannot be applied as the data size increases, because it also requires a spectral decomposition. Another method, called approximate commute time embedding, is also implemented; it does not require a spectral decomposition. The performance of the proposed algorithms is analyzed by computing the commute time on the patch graph.
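For reference, on a small graph the commute time itself can be computed exactly without a full spectral decomposition, via the effective resistance obtained from a grounded Laplacian solve: C(i, j) = vol(G) · R_eff(i, j). A self-contained sketch (plain Gaussian elimination stands in for a proper sparse solver):

```python
def commute_time(adj, i, j):
    # C(i, j) = vol(G) * R_eff(i, j); R_eff comes from solving the grounded
    # Laplacian system (node j held at potential 0, unit current into i).
    n = len(adj)
    deg = [sum(row) for row in adj]
    vol = sum(deg)  # vol(G) = 2 * |E| for an unweighted graph
    idx = [k for k in range(n) if k != j]          # drop grounded node j
    L = [[(deg[r] if r == c else -adj[r][c]) for c in idx] for r in idx]
    b = [1.0 if r == i else 0.0 for r in idx]
    m = len(idx)
    A = [row[:] + [b[r]] for r, row in enumerate(L)]
    for col in range(m):                           # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):                 # back substitution
        x[r] = (A[r][m] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return vol * x[idx.index(i)]                   # potential at i = R_eff
```

The $O(n^3)$ cost the abstract mentions is exactly what this dense solve exhibits, which is why the paper pursues approximations for large n.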

Recognition of Multi Label Fashion Styles based on Transfer Learning and Graph Convolution Network

  • Kim, Sunghoon; Choi, Yerim; Park, Jonghyuk
    • The Journal of Society for e-Business Studies / v.26 no.1 / pp.29-41 / 2021
  • Recently, there have been increasing attempts to utilize deep learning methodology in the fashion industry. Accordingly, studies dealing with various fashion-related problems have been proposed and have achieved superior performance. However, studies on fashion style classification have not reflected the characteristic that one outfit can include multiple styles simultaneously. Therefore, we aim to solve the multi-label classification problem by utilizing the dependencies between styles. A multi-label recognition model based on a graph convolution network is applied to detect and explore the dependencies between fashion styles. Furthermore, we accelerate model training and improve the model's performance through transfer learning. The proposed model was verified on a dataset collected from social network services and outperformed the baselines.
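The abstract leaves the construction of the style-dependency graph unspecified; a common recipe in multi-label GCN work is to build the adjacency matrix from conditional label co-occurrence probabilities. A sketch under that assumption (the 0.4 threshold is illustrative):

```python
def label_adjacency(label_sets, num_labels, threshold=0.4):
    # Edge i -> j when P(label j | label i) = count(i and j) / count(i)
    # exceeds the threshold; binarizing suppresses noisy rare co-occurrences.
    count = [0] * num_labels
    co = [[0] * num_labels for _ in range(num_labels)]
    for labels in label_sets:          # one set of style indices per outfit
        for i in labels:
            count[i] += 1
            for j in labels:
                if i != j:
                    co[i][j] += 1
    return [[1 if count[i] and co[i][j] / count[i] >= threshold else 0
             for j in range(num_labels)] for i in range(num_labels)]
```

The resulting directed matrix is what a GCN would propagate label embeddings over, so that recognizing one style raises the score of styles that frequently co-occur with it.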

A Study on Conversational AI Agent based on Continual Learning

  • Chae-Lim, Park; So-Yeop, Yoo; Ok-Ran, Jeong
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.27-38 / 2023
  • In this paper, we propose a conversational AI agent based on continual learning that can continuously learn and grow with new data over time. The agent consists of three main components: a task manager, user attribute extraction, and an auto-growing knowledge graph. When the task manager finds new data during a conversation with a user, it creates a new task using previously learned knowledge. The user attribute extraction model extracts the user's characteristics from the new task, and the auto-growing knowledge graph continuously learns the new external knowledge. Unlike existing conversational AI agents trained on a limited dataset, our proposed method enables conversations based on continuous learning of user attributes and knowledge. A conversational AI agent with continual learning technology can respond more personally as conversations with users accumulate, and can keep responding to new knowledge. This paper validates the feasibility of the proposed method through experiments on how the performance of dialogue generation models changes over time.

Supervised Model for Identifying Differentially Expressed Genes in DNA Microarray Gene Expression Dataset Using Biological Pathway Information

  • Chung, Tae Su; Kim, Keewon; Kim, Ju Han
    • Genomics & Informatics / v.3 no.1 / pp.30-34 / 2005
  • Microarray technology makes it possible to measure the expression of tens of thousands of genes simultaneously under various experimental conditions. Identifying differentially expressed genes (DEGs) in each single experimental condition is one of the most common first steps in microarray gene expression data analysis, and a reasonable choice of threshold for determining DEGs, with suitable statistical significance, is required for the subsequent analysis. We present a supervised model for identifying DEGs using pathway information based on the global connectivity structure. Pathway information can be regarded as a collection of biological knowledge, so we try to determine the optimal threshold such that the resulting connectivity structure is the most compatible with the existing pathway information. The significant feature of our model is that it uses established knowledge as a reference to guide the analysis of the microarray dataset, whereas most previous work uses only information intrinsic to the microarray for identifying DEGs. We hope that our proposed method can contribute to constructing biologically meaningful structure from microarray datasets.
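As a toy illustration of the threshold-selection idea, the sketch below scores each candidate cutoff by how many pathway edges connect two selected genes; this stand-in compatibility measure is far simpler than the paper's connectivity criterion, and the names and values are invented:

```python
def best_threshold(scores, pathway_edges, candidates):
    # Pick the expression-score cutoff whose induced DEG set best matches the
    # pathway: here "compatibility" is just the number of pathway edges whose
    # both endpoints pass the cutoff (a stand-in for the paper's measure).
    def compat(t):
        degs = {gene for gene, s in scores.items() if s >= t}
        return sum(1 for a, b in pathway_edges if a in degs and b in degs)
    return max(candidates, key=compat)
</antml>```

The supervised flavor comes from the pathway edges acting as labels: the data alone cannot say which cutoff is "right", but the reference knowledge can rank them.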

Transformation Approach to Model Online Gaming Traffic

  • Shin, Kwang-Sik; Kim, Jin-Hyuk; Sohn, Kang-Min; Park, Chang-Joon; Choi, Sang-Bang
    • ETRI Journal / v.33 no.2 / pp.219-229 / 2011
  • In this paper, we propose a transformation scheme used to analyze online gaming traffic properties and develop a traffic model. We analyze the packet size and inter-departure time distributions of a popular first-person shooter game (Left 4 Dead) and a massively multiplayer online role-playing game (World of Warcraft) in order to compare them with the existing scheme. Recent online gaming traffic is erratically distributed, which makes it very difficult to analyze. Therefore, our research focuses on a transformation scheme that obtains new, smooth patterns from a messy dataset. It extracts relatively heavily weighted density data and then transforms them into a corresponding dataset domain to obtain a simplified graph. We compare the analytical model histogram, the chi-square statistic, and the quantile-quantile plot of the proposed scheme with those of an existing scheme. The results show that the proposed scheme demonstrates a good fit in all parts: the chi-square statistic of our scheme for the Left 4 Dead packet size distribution is less than one ninth of that of the existing scheme when dealing with erratic traffic.
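The chi-square statistic used above to compare model fits is simply Pearson's goodness-of-fit sum over histogram bins; for reference:

```python
def chi_square(observed, expected):
    # Pearson goodness-of-fit statistic: sum over bins of (O - E)^2 / E.
    # Smaller values mean the analytical model fits the measured traffic better.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Comparing this statistic between the transformed and untransformed fits is how a "less than one ninth" improvement, as reported above, would be quantified.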

Transaction Mining for Fraud Detection in ERP Systems

  • Khan, Roheena; Corney, Malcolm; Clark, Andrew; Mohay, George
    • Industrial Engineering and Management Systems / v.9 no.2 / pp.141-156 / 2010
  • Despite all attempts to prevent it, fraud continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection to combat fraud. In this paper, we present a role-mining-inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We adapt an approach that uses set theory to create transaction profiles based on an analysis of user activity records. Based on these transaction profiles, we propose (1) a set of anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms that construct a directed acyclic graph to represent the relationships between transaction profiles. Experiments were conducted on a real dataset obtained from a teaching environment and on a demonstration dataset, both using SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
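A minimal sketch of one plausible way to build such a DAG: draw an edge whenever one transaction profile is a proper subset of another, then drop edges implied by transitivity. The subset criterion and the profile names are assumptions for illustration; the paper's two algorithms are not reproduced here:

```python
def profile_dag(profiles):
    # profiles: {name: set of transaction codes}. Edge p -> q when p's
    # transactions are a proper subset of q's, with transitively implied
    # edges removed so the hierarchy stays readable.
    names = list(profiles)
    edges = {(a, b) for a in names for b in names
             if a != b and profiles[a] < profiles[b]}
    return {(a, b) for (a, b) in edges
            if not any((a, c) in edges and (c, b) in edges for c in names)}
```

In such a graph, a user whose profile jumps levels (e.g. acquires transactions from two unrelated branches) is a natural candidate for a segregation-of-duties check.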

Image-based Extraction of Histogram Index for Concrete Crack Analysis

  • Kim, Bubryur; Lee, Dong-Eun
    • International conference on construction engineering and project management / 2022.06a / pp.912-919 / 2022
  • The study is an image-based assessment that uses image processing techniques to determine the condition of concrete with surface cracks. Dataset preparation includes resizing and image filtering to ensure statistical homogeneity and to reduce noise. The image dataset is then segmented, making it better suited for extracting important features and easier to evaluate. The image is transformed to grayscale, which removes hue and saturation but retains luminance. To create a clean edge map, an edge detection process is used to extract the major edge features of the image. The Otsu method is used to minimize the intra-class variance between black and white pixels, and a median filter is employed to reduce noise while keeping the borders of the image. These image processing techniques enhance the significant features of the concrete image, especially the defects. In this study, the tonal zones of the histogram and their properties are used to analyze the condition of the concrete. By examining the histogram, a viewer can read information about the image from the number of pixels associated with each tonal range. The five tonal zones of the histogram, which reflect the qualities of the concrete image, can be evaluated in terms of contrast, brightness, highlights, shadow spikes, or the condition of the shadow region that corresponds to the foreground.
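The Otsu step mentioned above picks the gray level that minimizes intra-class variance, or equivalently maximizes the between-class variance w0·w1·(μ0−μ1)², computed directly from the histogram. A compact reference implementation over an arbitrary-length histogram (real images would use 256 bins):

```python
def otsu_threshold(hist):
    # Scan every candidate level t; pixels <= t form class 0, the rest class 1.
    # Keep the t that maximizes between-class variance w0 * w1 * (mu0 - mu1)^2.
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist)):
        w0 += hist[t]
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (grand_sum - cum) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal crack-image histogram this lands the threshold in the valley between the dark crack pixels and the bright concrete surface.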


A Service Composition using Hierarchical Model in Multiple Service Environment

  • Tang, Jiamei; Kim, Sangwook
    • Journal of Korea Multimedia Society / v.18 no.9 / pp.1091-1097 / 2015
  • The Internet of Things (IoT) is one of the most promising future paradigms, foreseeing enormous numbers of interoperable things and heterogeneous services. The goal of IoT is to connect all things and bring all kinds of information and services to people. However, such a great deal of information may lead to cognitive overload or restrain people's productivity. Thus, it is necessary to build intelligent mechanisms that proactively assist people in accessing the information or services they need. Most previous mechanisms are built on well-defined web services and lack consideration of constrained resources. This paper suggests a service composition method adapting a hierarchical model: a graph-based model composed of four layers, the Context Layer, Event Layer, Service Layer, and Device Layer. With such a multi-layer graph, service composition can be achieved by iterating layer by layer. To evaluate the effectiveness of the proposed hierarchical model, a real-life emergency response dataset is applied; the experimental results are compared with a general probabilistic method and indicate that the proposed method is helpful for composing multiple services while considering the given context and constrained resources.
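A toy sketch of the layer-by-layer iteration over the four-layer graph: starting from a context node, follow the inter-layer edges Context→Event→Service→Device and collect what each layer contributes. The edge dictionaries and node names below are invented for illustration, not taken from the paper's dataset:

```python
def compose(layer_edges, start_context):
    # layer_edges: one dict per layer transition (Context->Event,
    # Event->Service, Service->Device), mapping a node to its successors.
    # Returns the nodes selected at each layer, in order.
    selected, frontier = [], {start_context}
    for edges in layer_edges:
        nxt = set()
        for node in frontier:
            nxt.update(edges.get(node, []))
        selected.append(sorted(nxt))
        frontier = nxt
    return selected
```

For an emergency-response style example, a "fire" context fans out to an alarm event and onward to the services and devices that realize it.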