• Title/Summary/Keyword: Network graph

703 search results

Topological Properties and Broadcasting Algorithm of Hyper-Star Interconnection Network (하이퍼-스타 연결망의 위상적 성질과 방송 알고리즘)

  • Kim Jong-Seok;Oh Eun-seuk;Lee Hyeong-Ok
    • The KIPS Transactions:PartA
    • /
    • v.11A no.5
    • /
    • pp.341-346
    • /
    • 2004
  • Recently, the Hyper-Star graph HS(m, k) was introduced as a new interconnection network topology for parallel processing. The Hyper-Star graph combines properties of the hypercube and the star graph and improves on the network cost of a hypercube with the same number of nodes. In this paper, we show that the Hyper-Star graph HS(m, k) is a subgraph of the hypercube. We also show that the regular Hyper-Star graph HS(2n, n) is node-symmetric, by presenting a mapping algorithm. In addition, we introduce an efficient one-to-all broadcasting scheme for HS(2n, n), based on a spanning tree of minimum height, that takes 2n-1 steps.
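The broadcast result is easier to picture with a concrete representation of HS(m, k). The sketch below assumes the commonly used definition in which vertices are binary strings of length m with exactly k ones and edges exchange the first symbol with a later symbol of the opposite value; the BFS spanning tree is only an illustration of tree-based broadcasting, not the paper's minimum-height construction.

```python
from itertools import combinations
from collections import deque

def hs_vertices(m, k):
    """All binary strings of length m with exactly k ones."""
    return [''.join('1' if i in ones else '0' for i in range(m))
            for ones in combinations(range(m), k)]

def hs_neighbors(v):
    """Neighbors in HS(m, k): exchange the first symbol with a later
    symbol of the opposite value (the weight k is preserved)."""
    return [v[i] + v[1:i] + v[0] + v[i + 1:]
            for i in range(1, len(v)) if v[i] != v[0]]

def broadcast_tree(source):
    """BFS spanning tree rooted at the broadcast source; its height is a
    simple upper bound on the number of broadcast rounds needed."""
    parent, dist = {source: None}, {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in hs_neighbors(u):
            if w not in dist:
                dist[w], parent[w] = dist[u] + 1, u
                queue.append(w)
    return parent, max(dist.values())

if __name__ == "__main__":
    parent, height = broadcast_tree('111000')          # a vertex of HS(6, 3)
    print(len(parent) == len(hs_vertices(6, 3)), "tree height:", height)
```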

A Path Finding Algorithm based on an Abstract Graph Created by Homogeneous Node Elimination Technique (동일 특성 노드 제거를 통한 추상 그래프 기반의 경로 탐색 알고리즘)

  • Kim, Ji-Soo;Lee, Ji-Wan;Cho, Dea-Soo
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.4
    • /
    • pp.39-46
    • /
    • 2009
  • Path-finding algorithms that use a heuristic function can suffer from increased exploration cost when the heuristic cannot determine a direction or when two or more candidate paths have nearly the same cost. In this paper, we propose an abstract graph for path-finding with dynamic information. The abstract graph is a simplified graph abstracted from the real road network and is created from fixed-size cells over that network. Path-finding with the abstract graph consists of two search steps: path-finding on the abstract graph and path-finding on the real road network. We compared the proposed algorithm against an A* algorithm based on fixed-size cells, on a road network with 106,254 edges. In the performance evaluation, the exploration cost with the abstract graph was about 3~30% lower than that of the cell-based A* algorithm. The path quality, however, was about 1.5~6.6% worse, because edges eliminated during abstraction are no longer candidates for path-finding.
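A minimal sketch of the general two-step idea described above: a coarse A* search over an abstract graph of cells, then a refined A* search on the road network restricted to the coarse corridor. The function names, the corridor pruning rule, and the assumption that the heuristic works on both cells and road nodes (e.g., Euclidean distance on representative coordinates) are illustrative; the paper's node-elimination and cell-construction details are not reproduced.

```python
import heapq
import math

def a_star(neighbors, start, goal, heuristic):
    """Generic A*: neighbors(u) yields (v, edge_cost) pairs."""
    open_heap = [(heuristic(start, goal), 0.0, start)]
    g, parent = {start: 0.0}, {start: None}
    while open_heap:
        _, g_u, u = heapq.heappop(open_heap)
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        if g_u > g.get(u, math.inf):
            continue                      # stale heap entry
        for v, w in neighbors(u):
            cand = g_u + w
            if cand < g.get(v, math.inf):
                g[v], parent[v] = cand, u
                heapq.heappush(open_heap, (cand + heuristic(v, goal), cand, v))
    return None

def two_step_search(abstract_nbrs, road_nbrs, cell_of, start, goal, heuristic):
    """Step 1: coarse path over abstract cells.
       Step 2: A* on the road network restricted to the coarse corridor."""
    corridor = a_star(abstract_nbrs, cell_of(start), cell_of(goal), heuristic)
    if corridor is None:
        return None
    allowed = set(corridor)

    def restricted(u):
        for v, w in road_nbrs(u):
            if cell_of(v) in allowed:     # prune nodes outside the corridor
                yield v, w

    return a_star(restricted, start, goal, heuristic)
```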


Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.1
    • /
    • pp.649-654
    • /
    • 2023
  • This paper proposes the design of a neural architecture search model that uses graph convolutional neural networks. Because deep learning models learn as a black box, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that generates candidate models and the convolutional neural network that is generated. Conventional neural architecture search models use recurrent neural networks; in this paper, we propose GC-NAS, which instead uses graph convolutional neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel, conditioned on the depth information. Because depth information is reflected, the search space is wider, and the purpose of each part of the search is clearer thanks to the parallel, depth-conditioned search, so the proposed structure is judged to be theoretically superior to existing approaches. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time axis and the limited spatial search range of the recurrent neural networks used in existing neural architecture search models. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search.
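The paper's Layer Extraction Block and Hyper Parameter Prediction Block are not described in the abstract, so the sketch below only illustrates the underlying ingredient: encoding a candidate architecture as a graph (one node per layer, one edge per data-flow connection) and processing it with graph convolutions to obtain an embedding that a search controller could score. All dimensions and the pooling choice are assumptions.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph convolution: symmetrically normalized adjacency
    (with self-loops) times node features times a weight matrix, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)

def encode_architecture(layer_types, edges, num_types, hidden_dim=16, seed=0):
    """Encode a candidate CNN as a graph and pool node embeddings into a
    single vector that a controller could use to score the architecture."""
    rng = np.random.default_rng(seed)
    n = len(layer_types)
    adj = np.zeros((n, n))
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1.0
    x = np.eye(num_types)[layer_types]                 # one-hot layer types
    w1 = rng.normal(scale=0.1, size=(num_types, hidden_dim))
    w2 = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    h = gcn_layer(adj, gcn_layer(adj, x, w1), w2)
    return h.mean(axis=0)                              # graph-level embedding

if __name__ == "__main__":
    # hypothetical 4-layer chain: conv -> conv -> pool -> dense
    emb = encode_architecture([0, 0, 1, 2], [(0, 1), (1, 2), (2, 3)], num_types=3)
    print(emb.shape)
```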

TeGCN: Transformer-embedded Graph Neural Network for Thin-filer default prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.419-437
    • /
    • 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in enhancing the accuracy of assessing their credit default risk to generate additional revenue. In particular, researchers are actively developing default prediction models based on machine learning and deep learning algorithms, in contrast to traditional statistical methods, which struggle to capture nonlinearity. Among these efforts, graph neural network (GNN) architectures are noteworthy for predicting default when data on thin filers are limited, because they can incorporate network information between borrowers alongside conventional credit-related data. However, prior research employing graph neural networks has had difficulty handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer-embedded Graph Convolutional Network (TeGCN), which aims to address these limitations and enable effective default prediction for thin filers. TeGCN combines the TabTransformer, which can extract contextual information from categorical variables, with a graph convolutional network, which captures network information between borrowers. Our TeGCN model surpasses the baseline models' performance on both the general borrower dataset and the thin filer dataset, and it performs especially well on thin filer default prediction. This study achieves high default prediction accuracy through a model structure tailored to the characteristics of credit information containing numerous categorical variables, especially in the context of thin filers with limited data. Our study can contribute to resolving the financial exclusion faced by thin filers and help generate additional revenue within the financial industry.
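A very rough sketch of the TeGCN idea, assuming nothing beyond the abstract: each borrower's categorical variables are embedded as tokens, passed through one self-attention layer (standing in for the TabTransformer), pooled into a node feature vector, and then propagated over the borrower graph with a graph convolution. The dimensions, single-layer depth, and sigmoid readout are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d):
    """Single-head self-attention over one borrower's categorical tokens."""
    wq, wk, wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    return softmax(q @ k.T / np.sqrt(d)) @ v

def gcn_layer(adj, x, w):
    """Symmetrically normalized graph convolution with self-loops."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv @ a_hat @ d_inv @ x @ w, 0.0)

def tegcn_forward(cat_data, cardinalities, adj, d=8):
    """cat_data: (num_borrowers, num_categorical) integer category codes."""
    tables = [rng.normal(scale=0.1, size=(c, d)) for c in cardinalities]
    node_feats = []
    for row in cat_data:
        tokens = np.stack([tables[j][code] for j, code in enumerate(row)])
        node_feats.append(self_attention(tokens, d).mean(axis=0))
    h = gcn_layer(adj, np.stack(node_feats), rng.normal(scale=0.1, size=(d, d)))
    w_out = rng.normal(scale=0.1, size=(d,))
    return 1.0 / (1.0 + np.exp(-(h @ w_out)))          # default probabilities

if __name__ == "__main__":
    cat = np.array([[0, 2], [1, 0], [2, 1]])           # 3 borrowers, 2 variables
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    print(tegcn_forward(cat, cardinalities=[3, 3], adj=adj))
```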

Embedding Mechanism between Pancake and Star, Macro-star Graph (팬케익 그래프와 스타(Star) 그래프, 매크로-스타(Macro-star) 그래프간의 임베딩 방법)

  • 최은복;이형옥
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.3
    • /
    • pp.556-564
    • /
    • 2003
  • The star and pancake graphs have the desirable properties of a hypercube while achieving a lower network cost than the hypercube. The Macro-star graph, which uses the star graph as a basic module, has node symmetry, maximum fault tolerance, and a hierarchical decomposition property, and it is an interconnection network that improves the network cost relative to the star graph. In this paper, we propose methods to embed among the star graph, the pancake graph, and the Macro-star graph using the edge definitions of the graphs. We prove that the star graph $S_n$ can be embedded into the pancake graph $P_n$ with dilation 4, and that the Macro-star graph MS(2, n) can be embedded into the pancake graph $P_{2n+1}$ with dilation 4. We also show that the cost of embedding the pancake graph into the star and Macro-star graphs is O(n).
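The embeddings in this paper are built from the graphs' edge definitions, which are easy to state concretely. The sketch below generates the neighbors of a permutation in the star graph (swap the first symbol with the i-th) and in the pancake graph (reverse a prefix of length i); the dilation-4 embedding itself follows the paper's construction and is not reproduced here.

```python
from itertools import permutations

def star_neighbors(p):
    """Star graph S_n: swap the first symbol with the i-th (i >= 2)."""
    return [(p[i],) + p[1:i] + (p[0],) + p[i + 1:] for i in range(1, len(p))]

def pancake_neighbors(p):
    """Pancake graph P_n: reverse a prefix of length i (2 <= i <= n)."""
    return [tuple(reversed(p[:i])) + p[i:] for i in range(2, len(p) + 1)]

def build_graph(n, neighbors):
    """Adjacency lists over all permutations of 1..n."""
    return {p: set(neighbors(p)) for p in permutations(range(1, n + 1))}

if __name__ == "__main__":
    n = 4
    s, pk = build_graph(n, star_neighbors), build_graph(n, pancake_neighbors)
    # Both graphs are regular of degree n-1 on n! vertices.
    print(len(s), {len(v) for v in s.values()}, {len(v) for v in pk.values()})
```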


GBGNN: Gradient Boosted Graph Neural Networks

  • Eunjo Jang;Ki Yong Lee
    • Journal of Information Processing Systems
    • /
    • v.20 no.4
    • /
    • pp.501-513
    • /
    • 2024
  • In recent years, graph neural networks (GNNs) have been extensively used to analyze graph data across various domains because of their powerful capabilities in learning complex graph-structured data. However, recent research has focused on improving the performance of a single GNN with only two or three layers, because stacking layers deeply causes the over-smoothing problem, which significantly degrades GNN performance. Ensemble methods, on the other hand, combine individual weak models to obtain better generalization performance. Among them, gradient boosting is a powerful supervised learning algorithm that adds new weak models in the direction that reduces the errors of the previously created weak models; after repeating this process, it combines the weak models into a strong model with better performance. Improving the performance of GNNs by combining multiple GNNs has not yet been studied much. In this paper, we propose gradient boosted graph neural networks (GBGNN), which combine multiple shallow GNNs with gradient boosting. We use shallow GNNs as weak models and create new weak models using the proposed gradient boosting-based loss function. Our empirical evaluations on three real-world datasets demonstrate that GBGNN performs much better than a single GNN. Specifically, in our experiments using a graph convolutional network (GCN) and a graph attention network (GAT) as weak models on the Cora dataset, GBGNN achieves performance improvements of 12.3%p and 6.1%p in node classification accuracy compared to a single GCN and a single GAT, respectively.
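A minimal numerical sketch of the boosting idea, under simplifying assumptions not in the paper: each weak model is a single normalized graph propagation followed by a linear readout, trained with squared loss so that every new weak model fits the residual (negative gradient) of the current ensemble. The paper's actual loss function and GNN weak models (GCN, GAT) are richer than this.

```python
import numpy as np

def propagate(adj, x):
    """One normalized propagation step (the 'graph' part of a weak GNN)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv @ a_hat @ d_inv @ x

def gradient_boosted_gnn(adj, x, y, rounds=10, lr=0.5):
    """Boosting with shallow graph-convolutional weak learners: each round
    fits a linear readout on propagated features to the residual y - F."""
    h = propagate(adj, x)                 # shared shallow propagation
    pred = np.zeros_like(y)
    readouts = []
    for _ in range(rounds):
        residual = y - pred               # negative gradient of squared loss
        w, *_ = np.linalg.lstsq(h, residual, rcond=None)
        readouts.append(w)
        pred = pred + lr * (h @ w)
    return pred, readouts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((20, 20)) < 0.2).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T          # symmetric, no self-loops
    x, y = rng.normal(size=(20, 5)), rng.normal(size=20)
    pred, _ = gradient_boosted_gnn(adj, x, y)
    print("final squared error:", float(((y - pred) ** 2).mean()))
```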

Representation Method of Track Topologies using Railway Graph (선로그래프를 이용한 철도망 위상 표현방법)

  • 조동영
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.1
    • /
    • pp.114-119
    • /
    • 2002
  • Real-time assignment of railways is an important component of railway control systems. To solve this problem, we must represent the track topology exactly. A graph is a proper data structure for representing general network topologies, but it is not adequate for track topologies. In this paper, we define a new data structure, the railway graph, which can exactly represent the topologies of railway networks. We also describe a path search algorithm on the defined railway graph and a top-down approach for designing railway networks with the proposed graph.
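The abstract does not spell out the railway graph definition, but the core difficulty it addresses is that a plain graph ignores which continuations are physically possible at a switch given the edge a train arrives on. The sketch below is a hypothetical illustration of that idea, not the paper's data structure: each node stores its allowed (incoming edge, outgoing edge) pairs, and the path search walks over edges so that these constraints are respected.

```python
from collections import deque

class TrackNode:
    """A switch/junction: 'moves' maps an incoming edge id to the set of
    outgoing edge ids a train may continue onto."""
    def __init__(self, moves):
        self.moves = moves

def find_route(nodes, edges, start_edge, goal_edge):
    """BFS over edge states, honoring per-node movement constraints.
    edges: edge_id -> (from_node, to_node)."""
    parent = {start_edge: None}
    queue = deque([start_edge])
    while queue:
        e = queue.popleft()
        if e == goal_edge:
            route = []
            while e is not None:
                route.append(e)
                e = parent[e]
            return route[::-1]
        _, node = edges[e]
        for nxt in nodes[node].moves.get(e, ()):
            if nxt not in parent:
                parent[nxt] = e
                queue.append(nxt)
    return None

if __name__ == "__main__":
    # Hypothetical layout: edge 'a' enters switch N1, which allows
    # continuing onto 'b' but not onto 'c' (a facing-point restriction).
    edges = {"a": ("N0", "N1"), "b": ("N1", "N2"), "c": ("N1", "N3")}
    nodes = {"N1": TrackNode({"a": {"b"}}),
             "N2": TrackNode({}), "N3": TrackNode({})}
    print(find_route(nodes, edges, "a", "b"))   # ['a', 'b']
    print(find_route(nodes, edges, "a", "c"))   # None: not reachable
```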


Evolution and Maintenance of Proxy Networks for Location Transparent Mobile Agent and Formal Representation By Graph Transformation Rules

  • Kurihara, Masahito;Numazawa, Masanobu
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2001.01a
    • /
    • pp.151-155
    • /
    • 2001
  • Mobile agent technology has been the subject of much attention in the last few years, mainly due to the proliferation of distributed software technologies combined with distributed AI research. In this paper, we present a design of communication networks of agents that cooperate with each other to forward messages to a specific mobile agent, in order to make the overall system location transparent. To make the material accessible to general intelligent-system researchers, we present the general ideas abstractly in terms of graph theory. In particular, a proxy network is defined as a directed acyclic graph satisfying some structural conditions. It turns out that this definition ensures a kind of reliability of the network, in the sense that as long as at most one proxy agent is abnormal, there still exists a communication path from every proxy agent to the target agent without passing through the abnormal proxy. As the basis for the implementation of this scheme, an appropriate initial proxy network is specified, and the dynamic nature of the network is represented by a set of graph transformation rules. It is shown that those rules are sound, in the sense that all graphs created from the initial proxy network by zero or more applications of the rules are guaranteed to be proxy networks. Finally, we discuss some implementation issues.
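The reliability property quoted in the abstract is easy to check directly for a small candidate network. The sketch below assumes a proxy network given as a directed graph with a designated target node and verifies that, for every proxy and every single "abnormal" proxy, a path to the target still exists that avoids the abnormal one; the paper's structural conditions and transformation rules are not reproduced.

```python
from collections import deque

def reaches(adj, start, goal, avoid):
    """Is there a directed path start -> goal that never visits 'avoid'?"""
    if start == avoid:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            return True
        for v in adj.get(u, ()):
            if v != avoid and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def single_fault_tolerant(adj, target):
    """For every proxy p and every faulty proxy q (q != p, q != target),
    p must still reach the target while avoiding q."""
    proxies = [v for v in adj if v != target]
    return all(reaches(adj, p, target, q)
               for p in proxies
               for q in proxies if q != p)

if __name__ == "__main__":
    # Hypothetical proxy DAG: two disjoint routes from the first proxy to 't'.
    adj = {"p1": ["p2", "p3"], "p2": ["t"], "p3": ["t"], "t": []}
    print(single_fault_tolerant(adj, "t"))                                # True
    print(single_fault_tolerant({"p1": ["p2"], "p2": ["t"], "t": []}, "t"))  # False
```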


Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • v.16 no.6
    • /
    • pp.1250-1260
    • /
    • 2020
  • This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to accurately detect objects and relationships. In addition, in the proposed model, context features are embedded using graph neural networks to depict the dependencies between two related objects in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments using the Visual Genome benchmark dataset.
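The abstract stays at a high level, so the sketch below is only a loose illustration of multimodal context embedding rather than the paper's model: each detected object node gets a feature vector formed by concatenating (assumed) visual and linguistic features, one round of message passing mixes context between related objects, and a bilinear score is read off each ordered object pair as a stand-in for relationship prediction. All dimensions and operators are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_embed(visual, linguistic, adj, rounds=1):
    """Concatenate visual and linguistic features per object, then mix
    context along relationship edges with simple mean aggregation."""
    h = np.concatenate([visual, linguistic], axis=1)
    w = rng.normal(scale=0.1, size=(h.shape[1], h.shape[1]))
    for _ in range(rounds):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
        h = np.tanh((adj @ h) / deg @ w + h)   # aggregate neighbors, keep self
    return h

def relation_scores(h):
    """Bilinear score for every ordered object pair (subject, object)."""
    w_rel = rng.normal(scale=0.1, size=(h.shape[1], h.shape[1]))
    return h @ w_rel @ h.T

if __name__ == "__main__":
    visual = rng.normal(size=(3, 4))      # 3 detected objects
    linguistic = rng.normal(size=(3, 2))  # e.g., class-name word embeddings
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    h = context_embed(visual, linguistic, adj)
    print(relation_scores(h).shape)       # (3, 3) pairwise relation scores
```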

Recent developments of constructing adjacency matrix in network analysis

  • Hong, Younghee;Kim, Choongrak
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.5
    • /
    • pp.1107-1116
    • /
    • 2014
  • In this paper, we review recent developments in network analysis using graph theory and introduce ongoing research areas with relevant theoretical results. Specifically, we introduce basic notation for graphs and the conditional and marginal approaches to constructing the adjacency matrix. We also introduce the Marchenko-Pastur law, the Tracy-Widom law, the white Wishart distribution, and the spiked distribution. Finally, we discuss the relationship between degrees and eigenvalues for the detection of hubs in a network.
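As a concrete illustration of the marginal approach mentioned in the abstract, the sketch below builds an adjacency matrix by thresholding pairwise sample correlations and then compares the largest eigenvalue of the sample correlation matrix with the upper Marchenko-Pastur edge, a common heuristic for spotting spiked (hub-like) structure. The threshold value and the simulated data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def adjacency_from_correlation(data, threshold=0.5):
    """Marginal approach: connect variables i and j when their absolute
    sample correlation exceeds a threshold."""
    corr = np.corrcoef(data, rowvar=False)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj, corr

def marchenko_pastur_edge(n, p):
    """Upper edge (1 + sqrt(p/n))^2 of the Marchenko-Pastur law for the
    eigenvalues of a p-variable, n-sample white sample covariance."""
    return (1.0 + np.sqrt(p / n)) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 500, 50
    data = rng.normal(size=(n, p))
    data[:, 1] += 0.8 * data[:, 0]          # plant one correlated pair
    adj, corr = adjacency_from_correlation(data)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print("degrees of first 3 nodes:", adj.sum(axis=1)[:3])
    print("top eigenvalue %.2f vs MP edge %.2f"
          % (eigvals[0], marchenko_pastur_edge(n, p)))
```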