• Title/Summary/Keyword: Network embedding

Embedding Algorithm between Folded Hypercube and HFH Network (폴디드 하이퍼큐브와 HFH 네트워크 사이의 임베딩 알고리즘)

  • Kim, Jongseok;Lee, Hyeongok;Kim, Sung Won
    • KIPS Transactions on Computer and Communication Systems / v.2 no.4 / pp.151-154 / 2013
  • In this paper, we analyze embeddings between the Folded Hypercube and the HFH network. We show that the Folded Hypercube $FQ_{2n}$ can be embedded into $HFH(C_n, C_n)$ with dilation 4 and expansion $\frac{(C_n)^2}{2^{2n}}$, and that $HFH(C_d, C_d)$ can be embedded into $FQ_{4d-2}$ with dilation $O(d)$.
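
Dilation and expansion are the standard embedding cost measures used here: dilation is the maximum host-graph distance between the images of adjacent guest nodes, and expansion is the ratio of host to guest node counts. A minimal Python sketch of both measures (networkx assumed); the HFH construction is not reproduced, so the toy host below is a stand-in:

```python
import itertools
import networkx as nx

def folded_hypercube(n):
    """FQ_n: the hypercube Q_n plus an edge between each pair of
    bitwise-complementary nodes."""
    g = nx.hypercube_graph(n)                 # nodes are n-bit tuples
    for v in list(g.nodes()):
        g.add_edge(v, tuple(1 - b for b in v))
    return g

def dilation_and_expansion(guest, host, phi):
    """phi: injective map from guest nodes to host nodes."""
    dist = dict(nx.all_pairs_shortest_path_length(host))
    dilation = max(dist[phi[u]][phi[v]] for u, v in guest.edges())
    expansion = host.number_of_nodes() / guest.number_of_nodes()
    return dilation, expansion

# Toy check with an arbitrary injective map (illustrative only; the
# paper's guest/host pairs are FQ_2n and HFH(C_n, C_n)).
guest, host = folded_hypercube(2), folded_hypercube(3)
phi = dict(zip(guest.nodes(), itertools.islice(host.nodes(), 4)))
print(dilation_and_expansion(guest, host, phi))
```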

A Comparative Study of Word Embedding Models for Arabic Text Processing

  • Assiri, Fatmah;Alghamdi, Nuha
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.399-403 / 2022
  • Natural-language texts are analyzed to capture their intended meaning so that they can be classified according to the problem under study. One way to represent words is to generate vectors of real values that encode meaning; this is called word embedding. Similarities between word representations are then measured to identify the class of a text. Word embeddings can be created with the word2vec technique; more recently, fastText was introduced and has been reported to give better results when used with classifiers. In this paper, we study the performance of well-known classifiers when using both embedding techniques on an Arabic dataset. Applying them to real data collected from Wikipedia, we found that word2vec and fastText achieved similar accuracy with all of the classifiers used.
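
For illustration, a sketch of such a comparison with gensim embeddings feeding a scikit-learn classifier; the tiny corpus, the mean-vector document representation, and the logistic-regression classifier are placeholder choices, not the authors' setup:

```python
import numpy as np
from gensim.models import FastText, Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder corpus: tokenized documents with one label each.
docs = [["good", "fast", "service"], ["great", "quality", "service"],
        ["bad", "slow", "support"], ["poor", "quality", "support"]]
labels = [1, 1, 0, 0]

def doc_vectors(model, docs):
    """Represent each document by the mean of its word vectors."""
    dim = model.vector_size
    return np.array([np.mean([model.wv[w] for w in d if w in model.wv]
                             or [np.zeros(dim)], axis=0) for d in docs])

for name, cls in [("word2vec", Word2Vec), ("fastText", FastText)]:
    model = cls(sentences=docs, vector_size=50, window=3, min_count=1)
    X = doc_vectors(model, docs)
    scores = cross_val_score(LogisticRegression(), X, labels, cv=2)
    print(name, scores.mean())
```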

Group-based speaker embeddings for text-independent speaker verification (문장 독립 화자 검증을 위한 그룹기반 화자 임베딩)

  • Jung, Youngmoon;Eom, Youngsik;Lee, Yeonghyeon;Kim, Hoirin
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.496-502 / 2021
  • Recently, the deep speaker embedding approach has been widely used in text-independent speaker verification, where it outperforms the traditional i-vector approach. In this work, we propose a novel method, called group-based speaker embedding, that improves the deep speaker embedding approach by incorporating group information. We cluster all speakers in the training data into a predefined number of groups in an unsupervised manner, so that a fixed-length group embedding represents each group. A Group Decision Network (GDN) produces group weights, and an aggregated group embedding is computed as the weighted sum of the group embeddings. Finally, we generate the group-based embedding by adding the aggregated group embedding to the deep speaker embedding. In this way, a speaker embedding can reduce the search space of the speaker identity by incorporating group information, and can thereby flexibly represent a large number of speakers. Experiments on the VoxCeleb1 database show that the proposed approach improves on previous approaches.
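
A minimal PyTorch sketch of the aggregation described above; the dimensions and GDN architecture are placeholders, and the unsupervised clustering that defines the groups is assumed to have been done already:

```python
import torch
import torch.nn as nn

class GroupBasedEmbedding(nn.Module):
    def __init__(self, emb_dim=256, num_groups=32):
        super().__init__()
        # One fixed-length embedding per (unsupervised) speaker group.
        self.group_embeddings = nn.Parameter(torch.randn(num_groups, emb_dim))
        # Group Decision Network: predicts a weight per group.
        self.gdn = nn.Sequential(
            nn.Linear(emb_dim, 128), nn.ReLU(),
            nn.Linear(128, num_groups), nn.Softmax(dim=-1),
        )

    def forward(self, speaker_emb):                    # (batch, emb_dim)
        weights = self.gdn(speaker_emb)                # (batch, num_groups)
        aggregated = weights @ self.group_embeddings   # (batch, emb_dim)
        # Group-based embedding = deep speaker embedding + aggregated one.
        return speaker_emb + aggregated

emb = torch.randn(8, 256)                      # deep speaker embeddings
print(GroupBasedEmbedding()(emb).shape)        # torch.Size([8, 256])
```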

Lossless Data Hiding Using Modification of Histogram in Wavelet Domain (웨이블릿 영역에서 히스토그램 수정을 이용한 무손실 정보은닉)

  • Jeong Cheol-Ho;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.27-36 / 2006
  • Lossless data embedding is a method of inserting information into a host image that guarantees complete restoration of the host once the hidden data have been extracted. In this paper, we propose a novel reversible data embedding algorithm for images in the wavelet domain. The proposed technique, which modifies the histogram of the wavelet coefficients, consists of two insertion steps. In the first step, data are embedded into the wavelet coefficients by histogram modification. The second step compensates for the distortion caused by the first step while hiding additional information, so a higher embedding capacity is achieved. In terms of the trade-off between embedding capacity and PSNR, the proposed method performs considerably better than current reversible data embedding methods.
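
For illustration, the basic histogram-modification (histogram-shifting) step can be sketched as follows; this is the generic single-pass scheme on integer coefficients, not the paper's two-step wavelet-domain method, and the wavelet transform itself (e.g. via pywt) is omitted:

```python
import numpy as np

def embed(coeffs, bits):
    """Reversible histogram-shift embedding on integer coefficients."""
    c = coeffs.copy()
    vals, counts = np.unique(c, return_counts=True)
    peak = vals[np.argmax(counts)]          # most frequent value
    c[c > peak] += 1                        # shift to free the bin peak+1
    targets = np.flatnonzero(c == peak)     # capacity = count of peak value
    for pos, bit in zip(targets, bits):     # bit 1 -> move into freed bin
        c[pos] += bit
    return c, peak

def extract(c, peak, n_bits):
    """Recover the bits and restore the original coefficients."""
    marked = np.flatnonzero((c == peak) | (c == peak + 1))[:n_bits]
    bits = (c[marked] == peak + 1).astype(int).tolist()
    c = c.copy()
    c[c > peak] -= 1                        # undo the embedding and shift
    return c, bits

orig = np.random.randint(-5, 6, size=1000)
marked, peak = embed(orig, [1, 0, 1, 1])
restored, bits = extract(marked, peak, 4)
assert bits == [1, 0, 1, 1] and np.array_equal(restored, orig)
```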

Embedding Complete binary trees, Hypercube and Hyperpetersen Networks into Petersen-Torus(PT) Networks (정이진트리, 하이퍼큐브 및 하이퍼피터슨 네트워크를 피터슨-토러스(PT) 네트워크에 임베딩)

  • Seo, Jung-Hyun;Lee, Hyeong-Ok;Jang, Moon-Suk
    • Journal of KIISE: Computer Systems and Theory / v.35 no.8 / pp.361-371 / 2008
  • In this paper, the complete binary tree and the hypercube and HyperPetersen networks, whose degrees grow as the number of nodes increases, are embedded one-to-one into the Petersen-Torus (PT) network, which has fixed degree. One-to-one embedding carries less risk of overloading or idling processors than one-to-many or many-to-one embedding. So that algorithms developed for the hypercube or HyperPetersen networks can be used on the PT network, they are embedded one-to-one with expansion $\doteqdot 1$, dilation $1.5n+2$, and link congestion $O(n)$, without generating a large number of idle processors. The complete binary tree is embedded into the PT network with link congestion 1, expansion $\doteqdot 5$, and dilation $O(n)$, avoiding bottlenecks under wormhole routing, which is not affected by path length.
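
Link congestion, the third cost measure above, is the maximum number of guest edges routed across a single host link. A minimal sketch under the assumption that each guest edge follows one shortest path in the host (the paper's routing may differ); the graphs below are toy stand-ins:

```python
from collections import Counter
import networkx as nx

def link_congestion(guest, host, phi):
    """Max number of guest edges whose routing uses the same host edge."""
    load = Counter()
    for u, v in guest.edges():
        path = nx.shortest_path(host, phi[u], phi[v])
        for a, b in zip(path, path[1:]):
            load[frozenset((a, b))] += 1
    return max(load.values(), default=0)

# Toy example: route a 7-node complete binary tree over a 3x3 torus.
guest = nx.balanced_tree(2, 2)                  # 7 nodes, ids 0..6
host = nx.grid_2d_graph(3, 3, periodic=True)    # 9 nodes, torus
phi = dict(zip(guest.nodes(), host.nodes()))
print(link_congestion(guest, host, phi))
```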

Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex) (한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society / v.25 no.3 / pp.493-501 / 2022
  • This paper studies context-sensitive spelling error correction, using the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of embedded words. The Korean WordNet was constructed for Korean on the model of WordNet[3], developed at Princeton University. To learn a semantic network in graph form, or to use its learned vector information, the graph must first be transformed into vector form by embedding. For this transformation, a limited number of nodes of the graph are listed in a line, like words in a sentence, before being used as training input; DeepWalk[4] is one learning technique that uses this strategy, and it is used here to learn the graph between words in the Korean WordNet. For correction, the graph embedding is concatenated with the word vectors of a trained language model, and the final correction word is determined by the cosine distance between vectors. To test whether the graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and tested from the perspective of Word Sense Disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% over the baseline.
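
A minimal DeepWalk-style sketch: uniform random walks over the graph are treated as sentences and passed to Word2Vec, and the resulting node vectors can then be concatenated with language-model vectors and ranked by cosine similarity. The graph and all parameters below are placeholders, not the KorLex setup:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(g, num_walks=10, walk_len=8):
    """DeepWalk: uniform random walks serialized as 'sentences' of node ids."""
    walks = []
    for _ in range(num_walks):
        for start in g.nodes():
            walk = [start]
            while len(walk) < walk_len:
                walk.append(random.choice(list(g.neighbors(walk[-1]))))
            walks.append([str(n) for n in walk])
    return walks

g = nx.karate_club_graph()                     # stand-in for the KorLex graph
model = Word2Vec(random_walks(g), vector_size=64, window=4, min_count=0, sg=1)

# Candidate ranking by cosine similarity, as in the correction step:
print(model.wv.most_similar("0", topn=3))
```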

Contextualized Embedding- and Character Embedding-based Pointer Network for Korean Coreference Resolution (문맥 표현과 음절 표현 기반 포인터 네트워크를 이용한 한국어 상호참조해결)

  • Park, Cheoneum;Lee, Changki;Ryu, Jihee;Kim, Hyunki
    • Annual Conference on Human and Language Technology / 2018.10a / pp.239-242 / 2018
  • A contextualized representation is a vector obtained as a weighted sum of the hidden states of the several layers of a language model trained with a recurrent neural network (RNN). When syllable-level representations are learned with a convolutional neural network (CNN), out-of-vocabulary words occurring in the data can be handled. In this paper, we propose a method that uses a pointer network based on syllable-level CNN representations together with contextualized representations, and we apply it to Korean coreference resolution. In experiments on a question-answering dataset, the method achieved a CoNLL F1 of 57.88%, 11.09% better than the rule-based approach.
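
A minimal PyTorch sketch of the pointer-network scoring step (additive attention producing a distribution over input positions); the encoder, the syllable-level CNN, and the contextualized inputs are omitted, and all dimensions are placeholders:

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """Points to one input position: softmax(v^T tanh(W1 enc + W2 dec))."""
    def __init__(self, hidden=128):
        super().__init__()
        self.w_enc = nn.Linear(hidden, hidden, bias=False)
        self.w_dec = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, seq_len, hidden), dec_state: (batch, hidden)
        scores = self.v(torch.tanh(
            self.w_enc(enc_states) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                         # (batch, seq_len)
        return torch.softmax(scores, dim=-1)   # distribution over positions

enc = torch.randn(2, 10, 128)                  # e.g. contextual + CNN features
dec = torch.randn(2, 128)
print(PointerAttention()(enc, dec).shape)      # torch.Size([2, 10])
```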

Korean Sentiment Analysis Using Natural Network: Based on IKEA Review Data

  • Sim, YuJeong;Yun, Dai Yeol;Hwang, Chi-gon;Moon, Seok-Jae
    • International Journal of Internet, Broadcasting and Communication / v.13 no.2 / pp.173-178 / 2021
  • In this paper, we search for a suitable methodology for Korean sentiment analysis through comparative experiments that determine which combination of embedding method and neural network model trains with the highest accuracy and the fastest speed. For the embedding, we compare a plain word embedding with Word2Vec. For the model, we compare the representative neural network models CNN, RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU on IKEA review data. The experiments show that Word2Vec with Bi-GRU achieved the highest accuracy and the second-fastest speed, with 94.23% accuracy in 42.30 seconds, while Word2Vec with GRU achieved the third-highest accuracy and the fastest speed, with 92.53% accuracy in 26.75 seconds.
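
A minimal PyTorch sketch of the best-reported combination, a Bi-GRU classifier over word embeddings; in practice the embedding weights would be initialized from trained Word2Vec vectors, and all sizes here are placeholders:

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, vocab=5000, emb_dim=100, hidden=64, classes=2):
        super().__init__()
        # In practice, initialized from Word2Vec vectors.
        self.emb = nn.Embedding(vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, classes)

    def forward(self, token_ids):              # (batch, seq_len)
        _, h = self.gru(self.emb(token_ids))   # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)    # concat both directions
        return self.out(h)                     # (batch, classes) logits

tokens = torch.randint(0, 5000, (8, 30))       # toy batch of review token ids
print(BiGRUClassifier()(tokens).shape)         # torch.Size([8, 2])
```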

Embedding between Hypercube and HCN(n, n), HFN(n, n) (하이퍼큐브와 HCN(n, n), HFN(n, n) 사이의 임베딩)

  • Kim, Jong-Seok;Lee, Hyeong-Ok;Heo, Yeong-Nam
    • The KIPS Transactions: Part A / v.9A no.2 / pp.191-196 / 2002
  • Embedding one interconnection network into another is an important measure in algorithm design, as it allows algorithms developed for one network to be used on another in practice. The HCN(n, n) and HFN(n, n) graphs have the good properties of a hypercube while having a lower network cost. In this paper, we propose methods to embed between the hypercube $Q_{2n}$ and the HCN(n, n) and HFN(n, n) graphs. We show that the hypercube $Q_{2n}$ can be embedded into HCN(n, n) and HFN(n, n) with dilation 3 and an average dilation smaller than 2. We also show that HCN(n, n) and HFN(n, n) can be embedded into a hypercube with embedding cost $O(n)$.
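
Average dilation, reported above alongside the (maximum) dilation, averages the host-graph distance over all guest edges. A minimal sketch using the natural sub-cube embedding of $Q_4$ into $Q_5$ for illustration (the HCN/HFN constructions are not reproduced here):

```python
import networkx as nx

def avg_dilation(guest, host, phi):
    """Mean host-graph distance between images of guest-edge endpoints."""
    dist = dict(nx.all_pairs_shortest_path_length(host))
    stretches = [dist[phi[u]][phi[v]] for u, v in guest.edges()]
    return sum(stretches) / len(stretches)

guest = nx.hypercube_graph(4)                # Q_4, nodes are 4-bit tuples
host = nx.hypercube_graph(5)                 # Q_5
phi = {v: v + (0,) for v in guest.nodes()}   # embed Q_4 as a sub-cube of Q_5
print(avg_dilation(guest, host, phi))        # 1.0: every edge maps to an edge
```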

Design of a Recommendation System for Improving Deep Neural Network Performance

  • Juhyoung Sung;Kiwon Kwon;Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.1 / pp.49-56 / 2024
  • Many use cases applying recommendation systems have been emerging, especially on online platforms. Although the performance of recommendation systems is affected by a variety of factors, selecting appropriate features is difficult because most recommendation systems deal with sparse data. The conventional matrix factorization (MF) method is a basic way to handle these problems, but MF-based schemes cannot capture non-linear characteristics well. As deep learning technology attracted wide attention, collaborative filtering (CF) based on a deep neural network (DNN) framework was introduced to address the non-linearity issue. However, a problem remains in producing feature embeddings for use as input to the DNN. In this paper, we propose an effective method that uses singular value decomposition (SVD) based feature embedding to improve the DNN performance of recommendation algorithms. We evaluate the performance of the recommendation systems on the MovieLens dataset and show that the proposed scheme outperforms existing methods. Moreover, we analyze performance as a function of the number of latent features in the proposed algorithm. We expect that the proposed scheme can be applied to generalized recommendation systems.
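
A minimal sketch of the pipeline described above: truncated SVD of a sparse user-item matrix yields latent feature embeddings, which are concatenated and scored by a small DNN. The toy matrix, dimensions, and network are placeholders, not the paper's configuration, and training is omitted:

```python
import torch
import torch.nn as nn
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Toy sparse rating matrix (users x items) standing in for MovieLens.
R = sparse_random(100, 50, density=0.05, format="csr", random_state=0)

k = 16                                    # number of latent features
U, s, Vt = svds(R, k=k)                   # truncated SVD of the sparse matrix
user_emb = torch.tensor(U * s, dtype=torch.float32)        # (100, k)
item_emb = torch.tensor(Vt.T, dtype=torch.float32)         # (50, k)

# DNN scoring head over the concatenated SVD feature embeddings.
mlp = nn.Sequential(
    nn.Linear(2 * k, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

users = torch.randint(0, 100, (32,))       # toy batch of user/item ids
items = torch.randint(0, 50, (32,))
x = torch.cat([user_emb[users], item_emb[items]], dim=-1)  # (32, 2k)
print(mlp(x).shape)                                        # torch.Size([32, 1])
```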