• Title/Abstract/Keyword: Seq2seq Framework

Next Location Prediction with a Graph Convolutional Network Based on a Seq2seq Framework

  • Chen, Jianwei; Li, Jianbo; Ahmed, Manzoor; Pang, Junjie; Lu, Minchao; Sun, Xiufang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 5, pp. 1909-1928, 2020
  • Predicting human mobility has always been an important task in location-based social networks. Previous efforts fail to capture spatial dependence effectively, mainly because they weaken the location topology information. In this paper, we propose a neural network-based method that captures spatial-temporal dependence to predict a person's next location. Specifically, we incorporate a graph convolutional network (GCN) into a seq2seq framework to capture the location topology information and the temporal dependence, respectively. The encoder of the seq2seq framework first generates the hidden state and cell state of the historical trajectories. The GCN is then used to generate graph embeddings of the location topology graph. Finally, we predict future trajectories in the decoder by aggregating the temporal dependence and the graph embeddings. For evaluation, we leverage two real-world datasets, Foursquare and Gowalla. The experimental results demonstrate that our model outperforms the compared models.
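The encoder-GCN-decoder pipeline the abstract describes maps naturally onto a few PyTorch modules. The following is a minimal sketch under stated assumptions, not the authors' implementation: the dimensions, the single-layer GCN, and the concatenation-based fusion in the decoder are all illustrative choices (the paper only says "aggregate").

```python
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: relu(A_hat @ H @ W), with A_hat a normalized adjacency."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(self.linear(a_hat @ h))


class GCNSeq2Seq(nn.Module):
    """Encoder LSTM -> GCN over the location graph -> decoder LSTM, per the abstract."""

    def __init__(self, num_locations, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.gcn = GCNLayer(emb_dim, hidden_dim)
        # Assumed fusion: concatenate each input location's embedding with its
        # graph embedding before the decoder.
        self.decoder = nn.LSTM(emb_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_locations)

    def forward(self, hist_locs, a_hat, dec_locs):
        # Encoder produces the hidden and cell state of the historical trajectory.
        _, (h, c) = self.encoder(self.loc_emb(hist_locs))
        # GCN produces one embedding per node of the location topology graph.
        graph_emb = self.gcn(a_hat, self.loc_emb.weight)        # (num_locs, hidden)
        # Decoder aggregates temporal state with the graph embeddings of its inputs.
        dec_in = torch.cat([self.loc_emb(dec_locs), graph_emb[dec_locs]], dim=-1)
        out, _ = self.decoder(dec_in, (h, c))
        return self.out(out)                                    # next-location logits
```

Handing the encoder's final (hidden, cell) pair to the decoder as its initial state is the standard seq2seq hand-off; routing the graph embeddings through the decoder inputs lets each prediction step see both temporal and topological context.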

PC-SAN: Pretraining-Based Contextual Self-Attention Model for Topic Essay Generation

  • Lin, Fuqiang; Ma, Xingkong; Chen, Yaofeng; Zhou, Jiajun; Liu, Bo
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 8, pp. 3168-3186, 2020
  • Automatic topic essay generation (TEG) is a controllable text generation task that aims to generate informative, diverse, and topic-consistent essays from multiple topics. To produce high-quality essays, a reasonable method should consider both diversity and topic consistency. Another essential issue is the intrinsic link between the topics, which helps the essays stay close to the semantics of the provided topics. However, it remains challenging for TEG to fill the semantic gap between the source topic words and the target output, and a more powerful model is needed to capture the semantics of the given topics. To this end, we propose a pretraining-based contextual self-attention (PC-SAN) model built upon the seq2seq framework. For the encoder of our model, we employ a dynamic weighted sum of layers from BERT to fully utilize the semantics of the topics, which helps fill the gap and improves the quality of the generated essays. In the decoding phase, we also transform the target-side contextual history information into the query layers to alleviate the lack of context in typical self-attention networks (SANs). Experimental results on large-scale paragraph-level Chinese corpora verify that our model generates diverse, topic-consistent text and makes substantial improvements over strong baselines. Furthermore, extensive analysis validates the effectiveness of the contextual embeddings from BERT and the contextual history information in SANs.
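The encoder-side idea, a learnable weighted sum over all BERT layers (an ELMo-style scalar mix), can be sketched in a few lines of PyTorch with Hugging Face transformers. This is a hedged illustration only: the checkpoint name bert-base-chinese, the softmax-normalized weights, and the class name are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class BertScalarMix(nn.Module):
    """Learnable softmax-weighted sum over all BERT hidden layers."""

    def __init__(self, model_name="bert-base-chinese"):    # assumed checkpoint
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name, output_hidden_states=True)
        n_layers = self.bert.config.num_hidden_layers + 1  # +1 for the embedding layer
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).hidden_states
        stacked = torch.stack(hidden, dim=0)                # (L+1, B, T, D)
        w = torch.softmax(self.layer_weights, dim=0)        # dynamic layer weights
        return (w[:, None, None, None] * stacked).sum(0)    # (B, T, D) per-token mix
```

The decoder-side modification, folding target-side contextual history into the self-attention query layers, would extend standard SAN queries in a similar spirit, but the abstract gives too little detail to sketch it faithfully here.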