References
- Artetxe, M., G. Labaka, and E. Agirre, "Learning Principled Bilingual Mappings of Word Embeddings while Preserving Monolingual Invariance," In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, (2016), 2289-2294.
- Kapoor, A., P. Jain, and R. Viswanathan, "Multilabel Classification using Bayesian Compressed Sensing," Advances in Neural Information Processing Systems, Vol.25, (2012).
- Biesialska, M. and M. R. Costa-jussà, "Refinement of Unsupervised Cross-lingual Word Embeddings," arXiv:2002.09213, (2020).
- Wang, B., L. Chen, W. Sun, K. Qin, K. Li, and H. Zhou, "Ranking-based Autoencoder for Extreme Multi-label Classification," arXiv:1904.05937, (2019).
- Brown, T., B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, ... and D. Amodei, "Language Models Are Few-shot Learners," Advances in Neural Information Processing Systems, Vol.33, (2020), 1877-1901.
- Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, "Indexing by Latent Semantic Analysis," Journal of the American Society for Information Science, Vol.41, No.6(1990), 391-407. https://doi.org/10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9
- Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv:1810.04805, (2018).
- Tai, F. and H.-T. Lin, "Multilabel Classification with Principal Label Space Transformation," Neural Computation, Vol.24, No.9(2012), 2508-2542. https://doi.org/10.1162/NECO_a_00320
- Grave, E., A. Joulin, and Q. Berthet, "Unsupervised Alignment of Embeddings with Wasserstein Procrustes," In The 22nd International Conference on Artificial Intelligence and Statistics, (2019), 1880-1890.
- Gu, Y., K. Yang, S. Fu, S. Chen, X. Li, and I. Marsic, "Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-level Alignment," In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, (2018), 2225-2235.
- Guo, H., J. Tang, W. Zeng, X. Zhao, and L. Liu, "Multi-modal Entity Alignment in Hyperbolic Space," Neurocomputing, Vol.461, (2021), 598-607. https://doi.org/10.1016/j.neucom.2021.03.132
- Hermann, K. M. and P. Blunsom, "Multilingual Distributed Representations without Word Alignment," In Proceedings of ICLR, (2014).
- Pennington, J., R. Socher, and C. D. Manning, "GloVe: Global Vectors for Word Representation," In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, (2014), 1532-1543.
- Wicker, J., A. Tyukin, and S. Kramer, "A Nonlinear Label Compression and Transformation Method for Multi-label Classification Using Autoencoders," Advances in Knowledge Discovery and Data Mining, (2016), 328-340.
- Kim, M. and N. Kim, "Label Embedding for Improving Classification Accuracy Using Autoencoder with Skip-Connections," Journal of Intelligence and Information Systems, Vol.27, No.3(2021), 175-197. https://doi.org/10.13088/JIIS.2021.27.3.175
- Lample, G. and A. Conneau, "Cross-lingual Language Model Pretraining," arXiv:1901.07291, (2019).
- Lee, M., S. Yang, and H. Lee, "Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity," Journal of Intelligence and Information Systems, Vol.25, No.4(2019), 105-122. https://doi.org/10.13088/JIIS.2019.25.4.105
- Lin, Z., G. Ding, M. Hu, and J. Wang, "Multi-label Classification via Feature-aware Implicit Label Space Encoding," In International Conference on Machine Learning, (2014), 325-333.
- Mikolov, T., K. Chen, G. Corrado, and J. Dean, "Efficient Estimation of Word Representations in Vector Space," arXiv:1301.3781, (2013).
- Mikolov, T., Q. V. Le, and I. Sutskever, "Exploiting Similarities Among Languages for Machine Translation," arXiv:1309.4168, (2013).
- Nakashole, N. and R. Flauger, "Characterizing Departures from Linearity in Word Translation," arXiv:1806.04508, (2018).
- Park, H. and K. Shin, "Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models," Journal of Intelligence and Information Systems, Vol.26, No.4(2020), 1-25. https://doi.org/10.13088/JIIS.2020.26.4.001
- Patra, B., J. R. A. Moniz, S. Garg, M. R. Gormley, and G. Neubig, "Bilingual Lexicon Induction with Semi-supervision in Non-isometric Embedding Spaces," arXiv:1908.06625, (2019).
- Peters, M. E., M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep Contextualized Word Representations," arXiv:1802.05365, (2018).
- Bojanowski, P., E. Grave, A. Joulin, and T. Mikolov, "Enriching Word Vectors with Subword Information," arXiv:1607.04606, (2016).
- Schönemann, P. H., "A Generalized Solution of the Orthogonal Procrustes Problem," Psychometrika, Vol.31, No.1(1966), 1-10. https://doi.org/10.1007/BF02289451
- Søgaard, A., S. Ruder, and I. Vulić, "On the Limitations of Unsupervised Bilingual Dictionary Induction," arXiv:1805.03620, (2018).
- Tai, W., H. T. Kung, X. L. Dong, M. Comiter, and C. F. Kuo, "exBERT: Extending Pre-trained Models with Domain-specific Vocabulary Under Constrained Training Resources," In Findings of the Association for Computational Linguistics: EMNLP 2020, (2020), 1433-1439.
- Vulić, I. and M. F. Moens, "A Study on Bootstrapping Bilingual Vector Spaces from Non-parallel Data (and Nothing Else)," In Proceedings of EMNLP, (2013), 1613-1624.
- Vulić, I., G. Glavaš, R. Reichart, and A. Korhonen, "Do We Really Need Fully Unsupervised Cross-lingual Embeddings?," arXiv:1909.01638, (2019).
- Wu, S. and M. Dredze, "Beto, Bentz, Becas: The Surprising Cross-lingual Effectiveness of BERT," arXiv:1904.09077, (2019).
- Xing, C., D. Wang, C. Liu, and Y. Lin, "Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation," In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (2015), 1006-1011.
- Chen, Y.-N. and H.-T. Lin, "Feature-aware Label Space Dimension Reduction for Multi-label Classification," Advances in Neural Information Processing Systems, Vol.25, (2012).
- Yu, E., S. Seo, and N. Kim, "Building Specialized Language Model for National R&D through Knowledge Transfer Based on Further Pre-training," Knowledge Management Research, Vol.22, No.3(2021), 91-106.