• Title/Summary/Keyword: Multilingual processing


Design and Implementation of a Multilingual-Supported Article Translation System using Semantic Web (시맨틱 웹을 이용한 다국어-지원 신문기사 번역시스템의 설계 및 구현)

  • Kang, Jeong-Seok;Lee, Ki-Young
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.786-788 / 2010
  • The recent emergence and growth of the Semantic Web, together with the development of Web 2.0, has changed the culture of the Web. Among the many application areas of the Semantic Web, multilingual translation supported by semantic information retrieval and multilingual information retrieval is one that needs further research. The biggest limitations of existing machine translation with respect to translation accuracy are word sense ambiguity and grammatical errors. This paper therefore proposes a new algorithm that uses the Semantic Web to resolve word sense ambiguity, removing these weaknesses, improving translation accuracy, and applying the result in a mobile setting. A newspaper article image captured on a mobile device is converted to text through OCR, and a translation system with high processing speed and accuracy is designed and implemented using a dictionary, a domain ontology, and sentence rule inference.
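
Below is a minimal, hypothetical sketch of the kind of ontology-assisted word sense disambiguation the abstract alludes to (dictionary plus field ontology). The sense inventory and domain tags are illustrative inventions, not the authors' resources.

```python
# Pick the sense whose ontology domains overlap most with the article's domain.
# SENSE_INVENTORY is a made-up stand-in for a dictionary + field ontology.
SENSE_INVENTORY = {
    "bank": [
        {"sense": "financial_institution", "domains": {"finance", "economy"}},
        {"sense": "river_bank", "domains": {"geography", "nature"}},
    ],
}

def disambiguate(word, article_domains):
    """Return the candidate sense with the largest domain overlap, or None."""
    candidates = SENSE_INVENTORY.get(word, [])
    if not candidates:
        return None
    best = max(candidates, key=lambda s: len(s["domains"] & article_domains))
    return best["sense"]

# An economics article would resolve "bank" to its financial sense.
print(disambiguate("bank", {"economy", "politics"}))  # financial_institution
```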

Analysis of LinkedIn Jobs for Finding High Demand Job Trends Using Text Processing Techniques

  • Kazi, Abdul Karim;Farooq, Muhammad Umer;Fatima, Zainab;Hina, Saman;Abid, Hasan
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.223-229 / 2022
  • LinkedIn is one of the most widely used job-hunting and career-development applications in the world, with many opportunities and jobs available. According to statistics, LinkedIn has 738M+ members, 14M+ open jobs, and 55M+ companies listed on this highly connected platform, and many new vacancies are posted daily. LinkedIn data has been used for the research work carried out in this paper, which can help LinkedIn and other job-posting applications better track the jobs available in the industry. This research applies text processing from natural language processing to LinkedIn datasets with the aim of finding the jobs that appear most often in a given month and/or year, turning the large raw data into a usable resource. The study uses the Multinomial Naïve Bayes and Linear Support Vector Machine learning algorithms for text classification and builds a trained multilingual dataset. The results indicate the most in-demand job vacancies in each field, which will help students, job seekers, and entrepreneurs with their career decisions.
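
As a rough illustration of the classification setup named in the abstract (Multinomial Naïve Bayes and a Linear Support Vector Machine over job-posting text), here is a minimal scikit-learn sketch; the tiny inline dataset is invented for demonstration and does not come from the study.

```python
# Minimal text-classification pipeline: TF-IDF features fed to MultinomialNB
# and LinearSVC, the two algorithms the abstract names.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

titles = ["Senior Java Developer", "Data Scientist - NLP",
          "Registered Nurse", "Machine Learning Engineer"]     # toy examples
labels = ["software", "data", "healthcare", "data"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(lowercase=True), clf)
    model.fit(titles, labels)
    print(type(clf).__name__, model.predict(["NLP Research Engineer"]))
```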

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.77-89 / 2022
  • Many previous studies have shown that a language model pretrained on a large corpus improves performance on various natural language processing tasks. However, there is a limit to building a large training corpus in a language environment where resources are scarce. Using the Cross-lingual Post-Training (XPT) method, we analyze the method's efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English pretrained language model, a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, this approach outperforms a language model pretrained directly on the target language in relation extraction. In addition, we analyze the characteristics of the Korean monolingual and Korean multilingual language models released by domestic and foreign researchers and companies.
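
The following is a rough PyTorch sketch of the general XPT idea as summarized in the abstract: reuse a frozen English Transformer body, add new target-language embeddings, and learn an adaptation layer between them. Class and parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class XPTAdapter(nn.Module):
    """Target-language embeddings + adaptation layer in front of a reused encoder."""
    def __init__(self, english_encoder, target_vocab, hidden=768):
        super().__init__()
        self.embeddings = nn.Embedding(target_vocab, hidden)  # new Korean embeddings
        self.adaptation = nn.Linear(hidden, hidden)            # learns the cross-lingual mapping
        self.encoder = english_encoder                          # reused English parameters
        for p in self.encoder.parameters():                     # selectively frozen
            p.requires_grad = False

    def forward(self, token_ids):
        x = self.adaptation(self.embeddings(token_ids))
        return self.encoder(x)

# Stand-in English encoder (the real method reuses a pretrained model's body).
layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
model = XPTAdapter(encoder, target_vocab=32000)
out = model(torch.randint(0, 32000, (2, 16)))
print(out.shape)  # torch.Size([2, 16, 768])
```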

Multilingual Word Translation Service based on Word Semantic Analysis (어휘의미분석 기반 다국어 어휘대역 서비스)

  • Ryu, Pum-Mo
    • Journal of Digital Contents Society / v.19 no.1 / pp.75-83 / 2018
  • Multicultural family members have difficulty educating their children due to language differences. To address these difficulties, it is necessary to provide smart translation services that allow them to access real-life vocabulary easily and quickly. However, current automatic translation technology is being developed mainly for dominant languages such as English, Chinese, and Japanese, and it has limitations in translating special-purpose terms such as school documents and instructions from public institutions. In this study, we propose a real-time automatic word translation service for multicultural family members who understand beginner-level Korean. The service automatically analyzes the meaning of each word in a Korean sentence and provides a word-by-word translation. This study includes semantic analysis research for the Korean language, the construction of multilingual translation knowledge, and a convergence study with language education. We evaluated the word translation service with migrant women from Vietnam and Japan and obtained meaningful evaluation results.
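
A minimal, hypothetical sketch of word-by-word translation driven by word sense analysis, the core idea of the abstract; the analyze() stub and the tiny dictionary are invented placeholders for the paper's Korean semantic analyzer and translation knowledge.

```python
# Sense-keyed translation dictionary: the same Korean word maps to different
# target words depending on its analyzed sense.
TRANSLATIONS = {
    ("배", "fruit"):  {"vi": "quả lê",  "ja": "梨"},
    ("배", "vessel"): {"vi": "thuyền", "ja": "船"},
}

def analyze(word, sentence):
    # Stand-in for semantic analysis: pick a sense from crude sentence context.
    return "vessel" if "항구" in sentence else "fruit"

def translate_word(word, sentence, lang):
    sense = analyze(word, sentence)
    return TRANSLATIONS.get((word, sense), {}).get(lang, word)

print(translate_word("배", "항구에 배가 들어온다", "vi"))  # thuyền (ship)
```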

Design and Implementation of IMAP Server Supporting E-mail Address Internationalization(EAI) in a Mobile Environment (모바일 환경에서 다국어 전자 우편 주소 지원을 위한 IMAP 서버 설계 및 구현)

  • Lee, Jin-Kyu;Kim, Kyongsok
    • KIPS Transactions on Computer and Communication Systems / v.4 no.10 / pp.343-348 / 2015
  • Due to the need for multilingual e-mail addresses, the EAI Working Group of the IETF has created a range of standards associated with e-mail address internationalization (EAI) since 2006. One of the authors and colleagues previously designed and implemented a mail server, SMTPUTF8, that supports the EAI RFC protocols; it is composed of new SMTP and POP3 servers, but it did not include an IMAP server supporting the EAI RFCs. Recently, many people read and send e-mail messages on smartphones, and in such a mobile environment an IMAP server is more useful than a POP3 server. Therefore, in this paper the authors design and implement an IMAP server and a client app that comply with the IMAP standards (RFCs) published by the EAI WG of the IETF to support multilingual e-mail addresses. The IMAP server is added to the SMTPUTF8 mail server so that users can access e-mail messages via the IMAP client app in a mobile environment.
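
For illustration, a minimal client-side Python sketch of an EAI-capable IMAP session (RFC 6855, UTF8=ACCEPT), assuming a server that advertises the ENABLE and UTF8=ACCEPT capabilities; the host, credentials, and mailbox name are placeholders, and this is not the paper's implementation.

```python
import imaplib  # imaplib.IMAP4.enable() requires Python 3.5+

HOST = "imap.example.com"            # placeholder host
imap = imaplib.IMAP4_SSL(HOST)
imap.login("testuser", "password")   # placeholder credentials
# RFC 6855: after ENABLE UTF8=ACCEPT the session may use UTF-8 mailbox names
# and internationalized message headers/addresses.
imap.enable("UTF8=ACCEPT")           # raises if the server does not advertise ENABLE
imap.select("받은편지함")             # non-ASCII mailbox name (Korean "Inbox")
typ, data = imap.search(None, "ALL")
print(typ, data)
imap.logout()
```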

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science / v.21 no.1 / pp.157-175 / 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are more suitable than verbal elements for conveying a speaker's emotions; written texts, however, contain a variety of linguistic units that express emotions. This study analyzes the components needed to construct an emotion ontology, which has numerous applications in human language technology. A majority of previous work in text-based emotion processing focused on the classification of emotions, the construction of dictionaries describing emotion, and the retrieval of those lexica in texts through keyword spotting and/or syntactic parsing; the emotions retrieved or computed in that way did not show good accuracy. Thus, a more sophisticated component analysis is proposed and linguistic factors are introduced in this study. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic). The correlations among them, as well as their correlation with the non-verbal expressive type, are also determined; this is expected to give our ontology better adaptability in multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-scale intensity (-2 to +2), and a 3-scale polarity (positive/negative/neutral), which can describe a variety of emotions in detail and in a standardized way. (3) We introduce components related to verbal expression, such as 'experiencer', 'description target', 'description method', and 'linguistic features', which make it possible to classify and tag verbal expressions of emotions appropriately. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our classification of emotions and Plutchik's, our ontology can easily be employed for multilingual processing.
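
As a small illustration of the annotation components listed in the abstract (24 emotion types, 5-scale intensity, 3-scale polarity, and the verbal-expression components), here is a hypothetical Python data structure; the field values shown are examples, not entries from the authors' ontology.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class EmotionAnnotation:
    emotion_type: str                                     # one of the 24 proposed types
    intensity: Literal[-2, -1, 0, 1, 2]                   # 5-scale intensity
    polarity: Literal["positive", "negative", "neutral"]  # 3-scale polarity
    experiencer: str                                      # who feels the emotion
    description_target: str                               # what the expression describes
    description_method: str                               # expressive / descriptive / iconic
    linguistic_features: list[str]                        # lexical or syntactic cues

ann = EmotionAnnotation("joy", 2, "positive", "speaker", "event",
                        "expressive", ["exclamative", "emotion noun"])
print(ann)
```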


Mining Parallel Text from the Web based on Sentence Alignment

  • Li, Bo;Liu, Juan;Zhu, Huili
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.285-292 / 2007
  • The parallel corpus is an important resource in data-driven natural language processing research, but only a few parallel corpora are publicly available today, mostly due to the high labor cost of constructing this kind of resource. In this paper, a novel strategy is proposed to automatically fetch parallel text from the web, which may help solve the lack of high-quality parallel corpora. The system we develop first downloads web pages from certain hosts. Candidate parallel page pairs are then prepared from the page set based on the outer features of the web pages. In the last step the candidate page pairs are evaluated: the sentences in each candidate pair are extracted and aligned, and the similarity of the two web pages is then evaluated based on the similarities of the aligned sentences. Experiments on a multilingual web site show the satisfactory performance of the system.
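
A minimal sketch of the final scoring step described in the abstract: align the sentences of a candidate page pair and score the pair from the sentence similarities. The similarity and alignment stubs here are simplified placeholders for the measures the paper actually uses.

```python
def sentence_similarity(src, tgt):
    # Placeholder: Jaccard overlap of whitespace tokens; a real system would use
    # a bilingual dictionary or translation-based similarity.
    a, b = set(src.lower().split()), set(tgt.lower().split())
    return len(a & b) / max(len(a | b), 1)

def page_pair_score(src_sentences, tgt_sentences):
    # Naive monotone alignment: pair sentences by position, then average.
    pairs = list(zip(src_sentences, tgt_sentences))
    if not pairs:
        return 0.0
    return sum(sentence_similarity(s, t) for s, t in pairs) / len(pairs)

print(page_pair_score(["the cat sat"], ["the cat sat down"]))
```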


GMLP for Korean natural language processing and its quantitative comparison with BERT (GMLP를 이용한 한국어 자연어처리 및 BERT와 정량적 비교)

  • Lee, Sung-Min;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.540-543 / 2021
  • In this paper, we build a model that adds a small attention network to GMLP [1], which uses a Spatial Gating Unit instead of Multi-Head Attention, pretrain it on news and Wikipedia data, and apply it to Korean downstream tasks (sentiment analysis and named entity recognition). As a result, the model achieved 87.70% accuracy on sentiment analysis, 0.27% higher than Multilingual BERT, and an F1 score of 85.82% on named entity recognition, 1.6% higher. This confirms that GMLP can match BERT [3] using only the SGU and a small attention module, without the Multi-Head Attention [2] of the conventional Transformer encoder. In addition, a comparison of inference speed with BERT showed that the model is about 1 to 6 times faster than BERT when the batch size is smaller than 20.
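
For reference, a compact PyTorch sketch of a gMLP-style Spatial Gating Unit with an optional tiny-attention branch, the components the abstract builds on; this follows the published gMLP design in spirit and is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        self.spatial_proj = nn.Linear(seq_len, seq_len)  # mixes tokens, not channels
        nn.init.zeros_(self.spatial_proj.weight)         # near-identity gating at init
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x, tiny_attn=None):
        u, v = x.chunk(2, dim=-1)                        # split channels into two halves
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        if tiny_attn is not None:                        # optional small attention branch
            v = v + tiny_attn
        return u * v                                     # element-wise gating

x = torch.randn(2, 16, 512)                              # (batch, seq_len, d_ffn)
sgu = SpatialGatingUnit(d_ffn=512, seq_len=16)
print(sgu(x).shape)                                      # torch.Size([2, 16, 256])
```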


Summarizing the Differences in Chinese-Vietnamese Bilingual News

  • Wu, Jinjuan;Yu, Zhengtao;Liu, Shulong;Zhang, Yafei;Gao, Shengxiang
    • Journal of Information Processing Systems / v.15 no.6 / pp.1365-1377 / 2019
  • Summarizing the differences in Chinese-Vietnamese bilingual news plays an important supporting role in the comparative analysis of news viewpoints between China and Vietnam. To address the cross-language problems in analyzing the differences between Chinese and Vietnamese bilingual news, we propose a new difference-summarization method based on an undirected graph model. The method extracts elements to represent the sentences and builds a bridge between the two languages based on Wikipedia's multilingual concept description pages. First, we calculate the similarity between Chinese and Vietnamese news sentences and filter the bilingual sentences accordingly. Then we use the filtered sentences as nodes and the similarity grade as the edge weight to construct an undirected graph model. Finally, using a random walk algorithm, the weight of each node is computed from the edge weights, and the sentences with the highest weights are extracted as the difference summary. Experimental results show that the proposed approach achieves the highest score of 0.1837 on the annotated test set, outperforming state-of-the-art summarization models.
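
A minimal sketch of the graph step described above: sentences become nodes, cross-lingual similarities become edge weights, and a random walk (power iteration, PageRank-style) scores the nodes; the similarity matrix here is toy data.

```python
import numpy as np

def random_walk_scores(sim, damping=0.85, iters=100):
    """Score nodes of an undirected similarity graph by power iteration."""
    n = sim.shape[0]
    np.fill_diagonal(sim, 0.0)                       # no self-loops
    row_sums = sim.sum(axis=1, keepdims=True)
    # Row-normalize to a transition matrix; isolated nodes jump uniformly.
    trans = np.divide(sim, row_sums, out=np.full_like(sim, 1.0 / n), where=row_sums > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * trans.T @ scores
    return scores

sim = np.array([[0.0, 0.8, 0.1],
                [0.8, 0.0, 0.3],
                [0.1, 0.3, 0.0]])
print(random_walk_scores(sim))   # highest-scoring sentences go into the difference summary
```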

Relations of multilingual's L1, L2, L3 lexical processing and cerebral activation areas in fMRI (fMRI에 반영된 다중언어화자의 L1, L2, L3 어휘 정보처리 특성과 대뇌 활성화 영역의 관련성)

  • Nam, Kichun;Lee, Donghoon;Oh, Hyun-Gum;Ryu, Jaeook
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.313-316 / 2002
  • Using functional magnetic resonance imaging (fMRI), this study examined how multilingual speakers of Korean, Japanese, French, and English process each language in the brain, and how that processing changes with the proficiency in and acquisition age of each language. The results show that Broca's area, reported to play a key role in language processing, appears to be involved in both language comprehension and production, and that language production additionally activates regions related to articulation beyond those involved in comprehension. Looking at the activation for each language according to acquisition age and proficiency, cerebral activation decreases as proficiency increases, whereas for low-proficiency languages the activation level of language-processing regions rises and activation of the right hemisphere and the prefrontal gyrus also increases.
