• Title/Summary/Keyword: anaphora (대용어)

Search Result 149, Processing Time 0.026 seconds

XSTAR: XQuery to SQL Translation Algorithms on RDBMS (XSTAR: XML 질의의 SQL 변환 알고리즘)

  • Hong, Dong-Kweon;Jung, Min-Kyoung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.3
    • /
    • pp.430-433
    • /
    • 2007
  • Since XML has been adopted in many areas, there have been several research efforts to process XML queries efficiently. Most of them adopt relational databases as the underlying system, because the relational model is the most widely used model for managing large volumes of data efficiently. In this paper we develop XQuery-to-SQL translation algorithms, called XSTAR, that can efficiently handle XPath, XQuery FLWOR expressions with nested iteration, element constructors, and keyword retrieval on a relational database, as well as construct XML fragments from the transformed SQL results. All the algorithms in XSTAR have been implemented as the XQuery processor engine of the XML management system XPERT, and its prototype can be tested at "http://dblab.kmu.ac.kr/project.jsp".
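As a rough illustration of what an XQuery/XPath-to-SQL translation does, the toy sketch below maps a child-axis-only XPath into a self-join SQL query over a hypothetical `node(id, parent_id, tag)` edge table. This is an invented minimal encoding, not the actual XSTAR algorithm, whose details are in the paper.

```python
# Toy XPath-to-SQL translation over an assumed node/parent edge table
# (NOT the XSTAR algorithm itself; table and column names are invented).

def xpath_to_sql(xpath: str) -> str:
    """Translate a simple child-axis XPath (e.g. /book/title)
    into a self-join SQL query over the node table."""
    steps = [s for s in xpath.split("/") if s]
    joins, conds = [], []
    for i, tag in enumerate(steps):
        alias = f"n{i}"
        joins.append(f"node {alias}")
        conds.append(f"{alias}.tag = '{tag}'")
        if i == 0:
            conds.append(f"{alias}.parent_id IS NULL")   # document root
        else:
            conds.append(f"{alias}.parent_id = n{i-1}.id")  # child axis
    return (f"SELECT n{len(steps)-1}.id FROM " + ", ".join(joins)
            + " WHERE " + " AND ".join(conds))

print(xpath_to_sql("/book/title"))
```

Each XPath step becomes one join over the node table, so an n-step path yields an n-way self-join; the real XSTAR algorithms additionally handle FLWOR nesting and element construction.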

Inverted Indexes for XML Updates and Full-Text Retrievals in Relational Model (관계형 모델에서 XML 변경과 전문 검색을 지원하기 위한 역 인덱스 구축 기법)

  • Cheon, Yun-Woo;Hong, Dong-Kweon
    • The KIPS Transactions:PartD
    • /
    • v.11D no.3
    • /
    • pp.509-518
    • /
    • 2004
  • Recently there have been some efforts to add XML full-text retrieval and XML updates to the new standardization of XML queries. Full-text retrieval plays an important role in XML query languages: unlike tables in the relational model, an XML document has a complex, unstructured nature, so when we try to extract information from unstructured XML documents, a full-text retrieval query is a much more convenient approach than a regular structured query. XML update is another core function that an XML query language has to provide. In this paper we propose an inverted index that supports XML updates and XML full-text queries in a relational environment. Performance comparisons show that our approach maintains a comparable inverted-index size while supporting many full-text retrieval functions very well. It also shows very stable retrieval performance, especially for large XML documents. Above all, our approach handles XML updates efficiently by removing cascading effects.
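For readers unfamiliar with the data structure, a minimal inverted index for full-text retrieval can be sketched as below. This is illustrative only; the paper's index additionally tracks XML structure so that updates avoid cascading effects.

```python
# Minimal inverted-index sketch for AND-style full-text retrieval
# (illustrative; the paper's relational index also encodes XML structure).
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

def search(index, *terms):
    """Return ids of documents containing every query term."""
    postings = [set(index.get(t.lower(), ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = {1: "XML update queries",
        2: "full text retrieval of XML",
        3: "relational model"}
index = build_inverted_index(docs)
print(search(index, "xml"))                # documents containing "xml"
```

Conjunctive queries intersect the posting lists of all query terms, which is why keeping posting lists small and sorted matters for retrieval performance.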

A Study on Monitoring Method of Citizen Opinion based on Big Data: Focused on Gyeonggi Local Currency (Gyeonggi Money) (빅데이터 기반 시민의견 모니터링 방안 연구 : "경기지역화폐"를 중심으로)

  • Ahn, Soon-Jae;Lee, Sae-Mi;Ryu, Seung-Ei
    • Journal of Digital Convergence
    • /
    • v.18 no.7
    • /
    • pp.93-99
    • /
    • 2020
  • Text mining is a big data analysis method that extracts meaningful information from large-scale unstructured text data. In this study, text mining was used to monitor citizens' opinions on the policies and systems being implemented. We collected 5,108 newspaper articles and 748 online cafe posts related to 'Gyeonggi Local Currency' and performed frequency analysis, TF-IDF analysis, association analysis, and word-tree visualization analysis. As a result, many newspaper articles concerned the purpose of introducing the local currency, the benefits provided, and how to use it, whereas content about actually using the local currency was found in the online cafe posts. In promoting the local currency, the news served as an informant for publicity, while the online cafe posts consisted of the opinions of citizens who are local currency users. SNS and text mining are expected to help effectively activate not only local currency but various other policies as well.
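The frequency and TF-IDF steps mentioned above can be sketched with the standard library alone; the study's actual toolchain is not specified, and the documents below are invented stand-ins.

```python
# Hedged TF-IDF sketch (tf = raw count, idf = log(N / df));
# the study's real corpus and tooling are not shown here.
import math
from collections import Counter

def tf_idf(docs):
    """Return, per document, a {term: tf-idf} dictionary."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document freq.
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in tokenized
    ]

docs = ["local currency benefits",
        "local currency usage posts",
        "news promotion"]
scores = tf_idf(docs)
# 'benefits' occurs in only one document, so its idf (and hence its
# tf-idf) exceeds that of the more common term 'local' in the same doc
```

TF-IDF down-weights terms that appear in many documents, which is what lets distinctive vocabulary (here, 'benefits') surface above generic corpus-wide terms.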

An Efficient Method of IR-based Automated Keyword Tagging (정보검색 기법을 이용한 효율적인 자동 키워드 태깅)

  • Kim, Jinsuk;Choe, Ho-Seop;You, Beom-Jong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2008.05a
    • /
    • pp.24-27
    • /
    • 2008
  • As shown in Wikipedia, tagging or cross-linking through major keywords improves the readability of documents. Recently, the Semantic Web has raised the importance of social tagging as a key feature of Web 2.0, and the Tag Cloud has emerged as its crucial phenotype. In this paper we provide an efficient method of automated keyword tagging based on a controlled term collection, where the computational complexity of O(mN) - if a pattern matching algorithm is used - can be reduced to O(m log N) - if Information Retrieval is adopted - where m is the length of the target document and N is the total number of candidate terms to be tagged. The results show that IR-based tagging is 5.6 times faster than a fast pattern matching algorithm.
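The complexity argument can be made concrete with a small sketch (not the authors' implementation): scanning the document once and locating each token in a sorted controlled vocabulary by binary search costs O(m log N), versus O(mN) for matching every candidate term against the document.

```python
# Illustrative O(m log N) keyword tagging via binary search over a
# sorted controlled vocabulary (not the paper's actual IR engine).
import bisect

def tag_keywords(text, vocabulary):
    """Return the controlled terms found in `text`, in document order."""
    vocab = sorted(v.lower() for v in vocabulary)   # O(N log N), done once
    found = []
    for token in text.lower().split():              # m tokens
        i = bisect.bisect_left(vocab, token)        # O(log N) per token
        if i < len(vocab) and vocab[i] == token:
            found.append(token)
    return found

vocab = ["ontology", "tagging", "wikipedia", "retrieval"]
print(tag_keywords("Social tagging and retrieval on Wikipedia", vocab))
# ['tagging', 'retrieval', 'wikipedia']
```

Multi-word controlled terms would need extra handling (e.g. longest-match over token windows), which is where a real IR index earns its keep.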


SPARQL Query Processing in Distributed In-Memory System (분산 메모리 시스템에서의 SPARQL 질의 처리)

  • Jagvaral, Batselem;Lee, Wangon;Kim, Kang-Pil;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.42 no.9
    • /
    • pp.1109-1116
    • /
    • 2015
  • In this paper, we propose a query processing approach that uses Spark functional programming and a distributed memory system to solve the computational overhead of SPARQL. In the Semantic Web, RDF ontology data is produced at large scale, and the main challenge is to query and manipulate such a large ontology with high throughput. Most existing studies on SPARQL have focused on deploying the Hadoop MapReduce framework, and although approaches based on Hadoop MapReduce have shown promising results, they achieve a low level of throughput due to the underlying distributed file processing. Therefore, in order to speed up query processing, we suggest query-processing methods based on memory caching in a distributed memory system. Our approach is also integrated with a clause unification method that propagates bindings between clauses, exploiting Spark's join, map, and filter methods along with caching. In our experiments, we achieved a high level of performance relative to other approaches. In particular, our performance was nearly similar to that of Sempala, which has been considered the fastest query processing system.
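To show what joining SPARQL triple patterns on shared variables means, here is a toy illustration in plain Python (rather than Spark; the triples, variable names, and query are invented for the example).

```python
# Toy SPARQL-style triple-pattern matching and join on shared variables
# (plain-Python analogue of the Spark join/map/filter pipeline).
def match(triples, pattern):
    """Yield variable bindings for one pattern; '?x'-style items are variables."""
    for s, p, o in triples:
        b = {}
        for var, val in zip(pattern, (s, p, o)):
            if var.startswith("?"):
                b[var] = val
            elif var != val:
                break            # constant position does not match
        else:
            yield b

def join(left, right):
    """Merge binding sets that agree on every shared variable."""
    return [{**l, **r} for l in left for r in right
            if all(l[k] == r[k] for k in l.keys() & r.keys())]

triples = [("alice", "knows", "bob"),
           ("bob", "age", "30"),
           ("alice", "age", "25")]
# SELECT ?p ?a WHERE { alice knows ?p . ?p age ?a }
result = join(list(match(triples, ("alice", "knows", "?p"))),
              list(match(triples, ("?p", "age", "?a"))))
print(result)   # [{'?p': 'bob', '?a': '30'}]
```

In the paper's setting, each `match` result is a distributed dataset cached in memory, and the join is expressed with Spark's own operators instead of a nested loop.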

The Verification of the Transfer Learning-based Automatic Post Editing Model (전이학습 기반 기계번역 사후교정 모델 검증)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.10
    • /
    • pp.27-35
    • /
    • 2021
  • Automatic post editing (APE) is a research field that aims to automatically correct errors in machine translation results. This research has mainly focused on high-resource language pairs, such as English-German. Recent APE studies mainly adopt transfer-learning-based approaches, utilizing pre-trained language models or translation models generated through self-supervised learning methodologies. While translation-based APE models show superior performance in recent research, such research has been conducted on high-resource languages, so the same conclusions cannot be directly applied to low-resource languages. In this work, we apply two transfer-learning strategies to Korean-English APE and show that transfer learning with a translation model can significantly improve APE performance.

Filter-mBART Based Neural Machine Translation Using Parallel Corpus Filtering (병렬 말뭉치 필터링을 적용한 Filter-mBART기반 기계번역 연구)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Park, JeongBae;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.1-7
    • /
    • 2021
  • In the latest trend of machine translation research, a model is pre-trained on a large monolingual corpus and then fine-tuned with a parallel corpus. Although many studies tend to increase the amount of data used in the pre-training stage, it is hard to say that the amount of data must be increased to improve machine translation performance. In this study, through an experiment based on the mBART model using parallel corpus filtering, we show that high-quality data can yield better machine translation performance, even with a smaller amount of data. We argue that it is important to consider the quality of data rather than the quantity, and our findings can serve as a guideline for building a training corpus.
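As one concrete (and common) parallel-corpus filtering heuristic, length-ratio filtering can be sketched as below; the paper's exact filtering criteria may well differ, and the sentence pairs here are invented.

```python
# Hedged sketch of length-ratio filtering for a parallel corpus
# (one common heuristic; not necessarily the paper's filter).
def filter_parallel(pairs, max_ratio=2.0, min_len=1):
    """Keep (src, tgt) pairs whose token-length ratio is plausible."""
    kept = []
    for src, tgt in pairs:
        ls, lt = len(src.split()), len(tgt.split())
        if (ls >= min_len and lt >= min_len
                and max(ls, lt) / min(ls, lt) <= max_ratio):
            kept.append((src, tgt))
    return kept

pairs = [
    ("안녕하세요", "hello"),                                             # kept
    ("좋은 아침", "good morning everyone have a wonderful day ahead"),  # ratio too high
    ("", "empty source side"),                                          # too short
]
print(len(filter_parallel(pairs)))   # 1
```

Real pipelines typically combine several such signals (language identification, alignment scores, deduplication); the point of the study is that discarding noisy pairs this way can beat simply training on more data.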

Research on Recent Quality Estimation (최신 기계번역 품질 예측 연구)

  • Eo, Sugyeong;Park, Chanjun;Moon, Hyeonseok;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.7
    • /
    • pp.37-44
    • /
    • 2021
  • Quality estimation (QE) can evaluate the quality of machine translation output even for those who do not know the target language, and its high utility highlights the need for QE. A QE shared task is held every year at the Conference on Machine Translation (WMT), and recently, research applying Pretrained Language Models (PLMs) is mainly being conducted. In this paper, we survey the QE task and research trends and summarize the features of PLMs. In addition, we use a multilingual BART model that has not yet been utilized for QE and perform a comparative analysis against existing studies such as XLM, multilingual BERT, and XLM-RoBERTa. As a result of the experiments, we confirm which PLM is most effective when applied to QE and see the possibility of applying the multilingual BART model to the QE task.

A Study on Data Cleansing Techniques for Word Cloud Analysis of Text Data (텍스트 데이터 워드클라우드 분석을 위한 데이터 정제기법에 관한 연구)

  • Lee, Won-Jo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.745-750
    • /
    • 2021
  • In big data visualization analysis of unstructured text data, the raw data is mostly large, and analysis techniques cannot be applied without cleansing it. Therefore, from the collected raw data, unnecessary data is removed through a first heuristic cleansing process, and stopwords are removed through a second machine cleansing process. Then the vocabulary frequencies are calculated and visualized using the word cloud technique, key issues are extracted and turned into information, and the results are analyzed. In this study, we propose a new stopword cleansing technique using an external stopword set (DB) in a Python word cloud, and derive the problems and effectiveness of this technique through practical case analysis. Through these verification results, the practical utility of word cloud analysis applying the proposed cleansing technique is presented.
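The stopword-removal and frequency-counting stages described above can be sketched as follows, assuming a plain Python pipeline; the study loads its stopword set from an external DB, whereas here it is a stand-in Python set.

```python
# Minimal sketch of stopword cleansing plus frequency counting for a
# word cloud (the study's external stopword DB is replaced by a set).
from collections import Counter

STOPWORDS = {"the", "is", "a", "of", "and"}   # stand-in for the external set

def cleanse_and_count(raw_text, stopwords=STOPWORDS):
    """Drop non-alphabetic tokens and stopwords, then count frequencies."""
    tokens = [t for t in raw_text.lower().split() if t.isalpha()]
    return Counter(t for t in tokens if t not in stopwords)

freq = cleanse_and_count("The analysis of the text data is the key issue")
print(freq.most_common(3))
```

The resulting counts are exactly what a word cloud library sizes its words by, so the quality of the stopword set directly determines which terms dominate the visualization.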

The Processing of Causative and Passive Verbs in Korean (한국어의 사.피동문 처리에 관한 연구:실어증 환자의 처리 양상을 바탕으로)

  • 문영선;김동휘;남기춘
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.05a
    • /
    • pp.267-272
    • /
    • 2000
  • This study examined how aphasic patients process Korean causative and passive sentences. Korean causatives and passives are formed either by attaching a derivational affix to the verb or through syntactic constructions such as '-게 하다' and '-어 지다', and both types were tested with aphasic patients. The participants were patients with anomic aphasia, comprehension aphasia, expressive aphasia, and global aphasia. A word completion task was used. The anomic patient showed processing errors for passives but no problems with causatives. The expressive aphasic showed many errors in the <passive, non-transformed> condition. From this we conclude that, unlike in English, Korean causatives and passives are not a syntactic matter: words already derived by causative/passive affixes are stored in the lexicon, and sentences are generated according to each word's argument-structure information. The expressive aphasic's dominant errors on non-transformed passives arise because the non-transformed form, being transitive, takes one more argument than the transformed passive form. The comprehension aphasic showed little difficulty in producing causatives and passives, suggesting that this patient's argument-structure information for individual lexical items is largely intact. Comparing the processing patterns of these different patient types, we found, first, that causatives and passives show distinct syntactic and semantic processing patterns, and second, that they are formed from individual causative/passive verbs stored in the lexicon with their derivational affixes attached.
