• Title/Summary/Keyword: Language-Based Retrieval Model


Design of Query Processing System to Retrieve Information from Social Network using NLP

  • Virmani, Charu;Juneja, Dimple;Pillai, Anuradha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1168-1188
    • /
    • 2018
  • Social network aggregators are used to maintain and manage manifold accounts over multiple online social networks. Displaying the activity feed for each social network on a common dashboard has long been the status quo of social aggregators; however, retrieving the desired data from the various networks remains a major concern. A user inputs a query desiring a specific outcome from the social networks. Since the intention behind the query is known only to the user, the output may not match the user's expectation unless the system considers 'user-centric' factors. Moreover, the quality of the solution depends on these user-centric factors, the user's inclination, and the nature of the network as well. Thus, there is a need for a system that understands the user's intent and serves structured objects. Further, choosing the best execution and optimal ranking functions is also a high-priority concern. Motivated by these requirements, the current work proposes the design of a query processing system that extracts the user's intent and retrieves information from various social networks. To further improve the results, machine learning techniques such as Latent Dirichlet Allocation (LDA) and a ranking algorithm are incorporated to refine the query results and fetch information using data mining techniques. The proposed framework uniquely contributes a user-centric query retrieval model based on natural language, and it is efficient when compared on temporal metrics. The proposed Query Processing System to Retrieve Information from Social Network (QPSSN) will increase the discoverability of the user and help businesses to collaboratively execute promotions and discover new networks and people. It is an innovative approach to investigating new aspects of social networks. The proposed model offers a significant improvement in both precision and recall.
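The abstract's central idea, a ranking that blends content match with 'user-centric' factors, can be sketched as follows. This is an illustrative toy, not the paper's actual scoring function: the field names (`text`, `recency`), the `affinity` signal, and the blend weights are all assumptions chosen for the example.

```python
# Hypothetical sketch of a user-centric ranking: blend plain term overlap with
# user-centric signals (recency of the post, user's affinity to its network).
# Weights and field names are illustrative, not from the paper.

def score_post(query_terms, post, affinity, recency_weight=0.3, affinity_weight=0.3):
    """Combine content match with user-centric factors into one ranking score."""
    terms = set(post["text"].lower().split())
    overlap = len(set(query_terms) & terms) / max(len(query_terms), 1)
    return ((1 - recency_weight - affinity_weight) * overlap
            + recency_weight * post["recency"]
            + affinity_weight * affinity)

posts = [
    {"text": "concert tickets on sale now", "recency": 0.9},
    {"text": "old concert review", "recency": 0.1},
]
ranked = sorted(posts, key=lambda p: score_post(["concert", "tickets"], p, affinity=0.5),
                reverse=True)
```

With the toy weights above, a fresh post that matches both query terms outranks a stale partial match, which is the kind of user-centric reordering the paper argues for.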

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin;Song, Gwangho;Shim, Kyuseok
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.902-909
    • /
    • 2016
  • A question answering system (QA system) finds an actual answer to the question posed by a user, whereas a typical search engine would only find links to the relevant documents. Recent work on open-domain QA systems is receiving much attention in the fields of natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace all words that are not in the training data with a single token, even though such unseen words are likely to play crucial roles in differentiating the candidate answers from the actual answers. In this paper, we propose a method to compute vectors of such unseen words by taking into account the context in which the words have occurred. Next, we also propose a model which utilizes inverse document frequencies (IDF) to efficiently process unseen words by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
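One plausible reading of the paper's idea is to embed an out-of-vocabulary word as the IDF-weighted average of its context words' vectors, so rarer (more informative) context words contribute more. The toy corpus, embeddings, and smoothing are assumptions for illustration, not the authors' exact formulation.

```python
import math

# Sketch: vector for an unseen word = IDF-weighted average of the embeddings
# of its in-vocabulary context words. Corpus and 2-d embeddings are toys.

def idf(term, docs):
    df = sum(term in d for d in docs)
    return math.log(len(docs) / (1 + df)) + 1  # smoothed IDF (assumed variant)

def unseen_vector(context, embeddings, docs):
    dim = len(next(iter(embeddings.values())))
    vec, total = [0.0] * dim, 0.0
    for w in context:
        if w in embeddings:
            weight = idf(w, docs)
            total += weight
            vec = [v + weight * e for v, e in zip(vec, embeddings[w])]
    return [v / total for v in vec] if total else vec

docs = [{"the", "protein", "binds"}, {"the", "cell"}, {"the", "protein"}]
emb = {"protein": [1.0, 0.0], "the": [0.0, 1.0]}
v = unseen_vector(["the", "protein"], emb, docs)
```

Because "the" appears in every toy document, its IDF is lower than that of "protein", so the unseen word's vector leans toward the rarer context word.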

Opera Clustering: K-means on librettos datasets

  • Jeong, Harim;Yoo, Joo Hun
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.45-52
    • /
    • 2022
  • With the development of artificial intelligence analysis methods, especially machine learning, various fields are widely expanding their application ranges. In the case of classical music, however, some difficulties remain in applying machine learning techniques. Genre classification and music recommendation systems generated by deep learning algorithms are actively used for popular music, but not for classical music. In this paper, we attempted to classify operas within classical music. To this end, an experiment was conducted to determine which criterion is most suitable among composer, period of composition, and emotional atmosphere, which are basic features of music. To generate emotional labels, we adopted zero-shot classification with four basic emotions: 'happiness', 'sadness', 'anger', and 'fear'. After embedding each opera libretto with the doc2vec model, the optimal number of clusters was computed based on the result of the elbow method. The resulting four centroids were then adopted in k-means clustering to cluster the unlabeled libretto dataset. We obtained an optimized clustering based on adjusted Rand index scores and compared the clusters with the annotated musical variables. As a result, it was confirmed that the four machine-generated clusters were most similar to the grouping by period. Additionally, we verified that emotional similarity did not align significantly with either composer or period. Since the period proved to be the right criterion, we hope this makes it easier for music listeners to find music that suits their tastes.
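The clustering step the paper runs on doc2vec embeddings can be sketched with a minimal pure-Python k-means. The 2-D points, k, and iteration count are toy values standing in for the libretto vectors and the elbow-selected k; this is a sketch of the algorithm, not the authors' pipeline.

```python
import random

# Minimal k-means sketch: assign each point to its nearest centroid, then move
# each centroid to the mean of its members, and repeat. Toy 2-d data below.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, clusters

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0), (11.0, 10.0)]
centroids, clusters = kmeans(points, k=2)
```

On real libretto embeddings one would use a library implementation (e.g. scikit-learn's `KMeans`) and pick k from the elbow curve, as the paper describes.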

Disease Prediction By Learning Clinical Concept Relations (딥러닝 기반 임상 관계 학습을 통한 질병 예측)

  • Jo, Seung-Hyeon;Lee, Kyung-Soon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.35-40
    • /
    • 2022
  • In this paper, we propose a method of constructing clinical knowledge from clinical concept relations and predicting diseases with a deep learning model to support clinical decision-making. Clinical terms in the UMLS (Unified Medical Language System) and cancer-related medical knowledge are classified into five categories. Medically related documents in Wikipedia are extracted using the classified clinical terms, and clinical concept relations are established by matching the extracted documents with the extracted terms. After deep learning on this clinical knowledge, a disease is predicted from the medical terms expressed in a query. Medical terms related to the predicted disease are then selected as an expanded query for clinical document retrieval. To validate our method, we experimented on the TREC Clinical Decision Support (CDS) and TREC Precision Medicine (PM) test collections.
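The expansion step described above can be sketched as a simple lookup: once a disease is predicted for the query, terms linked to it in the constructed clinical knowledge are appended to form the retrieval query. The knowledge dictionary, disease name, and term limit here are hypothetical stand-ins for the paper's learned relations.

```python
# Hypothetical sketch of query expansion from clinical knowledge: the dictionary
# stands in for the relations the paper constructs; entries are illustrative.

clinical_knowledge = {
    "melanoma": ["skin lesion", "BRAF", "dermoscopy"],
}

def expand_query(query, predicted_disease, knowledge, max_terms=3):
    """Append up to max_terms knowledge-base terms related to the predicted disease."""
    related = knowledge.get(predicted_disease, [])[:max_terms]
    return query + " " + " ".join(related) if related else query

expanded = expand_query("irregular mole on arm", "melanoma", clinical_knowledge)
```

The expanded query then goes to an ordinary document retrieval system, which is how the TREC CDS/PM evaluation in the abstract would consume it.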

A Term Cluster Query Expansion Model Based on Classification Information of Retrieval Documents (검색 문서의 분류 정보에 기반한 용어 클러스터 질의 확장 모델)

  • Kang, Hyun-Su;Kang, Hyun-Kyu;Park, Se-Young;Lee, Yong-Seok
    • Annual Conference on Human and Language Technology
    • /
    • 1999.10e
    • /
    • pp.7-12
    • /
    • 1999
  • An information retrieval system ranks related documents by the similarity between the keywords of a user query and the documents, and presents them to the user. However, since queries used in Internet search are generally short, efforts to build more useful queries have continued to this day. Because the information contained in keywords is limited, relevance feedback from users is widely used as a complement. This paper proposes a Term Cluster Query Expansion Model based on the classification information of retrieved documents, which avoids the biggest drawback of general relevance feedback, namely frequent user involvement, while still inviting the user participation that purely system-based relevance feedback excludes. The method uses a classifier to assign classification information to each of the top n documents returned by the retrieval system, and then groups the documents into as many groups as there are classes (m). A term cluster is generated from each of the m groups using a relevance feedback algorithm, and these clusters, instead of documents, are presented to the user as feedback material. Experimental results show that, among the relevance feedback algorithms, the Rocchio method performed better than the initial query, but did not reach the improvements reported in other studies. The reasons are classifier errors and the existence of documents that by nature are hard to assign to a single class. Nevertheless, even when users' areas of interest and search tendencies differ, the method adapts flexibly without depending on the system and can improve retrieval effectiveness.
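The Rocchio method the experiment relies on can be shown as a small worked example: the query vector is moved toward the centroid of the relevant group (here, a term-cluster-derived group) and away from the non-relevant one. The vectors are toys; alpha, beta, and gamma are the standard Rocchio weights.

```python
# Rocchio relevance-feedback sketch: new_q = alpha*q + beta*centroid(relevant)
#                                           - gamma*centroid(non-relevant).
# Toy 3-dimensional term vectors; standard default weights.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    dim = len(query)
    def centroid(vecs):
        return [sum(v[i] for v in vecs) / len(vecs) if vecs else 0.0
                for i in range(dim)]
    rel_c, non_c = centroid(relevant), centroid(nonrelevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel_c, non_c)]

q = [1.0, 0.0, 0.0]
new_q = rocchio(q, relevant=[[0.0, 1.0, 0.0]], nonrelevant=[[0.0, 0.0, 1.0]])
# new_q == [1.0, 0.75, -0.15]: pulled toward relevant terms, pushed off the rest
```

In the paper's setup the "relevant" set would be the documents behind the term cluster the user picked, rather than individually judged documents.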


A Study of Step-by-step Countermeasures Model through Analysis of SQL Injection Attacks Code (공격코드 사례분석을 기반으로 한 SQL Injection에 대한 단계적 대응모델 연구)

  • Kim, Jeom-Goo;Noh, Si-Choon
    • Convergence Security Journal
    • /
    • v.12 no.1
    • /
    • pp.17-25
    • /
    • 2012
  • Although years have passed since SQL Injection techniques were first disclosed, they are still classified among the most dangerous web attacks. In recent web programming, a DBMS is essential for efficient data storage and retrieval, and scripting languages such as PHP, JSP, and ASP are mainly used to interact with the DBMS. In web environments where the application does not validate the client's input, an invalid entry can produce an abnormal SQL query. Such abnormal queries can bypass user authentication or expose data stored in the database. In an environment with an SQL Injection vulnerability, an attacker can pass web-based username/password authentication and reach the data stored in the database. Many countermeasures against SQL Injection have been announced, but relying on any single method can leave numerous security holes. The proposed four-level defence is composed of the source code, the operational phase, the database and server management side, and user input validation. By applying these measures as preventive steps and building a phased, step-by-step response model through a process of management measures, SQL Injection attacks can be prevented.
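The source-code layer of such a step-wise defence is typically a parameterized query, which keeps user input out of the SQL text so a crafted value is treated as data rather than query structure. This sketch uses Python's standard `sqlite3` module purely as an illustration; the paper's examples target PHP/JSP/ASP environments.

```python
import sqlite3

# Source-code-level countermeasure: bind user input via placeholders instead of
# concatenating it into the SQL string, so "' OR '1'='1" stays a literal value.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def authenticate(conn, name, password):
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",  # placeholders
        (name, password),
    ).fetchone()
    return row is not None

ok = authenticate(conn, "alice", "s3cret")            # legitimate login
attack = authenticate(conn, "alice", "' OR '1'='1")   # classic injection string
```

With string concatenation the attack string would rewrite the WHERE clause and bypass authentication; with binding it simply fails to match any password, which is one layer of the model, to be combined with the operational, database, and input-validation layers above.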

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.15
    • /
    • pp.225-266
    • /
    • 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem appropriate to the Korean language, because the syntax and semantics of Korean differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, namely the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and the most remarkable subject indexing systems, in particular PRECIS developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse. From that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationship of index terms.
4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format which contains three basic positions for the production of index entries. The 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced in KOSIS; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'See' and 'See also' references. 7) KOSIS was originally developed for a classified catalog system which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system, or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain consistency of index entries and cross references by means of a routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number are allocated to each new index string and each new index term, respectively.
9) KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of the various outputs are provided for institutions with access to computer facilities.
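The lead/qualifier/display mechanics of a two-line entry format can be illustrated by rotating a context-dependent string so that each term becomes the lead in turn. This is a simplified sketch of the general idea, not KOSIS's actual 24-operator algorithm; the sample string and the `:` separator are assumptions.

```python
# Illustrative rotation of a context-dependent index string into two-line-style
# entries: each term becomes the 'lead' in turn; terms before it form the
# 'qualifier' (wider context), terms after it the 'display' (dependent context).

def rotate_entries(string_terms):
    entries = []
    for i, lead in enumerate(string_terms):
        entries.append({
            "lead": lead,
            "qualifier": " : ".join(string_terms[:i]),
            "display": " : ".join(string_terms[i + 1:]),
        })
    return entries

entries = rotate_entries(["Korea", "libraries", "subject catalogs"])
```

Each dictionary is one access point: a user who looks up "libraries" sees "Korea" as its wider context and "subject catalogs" as the dependent remainder, which is the behaviour the abstract attributes to the lead, qualifier, and display positions.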


Error Analysis of Recent Conversational Agent-based Commercialization Education Platform (최신 대화형 에이전트 기반 상용화 교육 플랫폼 오류 분석)

  • Lee, Seungjun;Park, Chanjun;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.11-22
    • /
    • 2022
  • Recently, research and development using various Artificial Intelligence (AI) technologies have been conducted in the field of education. Among AI in Education (AIEd) approaches, conversational agents are not limited by time and space, and, combined with AI technologies such as speech recognition and translation, they enable more effective learning. This paper conducted a trend analysis of commercialized applications that have large user bases and use conversational agents for English learning. The analysis shows that currently commercialized educational platforms using conversational agents have several limitations and problems. To examine these in detail, a comparative experiment was conducted against the latest pre-trained large-scale dialogue model, and a Sensibleness and Specificity Average (SSA) human evaluation was conducted to assess conversational human-likeness. Based on the experiment, this paper argues for dialogue models trained with large numbers of parameters, educational data, and information retrieval functions for effective English conversation learning.

KAB: Knowledge Augmented BERT2BERT Automated Questions-Answering system for Jurisprudential Legal Opinions

  • Alotaibi, Saud S.;Munshi, Amr A.;Farag, Abdullah Tarek;Rakha, Omar Essam;Al Sallab, Ahmad A.;Alotaibi, Majid
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.346-356
    • /
    • 2022
  • Jurisprudential legal rules govern the way Muslims react and interact in daily life. This creates a huge stream of questions that require highly qualified and well-educated individuals, called Muftis. With Muslims representing almost 25% of the planet's population, and with qualified Muftis scarce, a demand-supply problem arises that calls for automation. This motivates the application of Artificial Intelligence (AI), which requires a well-designed Question-Answering (QA) system. In this work, we propose a QA system based on a retrieval-augmented generative transformer model for jurisprudential legal questions. The main idea in the proposed architecture is to leverage both state-of-the-art transformer models and the existing knowledge base of legal sources and question-answer pairs. With the sensitivity of the domain in mind, given its importance in Muslims' daily lives, our design balances exploitation of the knowledge base against the exploration provided by the generative transformer models. We collect a custom dataset of 850,000 entries that includes the question, the answer, and the category of the question. Our evaluation methodology combines quantitative and qualitative methods: we use metrics such as BERTScore and METEOR to evaluate the precision and recall of the system, and we provide many qualitative results that show the quality of the generated answers and how relevant they are to the questions asked.
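The exploitation/exploration balance described above can be sketched as a fallback policy: answer from the curated knowledge base when a stored question matches well enough, otherwise defer to the generative model. The overlap scorer, threshold, and the stubbed generator are all assumptions for illustration; the real system would use the transformer models named in the abstract.

```python
# Sketch of the knowledge-base-first policy: exploit a vetted KB answer when a
# stored question is similar enough, else explore via the generative model.
# Jaccard word overlap and the 0.5 threshold are illustrative choices.

def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(question, kb, generate, threshold=0.5):
    best = max(kb, key=lambda qa: overlap(question, qa["q"]), default=None)
    if best and overlap(question, best["q"]) >= threshold:
        return best["a"]          # exploit the curated knowledge base
    return generate(question)     # explore with the generative model

kb = [{"q": "is fasting required while travelling",
       "a": "Travellers may defer the fast."}]
resp = answer("is fasting required while travelling", kb,
              generate=lambda q: "[generated]")
```

Keeping a vetted-answer path in front of free generation is one way to respect the domain sensitivity the abstract emphasizes, since the generator only runs when no sufficiently similar curated answer exists.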

Empirical study on BlenderBot 2.0's errors analysis in terms of model, data and dialogue (모델, 데이터, 대화 관점에서의 BlendorBot 2.0 오류 분석 연구)

  • Lee, Jungseob;Son, Suhyune;Shim, Midan;Kim, Yujin;Park, Chanjun;So, Aram;Park, Jeongbae;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.93-106
    • /
    • 2021
  • BlenderBot 2.0 is a dialogue model representative of open-domain chatbots: it reflects real-time information through an internet search module and remembers user information over long periods through multi-session dialogue. Nevertheless, the model still has much room for improvement. This paper therefore analyzes the limitations and errors of BlenderBot 2.0 from three perspectives: model, data, and dialogue. From the data point of view, we point out that the guidelines provided to workers during the crowdsourcing process were not clear, and that the processes of filtering hate speech in the collected data and verifying the accuracy of internet-based information were lacking. From the viewpoint of dialogue, nine types of problems found during conversation and their causes are thoroughly analyzed. Furthermore, practical improvement methods are proposed for each point of view, and we discuss several potential future research directions.