• Title/Summary/Keyword: Search engines


The Utility of Chatbot for Learning in the Field of Radiology (방사선(학)과 분야에서 챗봇을 이용한 학습방법의 유용성)

  • Yoon-Seo Park;Yong-Ki Lee;Sung-Min Ahn
    • Journal of the Korean Society of Radiology / v.17 no.3 / pp.411-416 / 2023
  • The purpose of this study is to investigate the utilization of major learning tools among radiology science students and assess the accuracy of a conversational artificial intelligence service program, specifically a chatbot, in the context of the national radiologic technologist licensing exam. The survey revealed that 84.3% of radiology science students actively utilize electronic devices during their learning process. In addition, 104 out of 140 respondents said they use search engines as a top priority for efficient data collection while studying. When asked about their awareness of chatbots, 80% of participants responded affirmatively, and 22.9% reported having used chatbots for academic purposes at least once. From 2018 to 2022, exam questions from the first and second periods were presented to the chatbot for answers. The results showed that ChatGPT's accuracy in answering first period questions increased from 48.28% to 60%, while for second period questions, it increased from 50% to 62.22%. Bing's accuracy in answering first period questions improved from 55% to 64.55%, and for second period questions, it increased from 48% to 52.22%. The study confirmed the general trend of radiology science students utilizing electronic devices for learning and obtaining information through the internet. However, conversational artificial intelligence service programs in the field of radiation science face challenges related to accuracy and reliability, and providing perfect solutions remains difficult, highlighting the need for continuous development and improvement.

A Survey on the Journal of the Korean Academy of Child and Adolescent Psychiatry: Implications for Growth and Development

  • Duk-Soo Moon;Jae Hyun Yoo;Jung-Woo Son;Geon Ho Bahn;Min-Hyeon Park;Bung-Nyun Kim;Hee Jeong Yoo;Editorial Board of JKACAP
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.34 no.4 / pp.229-235 / 2023
  • Objectives: This study aimed to assess the status of the Journal of the Korean Academy of Child and Adolescent Psychiatry (JKACAP) and propose measures for its growth and development. Methods: The study was conducted using a questionnaire survey targeting members of the Korean Academy of Child and Adolescent Psychiatry. The six key elements analyzed were Access to the journal, Convenience following conversion to English, Recognition as an international journal and institutional achievements, Author perspectives on manuscript submission, Transition to an online-only journal, and Content and identity of the journal. Results: The survey revealed that email notification was highly effective for Journal Accessibility, with the website and search engines also frequently being used by members. Conversion to English in 2018 initially impacted readability and submission rates, but these concerns have decreased over time. However, the Recognition of JKACAP as an international academic journal was still not on par with SCIE journals, highlighting the need for further efforts towards SCIE inclusion. Despite these challenges and limited research opportunities, there was an active intention among members to submit manuscripts. Respondents showed a notable preference for the Transition to an online-only journal. Regarding content and identity of the JKACAP, members predominantly favored review articles and perceived the journal as a research and communication platform for Korean child and adolescent psychiatrists. Conclusion: The results indicate the need for JKACAP to enhance its digital accessibility, provide more support for domestic and international authors, and actively seek SCIE indexing. Addressing the varied content preferences of its members, improving the submission process, and transitioning to an online-only format could further its growth and solidify its position as an internationally recognized academic journal in the field of child and adolescent psychiatry.

Policies to Manage Drug Shortages in Selected Countries: A Review and Implications (주요국의 수급불안정 의약품 관리제도에 관한 고찰과 한국에의 시사점)

  • Inmyung Song;Sang Jun Jung;Eunja Park;Sang-Eun Choi;Eun-A Lim;Sanghyun Kim;Dongsook Kim
    • Health Policy and Management / v.34 no.2 / pp.106-119 / 2024
  • Drug shortage is a persistent phenomenon that poses a public health risk worldwide and occurs due to a range of causes. The purpose of this study is to review key policies for preparing for and responding to drug shortages in selected countries, such as the United States, Canada, and some European countries, in order to draw implications. This study reviewed reports and articles retrieved from search engines and Google Scholar using keywords such as drug shortage and stock-out. Over the last decade or so, the United States has strengthened requirements for advance notification of disruptions and interruptions of drug manufacturing, established the Inter-agency Drug Shortages Task Force to promote communication and coordination of responses, and expedited drug regulatory processes. Similarly, Canada established the Multi-Stakeholder Steering Committee on drug shortages, involving representatives from central and local governments and the private sector. Canada also adopted a tiered approach to the communication of drug shortages based on an assessment of the severity of the shortage and released a detailed information guide on communication. In 2019, the joint task force between the European Medicines Agency and the Heads of Medicines Agencies issued guidelines on drug shortage communication in the European Economic Area. The countries reviewed in this paper focus on communication across different stakeholders for the monitoring of and timely response to drug shortages. Efforts to protect public health from the negative impact of the drug shortage crisis would require multi-sectoral and multi-governmental coordination and the development of guidelines.

A Systematic Review of Developmental Coordination Disorders in South Korea: Evaluation and Intervention (국내의 발달성협응장애(DCD) 연구에 관한 체계적 고찰 : 평가와 중재접근 중심으로)

  • Kim, Min Joo;Choi, Jeong-Sil
    • The Journal of Korean Academy of Sensory Integration / v.19 no.1 / pp.69-82 / 2021
  • Objective : This study aimed to provide basic information for researchers and practitioners in occupational therapy about Developmental Coordination Disorder (DCD) in South Korea. Previous research on screening for DCD and on the effects of intervention programs was reviewed. Methods : Peer-reviewed papers relating to DCD published in Korea from January 1990 to December 2020 were systematically reviewed. The search terms "developmental coordination disorder," "development coordination," and "developmental coordination" were used to identify previous Korean research in this area from three representative databases: the Research Information Sharing Service, the Korean Studies Information Service System, and Google Scholar. A total of 4,878 articles were identified through the three search engines, and seventeen articles were selected for analysis after removing those that met the duplication or exclusion criteria. We adopted "the conceptual model" to analyze the selected articles with respect to DCD assessment and intervention. Results : Twelve of the 17 studies were rated at qualitative Level 2, using a non-randomized two-group design. The Movement Assessment Battery for Children and its second edition were the tools most frequently used to assess children for DCD. Among the intervention studies, eight articles (47%) adopted a dynamic systems approach; a normative functional skill framework and cognitive neuroscience were each used in 18% of the pieces; and 11% applied neurodevelopmental theory. Only one article used a combined approach of normative functional skill and general abilities. These papers mainly focused on the movement characteristics of children with DCD and the intervention effects of exercise or sports programs. Conclusion : Most of the reviewed studies investigated the movement characteristics of DCD or explored the effectiveness of particular intervention programs. In the future, it would be useful to investigate the feasibility of different assessment tools and to establish the effectiveness of the various interventions used in rehabilitation to improve motor performance in children with DCD.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful and promising, because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their quality. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the test set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the test set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the especially poor performance on only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
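The per-stock score functions described in the abstract can be sketched roughly as follows. This is a hypothetical, untrained NumPy illustration: the bilinear-plus-linear scoring form follows the general neural tensor network formulation, while the stock names, dimensions, and random parameters are invented for the example and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class StockScoreFunction:
    """Minimal neural-tensor-style score function for one stock (sketch).

    Scores an entity one-hot vector e against a learned stock embedding s:
        score = u . tanh(e^T W[1:k] s + V [e; s] + b)
    Parameters here are random; in the paper they would be trained.
    """
    def __init__(self, dim, k=4):
        self.W = rng.normal(scale=0.1, size=(k, dim, dim))  # k bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k, 2 * dim))   # linear layer
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)
        self.s = rng.normal(scale=0.1, size=dim)            # stock embedding

    def score(self, e):
        bilinear = np.einsum('i,kij,j->k', e, self.W, self.s)
        linear = self.V @ np.concatenate([e, self.s])
        return float(self.u @ np.tanh(bilinear + linear + self.b))

def predict_stock(entity_vec, score_fns):
    """Put the entity into every score function; the stock whose
    function yields the highest score is predicted as related."""
    return max(score_fns, key=lambda name: score_fns[name].score(entity_vec))

# Usage: one-hot entity over a vocabulary of 100 top entities per stock
dim = 100
score_fns = {name: StockScoreFunction(dim)
             for name in ["SamsungElec", "LGElec", "Mando"]}
entity = np.zeros(dim)
entity[7] = 1.0  # one-hot vector for some extracted entity
print(predict_stock(entity, score_fns))
```

With one-hot inputs, each entity effectively selects a row of each tensor slice, so the model can score unseen entities from the test vocabulary without corpus-level word vectors, matching the paper's setup.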

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information will be available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for the Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of key words found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information which is inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of the importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure based ranking method has been playing an essential role in World Wide Web(WWW), and nowadays, many people recognize the effectiveness and efficiency of it. On the other hand, as Resource Description Framework(RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed with RDF graph, making the ranking algorithm for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure based ranking method seems to be highly applicable to ranking the Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail, to a certain degree for their algorithm to properly work. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and further sheds light on other limitations of the previous research. In addition, we propose two ways to incorporate datatype properties, which have previously gone unused even when they bear on resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, something never attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
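The PageRank idea summarized in this abstract, that a page is important if many important pages refer to it, can be sketched as a short power iteration. This is a minimal generic illustration on an invented three-page graph, not the paper's class-oriented algorithm:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=100):
    """Power-iteration PageRank over an adjacency matrix (minimal sketch).

    adj[i][j] = 1 means page i links to page j. A page's rank grows with
    the number and rank of the pages referring to it.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages spread rank uniformly.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny 3-page graph: pages 0 and 1 both link to page 2; page 2 links back to 0.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
ranks = pagerank(adj)
print(ranks)  # page 2, referred to by two pages, gets the highest rank
```

In the Semantic Web setting the paper addresses, the same random-walk machinery would run over an RDF graph instead of hyperlinks, with per-property (or, in the paper's proposal, per-class) weights replacing the uniform link weights used here.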

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not yet benefit from keywords. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach selects the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, by contrast, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to Turney's experimental results, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword's weight; (2) preprocessing and parsing a target document that has no keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords with the highest similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
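The five-step assignment process described in the abstract can be sketched minimally as follows. This is a hypothetical illustration: the keyword sets, weights, and sample document are invented for the example, and the preprocessing is reduced to lowercasing and whitespace tokenization rather than the paper's full parsing pipeline.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))  # step (1)/(3): vector lengths
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(document, keyword_sets, top_n=2):
    """Match a keyword-less document to the nearest keyword sets."""
    # (2)-(3): preprocess/parse the target document into a term-frequency vector
    doc_vec = dict(Counter(document.lower().split()))
    # (4): measure cosine similarity between each keyword set and the document
    scored = sorted(keyword_sets.items(),
                    key=lambda kv: cosine(kv[1], doc_vec), reverse=True)
    # (5): generate the keyword sets with the highest similarity scores
    return [name for name, _ in scored[:top_n]]

# Hypothetical keyword sets with per-keyword weights (step (1) inputs)
keyword_sets = {
    "logistics": {"port": 2.0, "shipping": 3.0, "cargo": 1.0},
    "health":    {"patient": 2.0, "hospital": 2.0, "drug": 1.0},
}
doc = "shipping routes between port cities move cargo efficiently"
print(assign_keywords(doc, keyword_sets, top_n=1))  # ['logistics']
```

Because the document is compared against keyword-set vectors rather than scanning its own terms, a set can be assigned even when some of its keywords never appear in the text, which is the "implicit keyword" advantage of the assignment approach over extraction.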