• Title/Summary/Keyword: A* Algorithm

Research Trends of Health Recommender Systems (HRS): Applying Citation Network Analysis and GraphSAGE (건강추천시스템(HRS) 연구 동향: 인용네트워크 분석과 GraphSAGE를 활용하여)

  • Haryeom Jang;Jeesoo You;Sung-Byung Yang
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.57-84 / 2023
  • With the development of information and communications technology (ICT) and big data technology, anyone can easily obtain and utilize vast amounts of data through the Internet. Therefore, the capability of selecting high-quality data from a large amount of information is becoming more important than the capability of merely collecting it. This trend continues in academia; literature reviews, such as systematic and non-systematic reviews, have been conducted in various research fields to construct a healthy knowledge structure by selecting high-quality research from accumulated research materials. Meanwhile, since the COVID-19 pandemic, remote healthcare services, on which no social consensus had previously been reached, have been allowed to a limited extent, and new healthcare services such as health recommender systems (HRS) equipped with artificial intelligence (AI) and big data technologies are in the spotlight. Although, in practice, HRS are considered one of the most important technologies to lead the future healthcare industry, literature reviews on HRS are relatively rare compared to other fields. In addition, although HRS are a convergence field with a strong interdisciplinary nature, prior literature review studies have mainly applied either systematic or non-systematic review methods; hence, there are limitations in analyzing interactions or dynamic relationships with other research fields. Therefore, in this study, the overall network structure of HRS and surrounding research fields was identified using citation network analysis (CNA). Additionally, in this process, in order to address the problem that the latest papers are underestimated in their citation relationships, the GraphSAGE algorithm was applied. As a result, this study identified 'recommender system', 'wireless & IoT', 'computer vision', and 'text mining' as increasingly important research fields related to HRS research, and confirmed that 'personalization' and 'privacy' are emerging issues in HRS research. The study findings would provide both academic and practical insights into identifying the structure of the HRS research community, examining related research trends, and designing future HRS research directions.
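    The abstract does not spell out the network configuration, but the heart of GraphSAGE is a layer that mixes each node's own features with the mean of its neighbors' features, so a recent paper with few citations can still inherit signal from the papers it cites. A minimal NumPy sketch of one such mean-aggregator layer, using a toy citation graph and random weights in place of trained ones:

    ```python
    import numpy as np

    def graphsage_mean_layer(features, neighbors, W_self, W_neigh):
        """One GraphSAGE layer with mean aggregation:
        h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for u in N(v)))."""
        out = []
        for v, x_v in enumerate(features):
            neigh = neighbors.get(v, [])
            agg = features[neigh].mean(axis=0) if neigh else np.zeros_like(x_v)
            out.append(np.maximum(W_self @ x_v + W_neigh @ agg, 0.0))  # ReLU
        h = np.array(out)
        # L2-normalize each node embedding, as in the original GraphSAGE paper
        return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)

    # Toy citation graph: node 2 is a recent paper linked to papers 0 and 1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 8))            # initial node features
    adj = {0: [1], 1: [0, 2], 2: [0, 1]}   # citation links (symmetrized)
    W1 = rng.normal(size=(4, 8)) * 0.1     # untrained stand-in weights
    W2 = rng.normal(size=(4, 8)) * 0.1
    print(graphsage_mean_layer(X, adj, W1, W2).shape)  # (3, 4)
    ```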

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1413-1425 / 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Depending on their intensity and extent, wildfires remove vegetation and drive ecosystem changes, which in turn affect subsequent wildfire occurrence and cause secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California, and we conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices refines the pixel-level classification. This approach can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
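    The spectral indices used as extra input channels are standard formulas over the Landsat 8 OLI bands (Red = band 4, NIR = band 5, SWIR2 = band 7). A short sketch of computing NDVI and NBR and stacking them with the raw bands as a multi-channel model input; the array shapes and random data are purely illustrative, not the authors' pipeline:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
        For Landsat 8 OLI, NIR is band 5 and Red is band 4."""
        return (nir - red) / (nir + red + 1e-10)

    def nbr(nir, swir2):
        """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2).
        For Landsat 8 OLI, SWIR2 is band 7; burnt surfaces show low NBR."""
        return (nir - swir2) / (nir + swir2 + 1e-10)

    # Stack reflectance bands and derived indices into one model input.
    h, w = 256, 256
    b4, b5, b7 = (np.random.rand(h, w).astype("float32") for _ in range(3))
    x = np.stack([b4, b5, b7, ndvi(b5, b4), nbr(b5, b7)], axis=0)  # (C, H, W)
    print(x.shape)
    ```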

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted within the field. Research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process of relics consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method utilizes only open-source software throughout the entire process. The results of this study confirm that, in terms of quantitative evaluation, the deviation of numerical measurements between the actual artifact and the 3D model was minimal. In addition, the quantitative quality analyses from the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred. The 3D models generated by open-source software exhibited noise on the mesh surface, rough mesh surfaces, and difficulty in confirming the production marks of relics and the expression of patterns. Nevertheless, some of the open-source software generated quality comparable to that of the commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models could not only post-process, match, and merge 3D models, but also adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program, which is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. With such digital technology now highly accessible, the acquisition of 3D model data in archaeology will serve as basic data for the preservation and active research of cultural heritage.

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment grows continuously worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (or prescreening, for simplicity) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares these features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, the de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the rarity of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity ($\alpha$) and document-to-nuclear-system similarity ($\beta$) in order to derive the final score ($\gamma$) for deciding whether the presented case concerns strategic material or not. The final score ($\gamma$) represents the document similarity between past cases and the new case. The score is induced not only by exploiting conventional TF-IDF, but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered most similar to the new case and presents them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data. The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
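    The paper does not publish the exact formula that combines the two similarity scores, so the weighting below is a hypothetical illustration: $\alpha$ comes from conventional TF-IDF cosine similarity, $\beta$ is a placeholder for the domain-specific nuclear-system similarity, and the top-3 past cases are retrieved by the combined score $\gamma$:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    case_base = [
        "heat exchanger for reactor coolant system",
        "valve assembly for secondary steam loop",
        "generic industrial pump specification",
    ]
    new_case = "coolant pump for reactor primary loop"

    # alpha: document-to-document similarity via conventional TF-IDF.
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(case_base + [new_case])
    n = len(case_base)
    alpha = cosine_similarity(tfidf[n], tfidf[:n]).ravel()

    # beta: document-to-nuclear-system similarity; a placeholder here
    # (the paper derives it from nuclear-system domain context).
    beta = np.array([0.9, 0.6, 0.2])

    w = 0.7                       # hypothetical mixing weight
    gamma = w * alpha + (1 - w) * beta

    for i in np.argsort(gamma)[::-1][:3]:   # most similar past cases
        print(f"case {i}: gamma={gamma[i]:.3f}")
    ```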

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.155-174 / 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the respiratory disease caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of papers related to COVID-19 is putting time and technical constraints on healthcare professionals and policy makers who must quickly find important research. Therefore, in this study, we propose a method of extracting useful information from the text data of extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords to be searched were extracted from papers related to COVID-19, and detailed topics were identified. The data used was the CORD-19 data set on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method is divided into two main parts. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. For this purpose, the number of publications related to COVID-19 by year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with active research were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing related words, similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collected subset, detailed topics were analyzed using the LDA and Word2vec algorithms, and a clustering method through PCA dimension reduction was applied to visualize groups of papers with similar themes using the t-SNE algorithm. A noteworthy point of this study is that topics that were not derived from the topic modeling of all COVID-19 papers were found in the topic modeling results for the individual keyword subsets. For example, topic modeling of the papers related to 'vaccine' extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody is an antibody that protects cells from infection when a virus enters the body and plays an important role in the production of therapeutic agents and vaccine development. In addition, extracting topics from the papers related to 'treatment' revealed a new topic, Topic 05 'cytokine'. A cytokine storm occurs when the body's immune cells attack normal cells instead of defending against attacks. Hidden topics that could not be found across the entire corpus were thus classified according to keywords, and topic modeling was performed to find detailed topics. In this study, we proposed a method of extracting topics from a large amount of literature using the LDA algorithm and extracting similar words using the skip-gram variant of Word2vec, which predicts context words from a center word. The combination of the LDA model and the Word2vec model aimed at better performance by identifying the relationships between documents and LDA topics and between words in the Word2vec space. In addition, as a clustering method through PCA dimension reduction, we presented a way to intuitively classify documents by using the t-SNE technique to group documents with similar themes into a structured organization. In a situation where the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of academic papers, this approach should save the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to be used as basic data for researchers to explore new research directions.
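    A compact sketch of the LDA-plus-skip-gram pipeline described above, using gensim on a toy corpus; the tokenized documents and topic count are placeholders, not the CORD-19 setup:

    ```python
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel, Word2Vec

    docs = [
        ["vaccine", "antibody", "neutralizing", "trial"],
        ["treatment", "cytokine", "storm", "patient"],
        ["vaccine", "dose", "immune", "response"],
    ]

    # LDA: discover latent topics over the tokenized abstracts.
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
    print(lda.print_topics())

    # Word2vec skip-gram (sg=1): predict context words from the center word,
    # then query words similar to a topic keyword.
    w2v = Word2Vec(docs, vector_size=32, sg=1, min_count=1, seed=0)
    print(w2v.wv.most_similar("vaccine", topn=3))
    ```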

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify public services. The requirements ontology is derived from the basic and common components of public services and target countries; its key attributes are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
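    The abstract does not specify the matching algorithm that produces the priority lists, so the sketch below only illustrates the idea with a simple Jaccard overlap between hypothetical service-requirement and country-environment attribute sets:

    ```python
    # Hypothetical attribute sets drawn from the service and country ontologies.
    services = {
        "e-procurement": {"broadband", "pki", "civil-registry"},
        "smart-customs": {"broadband", "port-system"},
    }
    countries = {
        "country-A": {"broadband", "pki"},
        "country-B": {"broadband", "port-system", "civil-registry"},
    }

    def match_score(required, available):
        """Jaccard-style overlap between a service's requirements and a
        country's environment attributes (a placeholder for the platform's
        matching algorithm, which the paper does not fully specify)."""
        return len(required & available) / len(required | available)

    # Priority list of target countries for each public service.
    for svc, req in services.items():
        ranked = sorted(countries,
                        key=lambda c: match_score(req, countries[c]),
                        reverse=True)
        print(svc, "->", ranked)
    ```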

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.4 no.6 / pp.185-196 / 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service), compared to energy non-aware server clusters. They adjust the power mode of each server in a fixed or variable time interval so that only the minimum number of servers needed to handle current user requests are ON. Previous studies on energy-aware server clusters have put effort into reducing power consumption further or keeping QoS, but they do not consider energy efficiency well. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. Our method repeats the following procedure for adjusting the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload pattern type in a predetermined way. Second, it searches a learning table to check whether learning has been performed for the classified workload pattern type in the past. If yes, it uses the already-stored parameters. Otherwise, it performs learning for the classified workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts the server power modes with these parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the numbers of good responses per unit of power consumed in the proposed method are 99.8%, 107.5%, and 141.8% of those in the existing static method, and 102.0%, 107.0%, and 106.8% of those in the existing prediction method, for the banking load pattern, real load pattern, and virtual load pattern, respectively.
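    A schematic sketch of the control loop described above: classify the workload pattern, reuse parameters from the learning table if the pattern was seen before, otherwise learn and store them. The pattern classifier and the "learning" step are simplified placeholders, not the paper's actual procedure:

    ```python
    learning_table = {}  # workload pattern type -> learned parameters

    def classify_pattern(load, traffic):
        """Map current load/traffic to a coarse workload pattern type
        (the paper classifies patterns in a predetermined way)."""
        return ("high" if load > 0.7 else "low",
                "bursty" if traffic > 0.5 else "steady")

    def learn_parameters(pattern):
        """Placeholder for the learning phase: a real implementation would
        search online for the parameters maximizing responses per watt."""
        return {"on_threshold": 0.8 if pattern[0] == "high" else 0.6}

    def adjust_power_mode(load, traffic, cluster_size=16):
        pattern = classify_pattern(load, traffic)
        if pattern not in learning_table:          # first encounter: learn
            learning_table[pattern] = learn_parameters(pattern)
        params = learning_table[pattern]           # reuse stored parameters
        servers_on = max(1, round(cluster_size * load / params["on_threshold"]))
        return min(servers_on, cluster_size)       # servers to keep ON

    print(adjust_power_mode(0.8, 0.6), "of 16 servers ON")
    ```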

Predicting Potential Habitat for Hanabusaya Asiatica in the North and South Korean Border Region Using MaxEnt (MaxEnt 모형 분석을 통한 남북한 접경지역의 금강초롱꽃 자생가능지 예측)

  • Sung, Chan Yong;Shin, Hyun-Tak;Choi, Song-Hyun;Song, Hong-Seon
    • Korean Journal of Environment and Ecology / v.32 no.5 / pp.469-477 / 2018
  • Hanabusaya asiatica is an endemic species whose distribution is limited to the mid-eastern part of the Korean peninsula. Due to its narrow range and small population, it is necessary to protect its habitats by identifying them as Key Biodiversity Areas (KBAs), as adopted by the International Union for Conservation of Nature (IUCN). In this paper, we estimated potential natural habitats for H. asiatica using the maximum entropy model (MaxEnt) and identified candidate sites for a KBA based on the model results. MaxEnt is a machine learning algorithm that can predict habitats for species of interest without bias using presence-only data. This property is particularly useful for a study area where data collection via field survey is unavailable. We trained MaxEnt using 38 locations of H. asiatica and 11 environmental variables measuring the climate, topography, and vegetation status of the study area, which encompassed all locations in the border region between South and North Korea. Results showed that the potential habitats where the occurrence probability of H. asiatica exceeded 0.5 covered 778 km², and the KBA candidate area identified by taking existing protected areas into account was 1,321 km². Of the 11 environmental variables, elevation, annual average precipitation, average precipitation in the growing season, and average temperature in the coldest month had impacts on habitat selection, indicating that H. asiatica prefers cool regions at relatively high elevations. These results can be used not only for identifying KBAs but also as a reference for a protection plan for H. asiatica in preparation for Korean reunification and climate change.
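    MaxEnt fitted to presence and background points is closely related to logistic regression, so the sketch below uses scikit-learn's LogisticRegression on synthetic presence/background data as a stand-in; the authors used the MaxEnt model itself, and all values here are synthetic, for illustration only:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # 38 presence points vs. random background points over three hypothetical
    # environmental variables: elevation (m), annual precipitation (mm),
    # coldest-month mean temperature (deg C).
    presence = rng.normal([1200, 1300, -8], [200, 150, 2], size=(38, 3))
    background = rng.normal([600, 1100, -3], [400, 250, 4], size=(500, 3))

    X = np.vstack([presence, background])
    y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

    # Presence/background discrimination as a MaxEnt approximation.
    model = LogisticRegression(max_iter=1000).fit(X, y)

    cell = [[1150, 1280, -7.5]]                 # one candidate grid cell
    prob = model.predict_proba(cell)[0, 1]
    print("potential habitat" if prob > 0.5 else "unsuitable", round(prob, 3))
    ```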

An Implementation of Lighting Control System using Interpretation of Context Conflict based on Priority (우선순위 기반의 상황충돌 해석 조명제어시스템 구현)

  • Seo, Won-Il;Kwon, Sook-Youn;Lim, Jae-Hyun
    • Journal of Internet Computing and Services / v.17 no.1 / pp.23-33 / 2016
  • Current smart lighting identifies a user's action and location through sensors and then offers a lighting environment suited to the current context. This sensor-based context awareness technology considers only a single user, and studies interpreting the various context occurrences and conflicts of multiple users are lacking. In existing studies, fuzzy theory and algorithms such as ReBa have been used as methodologies to resolve context conflict. However, they merely avoid the opportunity for context conflict by classifying the space where users are located into several areas and providing services per area; hence, they cannot be regarded as customized services that resolve context conflict based on personal preference. This paper proposes a priority-based LED lighting control system that interprets multiple context conflicts: when a service conflict arises from the simultaneous occurrence of various contexts for many users, it decides services based on the priority granted to each context type. This study classifies the residential environment into five areas: living room, bedroom, study room, kitchen, and bathroom. The contexts that may occur within each area, for several users, are defined as 20 contexts such as exercising, doing makeup, reading, dining, and entering. The proposed system defines the various contexts of users with an ontology-based model and provides a user-oriented lighting environment through standards-based rules and a context reasoning engine. To solve the issue of conflicting contexts among users in the same space at the same time, contexts requiring user concentration are given the highest priority, and in the case of equal priority, visual comfort is offered as the best alternative. These rules are then used as the criteria for service selection when a conflict occurs.
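    A minimal sketch of the priority-based conflict resolution rule: the highest-priority context wins, and a tie falls back to visual comfort. The priority values and lighting settings below are hypothetical, not taken from the paper:

    ```python
    # Hypothetical priorities: concentration-demanding contexts rank highest,
    # following the paper's design principle.
    PRIORITY = {"reading": 3, "doing_makeup": 3, "dining": 2,
                "exercising": 1, "entering": 1}
    LIGHTING = {"reading": "5000K/600lx", "doing_makeup": "5500K/700lx",
                "dining": "3000K/300lx", "exercising": "4000K/500lx",
                "entering": "3500K/200lx", "visual_comfort": "4000K/400lx"}

    def resolve(contexts):
        """Pick one lighting service for simultaneous contexts in one area:
        highest priority wins; an unresolved tie falls back to visual
        comfort as the alternative."""
        best = max(PRIORITY[c] for c in contexts)
        winners = [c for c in contexts if PRIORITY[c] == best]
        chosen = winners[0] if len(winners) == 1 else "visual_comfort"
        return chosen, LIGHTING[chosen]

    print(resolve(["reading", "dining"]))        # concentration wins
    print(resolve(["reading", "doing_makeup"]))  # tie -> visual comfort
    ```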

Approximation of Multiple Trait Effective Daughter Contribution by Dairy Proven Bulls for MACE (젖소 국제유전능력 평가를 위한 종모우별 다형질 Effective Daughter Contribution 추정)

  • Cho, Kwang-Hyun;Choi, Tae-Jeong;Cho, Chung-Il;Park, Kyung-Do;Do, Kyoung-Tag;Oh, Jae-Don;Lee, Hak-Kyo;Kong, Hong-Sik;Lee, Joon-Ho
    • Journal of Animal Science and Technology / v.55 no.5 / pp.399-403 / 2013
  • This study was conducted to investigate the basic concept of multiple trait effective daughter contribution (MTEDC) for dairy cattle sires and to calculate the effective daughter contribution (EDC) by applying a five-lactation multiple trait model to the milk yield test records of daughters for the Multiple-trait Across Country Evaluation (MACE). Milk yield data and pedigree information of 301,551 cows, the progeny of 2,046 Korean and imported dairy bulls, were collected from the National Agricultural Cooperative Federation and used in this study. For the MTEDC approximation, the reliability of the breeding value was separated into the parent average, the own yield deviation, and the mate-adjusted progeny contribution, and the EDC was then calculated by lactation from these reliabilities. The average numbers of recorded daughters per sire were 140.57, 94.24, 55.14, 29.20, and 14.06 from the first to the fifth lactation, respectively. However, the average EDC per sire using the five-lactation multiple trait model was 113.49, 89.28, 73.56, 54.02, and 35.08 from the first to the fifth lactation, respectively; the decline of EDC in later lactations was considerably smaller than the decline in the number of recorded daughters. These findings indicate that the multiple trait model uses genetic correlations to increase the information contributed by daughters without later-lactation records. Because EDC is closely related to the reliability of the estimated breeding value of a sire, understanding the MTEDC algorithm and continuous monitoring of EDC are required for correct MACE application of the five-lactation multiple trait model.
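    A common single-trait approximation relates effective daughter numbers to the reliability of a sire's progeny contribution via $\mathrm{EDC} = \lambda R/(1-R)$ with $\lambda = (4-h^2)/h^2$. The sketch below applies it with hypothetical per-lactation reliabilities; it illustrates the concept only and is not the paper's five-lactation multiple trait computation:

    ```python
    def edc_from_reliability(rel, h2):
        """Single-trait approximation: EDC = lambda * R / (1 - R),
        where lambda = (4 - h2) / h2 and R is the reliability of the
        sire's progeny contribution."""
        lam = (4.0 - h2) / h2
        return lam * rel / (1.0 - rel)

    # Hypothetical per-lactation reliabilities (after removing parent-average
    # and own-performance information), with milk-yield heritability 0.30.
    h2 = 0.30
    for lact, rel in enumerate([0.90, 0.88, 0.85, 0.81, 0.74], start=1):
        print(f"lactation {lact}: EDC = {edc_from_reliability(rel, h2):.1f}")
    ```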

