• Title/Summary/Keyword: 자동시스템 (automatic system)


Development of Program for Renal Function Study with Quantification Analysis of Nuclear Medicine Image (핵의학 영상의 정량적 분석을 통한 신장기능 평가 프로그램 개발)

  • Song, Ju-Young;Lee, Hyoung-Koo;Suh, Tae-Suk;Choe, Bo-Young;Shinn, Kyung-Sub;Chung, Yong-An;Kim, Sung-Hoon;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine / v.35 no.2 / pp.89-99 / 2001
  • Purpose: In this study, we developed a new software tool for the analysis of renal scintigraphy that can be modified easily by users who need to study new clinical applications, and we evaluated the appropriateness of its results. Materials and Methods: The analysis tool was programmed in IDL 5.2 and designed for use on a personal computer running Windows. To test the developed tool and assess the appropriateness of the calculated glomerular filtration rate (GFR), $^{99m}Tc$-DTPA was administered to 10 healthy adults. To assess the appropriateness of the calculated mean transit time (MTT), $^{99m}Tc$-DTPA and $^{99m}Tc$-MAG3 were administered to 11 healthy adults, and 22 kidneys were analyzed. All images were acquired with an ORBITOR, the Siemens gamma camera. Results: With the developed tool, we could display dynamic renal images and the time-activity curve (TAC) of each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool did not differ statistically from those obtained with the Siemens application program (Tmax: p=0.68, relative renal function: p=1.0, GFR: p=0.25), and the developed program proved reasonable. The MTT calculation tool proved reasonable based on an evaluation of the influence of hydration status on MTT. Conclusion: We obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study. The developed tool could prove more practical than conventional commercial programs.
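Two of the parameters the tool reports, Tmax and relative renal function, can be illustrated with a minimal sketch over an ROI time-activity curve (the frame times, counts, and function names below are hypothetical; the program's actual GFR computation is not reproduced here):

```python
def tmax(times, counts):
    """Time at which the TAC reaches its peak activity."""
    peak_index = counts.index(max(counts))
    return times[peak_index]

def relative_renal_function(left_counts, right_counts):
    """Each kidney's share of total uptake, from background-corrected
    early-phase ROI counts."""
    left, right = sum(left_counts), sum(right_counts)
    total = left + right
    return left / total, right / total

times = [0, 1, 2, 3, 4, 5]           # minutes after injection
left  = [10, 40, 90, 70, 50, 30]     # left-kidney ROI counts per frame
right = [12, 45, 95, 80, 55, 35]     # right-kidney ROI counts per frame

peak_time = tmax(times, left)        # peak of the left-kidney TAC
l, r = relative_renal_function(left, right)
```

Statistical comparison against a reference program, as in the abstract, would then be run on these per-patient parameters.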

Estimation of Soybean Growth Using Polarimetric Discrimination Ratio by Radar Scatterometer (레이더 산란계 편파 차이율을 이용한 콩 생육 추정)

  • Kim, Yi-Hyun;Hong, Suk-Young
    • Korean Journal of Soil Science and Fertilizer / v.44 no.5 / pp.878-886 / 2011
  • The soybean is one of the oldest cultivated crops in the world. Microwave remote sensing is an important tool because it can penetrate clouds regardless of weather and can acquire data day or night. In particular, a ground-based polarimetric scatterometer has the advantage of monitoring crop conditions continuously with full polarization at different frequencies. In this study, soybean growth parameters and soil moisture were estimated using the polarimetric discrimination ratio (PDR) measured by a radar scatterometer. A ground-based polarimetric scatterometer operating at multiple frequencies was used to continuously monitor soybean growth and soil moisture change; it was set up to acquire data automatically every 10 minutes. The temporal trend of the PDR for all bands agreed with soybean growth data such as fresh weight, leaf area index (LAI), vegetation water content, and plant height; i.e., it increased until about DOY 271 and decreased afterward. Soil moisture was weakly correlated with PDR in all bands over the whole growth period; in contrast, PDR was relatively well correlated with soil moisture while LAI remained below 2. We also analyzed the relationship between the PDR of each band and the growth data. L-band PDR was the most strongly correlated with fresh weight (r=0.96), LAI (r=0.91), vegetation water content (r=0.94), and soil moisture (r=0.86). The C- and X-band PDR were also well correlated with the growth data (r ≥ 0.83), with the exception of soil moisture. Based on the analysis of the relation between the PDR at L-, C-, and X-band and the soybean growth parameters, we predicted the growth parameters and soil moisture using L-band PDR. Overall, good agreement was observed between retrieved and observed growth data. The results of this study show that the PDR appears effective for estimating soybean growth parameters and soil moisture.
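The PDR itself is simple to compute. The abstract does not spell out its definition, so the sketch below assumes the common normalized-difference form over the co-polarized backscatter coefficients, converted from dB to linear power:

```python
def pdr(sigma_vv_db, sigma_hh_db):
    """Polarimetric discrimination ratio: normalized difference of the
    VV and HH backscatter coefficients, given in dB, after conversion
    to linear power units. Result lies in (-1, 1)."""
    vv = 10 ** (sigma_vv_db / 10)
    hh = 10 ** (sigma_hh_db / 10)
    return (vv - hh) / (vv + hh)

# Equal VV and HH backscatter gives zero; stronger VV gives a
# positive ratio, stronger HH a negative one.
balanced = pdr(-10.0, -10.0)
vv_dominant = pdr(-8.0, -12.0)
hh_dominant = pdr(-20.0, -3.0)
```

A time series of such values per band, correlated against fresh weight or LAI, reproduces the kind of analysis the study reports.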

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model demonstrated somewhat better performance. In the case of LR (Logistic Regression), our model exhibited superior performance. Nonetheless, the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
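The first step of the pipeline, extractive summarization, can be illustrated with a minimal frequency-based sentence scorer. This is a generic sketch, not the summarizer used in the study: it keeps the k sentences whose words are most frequent in the document, in their original order, and the resulting summaries would then be fed to the classifiers in place of the full text.

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Keep the k highest-scoring sentences, preserving original order.
    A sentence's score is the summed document-wide frequency of its
    words -- a minimal stand-in for an extractive summarizer."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(i):
        return sum(freq[w] for w in re.findall(r'\w+', sentences[i].lower()))

    keep = sorted(sorted(range(len(sentences)), key=score, reverse=True)[:k])
    return ' '.join(sentences[i] for i in keep)

summary = extractive_summary(
    "Cats are great. Cats purr. Dogs bark loudly sometimes.", k=2)
```

Whether such a summary preserves the "core information" the abstract mentions is exactly what the full-text-vs-summary model comparison tests.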

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.49-71 / 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-sectional and sequential exploration of knowledge items associated with each other in terms of certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between knowledge items. A knowledge map, therefore, must be expressed in a networked form by linking related knowledge based on certain types of relationships, and it should be implemented by deploying technologies or tools specialized in defining and inferring those relationships. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, which is known to be well suited to expressing and inferring the entities stored in a knowledge base and the relationships between them. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge items. Among the various graph DBs, Neo4j is used in this study for its high credibility and applicability, demonstrated through wide and varied application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented using the graph DB, and a performance comparison test is performed by applying a previous study's data to check whether this study's knowledge map can yield the same level of performance as the previous one. The previous study built a process-based knowledge map using ontology technology, which identifies links between related knowledge items based on the sequences of tasks producing or being activated by knowledge.
In other words, since a task not only is activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, processes, tasks, and knowledge items, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the sequences of tasks. The knowledge network resulting from aggregating the identified knowledge links is a knowledge map functioning as a knowledge graph, and its performance therefore needs to be tested against the level of the previous study's validation results. The performance test examines two aspects, the correctness of the knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and it handled knowledge definition as well as knowledge relationship inference in a more efficient manner. Furthermore, compared to the previous study's ontology-based approach, this study's graph DB-based approach also showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement a user-friendly knowledge exploration function that reflects the user's cognitive process toward associated knowledge, and they can further underpin the development of an intelligent knowledge base that expands autonomously through the discovery of new knowledge and relationships by inference. Beyond this, the study has an immediate effect on implementing the networked knowledge map essential to satisfying contemporary users seeking the right knowledge to use.
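The task-mediated link inference described above can be sketched compactly. The Cypher pattern in the comment uses hypothetical labels and relationship names (the study's actual schema is not given in the abstract); the Python below is an in-memory equivalent of the same join:

```python
# Hypothetical Cypher for the inference the methodology performs:
#   MATCH (a:Knowledge)-[:INPUT_TO]->(t:Task)-[:PRODUCES]->(b:Knowledge)
#   MERGE (a)-[:LEADS_TO]->(b)

input_to = {('K1', 'T1'), ('K2', 'T2')}   # knowledge -> task it activates
produces = {('T1', 'K2'), ('T2', 'K3')}   # task -> knowledge it outputs

def infer_links(input_to, produces):
    """Link input knowledge to output knowledge through shared tasks,
    yielding the knowledge-to-knowledge flow the map displays."""
    return {(a, b)
            for (a, t_in) in input_to
            for (t_out, b) in produces
            if t_in == t_out}

links = infer_links(input_to, produces)
```

In the graph DB the same join is a single traversal, which is why the abstract reports more agile inference than the ontology-based approach.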

Comparisons of Soil Water Retention Characteristics and FDR Sensor Calibration of Field Soils in Korean Orchards (노지 과수원 토성별 수분보유 특성 및 FDR 센서 보정계수 비교)

  • Lee, Kiram;Kim, Jongkyun;Lee, Jaebeom;Kim, Jongyun
    • Journal of Bio-Environment Control / v.31 no.4 / pp.401-408 / 2022
  • As research on controlled-environment systems based on crop growth environment sensing for the sustainable production of horticultural crops, and on its industrial use, has become important, research on how to properly utilize soil moisture sensors for outdoor cultivation is being actively conducted. This experiment was conducted to suggest a proper method of utilizing the TEROS 12, an FDR (frequency domain reflectometry) sensor frequently used in industry and research, for orchard soils from three regions in Korea. We collected soils from each orchard where fruit trees were grown, investigated the soil characteristics and soil water retention curves, and compared TEROS 12 calibration equations relating the sensor output to the corresponding soil volumetric water content through linear and cubic regressions for each soil sample. Estimates from the calibration equation provided by the manufacturer were also compared. The soils collected from the three orchards showed different characteristics and different volumetric water content values at each soil water retention level. In addition, the cubic calibration equation for the TEROS 12 sensor showed the highest coefficient of determination, above 0.95, and the lowest RMSE for all soil samples. When volumetric water contents were estimated from the TEROS 12 output using the manufacturer's calibration equation, the calculated values were lower than the actual volumetric water contents, with differences of up to 0.09-0.17 m3·m-3 depending on the soil sample, indicating that soil-specific calibration should precede FDR sensor use.
Also, the range of soil volumetric water content corresponding to the soil water retention levels differed across the soil samples, suggesting that soil water retention information is required to properly interpret the volumetric water content of a soil. Moreover, soil with a high sand content had a relatively narrow range of volumetric water contents for irrigation, reducing the accuracy of FDR sensor measurements. In conclusion, analyzing the soil water retention characteristics of the target soil and performing soil-specific calibration are necessary to properly quantify soil water status and determine an adequate irrigation point with an FDR sensor.
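The cubic calibration and the fit statistics the study reports (R², RMSE) can be sketched with a least-squares polynomial fit. The raw counts and volumetric water contents below are illustrative values, not the study's measurements:

```python
import numpy as np

# Hypothetical paired data for one soil sample: sensor raw counts and
# reference volumetric water contents (m3 m-3).
raw = np.array([1800.0, 2000.0, 2200.0, 2400.0, 2600.0, 2800.0])
vwc = np.array([0.05, 0.11, 0.18, 0.26, 0.35, 0.45])

# Cubic calibration equation, as in the study.
coeffs = np.polyfit(raw, vwc, deg=3)
pred = np.polyval(coeffs, raw)

# Goodness-of-fit metrics reported per soil sample.
ss_res = np.sum((vwc - pred) ** 2)
ss_tot = np.sum((vwc - vwc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((vwc - pred) ** 2))
```

Repeating the fit per soil sample, and comparing against the manufacturer's equation, mirrors the comparison the abstract describes.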

Improvement of turbid water prediction accuracy using sensor-based monitoring data in Imha Dam reservoir (센서 기반 모니터링 자료를 활용한 임하댐 저수지 탁수 예측 정확도 개선)

  • Kim, Jongmin;Lee, Sang Ung;Kwon, Siyoon;Chung, Se Woong;Kim, Young Do
    • Journal of Korea Water Resources Association / v.55 no.11 / pp.931-939 / 2022
  • In Korea, about two-thirds of annual precipitation is concentrated in the summer, so the severity of the turbidity problem during the summer flood season varies from year to year, and concentrated rainfall due to abnormal weather and extreme events is on the rise. Turbid inflows cause sudden increases in turbidity in dam reservoirs. In particular, in Korea, where rivers and dam reservoirs supply most of the annual water consumption, prolonged turbidity causes social and environmental problems for agriculture, industry, and aquatic ecosystems in downstream areas. To cope with turbidity, research on turbidity modeling is being actively conducted. Flow rate, water temperature, and suspended solids (SS) data are required to model turbid water. To this end, the national measurement network measures turbidity via SS in rivers and dam reservoirs, but its data resolution is low due to insufficient facilities, and unmeasured periods occur depending on the dam and weather conditions. Turbidity can be measured with sensors such as the Optical Backscatter Sensor (OBS) and YSI instruments, and SS with equipment such as Laser In-Situ Scattering and Transmissometry (LISST); however, such high-tech sensors are limited by equipment stability. Therefore, because unmeasured periods remain even after analysis of the acquired flow rate, water temperature, SS, and turbidity data, it is necessary to develop a relational expression to calculate the SS used as input data. In this study, the AEM3D model used in the Korea Water Resources Corporation's SURIAN system was applied to improve the accuracy of turbidity prediction through a turbidity-SS relationship developed from measurement data near the dam outlet.
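The turbidity-SS relational expression is, in its simplest form, a regression fitted on paired sensor measurements. The sketch below fits a linear relation SS = a·turbidity + b by least squares; the NTU and mg/L values are illustrative, not the study's data, and the study's actual functional form is not stated in the abstract:

```python
def fit_ss_turbidity(turbidity, ss):
    """Least-squares line SS = a * turbidity + b, used to estimate SS
    input data for the turbidity model during unmeasured periods."""
    n = len(turbidity)
    mx = sum(turbidity) / n
    my = sum(ss) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(turbidity, ss))
         / sum((x - mx) ** 2 for x in turbidity))
    b = my - a * mx
    return a, b

ntu = [5, 20, 60, 120, 250]        # measured turbidity (NTU)
ss_mg_l = [4, 18, 55, 110, 230]    # co-located SS samples (mg/L)
a, b = fit_ss_turbidity(ntu, ss_mg_l)
ss_estimate = a * 100 + b          # SS estimated at 100 NTU
```

The fitted relation then converts continuous turbidity sensor records into the SS series the AEM3D model consumes.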

A Study on Image Copyright Archive Model for Museums (미술관 이미지저작권 아카이브 모델 연구)

  • Nam, Hyun Woo;Jeong, Seong In
    • Korea Science and Art Forum / v.23 / pp.111-122 / 2016
  • The purpose of this multidisciplinary convergent study is to establish an image copyright archive model for museums, to protect image copyright and vitalize the use of images, driven by the need for research and development on copyright services over the life cycle of art contents created by museums, the need to vitalize the distribution market for image copyright contents in the creative industry, and the need to formulate a management system for copyright services. This study makes various suggestions for enhancing the transparency and efficiency of the art contents ecosystem through the use and recycling of image copyright materials, proposing a standard system for the calculation, distribution, settlement, and monitoring of copyright royalties for 1,000 domestic museums, galleries, and exhibit halls. First, this study proposed the content and structural design of an image copyright archive model and, by proposing an art contents distribution service platform for prototype simulation, execution simulation, and model operation simulation, established an art contents copyright royalty process model. As billing systems and technological development for image contents are still in an incipient stage, this study used the existing contents billing framework as the basic model for developing billing technology for the distribution of museum collections and artworks and an engine for the automatic division and calculation of copyright royalties. Ultimately, the study suggested an image copyright archive model that can be used by artists, curators, and distributors. As a business strategy, the study suggested niche market penetration for the museum image copyright archive model. As a sales expansion strategy, the study established a business model in which image transactions can be conducted effectively in B2B, B2G, B2C, and C2B forms through flexible connection with museum archive systems, and in which controllable management of image copyright materials becomes possible.
This study is expected to minimize disputes between the copyright holders of artwork images and the owners of the works, and to enhance the manageability of copyrighted artworks through the prevention of such disputes and the provision of information on the distribution and utilization of art contents (collections and new creations) owned by museums. In addition, by providing a guideline for archives of museum collections and new creations, this study is expected to increase the registration of image copyrights and to enable various convergent businesses, such as billing and the division and settlement of copyright royalties for the image copyright distribution service.

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference / 2023.04a / pp.7-7 / 2023
  • Currently, the main technologies of the various fourth-industrial-revolution industries are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model that represents a physical object identically in a computer. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting via an integrated control system, by constructing digital twin data for major open-field production areas and designing and building a smart farm complex. In addition, it aims to disseminate digital environmental-control agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the use of appropriate amounts of fertilizers and pesticides informed by big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. It is also a way to directly implement carbon-neutral RED++ activities by improving agricultural productivity. The analysis and prediction of growth status based on acquired high-precision, high-definition image-based crop growth data are very effective for digital farm work management.
The Southern Crop Department of the National Institute of Food Science has conducted research and development on various types of open-field agricultural smart farms, such as underground-point and underground-drainage systems. In particular, from this year, commercialization is underway in earnest through the establishment of smart farm facilities and technology dissemination to agricultural technology complexes across the country. In this study, we describe a case of establishing an agricultural field that combines digital twin technology with open-field smart farm technology, along with future utilization plans.

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous data for various purposes. Especially, in recent years, people tend to share their experiences related to their leisure activities while also reviewing others' inputs concerning their activities. Therefore, by referring to others' leisure activity-related experiences, they are able to gather information that might guarantee them better leisure activities in the future. This phenomenon has appeared throughout many aspects of leisure activities such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites provide information on each product in various formats depending on different purposes and perspectives. Generally, most of the websites provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support the decisions of potential customers in purchasing the same products or services. However, the existing websites offering information on leisure activities provide only single-level ratings and reviews, not ratings broken down by a set of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements of one or more specific evaluation criteria based on their priorities, they must spend a great deal of time and effort to obtain the desired information by reading more reviews and understanding their contents.
Although some websites break down the evaluation criteria and direct the user to input their reviews according to different levels of criteria, the excessive number of input sections makes the whole process inconvenient for users. Further, problems may arise if a user does not follow the instructions for the input sections or fills in the wrong sections. Finally, treating an evaluation criteria breakdown as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, when a review about a certain hotel is written, people tend to write only one-stage reviews for components such as accessibility, rooms, services, or food. These reviews may touch on the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed answers to those questions. In addition, if a breakdown of the evaluation criteria were provided along with various input sections, a user might fill in only the evaluation criterion for accessibility, or fill in the wrong information, such as entering room information under accessibility. Thus, the reliability of the segmented review would be greatly reduced. In this study, we propose an approach to overcome the limitations of the existing leisure activity information websites, namely, (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up the evaluation criteria. In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms frequently used for that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicon are decomposed into review units, which are then reconstructed by evaluation criterion.
Finally, the issues of the constructed review units are derived by evaluation criterion, and summary results are provided along with the review units themselves. This approach aims to help users save time and effort, because they read only the information relevant to each evaluation criterion rather than the entire review text. Our proposed methodology is based on topic modeling, which is being actively used in text analysis. Each review is decomposed into sentence units rather than treated as a single document unit. After decomposition, the review units are reorganized according to each evaluation criterion and then used in the subsequent analysis. This design largely differs from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we then reorganized according to six evaluation criteria. By applying these review units in our methodology, the analysis results can be presented and the utility of the proposed methodology demonstrated.
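The lexicon-based decomposition into review units can be sketched as follows. The criterion names and lexicon terms are hypothetical stand-ins for the lexicons the study builds from frequently used terms:

```python
import re

# Hypothetical per-criterion lexicons of frequently used terms.
lexicon = {
    'accessibility': {'subway', 'station', 'distance', 'walk'},
    'room': {'bed', 'bathroom', 'clean', 'view'},
}

def to_review_units(review, lexicon):
    """Split a review into sentence-level units and assign each unit
    to every criterion whose lexicon terms it contains."""
    units = {criterion: [] for criterion in lexicon}
    for sent in re.split(r'(?<=[.!?])\s+', review):
        words = set(re.findall(r'\w+', sent.lower()))
        for criterion, terms in lexicon.items():
            if words & terms:
                units[criterion].append(sent.strip())
    return units

units = to_review_units(
    "The subway station is a short walk away. The bathroom was very clean.",
    lexicon)
```

Topic modeling would then be run per criterion over the reorganized units to derive the issues the methodology reports.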

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data is integrated with the research project at its center; i.e., other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology is used to represent simple relationships between the integrated data, such as project-output, document-author, and document-topic relationships. The knowledge map enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge; the other analyzes and represents knowledge extracted from science & technology documents. This research focuses on the latter. A knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map.
Using the lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected by the national R&D data through author and performer relationships. A knowledge map for displaying researchers' networks is created from the co-authoring relationships of the national R&D documents and the co-participation relationships of the national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data, such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information, such as research papers, research reports, patents, and GTB data, is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into the integrated database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing the documents.
The topic modeling approach enables us to extract relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and we introduce the knowledge map services created on top of the knowledge base.
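The Relational Data-to-Triples step can be sketched compactly: relational rows become (subject, predicate, object) triples that a triple store can index, and the project-output relationships then support simple graph queries. The column and predicate names below are hypothetical, not the paper's R&D ontology vocabulary:

```python
# Hypothetical relational rows linking papers, authors, and projects.
rows = [
    {'paper_id': 'P1', 'author': 'Kim', 'project': 'PRJ-001'},
    {'paper_id': 'P2', 'author': 'Kim', 'project': 'PRJ-001'},
    {'paper_id': 'P3', 'author': 'Lee', 'project': 'PRJ-002'},
]

def to_triples(rows):
    """Transform each relational row into document-author and
    document-project (output) triples."""
    triples = set()
    for r in rows:
        triples.add((r['paper_id'], 'hasAuthor', r['author']))
        triples.add((r['paper_id'], 'outputOf', r['project']))
    return triples

def outputs_of(triples, project):
    """Graph-style query: all documents connected to a project as outputs."""
    return {s for (s, p, o) in triples if p == 'outputOf' and o == project}

triples = to_triples(rows)
project_outputs = outputs_of(triples, 'PRJ-001')
```

Co-author and co-topic relationships are the same kind of join over shared objects, which is what the triple store's network representation makes cheap to infer.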