• Title/Summary/Keyword: 정보공학 방법론 (Information Engineering Methodology)


Web-enabled Healthcare System for Hypertension: Hyperlink-based Inference Approach (고혈압관리를 위한 웹 기반의 지능정보시스템: 하이퍼링크를 이용한 추론방식으로)

  • Song, Yong-Uk; Ho, Seung-Hee; Chae, Young-Moon; Cho, Kyoung-Won
    • Journal of Intelligence and Information Systems, v.9 no.1, pp.91-107, 2003
  • In this study, a web-enabled healthcare system for the management of hypertension was implemented through a hyperlink-based inference approach. The hyperlink-based inference platform was built on the hypertext capability of HTML, which ensured the accessibility, multimedia facilities, fast response, stability, ease of use and upgrade, and platform independence of expert systems. Many HTML documents, hyperlinked to each other according to expert rules, were uploaded in advance to perform the hyperlink-based inference. The HTML documents were uploaded and maintained automatically by our proprietary tool, the Web-Based Inference System (WeBIS), which provides a graphical user interface (GUI) for entering and editing decision graphs. Nevertheless, editing a decision graph with the GUI tool is a time-consuming and tedious chore when the knowledge engineer must perform it manually. Accordingly, this research implemented an automatic generator of the decision graph for the management of hypertension. As a result, this research suggests a methodology for the development of web-enabled healthcare systems using the hyperlink-based inference approach and, as an example, implements a web-enabled healthcare system for hypertension, which performed especially well in terms of speed and stability.
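
The hyperlink-based inference idea, pre-compiling a decision graph into mutually linked HTML pages so that following links performs the inference, can be sketched roughly as follows. The rule graph, node names, and advice texts below are hypothetical illustrations, not WeBIS output:

```python
# Sketch: compile a decision graph into static, hyperlinked HTML pages.
# Each non-terminal node becomes a page whose answer links point to the
# next node's page, so "inference" is just the user following hyperlinks.

DECISION_GRAPH = {
    "start": {"question": "Is systolic BP above 140 mmHg?",
              "answers": {"yes": "check_age", "no": "normal"}},
    "check_age": {"question": "Is the patient over 60?",
                  "answers": {"yes": "advice_elderly", "no": "advice_adult"}},
    "normal": {"advice": "Blood pressure in normal range."},
    "advice_elderly": {"advice": "Consult a physician; consider medication review."},
    "advice_adult": {"advice": "Lifestyle changes; re-check in 2 weeks."},
}

def compile_to_html(graph):
    """Return a {filename: html} mapping for every node in the graph."""
    pages = {}
    for name, node in graph.items():
        if "advice" in node:  # terminal node: show the conclusion
            body = "<p>%s</p>" % node["advice"]
        else:                 # internal node: one hyperlink per answer
            links = "".join('<li><a href="%s.html">%s</a></li>' % (nxt, ans)
                            for ans, nxt in node["answers"].items())
            body = "<p>%s</p><ul>%s</ul>" % (node["question"], links)
        pages[name + ".html"] = "<html><body>%s</body></html>" % body
    return pages

pages = compile_to_html(DECISION_GRAPH)
```

Since every page is static HTML, the compiled set can be uploaded once and served by any web server, which is what gives the approach its speed and stability.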


Estimation of Employment Creation Center considering Spatial Autocorrelation: A Case of Changwon City (공간자기상관을 고려한 고용창출중심지 추정: 창원시 사례를 중심으로)

  • JEONG, Ha-Yeong; LEE, Tai-Hun; HWANG, In-Sik
    • Journal of the Korean Association of Geographic Information Studies, v.25 no.1, pp.77-100, 2022
  • In the era of low growth, many provincial cities are experiencing population decline and aging. The consequences of population decline, such as a shrinking productive workforce, reduced public finances, deteriorating quality of life, and the collapse of the community base, occur in a chain and push these cities to the brink of extinction. This study proposes a methodology for objectively estimating employment creation centers and setting the basic unit of industry-centered zoning by applying spatial statistical techniques and GIS, so that the compact city plan can be applied as an efficient spatial management policy in a city with a declining population. In detail, based on a review of previous studies on compact cities, an 'employment complex index (ECI)' was defined considering the number of workers, the number of settlers, and the area of developed land, and the employment creation centers were estimated by applying Local Moran's I and the Getis-Ord hot-spot analysis. As a case study, changes over the four years of 2013, 2015, 2017, and 2019 were compared and analyzed for Changwon City. The results confirmed that the employment creation centers are becoming compact and polycentric, a significant finding that reflects the actual situation well. These results provide basic data for the functional and institutional territorial governance of a regional revitalization platform, as well as meaningful information for spatial policy decision-making on issues such as population decline, regional gross domestic product, and the arrangement of public facilities, which can in turn support energy savings, transportation plans, and medical and health plans.
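
A minimal illustration of Local Moran's I, one of the two statistics named above. The zones, contiguity matrix, and index values are hypothetical; production work on real zoning data would typically use a GIS/PySAL stack rather than hand-rolled weights:

```python
import numpy as np

def local_morans_i(x, W):
    """Local Moran's I per unit, given a row-standardized spatial weight matrix W."""
    z = (x - x.mean()) / x.std()
    return z * (W @ z)   # I_i = z_i * sum_j w_ij z_j

# Hypothetical 4-zone example: zones 0-1 are adjacent, zones 2-3 are adjacent.
x = np.array([10.0, 12.0, 1.0, 1.5])          # e.g. an employment index per zone
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # binary contiguity
W = A / A.sum(axis=1, keepdims=True)           # row-standardize
I = local_morans_i(x, W)
# Positive I_i means zone i resembles its neighbors: high-high pairs are
# candidate employment centers, low-low pairs are cold spots.
```

Significance testing (usually by permutation) is then needed to separate genuine clusters from chance patterns before declaring a zone an employment creation center.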

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon; Nam, Kihwan
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.1-17, 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified by whether it uses structured or unstructured data. With structured data such as historical stock prices and financial statements, past studies usually applied technical and fundamental analysis. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that extract meaning from text, an unstructured data type that accounts for a large share of that information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock's price from news about the target company itself. However, according to previous research, not only news about a target company but also news about related companies can affect its stock price. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have identified relevant companies primarily through pre-determined international industry classification standards. Yet recent research shows that the Global Industry Classification Standard has varying homogeneity within its sectors, so forecasting stock prices by taking all sector members together, without restricting attention to genuinely relevant companies, can hurt predictive performance. To overcome this limitation, we first used random matrix theory with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced. Therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we then use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel predicts stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) Identifying relevant companies in the wrong way can lower predictive performance. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlation obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce a sound methodology, suggesting that it is important not only to develop AI algorithms but also to adopt theory from physics; this extends existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that how the input values are chosen theoretically matters as much as the algorithms themselves. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance to each other, and suggest that relevance should be defined theoretically rather than simply read off the GICS.
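
One common way to apply random matrix theory to a correlation matrix, as described above, is to keep only eigenvalues above the Marchenko-Pastur noise edge and reconstruct the correlation matrix from those modes. The sketch below uses synthetic returns with one planted common factor; it is not the paper's data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 50                                   # observations x firms (synthetic)
returns = rng.standard_normal((T, N))
returns[:, :10] += rng.standard_normal((T, 1))   # a common factor links 10 firms

C = np.corrcoef(returns, rowvar=False)
lam, V = np.linalg.eigh(C)                       # eigenvalues ascending

# Marchenko-Pastur upper edge for a pure-noise correlation matrix:
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2

# Eigenmodes above the noise edge are taken to carry true structure;
# everything below is treated as market-wide noise and discarded.
signal = lam > lam_plus
C_filtered = (V[:, signal] * lam[signal]) @ V[:, signal].T
np.fill_diagonal(C_filtered, 1.0)
```

Cluster analysis (e.g. hierarchical clustering on `1 - C_filtered` as a distance) would then group the genuinely related firms, which is the input the multiple kernel learner needs.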

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.43-62, 2019
  • At one time, the anomaly detection field was dominated by methods that judged whether an abnormality existed based on statistics derived from the data. This was workable because data used to be low-dimensional, so classical statistical methods were effective. As data characteristics have grown more complex in the big data era, however, it has become difficult to accurately analyze and predict industrial data in the conventional way, and supervised learning algorithms such as SVMs and decision trees were adopted instead. Yet a supervised model predicts test data accurately only when the class distribution is balanced, while most data generated in industry has imbalanced classes, so its predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. By contrast, anomaly detection on sequence data with generative adversarial networks has received little study compared to image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it has not been applied to categorical sequence data, nor has it used the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is composed of LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data; in this paper, as mentioned above, anomaly scores are instead derived with the feature matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by any single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to measure how much performance changes with the optimization structure of the latent variables; sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
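
The feature-matching anomaly score mentioned above can be sketched as follows. The feature extractor here is a fixed random projection standing in for an intermediate layer of the LSTM discriminator, and all names and data are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

def feature_matching_score(f, x, real_batch):
    """Anomaly score: distance between the discriminator's intermediate
    features of x and the mean features of a batch of normal data."""
    return np.linalg.norm(f(x) - f(real_batch).mean(axis=0))

# Stand-in feature extractor (a fixed random projection with tanh); in the
# paper this role is played by an intermediate layer of the LSTM discriminator.
rng = np.random.default_rng(1)
Wf = rng.standard_normal((8, 16))
f = lambda X: np.tanh(np.atleast_2d(X) @ Wf)

normal = rng.standard_normal((100, 8)) * 0.1   # tight "normal" cluster
inlier = rng.standard_normal(8) * 0.1          # resembles the normal cloud
outlier = rng.standard_normal(8) * 3.0         # far from the normal cloud

s_in = feature_matching_score(f, inlier, normal)
s_out = feature_matching_score(f, outlier, normal)
```

The point of scoring in feature space rather than raw-input space is that the discriminator's features summarize what "normal" sequences have in common, so distances there separate anomalies more cleanly.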

A Study on the Application Methodology of Set-based Design Approach of Outrigger System based on Lean Process (린 프로세스 기반 아웃리거 시스템의 Set-based Design 적용 방안에 관한 연구)

  • Lee, Seung-Il; Cho, Young-Sang
    • Korean Journal of Construction Engineering and Management, v.12 no.4, pp.50-58, 2011
  • Lean is a management philosophy that defines customer value and eliminates wasteful and impeditive factors; applied to the construction industry, it is referred to as "Lean Construction". The concept has now expanded to achieving effective productivity during the design phase. The current norm in the domestic design process is Point-based Design (PBD): a single structurally feasible design option is selected early and then refined as more information becomes available, and this single design is re-worked until a solution is found that is feasible for all parties. By contrast, Set-based Design (SBD) is based on lean processes that eliminate waste and improve project productivity. It focuses on keeping the design space open as long as possible, allowing sub-designs to advance without labeling them as secondary in importance. Preserving the maximum number of feasible designs for as long as possible reduces the likelihood of rework and allows all project participants to apply their unique expertise to make the project successful. This study proposes a design methodology that minimizes waste and increases productivity by combining SBD with AHP, a decision-making process; it compares PBD with SBD, examines the decision-making process, and suggests an application methodology through a case study of the SBD process.
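
The AHP step can be illustrated with a small sketch. The comparison matrix and the three design alternatives are hypothetical, and the principal-eigenvector method shown is one standard way to derive AHP priorities:

```python
import numpy as np

def ahp_weights(P):
    """Priority weights from a pairwise-comparison matrix (principal eigenvector)."""
    lam, V = np.linalg.eig(P)
    k = np.argmax(lam.real)
    w = np.abs(V[:, k].real)
    return w / w.sum(), lam.real[k]

# Hypothetical comparison of three outrigger design sets on one criterion:
# P[i, j] = how strongly alternative i is preferred over j (Saaty's 1-9 scale).
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam_max = ahp_weights(P)

# Consistency check (random index RI = 0.58 for n = 3);
# CR below 0.1 is conventionally acceptable.
n = P.shape[0]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
```

In an SBD workflow the weights would be computed for each evaluation criterion and used to rank the surviving design sets, rather than to lock in a single point design early.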

Predicting Sensitivity of Motion Sickness using by Pattern of Cardinal Gaze Position (기본 주시눈 위치의 패턴을 이용한 영상멀미의 민감도 예측)

  • Park, Sangin; Lee, Dong Won; Mun, Sungchul; Whang, Mincheol
    • Journal of the Korea Convergence Society, v.9 no.11, pp.227-235, 2018
  • The aim of this study is to predict sensitivity to motion sickness (MS) from the pattern of cardinal gaze position (CGP) before a viewer experiences virtual reality (VR) content. Twenty volunteers of both genders (8 females; mean age 28.42 ± 3.17) participated in the experiment. Their CGP patterns were measured for 5 minutes, after which they watched VR content for 15 minutes. After viewing, participants reported their subjective experience of MS on the Simulator Sickness Questionnaire (SSQ). Statistical relationships between CGP and SSQ score were tested with Pearson correlation analysis and independent t-tests, and a prediction model was derived by multiple regression. The PCPA and PCPR indicators from the CGP showed significant differences and strong-to-moderate positive correlations with SSQ score. The extracted prediction model was evaluated with the correlation coefficient and mean error: SSQ scores from subjective ratings and from the prediction model showed a strong positive correlation and a small difference.
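
The modeling step, a multiple regression predicting SSQ score from gaze indicators evaluated with Pearson's r, can be sketched on synthetic data. The PCPA/PCPR values and coefficients below are invented stand-ins, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20                                    # same sample size as the study
pcpa = rng.uniform(0, 1, n)               # synthetic gaze-pattern indicators
pcpr = rng.uniform(0, 1, n)
ssq = 10 + 25 * pcpa + 15 * pcpr + rng.normal(0, 2, n)  # synthetic SSQ scores

# Fit SSQ ~ intercept + PCPA + PCPR by ordinary least squares.
X = np.column_stack([np.ones(n), pcpa, pcpr])
beta, *_ = np.linalg.lstsq(X, ssq, rcond=None)

# Evaluate the model the way the abstract describes: Pearson's r between
# observed and predicted scores.
pred = X @ beta
r = np.corrcoef(ssq, pred)[0, 1]
```

With only 20 participants, such a model would normally also be checked with cross-validation or a held-out set before being used to screen viewers for MS sensitivity.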

Component Grid: A Developer-centric Environment for Defense Software Reuse (컴포넌트 그리드: 개발자 친화적인 국방 소프트웨어 재사용 지원 환경)

  • Ko, In-Young; Koo, Hyung-Min
    • Journal of Software Engineering Society, v.23 no.4, pp.151-163, 2010
  • In the defense software domain, where large-scale software products in various application areas must be built, software reuse is regarded as one of the important practices for building products efficiently and economically. There have been many efforts to apply various methods to support software reuse in this domain, yet developers still experience many difficulties and obstacles in reusing software assets. In this paper, we analyze the practical problems of software reuse in the defense software domain and define the core requirements to solve them. To meet these requirements, we are currently developing the Component Grid system, a reuse-support system that provides a developer-centric software reuse environment. We have designed an architecture for Component Grid, defined the essential elements of the architecture, and developed the core approaches for the system: a semantic-tagging-based requirement tracing method, a reuse-knowledge representation model, a social-network-based asset search method, a web-based asset management environment, and a wiki-based collaborative and participative knowledge construction and refinement method. We expect the Component Grid system to contribute to increasing the reusability of software assets in the defense software domain by providing an environment that supports transparent and efficient sharing and reuse of software assets.


Development of an Economic Material Selection Model for G-SEED Certification (녹색건축(G-SEED) 인증을 위한 경제적 자재선정 모델 개발)

  • Jeon, Byung-Ju; Kim, Byung-Soo
    • KSCE Journal of Civil and Environmental Engineering Research, v.40 no.6, pp.613-622, 2020
  • The South Korean government plans a 37 % reduction in CO2 emissions relative to business as usual by 2030. Subsequently, the Ministry of Land, Infrastructure and Transport declared a 26.9 % reduction target for greenhouse gas emissions from buildings by 2020 and established the Green Standard for Energy and Environmental Design (G-SEED) to help improve the environmental performance of buildings. Construction companies often work with consulting firms to prepare for G-SEED certification; in the process, inefficient data sharing and work hand-offs make it difficult to achieve economic efficiency and obtain certification. The objective of this study was to develop an economic model that helps contractors achieve the required G-SEED scores for materials and resources. To do this, we automated the process of material comparison and selection based on an analysis of actual consulting data, and developed a model that selects material alternatives meeting the required scores at minimum cost. Material information is entered, and a genetic algorithm is applied to optimize the alternatives. When the model was applied to actual data, the construction cost could be lowered by 79.3 % compared with existing methods. The economical material selection model is expected not only to reduce construction costs for owners seeking G-SEED certification but also to shorten the project design time.
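
A toy version of the genetic-algorithm step: choose one alternative per material so the total G-SEED score meets a target at minimum cost. The materials, scores, costs, and GA settings below are hypothetical, not the paper's data:

```python
import random

# (score, cost) alternatives per material item -- hypothetical values.
MATERIALS = [
    [(2, 100), (3, 160), (4, 250)],
    [(1, 80),  (2, 140), (3, 230)],
    [(2, 120), (4, 260)],
]
TARGET_SCORE = 7

def cost(sel):
    """Total cost of a selection; infeasible selections get a heavy penalty."""
    total_score = sum(MATERIALS[i][g][0] for i, g in enumerate(sel))
    total_cost = sum(MATERIALS[i][g][1] for i, g in enumerate(sel))
    return total_cost + 10000 * max(0, TARGET_SCORE - total_score)

def ga(generations=100, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(m)) for m in MATERIALS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(MATERIALS))
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation
                i = rng.randrange(len(MATERIALS))
                child[i] = rng.randrange(len(MATERIALS[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = ga()
```

Encoding the score requirement as a penalty rather than a hard filter keeps the population diverse early on, which matters much more when the real alternative catalog is far larger than this 18-combination toy.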

Various Quality Fingerprint Classification Using the Optimal Stochastic Models (최적화된 확률 모델을 이용한 다양한 품질의 지문분류)

  • Jung, Hye-Wuk; Lee, Jee-Hyong
    • Journal of the Korea Society for Simulation, v.19 no.1, pp.143-151, 2010
  • Fingerprint classification is a step that increases the efficiency of a 1:N fingerprint recognition system by reducing matching time and increasing recognition accuracy. Classifying fingerprints is difficult because the ridge pattern of each class overlaps with more than one other class, fingerprint images may contain substantial noise, and input conditions can be exceptional. In this paper, we propose a novel approach that designs a stochastic model and performs fingerprint classification using the directional characteristics of fingerprints, for effective classification across varying image qualities. We compute a directional value by scanning fingerprint ridges pixel by pixel and extract a directional characteristic by merging the computed values over fixed blocks of pixels. A modified Markov model for each fingerprint class is generated from the extracted directional characteristics, using the Markov model as a method of stochastic information extraction and recognition. The weights of each class's classification model are decided by analyzing the state transition matrices of the generated Markov models, and the values that optimize classification performance are estimated with a genetic algorithm (GA). In experiments applying a fingerprint database of varying qualities, the GA-optimized classification model outperformed the model before optimization. Analysis of the experimental database also showed that the proposed method classifies fingerprints effectively under exceptional input conditions because it does not depend on the presence or absence of singular points.
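
A stripped-down sketch of the Markov-model classification idea: estimate a state transition matrix per class from directional-code sequences, then classify a new sequence by log-likelihood. The direction codes and training sequences are hypothetical, and the paper's modified model and GA-estimated weights are omitted:

```python
import numpy as np

def transition_matrix(seqs, n_states, alpha=1.0):
    """Row-stochastic transition matrix estimated from sequences (Laplace-smoothed)."""
    T = np.full((n_states, n_states), alpha)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Log-probability of the observed transitions under model T."""
    return sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:]))

# Hypothetical directional codes (0-3) for two fingerprint classes.
class_A = [[0, 1, 2, 1, 0, 1, 2], [1, 2, 1, 0, 1, 2, 1]]
class_B = [[3, 3, 2, 3, 3, 2, 3], [2, 3, 3, 2, 3, 3, 3]]
models = {"A": transition_matrix(class_A, 4), "B": transition_matrix(class_B, 4)}

def classify(seq):
    """Assign the class whose Markov model gives the sequence the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(seq, models[c]))

label = classify([0, 1, 2, 1, 0])
```

The Laplace smoothing (`alpha`) keeps unseen transitions from producing zero probabilities, which is essential when noisy inputs contain direction codes absent from the training data.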

GIS Based Distributed Flood Damage Assessment (GIS기반의 분포형 홍수피해산정 기법)

  • Yi, Choong Sung; Choi, Seung An; Shim, Myung Pil; Kim, Hung Soo
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.3B, pp.301-310, 2006
  • Flood control projects typically require an enormous national budget and therefore have a large influence on the national economy, so reliable estimation of flood damage is the key issue in their economic analysis. This study provides a GIS-based technique for distributed flood damage estimation. We consider both the engineering and economic sides, namely inundation analysis and MD-FDA (Multi-Dimensional Flood Damage Analysis), and propose an analysis framework and GIS data processing for assessing flood damage. The proposed methodology is applied to the flood control channel project for flood disaster prevention in the Mokgamcheon/Dorimcheon streams, and this study presents the detailed GIS database and the assessed flood damages. This study improves the practical usability of MD-FDA and indicates a research direction for combining the economic side with the engineering aspect. The distributed technique will also aid decision-making in evaluating the feasibility of structural and nonstructural flood damage reduction measures.
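
The distributed (cell-by-cell) damage idea can be sketched as an overlay of an inundation-depth grid, an asset-value grid, and a depth-damage curve. All of the numbers below are hypothetical illustrations, not MD-FDA coefficients:

```python
import numpy as np

# Inundation depth per grid cell (m), e.g. from a hydraulic model.
depth = np.array([[0.0, 0.5, 1.2],
                  [0.2, 1.0, 2.5],
                  [0.0, 0.0, 0.8]])
# Asset value per cell (monetary units), e.g. from land-use GIS layers.
value = np.full_like(depth, 100.0)

# Depth-damage curve: damage ratio as a piecewise-linear function of depth.
curve_depth = [0.0, 0.5, 1.0, 2.0, 3.0]
curve_ratio = [0.0, 0.1, 0.3, 0.6, 0.8]

ratio = np.interp(depth, curve_depth, curve_ratio)  # ratio per cell
damage = value * ratio                              # distributed damage surface
total_damage = damage.sum()
```

Because damage is computed per cell rather than per administrative district, the resulting surface can be summed over any project boundary, which is what makes the technique useful for comparing alternative flood control measures.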