• Title/Summary/Keyword: Parametric System


A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming increasingly important. System monitoring data is multidimensional time series data, which is difficult to handle because both the characteristics of multidimensional data and the characteristics of time series data must be considered. For multidimensional data, the correlation between variables should be considered; existing probability-based, linear, and distance-based methods degrade due to the limitation known as the curse of dimensionality. In addition, time series data is usually preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance degrades when the model is not solid or when the data contain noise or outliers, and they require training data free of noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it still has limited ability to identify local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the characteristics of time series data. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limited local outlier identification of multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and image; the different modals share the autoencoder bottleneck and thereby learn cross-modal correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance for 41 variables was examined in the proposed and comparison models. Reconstruction performance differs by variable: the Memory, Disk, and Network modals are reconstructed well, with small loss values, in all three autoencoder models. The Process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order of CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects nearly all anomalies. Model accuracy also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has additional advantages beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
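A minimal sketch of the architecture described above, assuming PyTorch and illustrative layer sizes (per-modal encoders and decoders sharing one bottleneck, with a time-of-day condition concatenated at the bottleneck); this is not the authors' implementation:

```python
# Conditional multimodal autoencoder sketch: each modal (e.g., CPU, memory,
# disk, network) has its own encoder/decoder, all modals share one bottleneck,
# and a time condition is concatenated before the bottleneck and the decoders.
import math
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, hidden=16, bottleneck=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modal_dims])
        # shared bottleneck takes all modal encodings plus the time condition
        self.bottleneck = nn.Sequential(
            nn.Linear(hidden * len(modal_dims) + cond_dim, bottleneck), nn.ReLU())
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(bottleneck + cond_dim, hidden),
                           nn.ReLU(), nn.Linear(hidden, d)) for d in modal_dims])

    def forward(self, modals, cond):
        # modals: list of tensors [batch, dim_i]; cond: [batch, cond_dim]
        encoded = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = self.bottleneck(torch.cat([encoded, cond], dim=1))
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# Illustrative usage with assumed modal sizes (CPU, memory, disk, network)
modal_dims = [10, 8, 8, 15]
model = CMAE(modal_dims)
hours = torch.rand(4) * 24                      # time of day as the condition
cond = torch.stack([torch.sin(2 * math.pi * hours / 24),
                    torch.cos(2 * math.pi * hours / 24)], dim=1)
outputs = model([torch.randn(4, d) for d in modal_dims], cond)
```

The anomaly score would then be the summed per-modal reconstruction error, with a sample flagged when the score exceeds a threshold chosen on normal validation data.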

An Evaluation of the Use of Statistical Methods in the Journal of Tuberculosis and Respiratory Diseases ([결핵 및 호흡기질환] 게재 논문의 통계적 기법 활용에 대한 평가)

  • Koh, Won-Jung;Lee, Seung-Joon;Kang, Min Jong;Lee, Hun Jae
    • Tuberculosis and Respiratory Diseases
    • /
    • v.57 no.2
    • /
    • pp.168-179
    • /
    • 2004
  • Background: Statistical analysis is an essential procedure for ensuring that research results are based on evidence rather than opinion. The purpose of this study is to evaluate which statistical techniques are used, and whether they are used appropriately, in the journal Tuberculosis and Respiratory Diseases. Materials and Methods: We reviewed 185 articles published in the journal Tuberculosis and Respiratory Diseases in 1999. We evaluated the validity of the statistical methods used against a checklist developed from the International Committee of Medical Journal Editors' guideline for statistical reporting in medical journals. Results: Among the 185 articles, 110 (59.5%) were original articles and 61 (33.0%) were case reports. In the 112 articles excluding case reports and reviews, statistical techniques were used in 107 (95.5%). Descriptive and inferential methods were used in 94 articles (83.9%), while only descriptive methods were used in 13 (11.6%). Among the inferential techniques, comparison of means was most commonly used (64/94, 68.1%), followed by contingency tables (43/94, 45.7%) and correlation or regression (18/94, 19.1%). Among the articles that used descriptive methods, 83.2% (89/107) described central tendency and dispersion inappropriately. Among the articles that used inferential methods, improper methods were applied in 88.8% (79/89), and the most frequent misuse was inappropriate use of parametric methods (35/89, 39.3%). Only 14 articles (13.1%) were satisfactory in their use of statistical methodology. Conclusion: Most of the statistical errors found in the journal were misuses of methods related to basic statistics. This study suggests that researchers should be more careful when describing and applying statistical methods, and that a more extensive statistical refereeing system is needed.
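The most frequent error reported above is applying parametric tests to data that do not meet their assumptions. A small illustrative sketch of the routine check involved, assuming Python with SciPy and simulated data (not from the paper):

```python
# Check normality before choosing between a parametric and a non-parametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, size=30)        # approximately normal measurements
group_b = rng.lognormal(3.8, 0.4, size=30)   # right-skewed measurements

normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    stat, p = stats.ttest_ind(group_a, group_b)      # parametric comparison of means
    print(f"t-test: t={stat:.2f}, p={p:.4f}")
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)   # non-parametric alternative
    print(f"Mann-Whitney U: U={stat:.1f}, p={p:.4f}")
```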

Functional Brain Mapping Using $H_2^{15}O$ Positron Emission Tomography ( II ): Mapping of Human Working Memory ($H_2^{15}O$ 양전자단층촬영술을 이용한 뇌기능 지도 작성(II): 작업 기억의 지도 작성)

  • Lee, Jae-Sung;Lee, Dong-Soo;Lee, Sang-Kun;Nam, Hyun-Woo;Kim, Seok-Ki;Park, Kwang-Suk;Jeong, Jae-Min;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.3
    • /
    • pp.238-249
    • /
    • 1998
  • Purpose: To localize and compare the neural basis of verbal and visual human working memory, we performed a functional activation study using $H_2^{15}O$ PET. Materials and Methods: Repeated $H_2^{15}O$ PET scans with one control task and three different activation tasks were performed on six right-handed normal volunteers. Each activation task was composed of 13 matching trials. On each trial, four targets, a fixation dot and a probe were presented sequentially, and the subject's task was to press a response button to indicate whether or not the probe was one of the previous targets. Short meaningful Korean words, simple drawings and monochrome pictures of human faces were used as matching objects for verbal or visual memory. All images were spatially normalized, and the differences between control and activation states were statistically analyzed using SPM96. Results: Statistical analysis of verbal memory activation with short words showed activation in the left Broca's area, premotor cortex, cerebellum and right cingulate gyrus. In verbal memory with simple drawings, activation extended over larger regions, including those activated with short words as well as the left superior temporal cortex, basal ganglia, thalamus, prefrontal cortex, the anterior portion of the right superior temporal gyrus and the right infero-lateral frontal cortex. On the other hand, the visual memory task activated predominantly right-sided structures, especially the inferior frontal cortex, supplementary motor cortex and superior parietal cortex. Conclusion: The results are consistent with the laterality and dissociation of verbal and visual working memory reported in invasive electrophysiological studies, and emphasize the pivotal role of the frontal cortex and cingulate gyrus in the working memory system.


Different Metabolic Patterns of Parkinsonism: Analysed by Statistical Parametric Mapping (통계적 파라미터를 이용한 Parkinsonism의 Metabolic pattern 분석)

  • 주라형;김재승;최보영;문대혁;서태석
    • Progress in Medical Physics
    • /
    • v.14 no.2
    • /
    • pp.108-123
    • /
    • 2003
  • The purpose of this study is to evaluate the contribution of $^{18}$F-FDG brain PET in differentiating idiopathic Parkinson's disease (IPD), progressive supranuclear palsy (PSP), and multiple system atrophy (MSA). We studied 24 patients with parkinsonism: 8 patients (mean age 67.9$\pm$10.7 y; M/F 3/5) with IPD, 9 patients (57.9$\pm$9.2 y; M/F 4/5) with MSA and 7 patients (67.6$\pm$4.8 y; M/F 3/4) with PSP. All patients with parkinsonism and 22 age-matched normal controls underwent $^{18}$F-FDG PET in 3D mode after the injection of 370 MBq of $^{18}$F-FDG. The patients with IPD, MSA and PSP were compared with the normal control group by a two-sided t-test in SPM99 (uncorrected P<0.001, extent threshold >100 voxels). All three parkinsonism groups showed significant hypometabolism in the cerebral neocortex compared to the normal control group. However, the three groups displayed different metabolism in the subcortical structures, brain stem, and cerebellum. In IPD, there was no significant hypometabolism in the putamen, brain stem or cerebellum. However, MSA patients showed significant hypometabolism in the striatum, pons, and cerebellum compared to the normal controls and IPD patients. In addition, PSP showed significant hypometabolism in the caudate nuclei, thalamus, midbrain, and cingulate gyrus compared to the normal controls, IPD, and MSA groups (IPD vs. normal sensitivity/specificity: 75%/100%; MSA vs. normal: 100%/87%; PSP vs. normal: 86%/94%). Our results show that the regional metabolism of IPD, MSA, and PSP differs mainly in the striatum, thalamus, brain stem and cerebellum. Assessment of $^{18}$F-FDG PET images using SPM may be a useful adjunct to clinical examination in making a differential diagnosis of parkinsonism.
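A minimal sketch of the voxel-wise comparison described above, assuming Python with NumPy/SciPy and simulated image arrays (SPM99 itself differs in detail): a two-sample t-test at every voxel, an uncorrected height threshold of p < 0.001, and a cluster extent threshold of 100 voxels:

```python
# Voxel-wise group comparison with a height threshold and a cluster extent threshold.
import numpy as np
from scipy import stats, ndimage

rng = np.random.default_rng(1)
controls = rng.normal(1.0, 0.1, size=(22, 40, 48, 40))   # 22 simulated control images
patients = rng.normal(0.95, 0.1, size=(8, 40, 48, 40))   # 8 simulated patient images

# two-sample t-test at every voxel (t > 0 where patients are hypometabolic)
t_map, p_map = stats.ttest_ind(controls, patients, axis=0)

mask = (p_map < 0.001) & (t_map > 0)                      # uncorrected height threshold
labels, n_clusters = ndimage.label(mask)                  # connected clusters
sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
keep = 1 + np.flatnonzero(np.asarray(sizes) > 100)        # clusters larger than 100 voxels
significant = np.isin(labels, keep)

# With pure noise placeholders few or no voxels survive; real images have spatial structure.
print("surviving voxels:", int(significant.sum()))
```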


Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.2
    • /
    • pp.165-174
    • /
    • 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose of this study is to develop a reliable CFD (Computational Fluid Dynamics) evaluation method for the coal gasifier, not only for basic design but also for further optimization of system operation. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas in a two-phase, turbulent, radiatively participating medium. Both numerical and experimental studies are made for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study is built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including an Eddy-Breakup model together with a harmonic mean approach for turbulent reaction. A Lagrangian approach is further adopted for particle trajectories, with consideration of the turbulence effect caused by the non-linearity of the drag force. The program developed is successfully evaluated against experimental data such as profiles of temperature and gaseous species concentrations together with the cold gas efficiency. Further investigation has been made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters are compared and evaluated against each other through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained-flow coal gasification, even though the results of this study look physically reasonable and consistent in the parametric study, further modeling refinement together with systematic evaluation against experimental data is necessary to develop a reliable CFD-based design tool.
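As an illustration only of how a kinetic rate and an eddy-breakup mixing rate can be blended for a turbulent gas-phase reaction, here is a short Python sketch; the constants, the Arrhenius parameters, and the exact blending form are assumptions, not the study's actual subroutines:

```python
# Blend a kinetically limited rate with a mixing-limited (eddy-breakup) rate.
import numpy as np

def arrhenius_rate(T, A=2.0e8, E=1.2e5, R=8.314):
    """Kinetically limited rate [1/s] at gas temperature T [K] (illustrative A, E)."""
    return A * np.exp(-E / (R * T))

def eddy_breakup_rate(k, eps, C_ebu=4.0):
    """Mixing-limited rate [1/s] from turbulence kinetic energy k [m2/s2] and dissipation eps [m2/s3]."""
    return C_ebu * eps / k

def effective_rate(T, k, eps):
    r_kin = arrhenius_rate(T)
    r_mix = eddy_breakup_rate(k, eps)
    # harmonic-style (series) combination: the slower process limits the overall rate
    return r_kin * r_mix / (r_kin + r_mix)

print(effective_rate(T=1600.0, k=1.5, eps=20.0))
```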

$CO_2$ Transport for CCS Application in Republic of Korea (이산화탄소 포집 및 저장 실용화를 위한 대한민국에서의 이산화탄소 수송)

  • Huh, Cheol;Kang, Seong-Gil;Cho, Mang-Ik
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.13 no.1
    • /
    • pp.18-29
    • /
    • 2010
  • Offshore subsurface storage of $CO_2$ is regarded as one of the most promising options for responding to severe climate change. Marine geological storage of $CO_2$ involves capturing $CO_2$ from major point sources, transporting it to storage sites, and storing it in offshore subsurface geological structures such as depleted gas reservoirs and deep-sea saline aquifers. Since 2005, we have developed the relevant technologies for marine geological storage of $CO_2$, including surveys of possible storage sites and basic designs for $CO_2$ transport and storage processes. To design a reliable $CO_2$ marine geological storage system, we devised a hypothetical scenario and used a numerical simulation tool to study its detailed processes. The process of transporting $CO_2$ from onshore capture sites to offshore storage sites can be simulated with a thermodynamic equation of state. Before the main process design calculations, we compared and analyzed the relevant equations of state. To evaluate the predictive accuracy of the examined equations of state, we compared the results of numerical calculations with experimental reference data. Up to now, process design for $CO_2$ marine geological storage has been carried out mainly for pure $CO_2$. Unfortunately, the captured $CO_2$ mixture contains many impurities such as $N_2$, $O_2$, Ar, $H_{2}O$, $SO_x$ and $H_{2}S$. A small amount of impurities can change the thermodynamic properties and thus significantly affect the compression, purification and transport processes. This paper analyzes the major design parameters that are useful for constructing onshore and offshore $CO_2$ transport systems. On the basis of a parametric study of the hypothetical scenario, we suggest relevant variation ranges for the design parameters, particularly the flow rate, diameter, temperature, and pressure.
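A minimal sketch of how an equation of state can be evaluated at transport conditions, using the Peng-Robinson EOS for pure CO2 purely as an example (the study compares several candidate equations of state; the conditions below are illustrative):

```python
# Peng-Robinson compressibility factor and density for pure CO2.
import numpy as np

R = 8.314462618                              # J/(mol K)
Tc, Pc, omega = 304.13, 7.3773e6, 0.2239     # critical constants of CO2

def pr_compressibility(T, P):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # cubic in Z: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    return real.max()                        # largest real root (min() gives the liquid root)

T, P = 313.15, 10.0e6                        # 40 degC, 100 bar, a typical transport condition
Z = pr_compressibility(T, P)
rho = P * 0.044009 / (Z * R * T)             # density [kg/m3] from the CO2 molar mass
print(f"Z = {Z:.3f}, density ~ {rho:.0f} kg/m3")
```

The same routine, evaluated over a grid of temperatures and pressures and compared against reference density data, is the kind of accuracy check the abstract describes.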

The Impact of Service Level Management(SLM) Process Maturity on Information Systems Success in Total Outsourcing: An Analytical Case Study (토털 아웃소싱 환경 하에서 IT서비스 수준관리(Service Level Management) 프로세스 성숙도가 정보시스템 성공에 미치는 영향에 관한 분석적 사례연구)

  • Cho, Geun Su;An, Joon Mo;Min, Hyoung Jin
    • Asia pacific journal of information systems
    • /
    • v.23 no.2
    • /
    • pp.21-39
    • /
    • 2013
  • As the utilization of information technology and the turbulence of technological change increase in organizations, the adoption of IT outsourcing also grows as a way to manage IT resources more effectively and efficiently. In this IT management approach, the service level management (SLM) process becomes critical for deriving success from outsourcing from the viewpoint of end users in the organization. Even though much research on service level management and agreements has been done over the last decades, the performance of the SLM process has not been evaluated in terms of the final objectives of the management effort, or of success from the end users' view. This study explores the relationship between SLM maturity and IT outsourcing success from the users' point of view through an analytical case study of four client organizations served by an IT outsourcing vendor that is a member company of a major Korean conglomerate. To set up a model for the analysis, previous research on SLM process maturity and information systems success is reviewed. In particular, information systems success from the users' point of view is reviewed based on DeLone and McLean's study, currently accepted as a comprehensively tested model of information systems success. The model proposed in this study argues that SLM process maturity influences information systems success, evaluated in terms of the information quality, systems quality, service quality, and net effect proposed by DeLone and McLean. SLM process maturity is measured in the planning process, the implementation process, and the operation and evaluation process. Instruments for measuring the factors in the proposed constructs of information systems success and SLM process maturity were collected from previous research and evaluated for reliability and validity using appropriate statistical methods and pilot tests before the case study. Four cases from four different companies under one vendor were used for the analysis. All of the cases had SLA (Service Level Agreement) contracts and had implemented ITIL (IT Infrastructure Library), Six Sigma and BSC (Balanced Scorecard) methods for the last several years, meaning that all the client organizations had pursued concerted efforts to acquire quality services from IT outsourcing from the organizational and users' points of view. To compare differences among the four organizations in IT outsourcing success, t-tests and non-parametric analyses were applied to the data set collected from the organizations using survey instruments. The process maturities of the planning and implementation phases of SLM were found not to influence any dimension of information systems success from the users' point of view, while SLM maturity in the operation and evaluation phase was found to influence only systems quality from the users' view. This result runs counter to common arguments in IT outsourcing practice, which usually emphasize the importance of upfront planning and implementation processes in IT outsourcing projects.
According to an after-the-fact observation by an expert in one of the participating organizations, the clients' needs and motivations for the outsourcing contracts were already familiar to the vendor as a long-term partner under the same conglomerate, so maturity in the planning and implementation phases does not appear to be a differentiating factor for the success of IT outsourcing. This study lays a foundation for future research on IT outsourcing management and success, in particular on service level management. It can also guide managers in IT outsourcing practice to focus on the SLM process in the operation and evaluation stage, especially for long-term outsourcing contracts in a context as distinctive as Korean IT outsourcing projects. This study has some limitations in generalization because the sample size is small and the context is confined to a unique environment. For future exploration, survey-based research could be designed and implemented.
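An illustrative sketch of the non-parametric group comparison mentioned above (simulated survey scores, not the study's data), comparing one DeLone-McLean dimension such as systems quality across the four client organizations with the Kruskal-Wallis test:

```python
# Non-parametric comparison of survey scores across four organizations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# 12 simulated user ratings per organization on a 7-point scale
orgs = {f"org_{i}": rng.integers(3, 8, size=12).astype(float) for i in range(1, 5)}

h, p = stats.kruskal(*orgs.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```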


Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected in the SPM program, there is significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective and inaccurate. The purpose of this study was to develop a program that provides objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability, and the probabilities that the given position belongs to those structures, are tabulated. The program was coded in the IDL and Java languages for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the program. The validation study showed the relevance of the program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: This program will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
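A minimal sketch of the kind of lookup described above (the actual program was written in IDL and Java; the structure names, volume shape, and probabilities here are placeholders): given one SPAM probability volume per structure, report every structure with non-zero probability at a voxel position:

```python
# Probabilistic anatomical labeling from per-structure probability volumes.
import numpy as np

# SPAM volumes: probability (0-1) that a voxel belongs to each structure
spam = {
    "hippocampal formation": np.zeros((91, 109, 91)),
    "parahippocampal gyrus": np.zeros((91, 109, 91)),
}
spam["hippocampal formation"][30, 45, 30] = 0.82   # toy values for the example
spam["parahippocampal gyrus"][30, 45, 30] = 0.11

def label_position(x, y, z):
    """Return (structure, probability) pairs with non-zero probability, highest first."""
    # A real tool would first convert an MNI millimetre coordinate to this voxel index.
    hits = [(name, float(vol[x, y, z])) for name, vol in spam.items() if vol[x, y, z] > 0]
    return sorted(hits, key=lambda t: t[1], reverse=True)

print(label_position(30, 45, 30))
# [('hippocampal formation', 0.82), ('parahippocampal gyrus', 0.11)]
```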

Effects of Polar Literacy Education Program for Elementary and Middle School Students (초·중학생 대상 극지 소양 교육 프로그램의 효과)

  • Sueim Chung;Donghee Shin
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.3
    • /
    • pp.209-223
    • /
    • 2023
  • This study was conducted to evaluate the effectiveness of a polar literacy education program for elementary and middle school students and to derive implications for new education that responds to climate change. We developed modular education programs based on the seven principles of polar literacy established by the Polar-ICE team, divided them into two courses, one emphasizing science concepts and the other emphasizing humanities and sociological issues, and then selected and structured detailed programs suitable for the two courses. The two courses were applied to 26 elementary and middle school students for approximately 69 hours in a Saturday science class hosted by the Department of Science Education at a university in Seoul. The 26 students were divided into three groups: two groups completed the science education program for polar literacy and the humanities and social studies education program for polar literacy, respectively, while the third group, the control group, received general science education unrelated to polar literacy. Before and after the programs, all three groups responded to a polar literacy test and questionnaires that used vocabulary and presented scenes associated with polar regions. The test results were analyzed with the Wilcoxon signed-rank test, a non-parametric method, to assess improvement upon completion of the program. From a cognitive aspect, all three groups showed improvement in the knowledge area after completing the program; however, the experimental groups improved more than the control group, with a clear difference in the contents and materials explicitly covered. From an affective aspect, the difference between before and after the program was minor, but the group that focused on humanities and social issues showed a statistically significant improvement. Regarding changes in polar imagery, the two experimental groups tended to move from monotonous images toward more diverse images compared to the control group. Based on these results, we suggest methods to increase the effectiveness of polar literacy education programs, the importance of polar literacy as appropriate material for scientific thinking and earth system education, measures to improve attitudes related to the polar regions, and the need to link such programs to school curricula.
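A small illustrative sketch of the pre/post comparison described above (simulated scores, not the study's data), using SciPy's Wilcoxon signed-rank test because the small group sizes do not support a parametric paired test:

```python
# Paired, non-parametric pre/post comparison for one group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.integers(5, 15, size=9)           # knowledge scores before the program
post = pre + rng.integers(1, 5, size=9)     # scores after the program (simulated gains)

res = stats.wilcoxon(post, pre)             # Wilcoxon signed-rank test on the paired scores
print(f"W = {res.statistic:.1f}, p = {res.pvalue:.4f}")
```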

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance, Black-Litterman and risk parity models. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments and trains much faster than traditional boosting methods, so it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Estimation errors arise between the estimation period and the actual investment period because the optimized asset allocation model estimates investment proportions from historical data, and these errors adversely affect optimized portfolio performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; as a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long evaluation period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and the reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and estimation errors are reduced in 9 out of 10 industry sectors. The reduction in estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance from reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method to reduce estimation errors in the optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
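A condensed sketch of the idea (the feature set, hyperparameters, and data below are assumptions for illustration, not the authors' exact setup): XGBoost predicts each asset's next-period volatility from its lagged realized volatility, the predictions rescale a recent sample correlation matrix into a forward-looking covariance, and equal-risk-contribution (risk parity) weights are solved on that covariance:

```python
# Risk parity allocation on an XGBoost-predicted covariance matrix.
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(1000, 10))           # placeholder daily returns, 10 sectors

def realized_vol(r, window=20):
    return np.array([r[i:i + window].std(axis=0) for i in range(len(r) - window)])

vol = realized_vol(returns)                               # rolling realized volatility per asset
X = vol[:-1].reshape(-1, 1)                               # lagged volatility as the single feature
y = vol[1:].reshape(-1)                                   # next-period volatility as the target
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05).fit(X, y)
pred_vol = model.predict(vol[-1].reshape(-1, 1))          # predicted volatility per asset

corr = np.corrcoef(returns[-250:], rowvar=False)          # recent sample correlation
cov = corr * np.outer(pred_vol, pred_vol)                 # forward-looking covariance

def risk_parity_weights(cov):
    n = len(cov)
    def objective(w):
        rc = w * (cov @ w)                                # per-asset risk contributions
        return np.sum((rc - rc.mean())**2)                # equalize the contributions
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(objective, np.ones(n) / n, bounds=[(0, 1)] * n, constraints=cons)
    return res.x

print(risk_parity_weights(cov).round(3))
```

Replacing historical volatilities with predicted ones in the covariance is what, in this setup, narrows the gap between the estimation period and the actual investment period.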