• Title/Summary/Keyword: Complexity-Weighted

Search results: 156

IDS Model using Improved Bayesian Network to improve the Intrusion Detection Rate (베이지안 네트워크 개선을 통한 탐지율 향상의 IDS 모델)

  • Choi, Bomin;Lee, Jungsik;Han, Myung-Mook
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.495-503 / 2014
  • In recent years, studies of intrusion detection systems that collect and analyze network data such as packets or logs have been actively conducted to respond to network threats in the computer security field. In particular, a Bayesian network has the advantage that it can perform inference with only part of the provided data, so intrusion detection systems based on Bayesian networks have been studied previously. However, those models had limited detection performance because they did not consider problems such as the complexity of the relationships among network packets or the processing of continuous input data. Therefore, in this paper we propose two methodologies based on K-means clustering that improve the detection rate by addressing the problems of the prior models. First, the detection rate is improved by setting the interval ranges of the nodes more precisely based on K-means clustering. Second, it is improved by calculating a more robust CPT through weighted learning, again based on K-means clustering. We conducted experiments to verify the performance of the proposed methodologies by comparing K_WTAN_EM, which applies both proposed methodologies, with the prior models. The experimental results show that the detection rate of the proposed model is about 7.78% higher than the existing NBN (Naive Bayesian Network) IDS model and about 5.24% higher than the TAN (Tree Augmented Bayesian Network) IDS model, demonstrating the merit of the proposed ideas.
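The interval-setting step described in this abstract can be illustrated with a small sketch: a continuous traffic feature is clustered with K-means and the midpoints between adjacent cluster centers become the state boundaries of a Bayesian-network node. This is only an illustrative reading of the idea, not the paper's K_WTAN_EM pipeline; the feature values and function names below are invented for the example.

```python
# Minimal sketch: discretizing a continuous feature into Bayesian-network
# node states using K-means cluster boundaries (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def kmeans_interval_edges(values, n_states=4, random_state=0):
    """Cluster a 1-D feature and return interval edges at the midpoints
    between adjacent cluster centers."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=random_state)
    km.fit(values.reshape(-1, 1))
    centers = np.sort(km.cluster_centers_.ravel())
    return (centers[:-1] + centers[1:]) / 2.0   # len == n_states - 1

def discretize(values, edges):
    """Map continuous values to discrete node states 0..len(edges)."""
    return np.digitize(values, edges)

# Example: a synthetic packet-size feature from network traffic.
rng = np.random.default_rng(42)
packet_size = np.concatenate([rng.normal(60, 5, 500),
                              rng.normal(512, 40, 300),
                              rng.normal(1400, 60, 200)])
edges = kmeans_interval_edges(packet_size, n_states=3)
states = discretize(packet_size, edges)
print("interval edges:", edges)
print("state counts:", np.bincount(states))
```

The same boundaries could then be reused when counting cases for the CPT, with cluster-distance-based weights standing in for the weighted learning the abstract mentions.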

Network Structures of The Metropolitan Seoul Subway Systems (서울 대도시권 지하철망의 구조적 특성 분석)

  • Park, Jong-Soo;Lee, Keum-Sook
    • Journal of the Economic Geographical Society of Korea / v.11 no.3 / pp.459-475 / 2008
  • This study analyzes the network structure of the Metropolitan Seoul subway system by applying complex network analysis methods. For this purpose, we construct the Metropolitan Seoul subway system as a network graph and then calculate various indices introduced in complex network analysis. The structural characteristics of the Metropolitan Seoul subway network are discussed in terms of these indices. In particular, this study determines the shortest paths between nodes based on weighted distance (physical and time distance) as well as topological network distance, since urban travel movements are more sensitive to the former. We introduce an accessibility measure based on the shortest distance in terms of both physical distance and network distance, and then compare the two resulting spatial structures. Accessibility levels of the system have improved overall, and the accessibility gap between centrally located subway stops and remote ones has narrowed over the last 10 years. Passenger traffic volumes are extracted from real passenger transaction databases using data mining techniques and are mapped with GIS. Clear differences are revealed between the spatial patterns of real passenger flows and accessibility. That is, passenger flows of the Metropolitan Seoul subway system are related to population distribution and land use around subway stops as well as to the accessibility provided by the subway network.
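The weighted-versus-topological shortest-path comparison mentioned above can be sketched on a small toy graph; the stations, travel times, and the simple mean-distance accessibility index below are illustrative stand-ins, not the actual Seoul network or the paper's exact measure.

```python
# Minimal sketch: compare topological (hop-count) and weighted (travel-time)
# shortest paths and a simple accessibility index on a toy subway graph.
import networkx as nx

G = nx.Graph()
edges = [("A", "B", 2.0), ("B", "C", 3.5), ("C", "D", 1.5),
         ("A", "D", 6.0), ("B", "D", 2.5)]
G.add_weighted_edges_from(edges, weight="time")

# Topological distance: every edge counts as one hop.
hops = dict(nx.all_pairs_shortest_path_length(G))
# Weighted distance: sum of edge travel times.
times = dict(nx.all_pairs_dijkstra_path_length(G, weight="time"))

# Simple accessibility index: mean shortest distance to all other stations
# (lower = more accessible); analogous in spirit to the paper's measure.
for node in G.nodes:
    acc_topo = sum(hops[node].values()) / (len(G) - 1)
    acc_time = sum(times[node].values()) / (len(G) - 1)
    print(f"{node}: mean hops={acc_topo:.2f}, mean time={acc_time:.2f}")
```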


Fuzzy discretization with spatial distribution of data and Its application to feature selection (데이터의 공간적 분포를 고려한 퍼지 이산화와 특징선택에의 응용)

  • Son, Chang-Sik;Shin, A-Mi;Lee, In-Hee;Park, Hee-Joon;Park, Hyoung-Seob;Kim, Yoon-Nyun
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.2 / pp.165-172 / 2010
  • In clinical data mining, choosing the optimal subset of features is important not only to reduce computational complexity but also to improve the usefulness of the model constructed from the given data. Moreover, the threshold values (i.e., cut-off points) of the selected features are used in experts' clinical decision criteria for the differential diagnosis of diseases. In this paper, we propose a fuzzy discretization approach based on the spatial distribution of data with continuous attributes, which is evaluated by measuring the degree of separation of redundant attribute values in the overlapping region. The weighted average of the redundant attribute values is then used to determine the threshold value for each feature, and rough set theory is utilized to select a subset of relevant features from the overall feature set. To verify the validity of the proposed method, we compared experimental results for a classification problem involving 668 patients with a chief complaint of dyspnea, using three conventional discretization methods (equal-width, equal-frequency, and entropy-based) and the proposed discretization method. From the experimental results, we confirm that the discretization methods with fuzzy partitions give better results on two evaluation measures, average classification accuracy and G-mean, than those with hard partitions.
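The cut-off selection described above, a threshold taken as a weighted average of attribute values in the region where the two class distributions overlap, can be sketched roughly as follows; the weighting scheme and the synthetic dyspnea-score data are simplified assumptions, not the paper's fuzzy-membership formulation.

```python
# Rough sketch: choose a feature threshold as a weighted average of the
# values falling in the overlap between the two class distributions.
import numpy as np

def overlap_weighted_threshold(x_pos, x_neg):
    """x_pos, x_neg: feature values for the two classes."""
    lo = max(x_pos.min(), x_neg.min())   # start of overlapping region
    hi = min(x_pos.max(), x_neg.max())   # end of overlapping region
    if lo >= hi:                         # no overlap: midpoint between groups
        return (x_pos.mean() + x_neg.mean()) / 2.0
    both = np.concatenate([x_pos, x_neg])
    in_overlap = both[(both >= lo) & (both <= hi)]
    # Weight each redundant value by how central it is within the overlap.
    w = 1.0 - np.abs(in_overlap - (lo + hi) / 2.0) / ((hi - lo) / 2.0)
    return float(np.average(in_overlap, weights=w + 1e-9))

rng = np.random.default_rng(0)
symptom_pos = rng.normal(7.0, 1.5, 200)   # synthetic scores, class 1
symptom_neg = rng.normal(4.0, 1.5, 200)   # synthetic scores, class 0
print("estimated cut-off:", overlap_weighted_threshold(symptom_pos, symptom_neg))
```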

Denoising on Image Signal in Wavelet Basis with the VisuShrink Technique Using the Estimated Noise Deviation by the Monotonic Transform (웨이블릿 기저의 영상신호에서 단조변환으로 추정된 잡음편차를 사용한 VisuShrink 기법의 잡음제거)

  • 우창용;박남천
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.111-118 / 2004
  • Techniques based on thresholding of wavelet coefficients are gaining popularity for denoising data because of their reasonable performance at low complexity. VisuShrink, which removes noise with the universal threshold, is one of these techniques. The universal threshold is proportional to the noise deviation and to the number of data samples. In general, because the noise deviation is not known, it must be estimated in order to determine the value of the universal threshold. However, a method of estimating the noise deviation has been known only for the finest-scale wavelet coefficients, so noise at coarse scales cannot be removed with VisuShrink. Here we propose a new denoising method that removes the noise in every scale except the coarsest using the VisuShrink approach. The noise deviation of each band is estimated by a monotonic transform, and the weighted deviation, the product of the estimated noise deviation and a weight, is applied to the universal threshold. Using this universal threshold together with soft thresholding, the noise in each band is removed. The denoising characteristics of the proposed method are compared with those of the traditional VisuShrink and SureShrink methods. The results show that the proposed method is effective in removing Gaussian noise and quantization noise.
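For reference, the baseline VisuShrink scheme that this paper builds on can be sketched as follows: estimate the noise deviation from the finest-scale detail coefficients (robust MAD estimate), form the universal threshold sigma * sqrt(2 ln n), and soft-threshold the detail subbands. The per-scale deviation estimate via the monotonic transform, which is the paper's contribution, is not reproduced here.

```python
# Standard VisuShrink-style denoising sketch (not the paper's method).
import numpy as np
import pywt

def visushrink_denoise(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust noise estimate from the finest-scale diagonal detail band.
    finest_diag = coeffs[-1][-1]
    sigma = np.median(np.abs(finest_diag)) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(image.size))
    # Soft-threshold every detail subband; keep the coarsest approximation.
    denoised = [coeffs[0]]
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, threshold, mode="soft")
                              for d in detail))
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.default_rng(1).normal(0, 25, (128, 128)) + 128.0
print(visushrink_denoise(noisy).shape)
```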


Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health / v.11 no.1 / pp.123-127 / 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis related patient groups (DRGs) which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements are established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients as well as accurate and comparable cost information can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The 'AUTOGRP System' was used to generate 198 DRGs in which the entire range of Medicare patients was split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption. The 'Departmental Method' was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource use profiles of hospitals, in which the magnitude of differences in the resource use or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated and the significance of these indices in the explanation of hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used for determining factors in hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated not only as a strong tool for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by their severity of illness. 2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1,538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18. The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as the determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of the interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
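As a rough illustration of the kind of linear cost model described above, the sketch below regresses a synthetic total case cost on a casemix index and two other hospital characteristics; the variables and data are invented for the example and are not the study's Connecticut Medicare data.

```python
# Illustrative OLS model of hospital cost vs. casemix and hospital traits.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_hospitals = 34
casemix = rng.normal(1.0, 0.15, n_hospitals)       # casemix index
beds = rng.integers(100, 800, n_hospitals)          # hospital size
occupancy = rng.uniform(0.6, 0.95, n_hospitals)     # occupancy rate
X = np.column_stack([casemix, beds, occupancy])
totc = 900 * casemix + 0.1 * beds - 120 * occupancy + rng.normal(0, 60, n_hospitals)

model = LinearRegression().fit(X, totc)
print("R^2 on fitted data:", round(model.score(X, totc), 3))
print("coefficients:", model.coef_)
```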


Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • The construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, with particularly strong growth in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies during the bidding process for an EPC project is limited, and reviewing all the risk terms in the ITB documents is extremely challenging due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but limitations in practical use remained due to data-related problems such as the limited availability of labeled data and class imbalance. Therefore, this study aims to develop an AI model that classifies contract terms in detail based on the FIDIC Yellow Book 2017 (Fédération Internationale des Ingénieurs-Conseils) standard, rather than defining and classifying risk terms as in previous research. A multi-text classification capability is necessary because the contract terms that need to be reviewed in detail may vary depending on the scale and type of the project. To enhance the performance of the multi-text classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and we conducted a four-step experiment to validate the model's performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted-average F1-score of 76% on the classification of 57 contract terms.
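The final ensembling step can be sketched as soft voting over the class probabilities of the two classifiers, scored with a weighted-average F1. The probability arrays below are random stand-ins for the outputs of the fine-tuned ITB-ELECTRA and Legal-BERT models; loading and fine-tuning the PLMs is omitted.

```python
# Soft-voting ensemble of two classifiers' probabilities + weighted F1.
import numpy as np
from sklearn.metrics import f1_score

n_clauses, n_classes = 500, 57          # 57 FIDIC-based contract-term classes
rng = np.random.default_rng(3)
y_true = rng.integers(0, n_classes, n_clauses)

# Stand-ins for the softmax outputs of the two models on the same clauses.
probs_electra = rng.dirichlet(np.ones(n_classes), size=n_clauses)
probs_legalbert = rng.dirichlet(np.ones(n_classes), size=n_clauses)

# Average the probabilities, then take the arg-max class per clause.
probs_ensemble = (probs_electra + probs_legalbert) / 2.0
y_pred = probs_ensemble.argmax(axis=1)

print("weighted F1:", round(f1_score(y_true, y_pred, average="weighted"), 3))
```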