• Title/Summary/Keyword: optimal classification method


A Study for Application of Standard and Performance Test According to Purpose and Subject of Respiratory Medical Device (호흡보조의료기기의 사용목적 및 대상에 따른 규격적용 방안 및 성능에 관한 연구)

  • Park, Junhyun;Ho, YeJi;Lee, Duck Hee;Choi, Jaesoon
    • Journal of Biomedical Engineering Research / v.40 no.5 / pp.215-221 / 2019
  • A respiratory medical device delivers artificial respiration, optimal oxygen, or a controlled amount of humidification to a patient who has lost the ability to breathe spontaneously. Such devices include ventilators used for chronic obstructive pulmonary disease (COPD), anesthesia, or emergency care, and positive airway pressure devices for treating sleep apnea; as the worldwide COPD and elderly populations surge, the market for respiratory medical devices continues to grow. Because purpose, intended use, and method of use overlap across similar items, the boundaries between device categories have become blurred. In addition, because no dedicated reference standard has been established for positive airway pressure devices, the ventilator standard is applied to them. Therefore, in this study we propose clear classification criteria for respiratory medical devices according to purpose, intended use, and method of use, and provide safety and performance evaluation guidelines for those items to support quality control of the devices and to contribute to rapid regulation and the improvement of public health. This study investigated safety and performance test methods through the operating principles of respiratory medical devices, national and international standards, domestic and international licensing status, and related literature. The safety and performance test items were derived from the particular standards for ventilators (ISO 80601-2-72), positive airway pressure devices (ISO 80601-2-70), and humidifiers (ISO 80601-2-74), and from the safety requirements for the home healthcare environment (IEC 60601-1-11). After review by an expert consultation body including manufacturers, importers, certified test and inspection institutions, and academia, the final guidelines were established through revision and supplementation. We therefore propose guidelines for evaluating the safety and performance of respiratory medical devices in step with ongoing technology development.

PCA­based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics / v.14 no.4 / pp.211-217 / 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). In the mean-squared-error sense, PCA provides an optimal linear mapping of the signals spread across a group of variables: these signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings, and because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated gold connection lanes terminating in an 8×8 array (spacing 200 µm, electrode diameter 30 µm) in the center of the plate. The MEA 60 system was used to record retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
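A minimal sketch of the PC1/PC2 projection step described above, assuming threshold-detected spike waveforms for one MEA channel are already stacked in a matrix. The placeholder data, variable names, and the KMeans clustering step are illustrative choices, not part of the paper's analysis tool.

```python
# Project threshold-detected spike waveforms onto PC1/PC2 and cluster them,
# roughly following the procedure described above. `waveforms` is assumed to be
# an (n_spikes, n_timepoints) array for one MEA channel; the cluster count is
# an illustrative choice, not taken from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(500, 40))          # placeholder spike waveforms

pca = PCA(n_components=2)
scores = pca.fit_transform(waveforms)           # columns: PC1 and PC2 per spike

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(scores.shape, np.bincount(labels))        # (500, 2) and the cluster sizes
```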


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown gains as remarkable as those of DT ensembles. Recently, several works have reported that ensemble performance can degrade when the classifiers in an ensemble are highly correlated, producing a multicollinearity problem; they have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation causes multicollinearity, which degrades ensemble performance. Kim's work (2009) compared traditional prediction algorithms such as NN, DT, and SVM for bankruptcy prediction on Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas in ensemble learning the DT ensemble shows greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the ensemble's performance degradation is due to multicollinearity and that optimization of the ensemble is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of classifiers during the optimization process. CO-NN uses a genetic algorithm (GA), which has been widely used for various optimization problems, to solve the coverage optimization problem. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the correlations of the ensemble taken into account. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Some research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in future work.
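A schematic sketch of the GA-based coverage optimization described above: a binary chromosome marks which base classifiers stay in the sub-ensemble, and the fitness rewards ensemble accuracy while penalizing multicollinearity measured by the VIF. This is an illustrative re-implementation under assumed data and parameters, not the Excel/Evolver setup used in the paper.

```python
# GA over binary masks selecting a sub-ensemble; fitness = majority-vote accuracy
# minus a penalty when the largest VIF among selected classifiers exceeds a limit.
import numpy as np

def vif_max(preds):
    """Largest variance inflation factor among the selected classifiers' outputs."""
    X = preds.astype(float)
    vifs = []
    for j in range(X.shape[1]):
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ coef).var() / max(1e-12, y.var())
        vifs.append(1.0 / max(1e-6, 1 - r2))
    return max(vifs)

def fitness(mask, preds, y_true, vif_limit=10.0):
    if mask.sum() == 0:
        return 0.0
    sub = preds[:, mask.astype(bool)]
    vote = (sub.mean(axis=1) >= 0.5).astype(int)                 # majority vote
    acc = (vote == y_true).mean()
    penalty = 0.5 if sub.shape[1] > 1 and vif_max(sub) > vif_limit else 0.0
    return acc - penalty

def ga_select(preds, y_true, pop=30, gens=50, rng=np.random.default_rng(0)):
    n = preds.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, preds, y_true) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]     # selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            flip = rng.random(n) < 0.05                          # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, preds, y_true) for m in population])
    return population[scores.argmax()]

# Toy usage: `preds` holds 0/1 validation predictions of 10 base classifiers.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
preds = np.column_stack([(y ^ (rng.random(200) < 0.2)).astype(int) for _ in range(10)])
print("selected mask:", ga_select(preds, y))
```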

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although there have been cases of evaluating the value of specific companies or projects, centered on developed countries in North America and Europe, since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have spread only gradually. There are, of course, several online systems that qualitatively evaluate a technology's grade or the patent rating of the technology to be evaluated, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. More recently, however, a web-based technology valuation system referred to as the 'STAR-Value system', which calculates quantitative values of the subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values anticipated future economic income at its present worth, and the relief-from-royalty method, which calculates the present value of royalties, treating the contribution of the subject technology to the business value created as the royalty rate. We examine how models and related support information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on the classification of the technology to be evaluated by the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability data of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercialization subject is provided in a data-driven way, or if the estimated value ranges of similar technologies by industry sector are drawn from evaluation cases already completed and accumulated in the database, the STAR-Value system is expected to present highly accurate value ranges in real time by intelligently linking its support modules. Together with the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system is intended to be used more systematically and in a data-driven way through the optimal model selection guideline module, the intelligent technology value range reasoning module, and the similar-company-based market share prediction module. In addition, the research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can be used to validate and apply in practice the theoretical foundations of the technology valuation field, and it is expected to be utilized in various areas of technology commercialization.
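A minimal sketch of the income-approach DCF calculation described above: projected cash flows over the technology's economic life are discounted at the WACC and then scaled by a technology contribution factor. All figures and parameter names are illustrative assumptions, not values or formulas taken from the STAR-Value system itself.

```python
# Discount projected cash flows at the WACC, then apply a technology contribution
# (royalty-like) factor to isolate the value attributable to the subject technology.
def discounted_cash_flow(cash_flows, wacc, tech_contribution):
    """Present value of `cash_flows` (one per year) at discount rate `wacc`,
    scaled by the share of value attributable to the subject technology."""
    pv = sum(cf / (1 + wacc) ** (t + 1) for t, cf in enumerate(cash_flows))
    return pv * tech_contribution

# Example: a 5-year cash-flow projection (the assumed technology life), 12% WACC,
# and 30% of business value attributed to the technology.
projected = [120_000, 150_000, 180_000, 200_000, 210_000]
print(round(discounted_cash_flow(projected, wacc=0.12, tech_contribution=0.30)))
```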

The Design and Structural Analysis of the APV Module Structure Using Topology Optimization (위상 최적설계를 이용한 APV Module Structure의 설계 및 구조해석)

  • Kang, Sang-Hoon;Kim, Jun-Su;Park, Young-Chul
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.3 / pp.22-30 / 2017
  • This paper presents the results of weight reduction through topology optimization and a structural safety evaluation through structural analysis of a pressure system structure installed in an offshore plant. For structures installed in offshore plants, it is very important to design for the wind load and the dynamic load at sea in addition to the self-weight, and to evaluate structural stability. In this study, the wind and dynamic load conditions according to the DNV classification rule were applied to the analysis. The topology optimization method was applied to the structure to obtain a lightweight shape, and the optimization analysis identified the stress concentration regions. Topology optimization shapes the design by removing unnecessary elements from the design domain; the retained material was then redesigned into a rib shape (see the sketch below). Based on the resulting lightweight optimal shape, a safety evaluation through structural analysis and an assessment of the shape's suitability were conducted. This study thus suggests a design and safety evaluation approach for an offshore plant structure whose structural safety is difficult to evaluate with an actual test.
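A schematic sketch of the element-removal idea the abstract describes, i.e. iteratively deleting under-utilized elements from the design domain until a target mass fraction remains. This is a generic, evolutionary-structural-optimization-style loop with placeholder stress values, not the commercial finite element workflow used in the study; in a real workflow the element stresses would be recomputed after each removal step.

```python
# Iteratively drop elements whose stress utilization falls below a growing
# rejection threshold until only the target fraction of the design domain remains.
import numpy as np

def remove_low_stress_elements(stresses, keep_fraction=0.6, reject_step=0.02):
    """Return a boolean mask of retained elements given per-element von Mises stress."""
    active = np.ones(stresses.size, dtype=bool)
    reject_ratio = reject_step
    while active.mean() > keep_fraction:
        threshold = reject_ratio * stresses[active].max()
        active &= stresses >= threshold           # drop under-utilized elements
        reject_ratio += reject_step               # gradually tighten the criterion
    return active

# Toy usage with random element stresses standing in for FE results.
sigma = np.random.default_rng(0).uniform(5, 250, size=1000)   # MPa, placeholder
mask = remove_low_stress_elements(sigma, keep_fraction=0.6)
print(f"retained {mask.mean():.0%} of elements")
```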

Deep Learning Based Floating Macroalgae Classification Using Gaofen-1 WFV Images (Gaofen-1 WFV 영상을 이용한 딥러닝 기반 대형 부유조류 분류)

  • Kim, Euihyun;Kim, Keunyong;Kim, Soo Mee;Cui, Tingwei;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.293-307 / 2020
  • Every year, floating macroalgae, the green and golden tides, are detected in massive quantities in the Yellow Sea and the East China Sea. Once they flow into aquaculture facilities or onto beaches, removing them causes enormous economic losses. Remote sensing is currently used to detect floating macroalgae flowing toward the coast, but it is difficult to detect them precisely because their spectral bands overlap with those of other targets in the ocean, and it is also difficult to distinguish between the green and golden tides because they have similar spectral characteristics. Therefore, we tried to distinguish between the green and golden tides by applying a deep learning method to satellite images. To determine the network, the optimal training conditions for AlexNet were searched, and Gaofen-1 WFV images were used as the dataset to train and validate the network. The network trained under these conditions was then used on the test data. As a result, the accuracy on the test data is 88.89%, and the green and golden tides can be distinguished with precisions of 66.67% and 100%, respectively. This is interpreted as AlexNet being able to pick up on the subtle differences between the green and golden tides. Through this study, it is expected that the green and golden tides can be effectively classified against various objects in the ocean and distinguished from each other.
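A minimal sketch of training an AlexNet classifier for the two floating-macroalgae classes (green tide vs. golden tide). The dataset path, patch size, and hyperparameters are illustrative placeholders, not the training conditions reported for the Gaofen-1 WFV experiments in the paper.

```python
# Train an AlexNet with a two-class head on labelled image patches.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),        # AlexNet's expected input size
    transforms.ToTensor(),
])
# Hypothetical folder layout: patches/train/green_tide/*.png, patches/train/golden_tide/*.png
train_set = datasets.ImageFolder("patches/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights=None)                      # train from scratch
model.classifier[6] = nn.Linear(4096, 2)                  # two output classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```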

Metastatic Axillary Lymph Node Ratio (LNR) is Prognostically Superior to pN Staging in Patients with Breast Cancer -- Results for 804 Chinese Patients from a Single Institution

  • Xiao, Xiang-Sheng;Tang, Hai-Lin;Xie, Xin-Hua;Li, Lai-Sheng;Kong, Ya-Nan;Wu, Min-Qing;Yang, Lu;Gao, Jie;Wei, Wei-Dong;Xie, Xiaoming
    • Asian Pacific Journal of Cancer Prevention / v.14 no.9 / pp.5219-5223 / 2013
  • The numbers of axillary lymph nodes involved and retrieved are important prognostic factors in breast cancer. The purpose of our study was to investigate whether the lymph node ratio (LNR) is a better prognostic factor than pN staging for predicting disease-free survival (DFS) in breast cancer patients. The analysis was based on 804 breast cancer patients who had undergone axillary lymph node dissection between 1999 and 2008 at Sun Yat-Sen University Cancer Center. Optimal cutoff points of the LNR were calculated using X-tile software and validated by bootstrapping. Patients were then divided into three groups (low-, intermediate-, and high-risk) according to the cutoff points. Risk factors predicting relapse were assessed using Cox proportional hazards analysis. DFS was estimated using the Kaplan-Meier method and compared by the log-rank test. The 5-year DFS rate decreased significantly with increasing LNR and pN. Univariate analysis found that pT, pN, LNR, molecular type, HER2, pTNM stage, and radiotherapy classified patients into groups with significantly different prognoses. On multivariate analysis, only the LNR classification was retained as an independent prognostic factor. Furthermore, there was a significant prognostic difference among LNR categories within the pN2 category, but no apparent prognostic difference between pN categories within any LNR category. Therefore, the LNR rather than pN staging is preferable for predicting DFS in node-positive breast cancer patients, and routine clinical decision-making should take the LNR into consideration.
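A minimal sketch of the survival workflow described above: compute the lymph node ratio (positive over retrieved nodes), group patients by cutoff points, compare disease-free survival with the log-rank test, and fit a Cox model. The cutoffs, file name, and column names are illustrative assumptions; the paper derived its cutoffs with X-tile and bootstrapping.

```python
# LNR grouping, log-rank comparison of DFS, and a Cox proportional hazards fit
# using the lifelines library. Column names and cutoffs are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("breast_cancer_cohort.csv")            # hypothetical cohort file
df["lnr"] = df["positive_nodes"] / df["retrieved_nodes"]
df["lnr_group"] = pd.cut(df["lnr"], bins=[0, 0.2, 0.65, 1.0],
                         labels=["low", "intermediate", "high"],
                         include_lowest=True)

# Log-rank comparison of DFS across the LNR risk groups.
result = multivariate_logrank_test(df["dfs_months"], df["lnr_group"], df["relapse"])
print(result.p_value)

# Cox proportional hazards model with the LNR alongside other (numeric) covariates.
cph = CoxPHFitter()
cph.fit(df[["dfs_months", "relapse", "lnr", "pT", "age"]],
        duration_col="dfs_months", event_col="relapse")
cph.print_summary()
```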

DCT Coefficient Block Size Classification for Image Coding (영상 부호화를 위한 DCT 계수 블럭 크기 분류)

  • Gang, Gyeong-In;Kim, Jeong-Il;Jeong, Geun-Won;Lee, Gwang-Bae;Kim, Hyeon-Uk
    • The Transactions of the Korea Information Processing Society / v.4 no.3 / pp.880-894 / 1997
  • In this paper, we propose a new algorithm that performs the DCT (Discrete Cosine Transform) only within an area reduced by predicting which quantized coefficients will be zero. The proposed algorithm not only decreases encoding and decoding time by reducing the amount of FDCT (forward DCT) and IDCT (inverse DCT) computation, but also increases the compression ratio by applying a different horizontal or vertical zig-zag scan, according to the classified block size of each block, during Huffman coding. Traditional image coding performs the same DCT computation and zig-zag scan over all blocks; the proposed algorithm instead reduces FDCT computation time on the encoder by setting the quantized coefficients outside the classified block size to zero rather than computing them, and reduces IDCT computation time on the decoder by performing the IDCT only on the dequantized coefficients within the classified block size. In addition, the algorithm shortens run lengths by carrying out the horizontal or vertical zig-zag scan appropriate to the classified block characteristics, further improving the compression ratio. The proposed algorithm can also be applied to 16×16 block processing, for which the compression ratio and image resolution are optimal but the encoding and decoding times are long, and it can be extended to motion image coding requiring real-time processing.
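A minimal sketch of the block transform step discussed above: a 2-D DCT on an 8×8 block followed by a zig-zag scan of the coefficients. The paper's block-size classification and its horizontal/vertical scan variants are not reproduced here; this only illustrates the standard FDCT and zig-zag ordering they build on.

```python
# 2-D type-II DCT on an 8x8 block, then read the coefficients in zig-zag order.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Type-II 2-D DCT with orthonormal scaling, applied along rows then columns."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def zigzag(block):
    """Return coefficients in conventional zig-zag order (low to high frequency)."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

block = np.arange(64, dtype=float).reshape(8, 8)   # placeholder 8x8 pixel block
coeffs = zigzag(dct2(block))
print(coeffs[:8])   # DC coefficient followed by the lowest-frequency AC terms
```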


Estimation of Soft Ground Piezocone Factors at Gwangyang, Jeonnam (전남 광양지역 연약지반의 피에조콘계수 산정)

  • Oh, Dongchoon;Kim, Kibeom;Baek, Seungcheol
    • Journal of the Korean GEO-environmental Society / v.20 no.2 / pp.59-67 / 2019
  • Using the results of laboratory soil tests, field vane tests, and piezocone penetration tests, the engineering characteristics of the soft ground on the east side of Gwangyang Port, located on the south coast of Jeollanam-do, were investigated, and the optimal piezocone penetration measurement depth for calculating the piezocone factor was derived. In this paper, the results of 61 laboratory soil tests, 226 field vane tests, and 26 piezocone penetration tests were used. The laboratory soil tests showed that physical properties such as specific gravity, moisture content, liquid limit, and plasticity index are higher than in other south coast regions, while the mechanical properties, unconfined compressive strength and undrained shear strength, were relatively small and widely distributed. According to the plasticity chart, the ground was classified as high-compressibility and low-compressibility clay, mostly corresponding to Type 3 clay on Robertson's (1990) classification chart. The piezocone factor was calculated by an empirical method based on the undrained shear strength obtained from the field vane tests. In an analysis of three different depth ranges, performed to set the appropriate piezocone measurement depth range for comparison, using the average value over a range of five times the vane length showed the highest correlation.
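A minimal sketch of the empirical cone-factor back-calculation implied above, using the standard relation N_kt = (q_t − σ_v0) / s_u with corrected cone resistance q_t, total overburden stress σ_v0, and vane undrained shear strength s_u. The numbers below are illustrative, not data from the Gwangyang site.

```python
# Back-calculate the piezocone cone factor N_kt from paired CPTu and vane data.
import numpy as np

def cone_factor(q_t, sigma_v0, s_u):
    """Piezocone factor N_kt = (q_t - sigma_v0) / s_u, all stresses in kPa."""
    return (q_t - sigma_v0) / s_u

q_t = np.array([450.0, 520.0, 600.0])        # corrected cone resistance, kPa
sigma_v0 = np.array([90.0, 110.0, 130.0])    # total overburden stress, kPa
s_u = np.array([25.0, 28.0, 33.0])           # vane undrained shear strength, kPa

print(cone_factor(q_t, sigma_v0, s_u))       # typical N_kt values fall around 10-20
```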

The cutoff criterion and the accuracy of the polygraph test for crime investigation (범죄수사를 위한 거짓말탐지 검사(polygraph test)의 판정기준과 정확성)

  • Yu Hwa Han ;Kwangbai Park
    • Korean Journal of Culture and Social Issue / v.14 no.4 / pp.103-117 / 2008
  • The polygraph test administered by the Korean Prosecutors Office (KPO) for crime investigations customarily uses a score of -12 as the cutoff point separating subjects who lie from those who tell the truth. This criterion differs from the one (-13) suggested by Backster (1963), who invented the particular lie detection method. Applying signal detection theory to real polygraph data obtained from crime suspects by the KPO, the present study identified a score of -8 as the optimal criterion yielding the highest overall accuracy of the polygraph test. Classifying subjects with -8 as the criterion produced the highest accuracy (83.17%), compared with Backster's criterion (76.24%) and the KPO's criterion (80.20%). However, the new criterion was also found to produce more false-positive cases. Based on these results, it is recommended to use a score of -8 as the criterion when overall accuracy is most important, but a score of -12 or -13 when avoiding false positives matters more than overall accuracy.
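A minimal sketch of the cutoff search described above: sweep candidate cutoff scores and pick the one that maximizes overall accuracy in separating deceptive from truthful subjects. The scores and labels here are synthetic stand-ins, not the KPO case data analyzed in the paper.

```python
# Sweep candidate cutoff scores and select the one with the highest overall accuracy.
import numpy as np

rng = np.random.default_rng(0)
# Negative scores lean toward "deception"; label 1 means the subject actually lied.
scores = np.concatenate([rng.normal(-15, 6, 100), rng.normal(0, 6, 100)])
lied = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])

def accuracy_at(cutoff):
    predicted_lie = (scores <= cutoff).astype(int)   # at or below cutoff -> deception
    return (predicted_lie == lied).mean()

candidates = np.arange(scores.min(), scores.max())
best = max(candidates, key=accuracy_at)
print(f"best cutoff ≈ {best:.0f}, accuracy = {accuracy_at(best):.2%}")
```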
