• Title/Summary/Keyword: Machine System

Search Results: 8,774

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decisions. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, with so many products and reviews, the advantages of e-commerce can be overwhelmed by rising search costs: reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help potential customers find the most useful information about products without much difficulty, online stores have devised various ways for customers to write and rate product reviews. Different methods have been developed to classify and recommend useful reviews, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes such as its length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix recording the number of occurrences of each term in each review.
Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Thus, researchers typically delete sparse terms, since words that rarely appear have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index to select the words to be deleted. Many words appear in both classes (useful and not useful), and these words have little or even a negative effect on classification performance. We define such words as neutral terms and delete those that appear with similar frequency in both classes. After deleting sparse words, we selected further words to delete in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes among total votes as the threshold for classifying reviews as useful or not useful. We randomly selected 1,500 useful and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared classification performance in terms of precision, recall, and F-measure. Though performance varies across product categories and data sets, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Thus, term-deletion methods and classification algorithms should be chosen carefully for each data set.
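The sparsity-and-neutrality filtering described above can be sketched as follows. The neutrality index here (ratio of a term's document frequencies in the two classes) is an illustrative assumption, since the abstract does not give the exact formula, and the mini-corpus is hypothetical:

```python
from collections import Counter

# Hypothetical mini-corpus of (review tokens, label) pairs; labels follow
# the paper's 60%-helpful-vote threshold (1 = useful, 0 = not useful).
reviews = [
    (["battery", "great", "fast"], 1),
    (["battery", "poor", "slow"], 0),
    (["great", "screen"], 1),
    (["poor", "screen"], 0),
]

# Document frequency of each term within each class
useful = Counter(t for toks, y in reviews if y == 1 for t in set(toks))
not_useful = Counter(t for toks, y in reviews if y == 0 for t in set(toks))

def neutrality(term):
    """Assumed neutrality index: 1 when a term appears equally often in
    both classes, 0 when it appears in only one class (the abstract does
    not state the paper's exact formula)."""
    u, n = useful[term], not_useful[term]
    return min(u, n) / max(u, n) if max(u, n) else 0.0

vocab = set(useful) | set(not_useful)
kept = sorted(t for t in vocab if neutrality(t) < 1.0)  # drop fully neutral terms
```

The surviving vocabulary would then index the columns of the (much smaller) term-document matrix fed to Information Gain or SVM.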

A Study on Usefulness of Specific Agents with Liver Disease at MRI Imaging: Comparison with Ferucarbotran and Gd-EOB-DTPA Contrast Agents (간 병변 특이성 조영제 자기공명영상에 대한 연구: Ferucarbotran과 Gd-EOB-DTPA 조영제의 비교)

  • Lee, Jae-Seung;Goo, Eun-Hoe;Park, Cheol-Soo;Lee, Sun-Yeob;Choi, Yong-Seok
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.235-243
    • /
    • 2009
  • The purpose of this experiment was to compare the detection and characterization of liver diseases on MR images using Ferucarbotran (SPIO) and Gd-EOB-DTPA (Primovist) contrast agents in diffuse liver disease. A total of 50 patients (25 men and 25 women, mean age: 50 years) with liver diseases were examined on a 3.0 T machine (GE, General Electric Medical System, Excite HD) with an 8-channel body coil, using the LAVA and MGRE pulse sequences, to compare diseases and contrast uptake. All images were acquired at the same locations before and after Ferucarbotran and Gd-EOB-DTPA administration. The contrast-to-noise ratios of Ferucarbotran and Gd-EOB-DTPA, with the MGRE and LAVA pulse sequences respectively, were 3.08 ± 0.12 and 7.00 ± 0.27 in HCC, 3.62 ± 0.13 and 2.60 ± 0.23 in hyperplastic nodule, 1.70 ± 0.09 and 2.60 ± 0.23 in metastasis, 2.12 ± 0.28 and 5.86 ± 0.28 in FNH, and 4.45 ± 0.28 and 1.73 ± 0.02 in abscess; an ANOVA test was used to evaluate the diagnostic performance for each disease (p<0.05). In conclusion, both techniques demonstrated the detection and characterization of liver diseases well.
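The contrast-to-noise ratios reported above can be computed as in this minimal sketch, assuming the common definition CNR = |S_lesion − S_liver| / SD(background noise); the abstract does not state the paper's exact definition, and the ROI measurements below are hypothetical:

```python
import statistics

def cnr(lesion_signal, liver_signal, noise_samples):
    """Contrast-to-noise ratio: lesion-liver signal difference divided by
    the standard deviation of background noise (assumed definition)."""
    return abs(lesion_signal - liver_signal) / statistics.stdev(noise_samples)

# Hypothetical ROI measurements from one image
ratio = cnr(420.0, 300.0, [10.0, 12.0, 9.0, 11.0])
```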

A development of DS/CDMA MODEM architecture and its implementation (DS/CDMA 모뎀 구조와 ASIC Chip Set 개발)

  • Kim, Je-Woo;Park, Jong-Hyun;Kim, Seok-Jung;Sim, Bok-Tae;Lee, Hong-Jik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.6
    • /
    • pp.1210-1230
    • /
    • 1997
  • In this paper, we suggest an architecture for a DS/CDMA transceiver composed of one pilot channel, used as a reference, and multiple traffic channels. The pilot channel, an unmodulated PN code, is used as the reference signal for PN code synchronization and data demodulation. The coherent demodulation architecture is exploited for the reverse link as well as for the forward link. The characteristics of the suggested DS/CDMA system are as follows. First, we suggest an interlaced quadrature spreading (IQS) method. In this method, the PN code for the I-phase of the 1st channel is used for the Q-phase of the 2nd channel, the PN code for the Q-phase of the 1st channel is used for the I-phase of the 2nd channel, and so on, which is quite different from the existing spreading schemes of DS/CDMA systems such as IS-95 digital CDMA cellular or W-CDMA for PCS. By IQS spreading, we can drastically reduce the zero-crossing rate of the RF signals. Second, we introduce adaptive threshold setting for PN code synchronization and an initial acquisition method that uses a single PN code generator and halves the acquisition time compared with existing ones, and we exploit state machines to reduce the reacquisition time. Third, various functions, such as automatic frequency control (AFC), automatic level control (ALC), a bit-error-rate (BER) estimator, and spectral shaping for reducing adjacent channel interference, are introduced to improve system performance. Fourth, we designed and implemented the DS/CDMA MODEM for variable-transmission-rate applications, from 16 Kbps to 1.024 Mbps. We developed and confirmed the DS/CDMA MODEM architecture through mathematical analysis and various kinds of simulations. The ASIC design was done using VHDL coding and synthesis. To cope with several different kinds of applications, we developed transmitter and receiver ASICs separately. While a single transmitter or receiver ASIC contains three channels (one for the pilot and the others for traffic channels), by combining several transmitter ASICs we can expand the number of channels up to 64. The ASICs are now in use in a line-of-sight (LOS) radio equipment implementation.
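As a minimal illustration of the PN codes underlying the spreading and acquisition steps above, here is a generic Fibonacci LFSR m-sequence generator; the 4-bit register and tap positions are illustrative assumptions, not the code lengths or the IQS arrangement used in the paper:

```python
def lfsr_pn(taps, seed, length):
    """Generate a PN sequence from a Fibonacci LFSR: output the last
    register bit, XOR the tapped bits for feedback, and shift."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# 4-bit register with taps giving a maximal-length (period-15) m-sequence
seq = lfsr_pn(taps=[0, 3], seed=[1, 0, 0, 0], length=30)
```

An m-sequence from an n-bit maximal LFSR repeats every 2^n − 1 chips and is nearly balanced, which is what makes such codes suitable as spreading and synchronization references.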

Fracture resistance and marginal fidelity of zirconia crown according to the coping design and the cement type (코핑 디자인과 시멘트에 따른 지르코니아 도재관의 파절 저항성)

  • Sim, Hun-Bo;Kim, Yu-Jin;Kim, Min-Jeong;Shin, Mee-Ran;Oh, Sang-Chun
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.48 no.3
    • /
    • pp.194-201
    • /
    • 2010
  • Purpose: The purpose was to compare the marginal fidelity and fracture resistance of zirconia crowns according to coping designs with different thicknesses and cement types. Materials and methods: Zirconia copings were designed and fabricated with various thicknesses using a CAD/CAM system (Everest, KaVo Dental GmbH, Biberach, Germany). Eighty zirconia copings were divided into 4 groups of 20 (Group I: even 0.3 mm thickness; Group II: 0.3 mm thickness on the buccal surface and buccal half of the occlusal surface and 0.6 mm on the lingual surface and lingual half of the occlusal surface; Group III: even 0.6 mm thickness; Group IV: 0.6 mm thickness on the buccal surface and buccal half of the occlusal surface and 1.0 mm on the lingual surface and lingual half of the occlusal surface). Using a putty index, zirconia crowns of the same size and contour were fabricated. Each group was divided into two subgroups by cement type: Cavitec® (Kerr Co, USA) and Panavia-F® (Kuraray Medical Inc, Japan). After cementation of the crowns with a static load compressor, the marginal fidelity of the zirconia crowns was measured at the buccal, lingual, mesial, and distal margins using the microscope of a microhardness tester (Matsuzawa, MXT-70, Japan, ×100). The fracture resistance of each crown was measured using a universal testing machine (Z020, Zwick, Germany) at a crosshead speed of 1 mm/min. The results were analyzed statistically by two-way ANOVA, one-way ANOVA, and Duncan's multiple range test at α = .05. Results: Groups I and III showed the smallest marginal fidelity values, while Group II showed the largest in the Cavitec® subgroup (P<.05). For fracture resistance, Groups III and IV were significantly higher than Groups I and II in the Cavitec® subgroup (P<.05). The fracture resistances of the Panavia-F® subgroup were not significantly different among the groups (P>.05). The Panavia-F® subgroup showed significantly higher fracture resistance than the Cavitec® subgroup in Groups I and II (P<.05). Conclusion: Within the limitations of this study, and considering fracture resistance, marginal fidelity, and esthetics, a coping design with a slim visible surface can be used for esthetic purposes, or a thick invisible surface can be used to support the veneering ceramic, depending on the priority.
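As a sketch of the one-way ANOVA used to compare the groups above, the F statistic is the ratio of between-group to within-group mean squares; the load values below are hypothetical, not the study's measurements:

```python
def one_way_anova_f(*groups):
    """F = (between-group SS / (k-1)) / (within-group SS / (n-k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical fracture loads (N) for three coping groups
f_stat = one_way_anova_f([820, 860, 845], [900, 940, 915], [1200, 1180, 1230])
```

A large F relative to the F distribution's critical value at α = .05 is what licenses the pairwise follow-up comparisons (e.g. Duncan's test) reported in the abstract.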

Design Evaluation Model Based on Consumer Values: Three-step Approach from Product Attributes, Perceived Attributes, to Consumer Values (소비자 가치기반 디자인 평가 모형: 제품 속성, 인지 속성, 소비자 가치의 3단계 접근)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.57-76
    • /
    • 2017
  • Recently, consumer needs have been diversifying as information technologies evolve rapidly. Many IT devices, such as smartphones and tablet PCs, are being launched following this trend. While IT devices competed on technical advances and improvements a few years ago, the situation has now changed: there is little difference in functional aspects, so companies are trying to differentiate IT devices through appearance design. Consumers also consider design a more important factor in smartphone decision-making. Smartphones have become fashion items, revealing consumers' own characteristics and personality. As the design and appearance of the smartphone become important, it is necessary to examine the consumer values derived from the design and appearance of IT devices. Furthermore, it is crucial to clarify the mechanisms of consumers' design evaluation and to develop a design evaluation model based on those mechanisms. Since the influence of design continues to grow, many studies related to design have been carried out. These studies can be classified into three main streams. The first focuses on the role of design from the perspective of marketing and communication. The second comprises studies seeking effective and appealing designs from the perspective of industrial design. The last examines the consumer values created by a product design, that is, consumers' perceptions or feelings when they look at and feel it. These numerous studies have dealt with consumer values to some extent, but they either do not include product attributes or do not cover the whole process and mechanism from product attributes to consumer values. In this study, we develop a holistic design evaluation model based on consumer values, using a three-step approach from product attributes through perceived attributes to consumer values. Product attributes are the real, physical characteristics each smartphone has: bezel, length, width, thickness, weight, and curvature. Perceived attributes are derived from consumers' perception of product attributes. We consider perceived size of device, perceived size of display, perceived thickness, perceived weight, perceived bezel (top-bottom / left-right side), perceived curvature of edge, perceived curvature of back side, gap of each part, perceived gloss, and perceived screen ratio. These are factorized into six clusters named 'Size,' 'Slimness,' 'No-Frame,' 'Roundness,' 'Screen Ratio,' and 'Looseness.' We conducted qualitative research to find consumer values, which are categorized into two kinds: look values and feel values. We identified the look values 'Silhouette,' 'Neatness,' 'Attractiveness,' 'Polishing,' 'Innovativeness,' 'Professionalism,' 'Intellectualness,' 'Individuality,' and 'Distinctiveness,' and the feel values 'Stability,' 'Comfortableness,' 'Grip,' 'Solidity,' 'Non-fragility,' and 'Smoothness.' These are factorized into five key values: 'Sleek Value,' 'Professional Value,' 'Unique Value,' 'Comfortable Value,' and 'Solid Value.' Finally, we developed the holistic design evaluation model by analyzing the relationships from product attributes through perceived attributes to consumer values. This study has several theoretical and practical contributions. First, we found consumer values relevant to design evaluation and an implicit chain relationship from objective, physical characteristics to subjective, mental evaluation; that is, the model explains the mechanism of design evaluation in consumers' minds. Second, we suggest a general design evaluation process from product attributes through perceived attributes to consumer values, a methodology adaptable not only to smartphones but also to other IT products. Practically, this model can support decision-making when companies initiate new product development and can help product designers focus their capacities with limited resources. Moreover, if the model is combined with machine learning on consumers' purchasing data, most-preferred values, sales data, and so on, it can evolve into an intelligent design decision support system.
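The three-step chain (product attributes → perceived attributes → consumer values) can be sketched as nested weighted mappings. The attribute and value names follow the paper, but the numeric weights and the example phone are hypothetical placeholders for coefficients the paper estimates from survey data:

```python
# Step 1: hypothetical phone described by physical product attributes
product = {"thickness_mm": 7.3, "weight_g": 152, "bezel_mm": 3.1}

# Step 2: product attributes -> perceived-attribute factors
# (illustrative linear weights; thinner/lighter/narrower scores higher)
perceived = {
    "Slimness": 1.0 - 0.08 * product["thickness_mm"] - 0.002 * product["weight_g"],
    "No-Frame": 1.0 - 0.15 * product["bezel_mm"],
}

# Step 3: perceived attributes -> consumer values
values = {
    "Sleek Value": 0.6 * perceived["Slimness"] + 0.4 * perceived["No-Frame"],
}
```

In the paper's setting, each arrow would carry empirically estimated relationships rather than these placeholder weights, but the layered structure of the evaluation model is the same.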

A Study on the RFID's Application Environment and Application Measure for Security (RFID의 보안업무 적용환경과 적용방안에 관한 연구)

  • Chung, Tae-Hwang
    • Korean Security Journal
    • /
    • no.21
    • /
    • pp.155-175
    • /
    • 2009
  • RFID, which provides automatic identification by reading a tag attached to an object through radio frequency without direct contact, has characteristics such as rapid identification, long-distance identification, and penetration, so it is used for distribution, transportation, and safety, operating at frequencies of 125 kHz, 134 kHz, 13.56 MHz, 433.92 MHz, 900 MHz, and 2.45 GHz. It is also a key component of ubiquitous computing, which means being connected to a network at any time and in any place. RFID is expected to be a new growth industry worldwide, so the Korean government regards it as a promising field and promotes research projects and exhibition programs to link it with industry effectively. RFID can be used for access control of persons and vehicles by zone, and for personal authentication with a password. RFID can provide more reliable security than magnetic cards, so it can be used to prevent forgery of registration cards, passports, and other documents. Active RFID, with its long-distance data transmission, can be used to protect operations when combined with a positioning system. RFID's identification and tracking functions can provide effective visitor management through visitor registration, personal identification, and position checking, and can control visitors' movement in secure areas without requiring their approval. RFID can also enable efficient management, and prevent loss, of carried equipment and other items. Applied to copying machines, RFID can manage and control users and copying quantity, and can provide functions such as monitoring of copy content and access control of users. An RFID tag attached to a small storage device can prevent items from being carried out, using the position tracking function, and can control the carrying-in and carrying-out of material efficiently. Magnetic cards and smart cards have served well for identification and control of persons, but RFID can perform all of the above functions. RFID is a very useful technology, but we should also consider the protection of privacy in its application.

Effect of Implant Types and Bone Resorption on the Fatigue Life and Fracture Characteristics of Dental Implants (임플란트 형태와 골흡수가 임플란트 피로 수명 및 파절 특성에 미치는 효과에 관한 연구)

  • Won, Ho-Yeon;Choi, Yu-Sung;Cho, In-Ho
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.26 no.2
    • /
    • pp.121-143
    • /
    • 2010
  • The aim was to investigate the effect of implant types and bone resorption on fatigue life and fracture characteristics. Four types of Osstem® implants were chosen and classified into external parallel, internal parallel, external taper, and internal taper groups. Finite element analysis was conducted with ANSYS Multi Physics software. The fatigue fracture test was performed by connecting the mold to a dynamic-load fatigue testing machine with a maximum load of 600 N and a minimum load of 60 N. The entire fatigue test was performed at a frequency of 14 Hz, and fractured specimens were observed with a Hitachi S-3000 H scanning electron microscope. The results were as follows: 1. In the fatigue test of the group with 2 mm of exposed implant, tapered and externally connected types had higher fatigue life. 2. In the fatigue test of the group with 4 mm of exposed implant, parallel and externally connected types had higher fatigue life. 3. The fracture patterns of all implant systems exposed 4 mm appeared transversely near the dead space of the fixture. At an exposure level of 2 mm, all internally connected implant systems fractured transversely at the platform of the fixture facing the abutment, but externally connected ones fractured at the fillet of the abutment body and hex of the fixture, or near the dead space of the fixture. 4. Many fatigue striations were observed near the crack initiation and propagation sites. Cleavage with facet or dimple fractures appeared at the final fracture sites. 5. The effective stress at the buccal site, under compressive stress, is higher than that at the lingual site, under tensile stress, and the effective stress acting on the fixture is higher than that on the abutment screw. The maximum effective stress acting on the parallel-type fixtures is also higher. Caution is needed when using internal-type implant systems in the posterior area.

The Effect of Surface Treatment on the Shear Bond Strength of Resin Cement to Zirconia Ceramics (표면처리가 지르코니아와 레진 시멘트의 전단결합강도에 미치는 효과)

  • Jung, Seung-Hyun;Kim, Kye-Soon;Lee, Jae-In;Lee, Jin-Han;Kim, Yu-Lee;Cho, Hye-Won
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.25 no.2
    • /
    • pp.83-94
    • /
    • 2009
  • The aim of this study was to investigate the shear bond strength between zirconia ceramic and resin cement according to various surface treatments. The surface of each zirconia ceramic specimen was subjected to one of the following treatments and then bonded with Rely X Unicem or Rely X ARC resin cement: (1) Rocatec system and 50 μm surface polishing, (2) no treatment and 50 μm surface polishing, (3) Rocatec system and 1 μm surface polishing, (4) no treatment and 1 μm surface polishing. Each of the eight bonding groups was tested for shear bond strength with a universal testing machine (Z020, Zwick, Ulm, Germany) at a crosshead speed of 1 mm/min. The results were as follows: 1. The Rocatec-treated groups showed greater bond strengths than the non-Rocatec groups, with a significant difference among groups (P<0.001). 2. Among the Rocatec groups, the 50 μm surface roughness groups showed greater bond strengths than the 1 μm surface roughness groups (P<0.001), but among the non-Rocatec groups there was no significant difference (P>0.05). 3. The Rely X Unicem groups showed greater bond strengths than the Rely X ARC groups, with a significant difference among groups (P<0.01). Within the conditions of this study, Rocatec treatment was an effective way of increasing zirconia bonding to resin cement, even in the case of a self-adhesive resin cement.

SHEAR BOND STRENGTH AND MICROLEAKAGE OF COMPOSITE RESIN ACCORDING TO TREATMENT METHODS OF CONTAMINATED SURFACE AFTER APPLYING A BONDING AGENT (접착제 도포후 오염된 표면의 처리방법에 따른 복합레진의 전단결합강도와 미세누출)

  • Park, Joo-Sik;Lee, Suck-Jong;Moon, Joo-Hoon;Cho, Young-Gon
    • Restorative Dentistry and Endodontics
    • /
    • v.24 no.4
    • /
    • pp.647-656
    • /
    • 1999
  • The purpose of this study was to investigate the shear bond strength and marginal microleakage of composite to enamel and dentin according to different treatment methods when the applied bonding agent was contaminated by artificial saliva. For the shear bond strength test, the buccal and occlusal surfaces of one hundred twenty molar teeth were ground to expose enamel (n=60) and dentin (n=60) surfaces. The specimens were randomly assigned to a control group and 5 experimental groups of 10 samples each. In the control group, a bonding system (Scotchbond™ Multi-Purpose Plus) and a composite resin (Z-100™) were bonded to the specimens according to the manufacturer's directions. In the experimental groups, after polymerization of the adhesive, the enamel and dentin surfaces were contaminated with artificial saliva: Experimental group 1, the artificial saliva was dried with compressed air; Experimental group 2, the artificial saliva was rinsed with air-water spray and dried; Experimental group 3, the artificial saliva was rinsed and dried, and an adhesive was applied; Experimental group 4, the artificial saliva was rinsed and dried, then etched with phosphoric acid followed by an adhesive; Experimental group 5, the artificial saliva was rinsed and dried, then etched with phosphoric acid followed by consecutive application of both a primer and an adhesive. Composite resin (Z-100™) was bonded to the saliva-treated enamel and dentin surfaces. Shear bond strengths were measured with a universal testing machine (AGS-1000 4D, Shimadzu Co., Japan) at a crosshead speed of 5 mm/minute with a 50 kg load cell. Failure modes at the fracture sites were examined under a stereomicroscope. The data were analyzed by one-way ANOVA and Tukey's test. For the marginal microleakage test, Class V cavities were prepared on the buccal surfaces of sixty molars. The specimens were divided into control and experimental groups. Cavities in the experimental groups were contaminated with artificial saliva, and the surfaces in each experimental group received the same treatments as for the shear test. The cavities were filled with Z-100. Specimens were immersed in 0.5% basic fuchsin dye for 24 hours, embedded in transparent acrylic resin, and sectioned buccolingually with a diamond wheel saw. Four sections were obtained from each specimen. Marginal microleakage at enamel and dentin was scored under a stereomicroscope and averaged over the four sections. The data were analyzed by the Kruskal-Wallis test and Fisher's LSD. The results of this study were as follows. 1. The shear bond strength to enamel was lower in experimental group 1 (13.20 ± 2.94 MPa) and experimental group 2 (13.20 ± 2.94 MPa) than in the control (20.03 ± 4.47 MPa), experimental group 4 (20.96 ± 4.25 MPa), and experimental group 5 (21.25 ± 4.48 MPa) (p<0.05). 2. The shear bond strength to dentin was lower in experimental group 1 (9.35 ± 4.11 MPa) and experimental group 2 (9.83 ± 4.11 MPa) than in the control group (17.86 ± 4.03 MPa), experimental group 4 (15.04 ± 3.22 MPa), and experimental group 5 (14.33 ± 3.00 MPa) (p<0.05). 3. On both enamel and dentin surfaces, experimental groups 1 and 2 showed many adhesive failures, while the control and experimental groups 3, 4, and 5 showed mixed and cohesive failures. 4. Enamel marginal microleakage was highest in experimental group 1, with a significant difference from the other groups (p<0.05). 5. Dentin marginal microleakage in experimental groups 1 and 2 was higher than in the other groups (p<0.05). These results suggest that re-etching with 35% phosphoric acid followed by re-application of the adhesive, or repeating all adhesive procedures, will have a good effect on both the shear bond strength and the microleakage of composite to enamel and dentin when the polymerized bonding agent is contaminated by saliva.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown such remarkable performance. Recently, several works have reported that the performance of an ensemble can degrade when its classifiers are highly correlated, producing a multicollinearity problem, and have proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; therefore, an ensemble of unstable learners can guarantee some diversity among the classifiers. By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared performance in bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while in ensemble learning the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically shows that the performance degradation of an ensemble is due to the multicollinearity problem, and it proposes that ensemble optimization is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to solve the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN effectively achieves stable performance enhancement of NN ensembles through the choice of classifiers based on the correlations within the ensemble. The classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby showed higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
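A minimal sketch of GA-based sub-ensemble selection: chromosomes are bit-strings over the base classifiers, and the fitness trades validation accuracy against classifier agreement. The VIF constraint of CO-NN is replaced here by a simple pairwise-agreement penalty (and the paper's Excel/Evolver setup by plain Python); all predictions, labels, and GA settings are hypothetical:

```python
import random

random.seed(0)

# Hypothetical validation-set predictions of 5 base classifiers on 8 cases
preds = [
    [1, 0, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0, 1],   # near-duplicate: redundant with classifier 0
    [1, 1, 0, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0, 0, 0, 1],
]
truth = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(mask):
    """Majority-vote accuracy minus an agreement penalty, standing in
    for the paper's error-reduction fitness with a VIF constraint."""
    chosen = [p for p, m in zip(preds, mask) if m]
    if not chosen:
        return 0.0
    votes = [sum(c[i] for c in chosen) for i in range(len(truth))]
    acc = sum((2 * v > len(chosen)) == bool(t)
              for v, t in zip(votes, truth)) / len(truth)
    pairs = [(a, b) for i, a in enumerate(chosen) for b in chosen[i + 1:]]
    agree = (sum(sum(x == y for x, y in zip(a, b)) for a, b in pairs)
             / (len(pairs) * len(truth))) if pairs else 0.0
    return acc - 0.3 * agree

def ga(pop_size=20, gens=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in preds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                                # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)        # truncation selection
            cut = random.randrange(1, len(preds))    # one-point crossover
            nxt.append([bit ^ (random.random() < p_mut)
                        for bit in a[:cut] + b[cut:]])
        pop = nxt
    return max(pop, key=fitness)

best = ga()
```

The agreement penalty pushes the GA away from masks that include near-duplicate classifiers, which is the same role the VIF constraint plays in CO-NN.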