• Title/Summary/Keyword: paper machine

A development of DS/CDMA MODEM architecture and its implementation (DS/CDMA 모뎀 구조와 ASIC Chip Set 개발)

  • 김제우;박종현;김석중;심복태;이홍직
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.6 / pp.1210-1230 / 1997
  • In this paper, we suggest an architecture for a DS/CDMA transceiver composed of one pilot channel, used as a reference, and multiple traffic channels. The pilot channel, an unmodulated PN code, is used as the reference signal for PN code synchronization and data demodulation. The coherent demodulation architecture is exploited for the reverse link as well as for the forward link. The characteristics of the suggested DS/CDMA system are as follows. First, we suggest an interlaced quadrature spreading (IQS) method. In this method, the PN code for the I-phase of the 1st channel is used for the Q-phase of the 2nd channel and the PN code for the Q-phase of the 1st channel is used for the I-phase of the 2nd channel, and so on, which is quite different from the existing spreading schemes of DS/CDMA systems such as IS-95 digital CDMA cellular or W-CDMA for PCS. By applying IQS spreading, we can drastically reduce the zero-crossing rate of the RF signals. Second, we introduce an adaptive threshold setting for PN code synchronization and an initial acquisition method that uses a single PN code generator and reduces the acquisition time by half compared to existing ones, and we exploit state machines to reduce the reacquisition time. Third, various functions, such as automatic frequency control (AFC), automatic level control (ALC), a bit-error-rate (BER) estimator, and spectral shaping for reducing adjacent channel interference, are introduced to improve the system performance. Fourth, we designed and implemented the DS/CDMA MODEM for variable transmission rate applications, from 16 Kbps to 1.024 Mbps. We developed and confirmed the DS/CDMA MODEM architecture through mathematical analysis and various kinds of simulations. The ASIC design was done using VHDL coding and synthesis. To cope with several different kinds of applications, we developed the transmitter and receiver ASICs separately. While a single transmitter or receiver ASIC contains three channels (one for the pilot and the others for the traffic channels), by combining several transmitter ASICs we can expand the number of channels up to 64. The ASICs are now in use in a line-of-sight (LOS) radio equipment.
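
For illustration only, here is a minimal numpy sketch of the interlaced quadrature spreading (IQS) idea described above: the second channel of a pair reuses the first channel's I-phase PN code on its Q rail and the Q-phase code on its I rail. The chip length, the random PN codes, and the zero-crossing count of the baseband I rail are assumptions made for this sketch, not values from the paper.

```python
import numpy as np

# Minimal sketch of interlaced quadrature spreading (IQS): channel 2 reuses
# channel 1's I-phase PN code on its Q rail and channel 1's Q-phase code on its
# I rail. Chip length and the random PN codes are illustrative, not the paper's.
rng = np.random.default_rng(0)
n_chips = 64

pn_i1 = rng.choice([-1.0, 1.0], size=n_chips)   # I-phase PN code, channel 1
pn_q1 = rng.choice([-1.0, 1.0], size=n_chips)   # Q-phase PN code, channel 1
pn_i2, pn_q2 = pn_q1, pn_i1                     # channel 2 interlaces (swaps) them

def spread(di, dq, ci, cq):
    """Spread one QPSK symbol (di, dq in {-1, +1}) into a complex chip sequence."""
    return di * ci + 1j * dq * cq

composite = spread(+1, -1, pn_i1, pn_q1) + spread(-1, +1, pn_i2, pn_q2)
zero_crossings = int(np.sum(np.diff(np.sign(composite.real)) != 0))
print("I-rail zero crossings in the composite baseband signal:", zero_crossings)
```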

A Study for Reappearance According to the Scan Type, the CT Scanning by a Moving Phantom (팬톰을 이용한 전산화 단층촬영방법에 따른 재현성에 대한 고찰)

  • Choi, Jae-Hyock;Jeong, Do-Hyeong;Suk, Choi-Gye;Jang, Yo-Jong;Kim, Jae-Weon;Lee, Hui-Seok
    • The Journal of Korean Society for Radiation Therapy / v.19 no.2 / pp.123-129 / 2007
  • Purpose: CT scans show that significant tumor movement occurs in lesions located in the proximity of the heart, diaphragm, and lung hilus. The images obtained differ depending on which of three scan types is used: axial, helical, or cine (4D-CT) mode. Using a moving phantom, this paper examines how accurately each protocol reproduces a moving target. Materials and Methods: To reproduce superior-inferior and anterior-posterior movement, a manufactured moving phantom and a breathing-simulating motor were used. To identify the movement in the images captured by CT scanning, a localizer was attached to the marker on the motor. The moving phantom was set to a superior-inferior movement of 1.3 cm / 1 min, and the breathing-simulating motor to an anterior-posterior movement of 0.2 cm / 1 min. After fixing each movement, CT scans were taken following each CT protocol. The movement of the localizer and the volume reproduction were analyzed on the RTP machine. Results: The total volume of the marker was 88.2 $cm^3$; considering the superior-inferior movement, the total volume was 184.3 $cm^3$. The total volume measured by each CT scan protocol was 135 $cm^3$ in axial mode, 164.9 $cm^3$ in helical mode, and 181.7 $cm^3$ in cine (4D-CT) mode. The protocol that most closely reproduced the movement was cine mode, for the marker with attached localizer as well. Conclusion: When a moving organ is scanned with the three CT protocols, the scan should accurately reproduce the organ and target. Of the three protocols, the cine (4D-CT) mode showed the highest ability to reconstruct the marker on the moving phantom. The marker on the phantom always moves regularly, whereas breathing patients do not move like a phantom, so breathing education and patient-setup devices are needed so that the images reconstruct breathing as exactly as possible. Users should also consider the additional radiation dose delivered to patients.
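
A short calculation with the volumes reported in the abstract shows how closely each scan protocol reproduces the 184.3 $cm^3$ reference volume of the moving marker; the percentages are derived here and are not quoted from the paper.

```python
# Reproduction of the moving-marker volume (184.3 cm^3) by each CT protocol,
# using the volumes reported in the abstract; the percentages are derived here.
reference = 184.3            # cm^3, total volume including superior-inferior motion
static = 88.2                # cm^3, marker volume without motion
measured = {"axial": 135.0, "helical": 164.9, "cine (4D-CT)": 181.7}

for mode, vol in measured.items():
    print(f"{mode:>12}: {vol:6.1f} cm^3 -> {100 * vol / reference:5.1f}% of reference")
# cine (4D-CT) comes closest (~98.6%), matching the paper's conclusion.
```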

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.187-201 / 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can profit by entering a long position on a KOSPI200 index futures contract if the index rises, and by entering a short position if it declines. KOSPI200 index futures trading is essentially a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using system trading, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze price movements with various technical indicators, and automatically enter trading orders, including their timing, price, and quantity, without human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices or investment risk. KOSPI200 index data are numerical time-series data, a sequence of data points measured at successive uniform time intervals such as minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns on the time-series chart. Although there are many technical indicators, their outputs indicate the market state: bull, bear, or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state from the patterns of the KOSPI200 index time series, which fits well with a Markov model (MM). One can observe that the next price is above, below, or close to the last price, and that the next price is influenced by the last price; however, nobody knows the exact status of the next price, whether it will go up, down, or stay flat. Therefore, a hidden Markov model (HMM) fits better than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference lies in the representation of the observation probabilities. A DHMM uses a discrete probability density function, while a CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers following a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market and have applied many strategies to profit in trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others on candlestick patterns such as three outside up, three outside down, harami, or doji star. We show a trading system of a moving average cross strategy based on the CHMM and compare it to a traditional algorithmic trading system, setting the moving average parameters at values commonly used by market practitioners. Empirical results compare the simulation performance with the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than the naive system trading.
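
As a hedged sketch of the kind of system described above, the following gates a naive moving-average cross signal with the regime inferred by a continuous (Gaussian) HMM fitted to daily log returns. It uses a synthetic price path and the hmmlearn package (assumed available); the 5/20-day windows and the three-state choice are illustrative and are not the paper's parameter values.

```python
import numpy as np
import pandas as pd
from hmmlearn.hmm import GaussianHMM   # assumed available: pip install hmmlearn

# Sketch: gate a naive moving-average cross signal with the market regime inferred
# by a continuous (Gaussian) HMM fitted to daily log returns. The synthetic price
# path, the 5/20-day windows and the 3-state choice are illustrative only.
rng = np.random.default_rng(1)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1500))))
logret = np.log(price).diff()

fast, slow = price.rolling(5).mean(), price.rolling(20).mean()
cross = np.sign(fast - slow)                     # +1 long, -1 short (naive MA cross)

returns = logret.dropna().values.reshape(-1, 1)
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=1)
hmm.fit(returns)

state = pd.Series(np.nan, index=price.index)
state.iloc[1:] = hmm.predict(returns)            # hidden regime per day

# Call the state with the highest mean return "bull" and the lowest "bear"; trade
# the MA cross only when the hidden regime agrees with the signal's direction.
order = np.argsort(hmm.means_.ravel())
bear, bull = order[0], order[-1]
signal = cross.where(((state == bull) & (cross > 0)) | ((state == bear) & (cross < 0)), 0.0)

print("naive MA-cross cum. log return:", round(float((cross.shift(1) * logret).sum()), 4))
print("HMM-gated cum. log return:     ", round(float((signal.shift(1) * logret).sum()), 4))
```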

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To sustain a competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in management decision making. Historical data can provide feasible estimates through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as people, parts, and facilities, and the production department can plan its workload to improve product quality. Obtaining an accurate service forecast is therefore critical to manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average models. However, these methods are only efficient for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In this research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with clustering based on an unsupervised artificial neural network (self-organizing maps, SOM). We believe this is one of the first attempts at applying unsupervised artificial neural network-based machine learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we apply CBR and SOM in a new forecasting domain, service demand forecasting; (2) we combine CBR and SOM to overcome the limitations of traditional statistical forecasting methods; and (3) we have developed a service forecasting tool based on the proposed approach. We conducted an empirical study on a real digital TV manufacturer (Company A) and evaluated the proposed approach and tool using the manufacturer's real sales and service-related data. In the empirical experiments, we explore the performance of our proposed service forecasting framework compared with two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each service forecasting framework. To evaluate the accuracy of the forecasting results, we used the mean absolute percentage error (MAPE) as the primary performance measure and conducted a one-way ANOVA test on the 144 MAPE measurements for the three service forecasting approaches. The F-ratio of MAPE for the three approaches is 67.25 with a p-value of 0.000, meaning that the difference between the MAPE of the three approaches is significant at the 0.000 level. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE differ significantly from which others. In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets, with our proposed approach performing best, followed by the traditional CBR-based approach and then the existing approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background, including a summary of CBR and SOM. Section 3 presents the hybrid service forecasting framework based on case-based reasoning and self-organizing maps, and the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
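
For reference, a minimal sketch of the MAPE measure used as the primary performance metric in the experiments; the demand and forecast values below are hypothetical.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, the accuracy measure used in the experiments."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly service demand and forecasts (illustrative values only).
actual   = [120, 135, 150, 160, 145, 170]
forecast = [110, 140, 148, 170, 150, 160]
print(f"MAPE = {mape(actual, forecast):.2f}%")
```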

An identity analysis of Mechanic Design through the Japanese Animation <Neon Genesis Evangelion> (일본 애니메이션<신세기 에반게리온>으로 본 메카닉 디자인의 정체성 분석)

  • Lee, Jong-Han;Liu, Si-Jie
    • Cartoon and Animation Studies / s.50 / pp.275-297 / 2018
  • Japan's mechanic animation is widely known throughout the world. Japan's first mechanic animation and first TV animation, <Atom> (Astro Boy), has been popular since its creation in 1952. Atom, a big hit at the time, influenced many people, and Japanese mechanic animations have conveyed their unique traits and world view to the public ever since. In this paper, we discuss the change in Japanese mechanic design by comparing the mechanic design of <Neon Genesis Evangelion>, which has been booming since the 1990s in Japan, with that of an earlier work. I expect the results of this analysis to depict the Japanese culture and thought reflected in animation, which is a good indication of the worldwide cultural view of animation. <Neon Genesis Evangelion> unexpectedly influenced the Japanese animation industry after it was screened in 1995, and there are still people constantly reinterpreting and analyzing it; this is the audience's reaction to the mystery and open-ended conclusions of the work itself. The design elements of Evangelion are distinguished from other mechanical objects. Mechanic design based on the human biological form can overcome the limitations of the machine and feel more human. The pilot's boarding structure, which can contain human nature, is reinforced in the form of the entry plug, and its posture makes humanity more prominent than in a conventional rigid robot. Thus, <Neon Genesis Evangelion> pursues a mechanic design that can reflect human identity. The earlier work can be taken as the representative mechanic animation of the 80's, while "Neon Genesis Evangelion" of the 90's shows a completely different design. By comparing the mechanic design of the two works, we therefore examine the correlation between the message of a work and its design. The comparison presents the close relationship between the identity of the mechanic design and the contents, and I would like to point out that mechanic design can serve as a good example and theoretical basis for the future.

In Vitro Evaluation of Shear Bond Strengths of Zirconia Ceramic with Various Types of Cement after Thermocycling on Bovine Dentin Surface (지르코니아 표면 처리와 시멘트 종류에 따른 치면과의 전단 결합 강도 비교 연구)

  • Cho, Soo-Hyun;Cho, In-Ho;Lee, Jong-Hyuk;Nam, Ki-Young;Kim, Jong-Bae;Hwang, Sang-Hee
    • Journal of Dental Rehabilitation and Applied Science / v.23 no.3 / pp.249-257 / 2007
  • State of the problem: The use of zirconium oxide all-ceramic material provides several advantages, including a high flexural strength (>1000 MPa) and desirable optical properties, such as shade adaptation to the basic shades and a reduction in layer thickness. Along with the strength of the materials, the cementation technique is also important to the clinical success of a restoration. Nevertheless, little information is available on the effect of different surface treatments on the bonding between zirconium high-crystalline ceramics and resin luting agents. Purpose: The aim of this study was to test the effects of zirconia surface treatments on the shear bond strength between bovine teeth and a zirconia ceramic, and to evaluate differences among cements. Material and methods: 54 sound bovine teeth, extracted within 1 month and frozen in distilled water, were used. They were rinsed with tap water to confirm that no granulation tissue was left and kept refrigerated at $4^{\circ}C$ until tested. Each tooth was placed horizontally in a plastic cylinder (diameter 20 mm) and embedded in epoxy resin. The teeth were sectioned with diamond burs to expose dentin and ground with #600 silicon carbide paper. To make sure no enamel was left, each was observed under an optical microscope. 54 prefabricated zirconium oxide ceramic copings (Lava, 3M ESPE, USA) were assigned to 3 groups: control, airborne-abraded with $110{\mu}m$ $Al_2O_3$, and scratched with diamond burs in 4 directions. They were cemented with a seating force of 10 kg per tooth, using a resin luting cement (Panavia $F^{(R)}$), a resin cement (Superbond $C&B^{(R)}$), or a resin-modified GI cement (Rely X $Luting^{(R)}$). The specimens were thermocycled at $5^{\circ}C$ and $55^{\circ}C$ for 5000 cycles with a 30-second dwell time, and then the shear bond strength was determined in a universal testing machine (Model 4200, Instron Co., Canton, USA) at a crosshead speed of 1 mm/min. The results were analyzed with one-way analysis of variance (ANOVA) and the Tukey test at a significance level of P<0.05. Results: Superbond $C&B^{(R)}$ with diamond-bur scratching showed the highest shear bond strength (p<.05). For Panavia $F^{(R)}$, the scratching and sandblasting groups showed significantly higher shear bond strength than the control group (p<.05). For Rely X $Luting^{(R)}$, a significant difference in shear bond strength was observed only between the scratching and control groups (p<.05). Conclusion: Within the limitations of this study, Superbond $C&B^{(R)}$ showed a clinically acceptable shear bond between bovine teeth and zirconia ceramics regardless of surface treatment. Among the surface treatments, scratching increased the shear bond strength, while the increase from sandblasting with $110{\mu}m$ $Al_2O_3$ was not statistically significant.
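
A sketch of the statistical workflow named in the methods (one-way ANOVA followed by Tukey's test at P < 0.05), using scipy and statsmodels (assumed available) on made-up shear-bond-strength values; none of the numbers below are the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd   # assumed available

# One-way ANOVA, then Tukey's HSD at P < 0.05, on hypothetical shear-bond-strength
# values (MPa). The data are illustrative only, not the study's measurements.
rng = np.random.default_rng(7)
control     = rng.normal(4.0, 1.0, 6)
sandblasted = rng.normal(6.5, 1.2, 6)
scratched   = rng.normal(8.0, 1.1, 6)

f_stat, p_val = f_oneway(control, sandblasted, scratched)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate([control, sandblasted, scratched])
groups = ["control"] * 6 + ["sandblasted"] * 6 + ["scratched"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```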

P300 speller using a new stimulus presentation paradigm (새로운 자극제시방법을 사용한 P300 문자입력기)

  • Eom, Jin-Sup;Yang, Hye-Ryeon;Park, Mi-Sook;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.16 no.1 / pp.107-116 / 2013
  • In the implementation of a P300 speller, the rows and columns paradigm (RCP) is most commonly used. However, the RCP remains subject to adjacency-distraction errors and the double-flash problem. This study suggests a novel stimulus presentation for the P300 speller, the sub-block paradigm (SBP), which is likely to solve these problems effectively. Fifteen subjects participated in the experiment, in which both SBP and RCP were used to implement the P300 speller. Electroencephalography (EEG) activity was recorded from Fz, Cz, Pz, Oz, P3, P4, PO7, and PO8. Each paradigm consisted of a training phase, to train a classifier, and a testing phase, to evaluate the speller. Eighteen characters were used as the target stimuli in the training phase. In the testing phase, 5 subjects were required to spell 50 characters and the remaining subjects 25 characters. Classification accuracy results show that average accuracy was significantly higher in SBP (83.73%) than in RCP (66.40%). Grand mean event-related potentials (ERPs) at Pz show that the positive peak amplitude for the target stimuli was greater in SBP than in RCP; subjects tended to attend more to the characters in SBP. According to the participants' ratings of how comfortable they were with each paradigm on a 7-point Likert scale, most subjects responded 'very difficult' for RCP while responding 'medium' or 'easy' for SBP, showing that SBP felt more comfortable to the subjects than RCP. In sum, SBP gave more accurate P300 speller performance and was more convenient for users than RCP. The limitations of the study are discussed in the last part of the paper.
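
As a generic illustration of the classification step of a P300 speller (the abstract does not specify which classifier was used), the following trains a linear discriminant on flattened, synthetic ERP epochs and reports grand-average peaks for target versus non-target flashes; every value here is simulated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Generic P300-speller classification step: average target vs. non-target epochs
# and train a linear classifier on flattened epoch features. The synthetic "EEG"
# below and the LDA choice are illustrative; the paper's classifier is not stated.
rng = np.random.default_rng(3)
n_epochs, n_channels, n_samples = 200, 8, 64         # 8 channels, e.g. Fz..PO8
y = rng.integers(0, 2, n_epochs)                      # 1 = target flash, 0 = non-target

epochs = rng.normal(0, 1, (n_epochs, n_channels, n_samples))
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 20) / 4.0) ** 2)  # crude P300-like bump
epochs[y == 1] += p300                                # add the bump to target epochs

print("grand-average peak (target)    :", epochs[y == 1].mean(axis=(0, 1)).max().round(2))
print("grand-average peak (non-target):", epochs[y == 0].mean(axis=(0, 1)).max().round(2))

X = epochs.reshape(n_epochs, -1)                      # flatten channels x samples
clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", round(clf.score(X[150:], y[150:]), 2))
```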

Effects of laser-irradiated dentin on shear bond strength of composite resin (레이저 처리가 상아질과 복합 레진의 결합에 미치는 영향)

  • Kim, Sung-Sook;Park, Jong-Il;Lee, Jae-In;Kim, Gye-Sun;Cho, Hye-Won
    • The Journal of Korean Academy of Prosthodontics / v.46 no.5 / pp.520-527 / 2008
  • Purpose: This study was conducted to evaluate the shear bond strength of composite resin to dentin when etched with a laser instead of phosphoric acid. Material and methods: Forty recently extracted molars, completely free of dental caries, were embedded in acrylic resin. After exposing dentin with a diamond saw, the tooth surfaces were polished with a series of SiC papers. The teeth were divided into four groups of 10 specimens each: 1) no surface treatment as a control; 2) acid-etched with 35% phosphoric acid; 3) Er:YAG laser treated; 4) Er,Cr:YSGG laser treated. A dentin bonding agent (Adapter Single Bond2, 3M/ESPE) was applied to the specimens, and transparent plastic tubes (3 mm in height and diameter) were placed on each dentin surface. The composite resin was inserted into the tubes and cured. All specimens were stored in distilled water at $37^{\circ}C$ for 24 hours, and the shear bond strength was measured using a universal testing machine (Z020, Zwick, Germany) at a crosshead speed of 1 mm/min. The shear bond strength data were statistically analyzed by one-way ANOVA and Duncan's test at ${\alpha}$ = 0.05. Results: The bond strength of the Er:YAG laser-treated group was $3.98{\pm}0.88$ MPa and that of the Er,Cr:YSGG laser-treated group was $3.70{\pm}1.55$ MPa, with no significant difference between the two laser groups. The control group showed the lowest bond strength, $1.52{\pm}0.42$ MPa, and the highest shear bond strength was found in the acid-etched group, $7.10{\pm}1.86$ MPa (P < .05). Conclusion: The laser-etched groups exhibited significantly higher bond strength than the control group, but were still weaker than the phosphoric acid-etched group.
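
A short worked example of the shear-bond-strength arithmetic implied by the methods: strength (MPa) equals failure load (N) divided by the bonded area of the 3 mm tube; the failure loads below are hypothetical values chosen to land near the reported group means.

```python
import math

# Shear bond strength = failure load / bonded area. The 3 mm tube diameter is from
# the methods section; the failure loads are hypothetical examples chosen to land
# near the reported means (e.g. ~7.1 MPa for the acid-etched group).
diameter_mm = 3.0
area_mm2 = math.pi * (diameter_mm / 2) ** 2          # about 7.07 mm^2

for label, load_n in [("control-like", 10.7), ("laser-like", 28.0), ("acid-etched-like", 50.2)]:
    print(f"{label:>17}: {load_n:5.1f} N / {area_mm2:.2f} mm^2 = {load_n / area_mm2:.2f} MPa")
```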

An Implementation Method of the Character Recognizer for the Sorting Rate Improvement of an Automatic Postal Envelope Sorting Machine (우편물 자동구분기의 구분율 향상을 위한 문자인식기의 구현 방법)

  • Lim, Kil-Taek;Jeong, Seon-Hwa;Jang, Seung-Ick;Kim, Ho-Yon
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.15-24 / 2007
  • The recognition of postal address images is indispensable for the automatic sorting of postal envelopes. Address image recognition is composed of three steps: address image preprocessing, character recognition, and address interpretation. The character images extracted in the preprocessing step are forwarded to the character recognition step, in which multiple candidate characters with reliability scores are obtained for each extracted character image. Utilizing those character candidates and their scores, the final valid address for the input envelope image is obtained through the address interpretation step. The envelope sorting rate depends on the performance of all three steps, among which the character recognition step is particularly important: a good character recognizer should produce valid candidates with reliable scores that make the address interpretation step easier. In this paper, we propose a method of generating character candidates with reliable recognition scores. We utilize the existing MLP (multilayer perceptron) neural network of the address recognition system in the current automatic postal envelope sorters as the classifier for each image from the preprocessing step. The MLP is well known to be one of the best classifiers in terms of processing speed and recognition rate. However, false alarms can occur in its recognition results, which makes address interpretation hard. To make address interpretation easier and improve the envelope sorting rate, we propose methods to reestimate the recognition score (confidence) of the existing MLP classifier: a method that generates the statistical recognition properties of the classifier, and a method that combines the MLP with a subspace classifier acting as a confidence reestimator. To confirm the superiority of the proposed method, we used character images of real postal envelopes from the sorters in a post office. The experimental results show that the proposed method produces high reliability in terms of error and rejection for individual characters and non-characters.
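
A hedged sketch of the kind of confidence reestimation described above: an MLP's softmax score is blended with a per-class subspace (PCA reconstruction) score. The digits dataset, the PCA dimensionality, and the equal weighting are assumptions for this sketch and are not the paper's exact procedure.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# Blend an MLP's softmax confidence with a per-class subspace score, in the spirit
# of the MLP + subspace-classifier combination. Dataset, PCA size and 0.5/0.5
# weighting are illustrative assumptions, not the paper's procedure.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)

# One linear subspace (PCA) per class; similarity = how well the subspace reconstructs x.
subspaces = {c: PCA(n_components=10).fit(X_tr[y_tr == c]) for c in np.unique(y_tr)}

def subspace_score(x):
    scores = []
    for c, pca in subspaces.items():
        recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
        scores.append(-np.linalg.norm(x - recon.ravel()))   # smaller error -> higher score
    s = np.array(scores)
    s = s - s.max()                                          # numerical stability
    return np.exp(s) / np.exp(s).sum()                       # softmax over classes

mlp_conf = mlp.predict_proba(X_te)
combined = np.array([0.5 * mlp_conf[i] + 0.5 * subspace_score(X_te[i]) for i in range(len(X_te))])
print("MLP accuracy      :", (mlp_conf.argmax(1) == y_te).mean().round(3))
print("combined accuracy :", (combined.argmax(1) == y_te).mean().round(3))
```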

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the multiple classifiers of an ensemble are highly correlated, which results in a multicollinearity problem and leads to performance degradation of the ensemble; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; therefore, ensembles of unstable learners can guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that ensemble optimization is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of the NN ensemble. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that considers the correlations within the ensemble. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
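
A compact sketch of the coverage-optimization idea: each bit of a GA chromosome switches one classifier in or out, and the fitness rewards ensemble accuracy while penalizing selections whose outputs exceed a VIF threshold. The synthetic classifier outputs, the population size, and the VIF cutoff of 10 are illustrative assumptions; the paper itself used the Evolver package rather than this hand-rolled GA.

```python
import numpy as np

# GA sketch for coverage optimization: a binary chromosome selects a sub-ensemble,
# fitness = majority-vote accuracy minus a penalty when the selected classifiers'
# outputs are highly multicollinear (max VIF above a cutoff). All data synthetic.
rng = np.random.default_rng(0)
n_classifiers, n_samples = 12, 300
y = rng.integers(0, 2, n_samples)
# Synthetic correlated classifier scores in [0, 1]; prediction = score > 0.5.
scores = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n_samples, n_classifiers)), 0, 1)
preds = (scores > 0.5).astype(int)

def max_vif(cols):
    """Largest variance inflation factor among the selected score columns."""
    X = scores[:, cols]
    vifs = []
    for j in range(X.shape[1]):
        xj, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(xj)), others])
        beta, *_ = np.linalg.lstsq(A, xj, rcond=None)
        r2 = 1 - np.sum((xj - A @ beta) ** 2) / np.sum((xj - xj.mean()) ** 2)
        vifs.append(1.0 / max(1e-9, 1 - r2))
    return max(vifs)

def fitness(chrom):
    cols = np.flatnonzero(chrom)
    if len(cols) < 2:
        return 0.0
    vote = (preds[:, cols].mean(axis=1) > 0.5).astype(int)   # majority vote
    acc = (vote == y).mean()
    return acc - (0.5 if max_vif(cols) > 10 else 0.0)        # VIF constraint as a penalty

pop = rng.integers(0, 2, (20, n_classifiers))
for _ in range(30):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-10:]]                     # truncation selection
    pairs = parents[rng.integers(0, 10, (10, 2))]
    cut = rng.integers(1, n_classifiers, 10)                 # single-point crossover
    children = np.array([np.concatenate([a[:k], b[k:]]) for (a, b), k in zip(pairs, cut)])
    flip = rng.random(children.shape) < 0.1                  # bit-flip mutation
    children = np.where(flip, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected classifiers:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```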