• Title/Summary/Keyword: Bit error


Efficient IoT data processing techniques based on deep learning for Edge Network Environments (에지 네트워크 환경을 위한 딥 러닝 기반의 효율적인 IoT 데이터 처리 기법)

  • Jeong, Yoon-Su
    • Journal of Digital Convergence
    • /
    • v.20 no.3
    • /
    • pp.325-331
    • /
    • 2022
  • As IoT devices are used in various ways in edge network environments, multiple studies are being conducted that utilize the information collected from IoT devices in various applications. However, it is not easy to apply IoT data immediately and accurately, because data collected under varying network conditions (e.g., interference) are frequently lost or corrupted. In order to minimize errors in IoT data collected in an edge network environment, this paper proposes a management technique that ensures the reliability of IoT data by randomly generating signature values for the IoT data and allocating only Security Information (SI) values to the data in bit form. The proposed technique binds IoT data into a blockchain by applying multiple hash chains to asymmetrically link and process the data collected from IoT devices. The blockchained IoT data then use a probability function weighted according to a deep-learning-based correlation index. In addition, the proposed technique can expand grouped IoT data into an n-layer structure to reduce the integrity-verification and processing costs of the IoT data.
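The multiple-hash-chain linking described above could be sketched as follows; the field names, the use of SHA-256, and the linking rule are illustrative assumptions, not the paper's actual scheme.

```python
import hashlib
import json

def chain_block(record, prev_hashes):
    # Bind a new IoT record to one or more earlier blocks by hashing the
    # record together with the hashes it links to (multiple hash chains).
    payload = json.dumps({"data": record, "prev": sorted(prev_hashes)},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

h1 = chain_block({"sensor": "temp", "value": 21.4}, [])
h2 = chain_block({"sensor": "temp", "value": 21.6}, [h1])
h3 = chain_block({"sensor": "hum", "value": 0.44}, [h1, h2])  # asymmetric link
```

Because each block commits to several predecessors, tampering with any linked record changes every downstream hash.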

Fundamental Frequency Extraction of Stay Cable based on Energy Equation (에너지방정식에 기초한 사장 케이블 기본진동수 추출)

  • Kim, Hyeon Kyeom;Hwang, Jae Woong;Lee, Myeong Jae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.1A
    • /
    • pp.125-133
    • /
    • 2008
  • As spans become longer, dynamic instability of stay cables must be prevented. Dynamic instability occurs mainly in the symmetric first mode and the antisymmetric first mode of a stay cable, and the symmetric first mode in particular is strongly influenced by sag; the fundamental frequency of a stay cable therefore differs from that of a taut string. Irvine, Triantafyllou, Ahn, and others analyzed the dynamic behavior of sagging taut cables through analytical techniques, and their work gives important results over a large range of the Irvine parameter. However, the studies show mutually different values beyond the characteristic (cross-over or mode-coupled) point, and the frequency equations of all of these researchers are difficult to solve because of their strong non-linearity. The present study focuses on the fundamental frequency of a stay cable. The generalized mechanical energy, with a symmetric first-mode vibration shape satisfying the boundary conditions, is developed by the Rayleigh-Ritz method. This yields a linear analytic solution within the characteristic point, with an error of less than 3% at the characteristic point compared with the existing studies. Moreover, practical taut cables do not exceed the characteristic point; the approach therefore offers high accuracy and easy solution techniques with only minor limitations.
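For readers unfamiliar with the method, a Rayleigh-Ritz estimate of a fundamental frequency has the general form below; the symbols are generic (tension H, mass per unit length m, trial shape φ), not the paper's notation, which additionally accounts for sag through the Irvine parameter.

```latex
% Rayleigh quotient for the first mode of a taut string of length L:
\omega_1^2 \approx
  \frac{H \int_0^L \big(\varphi'(x)\big)^2 \, dx}
       {m \int_0^L \varphi(x)^2 \, dx},
\qquad
\varphi(x) = \sin\!\frac{\pi x}{L}
\;\Rightarrow\;
\omega_1 = \frac{\pi}{L}\sqrt{\frac{H}{m}} .
```

Substituting an energy expression that includes the sag contribution, as the paper does, shifts this estimate away from the taut-string value.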

A 2×2 MIMO Spatial Multiplexing 5G Signal Reception in a 500 km/h High-Speed Vehicle using an Augmented Channel Matrix Generated by a Delay and Doppler Profiler

  • Suguru Kuniyoshi;Rie Saotome;Shiho Oshiro;Tomohisa Wada
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.1-10
    • /
    • 2023
  • This paper proposes a method to extend Inter-Carrier Interference (ICI) canceling Orthogonal Frequency Division Multiplexing (OFDM) receivers for 5G mobile systems to spatial-multiplexing 2×2 MIMO (Multiple Input Multiple Output) systems, in order to support high-speed ground transportation services by linear motor cars traveling at 500 km/h. In Japan, a linear-motor high-speed ground transportation service is scheduled to begin in 2027. To expand the coverage area of base stations, 5G systems for high-speed trains will have multiple base station antennas transmitting the same downlink (DL) signal, forming an expanded cell along the train rails. For a 5G terminal in a fast-moving train, the forward and backward antenna signals are Doppler-shifted in opposite directions, so the receiver may have trouble estimating the exact channel transfer function (CTF) for demodulation. A receiver in such a high-speed train sees a transmission channel composed of multiple Doppler-shifted propagation paths, and the loss of sub-carrier orthogonality due to the Doppler-spread channel causes ICI. The ICI canceller is realized in three steps. First, using the Demodulation Reference Symbol (DMRS) pilot signals, it estimates three parameters of each multipath component: attenuation, relative delay, and Doppler shift. Secondly, based on these parameter sets, the CTF from sender sub-carrier n to receiver sub-carrier l is generated; when n ≠ l, the CTF corresponds to an ICI factor. Thirdly, once the ICI factors are obtained, ICI canceling is realized by applying the inverse ICI operation with a multi-tap equalizer. ICI canceling performance has been simulated under severe channel conditions, such as 500 km/h with 8-path reverse Doppler shift, for QPSK, 16QAM, 64QAM, and 256QAM modulations. In particular, for the 2×2 MIMO QPSK and 16QAM modulation schemes, BER (Bit Error Rate) improvement was observed when the number of taps in the multi-tap equalizer was set to 31 or more, at a moving speed of 500 km/h in an 8-path reverse Doppler-shift environment.
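The three-step canceller can be illustrated with a toy model: build the sub-carrier coupling matrix H[l, n] from (attenuation, delay, Doppler) triples, then undo the ICI. The path parameters, symbol duration, and the direct matrix inversion (standing in for the paper's multi-tap equalizer) are illustrative assumptions.

```python
import numpy as np

def ici_channel(N, paths, T):
    # H[l, n]: coupling from TX sub-carrier n to RX sub-carrier l.
    # Diagonal entries are the desired CTF; off-diagonal entries (n != l)
    # are the ICI factors created by Doppler-shifted multipath.
    H = np.zeros((N, N), dtype=complex)
    t = np.arange(N) * T / N                      # sample times over one symbol
    for a, tau, fd in paths:                      # attenuation, delay, Doppler
        for n in range(N):
            x = a * np.exp(2j * np.pi * (fd * t + n * (t - tau) / T))
            H[:, n] += np.fft.fft(x) / N          # received spectrum of carrier n
    return H

N = 8
paths = [(1.0, 0.0, 300.0), (0.6, 1e-6, -300.0)]  # opposite Doppler shifts
H = ici_channel(N, paths, T=66.7e-6)
tx = np.exp(1j * np.pi / 4) * np.ones(N)          # QPSK-like symbols
rx = H @ tx                                       # Doppler-spread channel
eq = np.linalg.solve(H, rx)                       # ideal ICI reversal
```

In the paper the inverse operation is approximated by a finite multi-tap equalizer, which is why performance depends on the tap count.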

A Digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many issues concerning copyright infringement on the Internet, because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, then CEO of Apple, proposed a new music service paradigm without DRM, and DRM has largely disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a solution to protect their content. One technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two key ideas. First, the watermark information is generated from a two-dimensional barcode, which carries an error correction code, so the information can recover itself as long as the errors fall within the error-tolerance range. Second, the algorithm uses the chip sequences of CDMA (code division multiple access). Together these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, one of the matrix barcodes, expresses information more freely than the other matrix barcodes. A QR code has nested square finder patterns at three corners that indicate the boundary of the symbol, a feature well suited to expressing the watermark information. That is, because the QR code is a 2D, nonlinear, matrix code, it can be modulated onto a spread spectrum and used for the watermarking algorithm. 
The proposed algorithm assigns different spread-spectrum sequences to individual users. When the assigned code sequences are orthogonal, the watermark information of each individual user can be identified from the audio content. The algorithm uses Walsh codes as the orthogonal codes. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code, and the modulated watermark is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a sub-woofer boost. The MP3 compression was performed with Cool Edit Pro 2.0, using CBR (Constant Bit Rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 msec. The sub-woofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information is not affected, and the watermark can be detected in all of the sample audios. Under the sub-woofer boost attack, the watermark was detected when the strength was 0.3, and under the echo attack the watermark can be identified when the strength is greater than or equal to 0.5.
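The spread-spectrum step can be sketched as below. The Walsh-code size, embedding strength, and the use of random numbers as stand-in DCT coefficients are illustrative assumptions; the paper embeds the spread 2D-barcode bits into the DCT domain of real audio.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: rows are mutually orthogonal Walsh codes.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = hadamard(64)
user_code = W[5]                          # orthogonal code assigned to one user
bits = np.array([1, -1, -1, 1])           # watermark bits from the 2D barcode
chips = np.concatenate([b * user_code for b in bits])

rng = np.random.default_rng(7)
host = rng.standard_normal(chips.size)    # stand-in for audio DCT coefficients
marked = host + 1.0 * chips               # additive spread-spectrum embedding

# Blind detection: correlate each segment with the user's Walsh code.
segments = marked.reshape(len(bits), -1)
detected = np.sign(segments @ user_code)
```

Because the Walsh codes are orthogonal, correlating with a different user's code yields only the host-signal noise term, which is how individual users' watermarks are separated.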

Daily Setup Uncertainties and Organ Motion Based on the Tomoimages in Prostatic Radiotherapy (전립선암 치료 시 Tomoimage에 기초한 Setup 오차에 관한 고찰)

  • Cho, Jeong-Hee;Lee, Sang-Kyu;Kim, Sei-Joon;Na, Soo-Kyung
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.19 no.2
    • /
    • pp.99-106
    • /
    • 2007
  • Purpose: The patient's position and anatomy vary to some extent over the treatment course due to setup uncertainties and organ motion. These factors can affect not only the dose coverage of the gross tumor but also cause overdosage of normal tissue. Setup uncertainties and organ motion can be minimized by precise patient positioning and rigid immobilization devices, but for some anatomical sites such as the prostate, internal organ motion due to physiological processes remains a challenge. In the planning procedure, the clinical target volume is slightly enlarged to create a planning target volume that accounts for setup uncertainties and organ motion. These uncertainties lead to differences between the dose calculated by the treatment planning system and the dose actually delivered. The purpose of this study was to evaluate the interfractional displacement of the organ and GTV based on tomoimages. Materials and Methods: Over the course of 3 months, tomoimages of 3 prostate cancer patients treated with a rectal balloon were studied. During the treatment sessions, 26 tomoimages per patient, 76 tomoimages in total, were collected. Tomoimages were taken every day after the initial setup, with a lead marker attached to the patient's skin center for comparison with the CT simulation images. For each daily treatment, the tomoimage was taken just before treatment after the rectal balloon was inflated with 60 cc of air for prostate gland immobilization; this was used routinely in each case. The intrarectal balloon was inserted to a depth of 6 cm from the anal verge, and MVCT images were taken with 5 mm slice thickness after the intrarectal balloon was in place and inflated. For this study, lead balls were used to guide the registration between the MVCT and CT simulation images. Tomotherapy offers three image fusion methods: the bone technique, the bone/tissue technique, and the full image technique. We used all three methods to analyze the setup errors. 
Initially, image fusions were based on visual alignment of the lead balls, the CT anatomy, and the CT simulation contours. The radiation therapist then registered the MVCT images with the CT simulation images on a bone basis, a rectal-balloon basis, and a GTV basis, respectively, and the registered images were compared with one another. The average and standard deviation of the X, Y, Z, and rotational shifts from the initial planning center were calculated for each patient. Results: There was a significant difference in the mean variation of the rectal balloon among the methods. Statistical results based on the bone fusion show a maximum x-direction shift of 8 mm and a maximum y-direction shift of 4.2 mm. In the balloon-based fusion the differences were statistically significant (P<0.0001), with maximum X and Y shifts of 6 mm and 16 mm, respectively. One patient's result exceeded a 16 mm shift, which derived from rectal expansion due to bowel gas and stool. The GTV-based fusion results ranged from 2.7 to 6.6 mm in the x-direction and 4.3 to 7.8 mm in the y-direction. We also checked the rotational error in this study; there were no significant differences among the fusion methods, with results of 0.37±0.36 in the bone-based fusion and 0.34±0.38 in the GTV-based fusion.
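The per-patient statistics described in Materials and Methods amount to a mean and standard deviation of the daily registration shifts, which could be computed as in this sketch (the shift values are illustrative, not the study's data).

```python
import numpy as np

# Daily (x, y, z) shifts in mm of one patient relative to the planning center,
# as reported by MVCT-to-CT image fusion (illustrative values).
shifts = np.array([
    [ 2.0, -1.5, 0.5],
    [ 4.0,  0.8, 1.0],
    [-1.0,  2.2, 0.0],
    [ 3.0, -0.5, 0.5],
])

mean_shift = shifts.mean(axis=0)          # systematic component per axis
sd_shift = shifts.std(axis=0, ddof=1)     # random (day-to-day) component
```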


Why A Multimedia Approach to English Education?

  • Keem, Sung-uk
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.176-178
    • /
    • 1997
  • To make a long story short, I made up my mind to experiment with a multimedia approach to my classroom presentations two years ago, because my way of giving instruction bored the pants off me as well as my students. My favorite methods used to be what are sometimes called classical or traditional ones, heavily dependent on three elements: the teacher's mouth, books, and chalk. Some call it the 'MBC method'. On top of that, I tried audio-visuals such as tape recorders, cassette players, VTRs, pictures, and so on, anything that could help improve my teaching. And yet I was unhappy with the results of this trial-and-error approach. I was determined to look for a better way that would ensure my satisfaction in the first place. What really turned me on was a multimedia CD-ROM title, ELLIS (English Language Learning Instructional Systems), developed by Dr. Frank Otto. This is an integrated system for learning English based on advanced computer technology. Inspired by the utility and potential of such a multimedia system for regular classroom or lab instruction, I designed a simple but practical multimedia language learning laboratory in 1994, for the first time in Korea (perhaps for the first time in the world). It was high time that the conventional (audio-passive) language laboratory at Hahnnam be replaced because of wear and tear. Prior to this development, in 1991, I set up the first CALL (Computer Assisted Language Learning) laboratory, equipped with 35 personal computers (286), where students were encouraged to practise English typing and word processing and to study English grammar, vocabulary, and composition.
The first multimedia language learning laboratory was composed of 1) a multimedia personal computer (a 486DX2 then, now a 586), 2) VGA multipliers that enable simultaneous viewing of the screen under the instructor's control, 3) an amplifier, 4) loudspeakers, 5) student monitors, 6) student tables seating three students (a monitor for every two students is more realistic, though), 7) student chairs, 8) an instructor table, and 9) cables. It was augmented later with an Internet hookup. The beauty of this type of multimedia language learning laboratory is the economy of furnishing and maintaining it. There is no need to darken the facilities, which is a must when an LCD/beam projector is used in the laboratory. It is also headset-free; headsets proved to exasperate students when worn more than twenty minutes. In the previous semester I taught three different subjects: Freshman English Lab, English Phonetics, and Intermediate Listening Comprehension. I used CD-ROM titles like ELLIS, Master Pronunciation, English Triple Play Plus, English Arcade, Living Books, Q-Steps, English Discoveries, and Compton's Encyclopedia. In addition, I managed to put all my teaching materials into PowerPoint, where letters, photos, graphics, animation, audio, and video files are stored in an orderly way as slides. It takes time to prepare my teaching materials in PowerPoint, but it is a wonderful tool for presentations, and it is worth the effort as long as I can engage my students in this way. Once everything is put into the computer, I feel relaxed and a bit excited watching my students enjoy my presentations. It appears to be great fun for students because they have never experienced this type of instruction. This is how I freed myself from having to manipulate a cassette tape player and a VTR and write on the board. The student monitors in front of them seem to help them concentrate on what they see, combined with what they hear.
All I have to do is click a mouse to give presentations and explanations when necessary. I use a remote mouse, which frees me from sitting at the instructor table; instead, I can walk around the room and enjoy freer interaction with students. Using this instrument, I can also have my students participate in the presentation. In particular, I invite students to manipulate the computer using the remote mouse from their own seats rather than from the instructor's seat. Every student appears to be fascinated with my multimedia approach to English teaching because of its unique nature as a new teaching tool as we face the 21st century. They all agree that the multimedia way is an interesting and fascinating way to learn that satisfies their needs. Above all, it helps lighten their drudgery in the classroom. They feel other subjects taught by other teachers should be treated in the same fashion. A multimedia approach to education is impossible without the advent of high-tech computers, whose multiple functions are integrated into a unified system, i.e., a personal computer. If you have computer-phobia, make quick friends with it; the sooner, the better. It can be a wonderful assistant to you. It is the Internet that I pay close attention to in conjunction with the multimedia approach to English education. Via e-mail, I encourage my students to write to me in English. I encourage them to enjoy chatting with people all over the world. I also encourage them to visit sites offering study courses in English conversation, vocabulary, idiomatic expressions, reading, and writing, and I help them search for any subject they want via the World Wide Web. Some day in the near future it will be the hub of learning for everybody. It will eventually free students from books, teachers, libraries, classrooms, and boredom. I will keep exploring better ways to give satisfying instruction to my students, who deserve to be entertained.


Implant Isolation Characteristics for 1.25 Gbps Monolithic Integrated Bi-Directional Optoelectronic SoC (1.25 Gbps 단일집적 양방향 광전 SoC를 위한 임플란트 절연 특성 분석)

  • Kim, Sung-Il;Kang, Kwang-Yong;Lee, Hai-Young
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.8
    • /
    • pp.52-59
    • /
    • 2007
  • In this paper, we analyzed and measured the implant isolation characteristics of a 1.25 Gbps monolithic integrated bi-directional (M-BiDi) optoelectronic system-on-a-chip, a key component of gigabit passive optical networks (PONs) for fiber-to-the-home (FTTH). We also derived an equivalent circuit of the implant structure under various DC bias conditions. The 1.25 Gbps M-BiDi transmit-receive SoC consists of a laser diode with a monitor photodiode as the transmitter and a digital photodiode as the digital data receiver on the same InP wafer. According to the IEEE 802.3ah and ITU-T G.983.3 standards, the receiver sensitivity of the digital receiver has to be better than -24 dBm at BER = 10^-12; the electrical crosstalk level therefore has to remain below -86 dB from DC to 3 GHz. From the analysis and measurement of the implant structure, the M-BiDi SoC satisfies this electrical crosstalk requirement with an implant area of 20 μm width and more than 200 μm distance between the laser diode and the monitor photodiode, and between the monitor photodiode and the digital photodiode. These implant characteristics can be used for the design and fabrication of optoelectronic SoCs, and extended to the mixed-mode SoC field.

Availability of Statistical Quality Control of Nuclear Medicine Blood Test Using Population Distribution (모집단 분포를 이용한 핵의학 혈액검사의 통계적 품질관리의 유용성)

  • Cheon, Jun Hong;Cho, Eun Bit;Yoo, Seon Hee;Kim, Nyeon Ok
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.1
    • /
    • pp.37-41
    • /
    • 2016
  • Purpose: The importance of quality control, which minimizes error and enhances the reliability of examinations, cannot be overemphasized. Currently, most nuclear medicine laboratories conduct internal and external quality control, applying Levey-Jennings charts or the Westgard multi-rules with commercial quality control materials. This quality control activity secures the reliability of the nuclear medicine blood tests that inform patient diagnosis and treatment policy. In this study, we evaluate the utility of statistical quality control using the population distribution of nuclear medicine blood tests conducted on health checkup examinees, as an additional technique for improving reliability. Materials and Methods: A statistical analysis was performed on 12 items of the nuclear medicine blood test for 41,341 people who used the Health Screening and Promotion Center at Asan Medical Center from January 2014 to December 2014. The results of the 12 items were divided into monthly percentages in three groups, within, over, and under the reference values, to analyze the mean of the population distribution, the standard deviation, and the standard deviation index (SDI). Results: The standard deviation of the population distribution mostly fell within ±2SD in all groups. However, when the population distribution exceeded ±2SD, the SDI was confirmed to show SDI < -2 or SDI > 2. 
Analyzing the population distribution of the 12 items (AFP, CEA, CA19-9, CA125, PSA, TSH, FT4, Anti-Tg-Ab, Anti-TPO-Ab, Calcitonin, 25-OH-VitD3, Insulin) of the basic nuclear medicine blood test, the months whose out-of-reference monthly percentage had an SDI beyond ±2.0 were CA19-9 in September (2.2), Anti-Tg-Ab in May (2.2), Insulin in January (2.3), and Insulin in March (2.4). These cases were confirmed to be attributable to an abnormality of the test reagent (a decline in the maximum binding rate of the isotope reagent) and a decline in the test response time. Conclusion: The population distribution covers the entire attribute under study. Statistical quality management using the population distribution of checkup examinees, divided into the three groups within, over, and under the reference values, is therefore expected to complement the internal quality control program carried out in the laboratory.
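The SDI screening described above follows the standard definition, SDI = (laboratory value − group mean) / group SD; this sketch flags months beyond ±2.0 using illustrative monthly percentages, not the study's data.

```python
import numpy as np

def sdi(value, group_mean, group_sd):
    # Standard Deviation Index: how many group SDs a monthly result
    # deviates from the population (peer-group) mean.
    return (value - group_mean) / group_sd

# Illustrative monthly % of results over the reference value for one item.
monthly_pct = np.array([4.1, 3.9, 4.3, 4.0, 6.9, 4.2,
                        3.8, 4.1, 4.0, 4.2, 3.9, 4.1])
m, s = monthly_pct.mean(), monthly_pct.std(ddof=1)
flagged = [i + 1 for i, v in enumerate(monthly_pct) if abs(sdi(v, m, s)) > 2.0]
```

Here only the outlying fifth month is flagged, mirroring how the study isolated months such as CA19-9 in September.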


Enhancement of Image Contrast in Linacgram through Image Processing (전산처리를 통한 Linacgram의 화질개선)

  • Suh, Hyun-Suk;Shin, Hyun-Kyo;Lee, Re-Na
    • Radiation Oncology Journal
    • /
    • v.18 no.4
    • /
    • pp.345-354
    • /
    • 2000
  • Purpose: Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. Materials and Methods: Chest linacgrams were obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used: optical density scan, histogram-equalized scan, linear histogram-based scan, linear histogram-independent scan, linear optical density scan, logarithmic scan, and power square root scan. The histogram distributions of the scanned images were plotted and the gray-scale ranges were compared among the scan types. The scanned images were then transformed to the gray window by the palette fitting method, and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and processed by the Gray Scale Expansion (GSE) method, and the patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. Results: The histogram distributions showed that minimum and maximum gray-scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic method and the square root method, respectively. Out of 256 gray-scale steps, only 7 to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images was improved. Experiments performed with patient images showed that GSE improved the identification of organs in portal images of the knee joint, head and neck, lung, and pelvis. Conclusion: The phantom study demonstrated that the GSE technique improves the image contrast of a linacgram, indicating that the decrease in image quality resulting from the dual exposure can be improved by expanding the gray scale. As a result, the improved technique will make it possible to compare digitally reconstructed radiographs (DRRs) with the simulation image for evaluating patient positioning error.
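The Gray Scale Expansion step amounts to linear contrast stretching of the occupied gray range to the full display scale; a minimal sketch (the pixel values are illustrative):

```python
import numpy as np

def gray_scale_expansion(img, out_max=255):
    # Map the narrow occupied range [min, max] of a low-contrast portal
    # image linearly onto the full output gray scale [0, out_max].
    lo, hi = float(img.min()), float(img.max())
    return np.round((img - lo) / (hi - lo) * out_max).astype(np.uint8)

portal = np.array([[3192, 10000],
                   [15000, 21940]])        # narrow scanner gray range
expanded = gray_scale_expansion(portal)
```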


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as that of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when multiple classifiers in the ensemble are highly correlated, resulting in a multicollinearity problem; these works have also proposed differentiated learning strategies to cope with the degradation. Hansen and Salamon (1990) insisted that for the performance enhancement of an ensemble it is necessary and sufficient that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, ensembles of unstable learning algorithms can guarantee some diversity among the classifiers.
To the contrary, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades the performance of the ensemble. Kim's work (2009) compared the bankruptcy-prediction performance of traditional algorithms such as NN, DT, and SVM on Korean firms. It reports that the stable learning algorithms NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to the multicollinearity problem, and proposes that ensemble optimization is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of the NN ensemble. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to solve the coverage optimization problem. The GA chromosomes for the coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver.
Experiments on company failure prediction have shown that CO-NN effectively achieves stable performance enhancement of NN ensembles by choosing classifiers with their correlations taken into account. The classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby showed higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in future research.
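The VIF constraint in the GA fitness can be sketched as follows: regress each classifier's outputs on the rest of the candidate sub-ensemble and reject chromosomes whose members inflate beyond a cutoff. The VIF<5 threshold and the data are illustrative assumptions; the paper implemented the GA with Excel and the Evolver package.

```python
import numpy as np

def vif(preds, j):
    # VIF of classifier j's outputs given the other ensemble members:
    # VIF_j = 1 / (1 - R_j^2), where R_j^2 is from regressing column j
    # on the remaining columns. High VIF signals multicollinearity.
    y = preds[:, j]
    X = np.column_stack([np.ones(len(y)), np.delete(preds, j, axis=1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
diverse = rng.standard_normal((200, 3))            # near-independent outputs
redundant = np.column_stack([diverse[:, :2],
                             diverse[:, 0] + 0.01 * rng.standard_normal(200)])

ok = all(vif(diverse, j) < 5.0 for j in range(3))  # passes a VIF<5 constraint
bad = vif(redundant, 2) > 5.0                      # redundant member flagged
```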