• Title/Summary/Keyword: bit data


Multi-purpose Geophysical Measurements System Using PXI (PXI를 이용한 다목적 물리탐사 측정 시스템)

  • Choi Seong-Jun;Kim Jung-Ho;Sung Nak-Hun;Jeong Ji-Min
    • Geophysics and Geophysical Exploration / v.8 no.3 / pp.224-231 / 2005
  • In geophysical field surveys, commercial equipment often fails to resolve the subsurface target, or sometimes cannot be applied at all, because it does not fit the various field situations or the physical properties of the medium or target. We developed a geophysical measurement system that can be easily adapted to various field situations and targets. The system, based on PXI with an A/D converter and stand-alone equipment such as a Network Analyzer, was applied to borehole radar surveys, borehole sonic measurements, and electromagnetic noise measurements. The borehole radar system consists of a PXI, a Network Analyzer, and dipole antennas; a GPIB interface is used by the PXI to control the Network Analyzer. The borehole sonic system consists of a PXI, a 24-bit A/D converter, a high-voltage pulse generator, and transmitting and receiving piezoelectric sensors. The electromagnetic noise measurement system consists of a PXI, a 24-bit A/D converter, two horizontal-component electric field sensors, and two horizontal-component and one vertical-component magnetic field sensors. The borehole radar system successfully detected the width of an artificial tunnel through which the borehole passes and imaged a buried steel pipe, where commercial borehole radar equipment had failed. The borehole sonic system was tested for detecting the width of an artificial tunnel and showed reasonable results. The characteristics of electromagnetic noise in an urban area were determined with data from the electromagnetic noise measurement system. The system was also applied to characterize the signal distortion caused by induction between electric cables in a resistivity survey. The system can be applied to various geophysical problems with simple modifications of the system and sensors.
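
The abstract describes the PXI controller driving a stand-alone Network Analyzer over GPIB. A minimal Python sketch of that control pattern is shown below, assuming a PyVISA-compatible GPIB interface on the PXI controller; the resource address and the sweep-related SCPI commands are illustrative placeholders, not the commands used in the paper's system.

```python
# Minimal sketch of PXI-to-instrument control over GPIB using PyVISA.
# The GPIB address and the sweep-related SCPI commands below are hypothetical
# placeholders; real command sets differ between network analyzer models.
import pyvisa

rm = pyvisa.ResourceManager()
na = rm.open_resource("GPIB0::16::INSTR")          # hypothetical GPIB address
na.timeout = 10000                                 # ms; sweeps can be slow

print(na.query("*IDN?"))                           # standard identification query

na.write("SENS:FREQ:STAR 10E6")                    # placeholder: sweep start frequency
na.write("SENS:FREQ:STOP 300E6")                   # placeholder: sweep stop frequency
na.write("INIT:IMM")                               # placeholder: trigger a single sweep
trace = na.query_ascii_values("CALC:DATA? FDATA")  # placeholder: read the trace

na.close()
```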

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown improvements comparable to those of DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the classifiers in an ensemble are highly correlated with one another, resulting in a multicollinearity problem that degrades the performance of the ensemble. They have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not bring remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learners can guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades the performance of the ensemble. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM in bankruptcy prediction on Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT. Meanwhile, with respect to ensemble learning, the DT ensemble shows a greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that optimization of the ensemble is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of the NN ensemble. Coverage optimization is a technique of choosing a sub-ensemble from the original ensemble so as to guarantee the diversity of its classifiers. CO-NN uses a genetic algorithm (GA), which has been widely used for various optimization problems, to solve the coverage optimization problem. The GA chromosomes for the coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers while considering the correlations within the ensemble. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and as a result CO-NN shows higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, several research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies for dealing with data noise should be introduced in more advanced future work.
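
As an illustration of the coverage-optimization step described above, the sketch below encodes sub-ensemble membership as a binary chromosome, scores it by the majority-vote accuracy of the selected classifiers, and penalizes selections whose outputs exceed a VIF threshold. It is a small hand-rolled GA in Python over synthetic predictions, not the Excel/Evolver setup used in the paper; the VIF threshold of 10 and all data are illustrative assumptions.

```python
# Minimal sketch of GA-based coverage optimization of a classifier ensemble.
# Chromosome: binary string, one bit per classifier (1 = keep in sub-ensemble).
# Fitness: majority-vote accuracy of the selected classifiers, penalized when
# their outputs show high multicollinearity (max VIF above an assumed threshold).
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_samples = 15, 200
y_true = rng.integers(0, 2, n_samples)
# Synthetic, correlated classifier predictions (stand-ins for trained NNs).
preds = np.array([np.where(rng.random(n_samples) < 0.75, y_true, 1 - y_true)
                  for _ in range(n_classifiers)])

def max_vif(selected):
    """Largest variance inflation factor among the selected classifiers' outputs."""
    X = preds[selected].T.astype(float)
    vifs = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        r2 = 1.0 - np.sum(resid ** 2) / ss_tot if ss_tot > 0 else 1.0
        vifs.append(1.0 / max(1.0 - r2, 1e-6))
    return max(vifs)

def fitness(chrom):
    selected = np.flatnonzero(chrom)
    if len(selected) < 2:
        return 0.0
    vote = (preds[selected].mean(axis=0) >= 0.5).astype(int)
    acc = (vote == y_true).mean()
    return acc if max_vif(selected) <= 10.0 else acc - 0.5   # VIF constraint penalty

# Plain generational GA: tournament selection, one-point crossover, bit-flip mutation.
pop = rng.integers(0, 2, (30, n_classifiers))
for gen in range(40):
    scores = np.array([fitness(c) for c in pop])
    new_pop = [pop[scores.argmax()].copy()]                  # elitism
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        p1 = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        p2 = pop[i] if scores[i] >= scores[j] else pop[j]
        cut = rng.integers(1, n_classifiers)
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_classifiers) < 0.05
        new_pop.append(np.where(flip, 1 - child, child))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected classifiers:", np.flatnonzero(best), "fitness:", fitness(best))
```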

Comparison of Efficiency for Wood Fuels (Chips and Pellets) by Life Cycle Assessment (LCA 접근방법에 의한 목질연료(칩, 펠릿)의 효율성 비교)

  • Choi, Young-Seop;Kim, Joon-Soon;Cha, Du-Song
    • Journal of Korean Society of Forest Science / v.98 no.4 / pp.426-434 / 2009
  • This study was carried out to derive the most suitable production process for wood fuels (chips and pellets) by collecting cost data on each procedure through a life cycle assessment approach, and to compare profitability and efficiency from the viewpoints of producers and consumers, respectively. The costs in this analysis were based on opportunity cost. The results show that wood chips are cheaper than wood pellets in production cost. Regarding the process with the lowest production cost, wood chips should be produced by crushing the collected residues into pieces on the spot for merchandizing, whereas wood pellets need to be transported to a factory for pelletizing. The findings also show that the profit, estimated by subtracting expenses from sales revenue, was a bit higher for wood chips than for wood pellets. Additionally, the price ratio of wood pellets to wood chips for the same caloric value appears to be 1.27. Despite the economic benefits of processing wood chips, there are several problems in practice. For producers, there is a possible increase not only in the transportation cost of conveying crushers to dispersed sites, but also in storage cost due to the lack of marketplaces in the immediate surroundings. For consumers, on the other hand, there are challenges such as the need for bulky storage facilities, additional labor for fuel replenishment, frequent ash disposal, and decomposition in summer and freezing in winter caused by the wood chips' own moisture.

Algorithm and experimental verification of underwater acoustic communication based on passive time reversal mirror in multiuser environment (다중송신채널 환경에서 수동형 시역전에 기반한 수중음향통신 알고리즘 및 실험적 검증)

  • Eom, Min-Jeong;Oh, Sehyun;Kim, J.S.;Kim, Sea-Moon
    • The Journal of the Acoustical Society of Korea / v.35 no.3 / pp.167-174 / 2016
  • In underwater communication it is difficult to increase the communication capacity because the carrier frequency is lower than that of terrestrial radio communication. The signal bandwidth is limited by the characteristics of the ocean medium. As high transmission speed and large transmission capacity have become necessary within this limited frequency range, studies on MIMO (Multiple Input Multiple Output) communication have been actively carried out. The performance of MIMO communication is lower than that of SIMO (Single Input Multiple Output) communication because cross-talk between the multiple users occurs in addition to the inter-symbol interference caused by channel characteristics such as delay spread and Doppler spread. Although an adaptive equalizer considering multiple channels can be used to mitigate the influence of the cross-talk, the algorithm is usually complicated. In this paper, the time reversal mirror technique, which has a self-equalization property, is applied to simplify the compensation algorithm and relieve the cross-talk, in order to improve the communication performance when the signals transmitted from two channels are received, with mutual interference, on one channel at the same time. In addition, the performance of MIMO communication based on the time reversal mirror is verified using data from SAVEX15 (Shallow-water Acoustic Variability Experiment 2015), conducted in the northern East China Sea in May 2015.
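
A minimal numpy sketch of the passive time reversal idea is shown below: each receiver correlates its data record with the channel-distorted probe it recorded earlier, and the outputs are summed over the array so that multipath arrivals add coherently at the symbol instants (the self-equalization property mentioned above). The array size, channels, and waveforms are illustrative assumptions, not the SAVEX15 configuration, and the multiuser (MIMO) case is not modeled here.

```python
# Minimal sketch of passive time reversal (passive phase conjugation).
# Each receiver correlates its data record with the probe signal it recorded
# from the same source, then the array outputs are summed; multipath arrivals
# add coherently, which provides the self-equalization noted in the abstract.
# Channels, array size, and waveforms are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_rx, sps = 16, 8                           # receivers, samples per symbol

pulse = rng.choice([-1.0, 1.0], sps)        # common pulse used for probe and data
data_sym = rng.choice([-1.0, 1.0], 32)      # BPSK symbols to recover
data_sig = np.kron(data_sym, pulse)         # one pulse per symbol

rx_probe, rx_data = [], []
for _ in range(n_rx):
    h = np.zeros(40)                        # sparse multipath channel per receiver
    h[rng.integers(0, 40, 4)] = rng.normal(0, 1, 4)
    rx_probe.append(np.convolve(pulse, h) + rng.normal(0, 0.05, sps + 39))
    rx_data.append(np.convolve(data_sig, h) + rng.normal(0, 0.05, len(data_sig) + 39))

# Passive time reversal: correlate the data with the received probe (i.e. convolve
# with its time reverse) on each channel, then sum over the array.
focused = sum(np.convolve(rd, rp[::-1]) for rd, rp in zip(rx_data, rx_probe))

offset = len(rx_probe[0]) - 1               # zero-lag position of the correlation
detected = np.sign(focused[offset + sps * np.arange(len(data_sym))])
print("symbol errors:", int(np.sum(detected != data_sym)))   # typically zero or near zero
```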

The Effect of Social Support Intervention on Mood and Maternal Confidence of Premature's Mothers. (사회적지지 중재가 미숙아 어머니의 정서와 모성역할 자신감에 미치는 영향)

  • Lee, In-Hye
    • Journal of Korean Academy of Nursing / v.30 no.5 / pp.1111-1120 / 2000
  • The purpose of this study is to examine the effect of a social support intervention on the mood and maternal confidence of mothers of premature infants. The social support intervention is known to induce an improved mood state and to provide information on caretaking, so as to increase the maternal confidence of mothers of premature infants. To systematically investigate its effect, this study employed a nonequivalent randomized post-repeated quasi-experimental design. The intervention was given individually to mothers of premature infants five times over five weeks. The sample consisted of 50 mothers of premature infants (27 experimental, 23 control). The data were collected twice as post-tests using structured questionnaires. Two instruments were used in this study: the POMS developed by Lee (1990) to measure the mothers' mood state, and the Mother and Baby Scales by Wolke et al. (1987). The results are as follows: 1. For the hypothesis tests on the effect of the social support intervention, the means of the experimental group and the control group were compared by t-test, with the following results. Hypothesis I, "The mood state of mothers with the social support intervention is more positive than that of mothers without the intervention," was not statistically supported and thus rejected (t=.799, p=.429). However, the mean scores were 49.68 and 51.38 for the experimental and control group, respectively, indicating a more positive mood for the experimental group. Hypothesis II, "The maternal confidence of mothers with the social support intervention is higher than that of mothers without the intervention," was statistically supported (t=3.667, p=.001). 2. The mean score of the mood state was highest before discharge (52.29), meaning most negative; it declined to 49.68 shortly after discharge, increased slightly to 50.07 at four weeks after discharge, and stabilized at 49.22 around six weeks after discharge. On the other hand, the mean score of maternal confidence increased continuously with time. In view of the above results, it is concluded that the social support intervention with a preprogrammed protocol has a definite positive effect on increasing maternal confidence and a positive effect on improving the mothers' mood state.
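
The group comparisons in this abstract rely on independent two-sample t-tests. A minimal sketch of that comparison in Python is shown below, using SciPy with hypothetical score vectors; the numbers are not the study's data.

```python
# Minimal sketch of the independent two-sample t-test used to compare the
# experimental and control groups; the score vectors are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exp_scores = rng.normal(loc=34, scale=5, size=27)   # hypothetical confidence scores
ctrl_scores = rng.normal(loc=30, scale=5, size=23)

t_stat, p_value = stats.ttest_ind(exp_scores, ctrl_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")       # hypothesis supported if p < .05
```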


Implementation of WLAN Baseband Processor Based on Space-Frequency OFDM Transmit Diversity Scheme (공간-주파수 OFDM 전송 다이버시티 기법 기반 무선 LAN 기저대역 프로세서의 구현)

  • Jung Yunho;Noh Seungpyo;Yoon Hongil;Kim Jaeseok
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.5 s.335 / pp.55-62 / 2005
  • In this paper, we propose an efficient symbol detection algorithm for the space-frequency OFDM (SF-OFDM) transmit diversity scheme and present implementation results for an SF-OFDM WLAN baseband processor using the proposed algorithm. When the number of sub-carriers in the SF-OFDM scheme is small, interference between adjacent sub-carriers may occur. The proposed algorithm eliminates this interference in a parallel manner and obtains a considerable performance improvement over the conventional detection algorithm. The bit error rate (BER) performance of the proposed detection algorithm is evaluated by simulation. In the case of 2 transmit and 2 receive antennas, at $BER=10^{-4}$ the proposed algorithm obtains a gain of about 3 dB over the conventional detection algorithm. The packet error rate (PER), link throughput, and coverage performance of the SF-OFDM WLAN with the proposed detection algorithm are also estimated. For a target throughput of $80\%$ of the peak data rate, the SF-OFDM WLAN achieves an average SNR gain of about 5.95 dB and an average coverage gain of 3.98 meters. The SF-OFDM WLAN baseband processor with the proposed algorithm was designed in a hardware description language and synthesized to gate-level circuits using a 0.18 μm 1.8 V CMOS standard cell library. With the division-free architecture, the total logic gate count for the processor is 945K. The real-time operation was verified and evaluated using an FPGA test system.
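
The abstract does not spell out the proposed detection algorithm, so the sketch below shows only the baseline it improves on: Alamouti-style space-frequency coding across adjacent sub-carrier pairs with conventional combining at the receiver, under the usual assumption that the channel is flat over each pair. Antenna counts, channels, and symbols are illustrative; the paper's parallel interference-cancellation step is not reproduced.

```python
# Minimal sketch of Alamouti-style space-frequency block coding over adjacent
# OFDM sub-carrier pairs with conventional (matched) combining at the receiver.
# 2 Tx / 2 Rx antennas, QPSK symbols, and flat fading per sub-carrier pair are
# illustrative assumptions; the improved detection algorithm is not shown.
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_rx = 32, 2
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))

s1 = qpsk[rng.integers(0, 4, n_pairs)]          # symbol on sub-carrier 2k
s2 = qpsk[rng.integers(0, 4, n_pairs)]          # symbol on sub-carrier 2k+1

# Channel from each Tx antenna to each Rx antenna, assumed constant over a pair.
h = (rng.normal(size=(n_rx, 2, n_pairs)) + 1j * rng.normal(size=(n_rx, 2, n_pairs))) / np.sqrt(2)
noise = lambda: (rng.normal(size=(n_rx, n_pairs)) + 1j * rng.normal(size=(n_rx, n_pairs))) * 0.05

# Space-frequency mapping: sub-carrier 2k carries (s1, s2), 2k+1 carries (-s2*, s1*).
r_a = h[:, 0] * s1 + h[:, 1] * s2 + noise()                     # received on sub-carrier 2k
r_b = -h[:, 0] * np.conj(s2) + h[:, 1] * np.conj(s1) + noise()  # received on sub-carrier 2k+1

# Conventional Alamouti combining, summed over receive antennas.
s1_hat = np.sum(np.conj(h[:, 0]) * r_a + h[:, 1] * np.conj(r_b), axis=0)
s2_hat = np.sum(np.conj(h[:, 1]) * r_a - h[:, 0] * np.conj(r_b), axis=0)

detect = lambda x: qpsk[np.argmin(np.abs(x[:, None] - qpsk[None, :]), axis=1)]
errors = np.sum(detect(s1_hat) != s1) + np.sum(detect(s2_hat) != s2)
print("symbol errors:", int(errors))
```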

A Study on an Error Correction Code Circuit for a Level-2 Cache of an Embedded Processor (임베디드 프로세서의 L2 캐쉬를 위한 오류 정정 회로에 관한 연구)

  • Kim, Pan-Ki;Jun, Ho-Yoon;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.1 / pp.15-23 / 2009
  • Microprocessors, which need correct arithmetic operations, have been the subject of in-depth research in relation to soft errors. Among microprocessor components, the memory cell is the most vulnerable to soft errors. Moreover, when soft errors occur in a memory cell, processes and operations are greatly affected because the memory cell holds important information and instructions for the entire process or operation. If soft errors go undetected, arithmetic operations and processes produce unexpected outcomes without the user realizing it. In architectural design, the tool commonly used to detect and correct soft errors is an error check and correction code. The Itanium and IBM PowerPC G5 microprocessors contain Hamming and Rasio codes in their level-2 caches. Those designs, however, target large server devices and do not consider power consumption. As operating and threshold voltages keep shrinking with the emergence of high-density, low-power embedded microprocessors, there is an urgent need to develop ECC (error check and correction) circuits for them. In this study, the input/output data of the level-2 cache were analyzed using SimpleScalar-ARM, and a 32-bit H-matrix for the level-2 cache of an embedded microprocessor is proposed. From the point of view of power consumption, the proposed H-matrix, implemented with a Cadence schematic editor, is compared with the modified Hamming code using HSPICE. The MiBench programs and a TSMC 0.18 um process were used for verification.
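
The proposed 32-bit H-matrix defines a SEC-DED (single error correction, double error detection) code for the cache data. As a software illustration of how such a code behaves, the sketch below builds a textbook extended Hamming SEC-DED code in Python and classifies single- and double-bit errors from the syndrome; it is not the paper's power-optimized H-matrix.

```python
# Minimal sketch of an extended Hamming SEC-DED code: check bits at power-of-2
# positions plus one overall parity bit. Textbook construction for illustration
# of syndrome decoding only, not the paper's optimized 32-bit H-matrix.

def secded_encode(data_bits):
    """Return (codeword list, overall parity bit) for the given data bits."""
    k = len(data_bits)
    r = 1
    while 2 ** r < k + r + 1:                  # number of Hamming check bits
        r += 1
    n = k + r
    code = [0] * (n + 1)                       # 1-indexed; index 0 unused
    it = iter(data_bits)
    for pos in range(1, n + 1):                # data bits go to non-power-of-2 positions
        if pos & (pos - 1):
            code[pos] = next(it)
    for i in range(r):                         # check bit p covers positions with bit p set
        p = 1 << i
        parity = 0
        for pos in range(1, n + 1):
            if pos & p and pos != p:
                parity ^= code[pos]
        code[p] = parity
    overall = 0
    for b in code[1:]:
        overall ^= b                           # extra parity bit enables double-error detection
    return code[1:], overall

def secded_check(codeword, overall):
    """Classify: 'ok', ('corrected', position), or 'double error detected'."""
    n = len(codeword)
    bits = [0] + list(codeword)                # 1-indexed
    syndrome, p = 0, 1
    while p <= n:
        group = 0
        for pos in range(1, n + 1):
            if pos & p:
                group ^= bits[pos]
        if group:
            syndrome |= p                      # failing groups spell out the error position
        p <<= 1
    parity_fail = (sum(codeword) % 2) != overall
    if syndrome == 0 and not parity_fail:
        return "ok"
    if parity_fail:                            # odd number of bit flips -> single, correctable
        return ("corrected", syndrome if syndrome else "overall parity bit")
    return "double error detected"             # even number of flips with nonzero syndrome

code, overall = secded_encode([1, 0, 1, 1, 0, 0, 1, 0])
code[5] ^= 1                                   # inject a single-bit error at position 6
print(secded_check(code, overall))             # -> ('corrected', 6)
code[9] ^= 1                                   # inject a second error at position 10
print(secded_check(code, overall))             # -> 'double error detected'
```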

A Design of PRESENT Crypto-Processor Supporting ECB/CBC/OFB/CTR Modes of Operation and Key Lengths of 80/128-bit (ECB/CBC/OFB/CTR 운영모드와 80/128-비트 키 길이를 지원하는 PRESENT 암호 프로세서 설계)

  • Kim, Ki-Bbeum;Cho, Wook-Lae;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1163-1170 / 2016
  • A hardware implementation of the ultra-lightweight block cipher PRESENT, which is specified in the lightweight cryptography standard ISO/IEC 29192-2, is described. The PRESENT crypto-processor supports two key lengths, 80 and 128 bits, as well as four modes of operation: ECB, CBC, OFB, and CTR. The crypto-processor has an on-the-fly key scheduler with a master key register, so it can process consecutive blocks of plaintext/ciphertext without reloading the master key. To achieve a lightweight implementation, the key scheduler was optimized to share circuits between the 80-bit and 128-bit key lengths. The round block was designed with a 64-bit datapath, so that one round transformation for encryption/decryption is processed per clock cycle. The PRESENT crypto-processor was verified on a Virtex-5 FPGA device. The crypto-processor synthesized with a 0.18 μm CMOS cell library has 8,100 gate equivalents (GE), and the estimated throughput is about 908 Mbps at a maximum operating clock frequency of 454 MHz.
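
For reference, the sketch below is a straightforward Python model of PRESENT-80 as published: 31 rounds of addRoundKey, S-box layer, and bit-permutation layer, plus a final key whitening, with the 80-bit key schedule. It follows the cipher specification rather than the paper's shared-datapath hardware, and it covers only single-block (ECB-style) encryption with an 80-bit key, not the 128-bit schedule or the CBC/OFB/CTR modes.

```python
# Minimal Python model of PRESENT-80 following the published specification.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    out = 0
    for i in range(16):                               # 16 nibbles of the 64-bit state
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    out = 0
    for i in range(64):                               # bit i moves to P(i) = 16*i mod 63
        p = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << p
    return out

def key_schedule_80(key):
    """Derive the 32 round keys from an 80-bit master key."""
    round_keys, k = [], key
    for i in range(1, 33):
        round_keys.append(k >> 16)                    # round key = leftmost 64 bits
        k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1) # rotate the register left by 61
        k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))  # S-box on the top nibble
        k ^= i << 15                                  # XOR round counter into bits 19..15
    return round_keys

def present80_encrypt(plaintext, key):
    rks = key_schedule_80(key)
    state = plaintext
    for r in range(31):
        state = p_layer(sbox_layer(state ^ rks[r]))   # addRoundKey, sBoxLayer, pLayer
    return state ^ rks[31]                            # final key whitening

# Compare the output against the test vectors given in the PRESENT specification.
print(hex(present80_encrypt(0x0000000000000000, 0x00000000000000000000)))
```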

A Frequency Domain DV-to-MPEG-2 Transcoding (DV에서 MPEG-2로의 주파수 영역 변환 부호화)

  • Kim, Do-Nyeon;Yun, Beom-Sik;Choe, Yun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.2 / pp.138-148 / 2001
  • The Digital Video (DV) coding standard for digital video cassette recorders is based mainly on the DCT and variable-length coding. DV has low hardware complexity but a high compressed bit rate of about 26 Mb/s. It is therefore desirable to encode video with a low-complexity codec at the studio and then transcode the compressed video into MPEG-2 for video-on-demand systems. Because both coding methods use the DCT, transcoding in the DCT domain can reduce computational complexity by excluding duplicated procedures. In transcoding DV into MPEG-2 intra coding, matrix multiplication of the transformed data is used for 4:1:1-to-4:2:2 chroma format conversion and for conversion from the 2-4-8 to the 8-8 DCT mode, which also enables parallel processing. The variance of each sub-block, needed for MPEG-2 rate control, is computed entirely in the DCT domain. These points are verified through experiments. For transcoding into MPEG-2 inter coding, motion is estimated hierarchically from the DCT coefficients. First, the motion of a macroblock (MB) is estimated using only the 4 DC values of its 4 sub-blocks; it is then refined on a 16-point MB obtained by the IDCT of the 2$\times$2 low frequencies in each sub-block, and the estimation is finished at sub-pixel accuracy in the fifth step. ME with overlapped search ranges shows better PSNR performance than ME without overlapping.
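
The first stage of the hierarchical motion estimation above works directly on the DC coefficients, one per 8x8 sub-block, which together form a 1/8-resolution image. The sketch below illustrates that stage only: full-search SAD block matching on a DC image built from synthetic frames. The block size, search range, and frames are illustrative assumptions; the later 2x2 low-frequency and sub-pixel refinement stages are not shown.

```python
# Minimal sketch of the first stage of DCT-domain hierarchical motion estimation:
# block matching on the DC image (one DC value per 8x8 block), i.e. a 1/8-scale
# search. Frames, block size, and search range are illustrative assumptions.
import numpy as np

def dc_image(frame):
    """Mean of each 8x8 block, proportional to its DCT DC coefficient."""
    h, w = frame.shape
    return frame.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

def dc_motion_search(ref_dc, cur_dc, mb=2, search=2):
    """Full-search SAD matching of mb x mb DC blocks (a 16x16-pel macroblock)."""
    vectors = {}
    H, W = cur_dc.shape
    for by in range(0, H - mb + 1, mb):
        for bx in range(0, W - mb + 1, mb):
            block = cur_dc[by:by + mb, bx:bx + mb]
            best = (0, 0, np.inf)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - mb and 0 <= x <= W - mb:
                        sad = np.abs(ref_dc[y:y + mb, x:x + mb] - block).sum()
                        if sad < best[2]:
                            best = (dy, dx, sad)
            # A DC-image displacement times 8 approximates the pixel-domain vector.
            vectors[(by // mb, bx // mb)] = (best[0] * 8, best[1] * 8)
    return vectors

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(8, -16), axis=(0, 1))     # synthetic global shift
mv = dc_motion_search(dc_image(ref), dc_image(cur))
print(mv[(1, 1)])   # -> (-8, 16): the matching reference block lies 8 rows up, 16 columns right
```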


Multi-View Video System using Single Encoder and Decoder (단일 엔코더 및 디코더를 이용하는 다시점 비디오 시스템)

  • Kim Hak-Soo;Kim Yoon;Kim Man-Bae
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.116-129 / 2006
  • The progress of data transmission technology over the Internet has spread a variety of realistic contents. One such content is multi-view video, which is acquired from multiple camera sensors. In general, multi-view video processing requires as many encoders and decoders as there are cameras, and this processing complexity makes practical implementation difficult. To solve this problem, this paper considers a simple multi-view system using a single encoder and a single decoder. On the encoder side, the input multi-view YUV sequences are combined in GOP units by a video mixer. The mixed sequence is then compressed by a single H.264/AVC encoder. The decoding side is composed of a single decoder and a scheduler controlling the decoding process. The goal of the scheduler is to assign an approximately identical number of decoded frames to each view sequence by estimating the decoder utilization of a GOP and then applying frame-skip algorithms. Furthermore, for the frame skip, efficient frame selection algorithms are studied for the H.264/AVC baseline and main profiles based on a cost function related to perceived video quality. The proposed method has been tested on various multi-view test sequences adopted by MPEG 3DAV. Experimental results show that an approximately identical decoder utilization is achieved for each view sequence, so that each view sequence is displayed fairly. In addition, the performance of the proposed method is examined in terms of bit rate and PSNR using a rate-distortion curve.
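
As a rough illustration of the scheduling idea, the sketch below allocates a fixed per-GOP decoding budget across views and skips the lowest-priority frames (non-reference B frames first, then P frames), handing the remaining slots to the views that have the fewest decoded frames so the counts stay approximately equal. The per-GOP budget, GOP structure, and priority rule are assumptions for illustration; the paper's quality-based cost function is not reproduced.

```python
# Minimal sketch of a single-decoder scheduler that balances decoded-frame
# counts across views. The per-GOP budget stands in for the estimated decoder
# utilization; I frames are always decoded, and leftover budget goes to the
# views that are furthest behind, preferring P frames over skippable B frames.
from collections import defaultdict

GOP = ["I", "B", "B", "P", "B", "B", "P", "B", "B", "P", "B", "B"]  # assumed structure
PRIORITY = {"I": 0, "P": 1, "B": 2}            # lower value = decode first

def schedule_gop(n_views, budget, decoded):
    """Choose which (view, frame) pairs to decode for one mixed GOP."""
    plan = []
    for v in range(n_views):                   # I frames are anchors: decode for every view
        plan.append((v, 0))
        decoded[v] += 1
    budget -= n_views
    # Remaining frames, easiest-to-skip last: sorted by type priority.
    candidates = sorted((PRIORITY[t], i) for i, t in enumerate(GOP) if t != "I")
    for prio, frame in candidates:
        # Give each remaining slot to the views with the fewest decoded frames so far.
        for v in sorted(range(n_views), key=lambda v: decoded[v]):
            if budget == 0:
                return plan
            plan.append((v, frame))
            decoded[v] += 1
            budget -= 1
    return plan

decoded = defaultdict(int)                     # decoded-frame count per view
for gop_index in range(3):                     # 4 views, budget of 30 out of 48 frames per GOP
    plan = schedule_gop(n_views=4, budget=30, decoded=decoded)
    print(f"GOP {gop_index}: decoded {len(plan)} of {4 * len(GOP)} frames")
print("frames decoded per view:", dict(decoded))   # counts stay within one of each other
```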