• Title/Abstract/Keyword: Conventional combine

Search Results: 202

A Design of Wideband Monopulse Comparator for W-Band mm-Wave Seeker Applications (W-대역 밀리미터파 탐색기용 광대역 모노펄스 비교기 설계)

  • Kim, Dong-Yeon;Lim, Youngjoon;Jung, Chae-Hyun;Park, Chang-Hyun;Nam, Sangwook
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.27 no.2
    • /
    • pp.224-227
    • /
    • 2016
  • This paper proposes a design of a W-band millimeter-wave wideband monopulse comparator using a waveguide structure for seeker applications. The main idea of the proposed design is to combine a self-compensating phase shifter with a $90^{\circ}$ hybrid to realize a wideband $180^{\circ}$ hybrid. Using multiple conventional phase shifters tends to restrict the working bandwidth of the system, including the antennas, because of their narrow-band characteristics. The proposed comparator relieves this problem by applying the self-compensating phase shifter. Because the comparator is built in a waveguide structure, it shows excellent loss characteristics, and it also shows wideband amplitude and phase responses between ports.
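
The core idea in this abstract, that a $90^{\circ}$ hybrid preceded by an extra $90^{\circ}$ phase shift behaves as a $180^{\circ}$ (sum/difference) hybrid, can be checked with a small numerical sketch. The ideal, frequency-flat scattering model below is an assumption for illustration only; the paper's waveguide implementation and self-compensating phase-shifter design are not reproduced.

```python
import numpy as np

def quadrature_hybrid(a1, a2):
    """Ideal lossless 90-degree hybrid (one common sign convention)."""
    b1 = (a1 + 1j * a2) / np.sqrt(2)
    b2 = (1j * a1 + a2) / np.sqrt(2)
    return b1, b2

def hybrid_180_from_90(a1, a2):
    """Emulate a 180-degree hybrid by pre-shifting one input by -90 degrees,
    the role played by the (self-compensating) phase shifter in the paper."""
    shift = np.exp(-1j * np.pi / 2)          # ideal, frequency-flat -90 degrees
    return quadrature_hybrid(a1, shift * a2)

# Two unit-amplitude antenna signals with a small relative phase offset
a1, a2 = 1.0, np.exp(1j * np.deg2rad(10.0))
sum_port, diff_port = hybrid_180_from_90(a1, a2)

print("sum  |S| =", abs(sum_port))           # equals |a1 + a2| / sqrt(2)
print("diff |D| =", abs(diff_port))          # equals |a1 - a2| / sqrt(2)
```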

Extreme value modeling of structural load effects with non-identical distribution using clustering

  • Zhou, Junyong;Ruan, Xin;Shi, Xuefei;Pan, Chudong
    • Structural Engineering and Mechanics
    • /
    • v.74 no.1
    • /
    • pp.55-67
    • /
    • 2020
  • The common practice to predict the characteristic structural load effects (LEs) in long reference periods is to employ extreme value theory (EVT) to build limit distributions. However, most applications ignore that LEs are driven by multiple loading events and thus do not share an identical distribution, a prerequisite for EVT. In this study, we propose a composite extreme value modeling approach using clustering to (a) cluster the initially blended samples into finitely many identically distributed subsamples using the finite mixture model, the expectation-maximization algorithm, and the Akaike information criterion; and (b) combine the limit distributions of the subsamples into a composite prediction equation using the generalized Pareto distribution based on a joint threshold. The proposed approach was validated both through numerical examples with known solutions and through engineering applications of bridge traffic LEs on a long-span bridge. The results indicate that a joint threshold largely benefits the composite extreme value modeling, that many appropriate tail-approaching models can be used, and that the equation form is simply the sum of the weighted models. In the numerical examples, the proposed approach using clustering generated accurate extrema predictions for any reference period compared with the known solutions, whereas the common practice of employing EVT without clustering on the mixture data showed large deviations. Real-world bridge traffic LEs are driven by multiple events and present multipeak distributions, and the proposed approach is more capable of capturing the tendency of the tailed LEs than the conventional approach. The proposed approach is expected to have wide applications to general problems in which samples are driven by multiple events and do not share an identical distribution.
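
A minimal sketch of the clustering-plus-composite-tail idea described in this abstract, assuming a one-dimensional Gaussian mixture as the finite mixture model, the 95th percentile as the joint threshold, and synthetic two-event data; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import genpareto
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "load effects" driven by two loading events -> non-identical mixture
x = np.concatenate([rng.gumbel(10, 2, 4000), rng.gumbel(18, 3, 1000)])
X = x.reshape(-1, 1)

# (a) cluster into identically distributed subsamples: finite mixture + EM, AIC model choice
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 5)]
best = min(models, key=lambda m: m.aic(X))
labels = best.predict(X)

# (b) joint threshold and one GPD tail per subsample, combined as a weighted sum
u = np.quantile(x, 0.95)                       # joint threshold (assumed level)

def composite_sf(q):
    """Composite exceedance probability P(X > q), for q >= u."""
    total = 0.0
    for k in range(best.n_components):
        xk = x[labels == k]
        w_k = len(xk) / len(x)                 # subsample weight
        exc = xk[xk > u] - u                   # exceedances over the joint threshold
        if len(exc) < 10:
            continue                           # too few points to fit a tail
        c, _, scale = genpareto.fit(exc, floc=0.0)
        total += w_k * (len(exc) / len(xk)) * genpareto.sf(q - u, c, scale=scale)
    return total

q = np.quantile(x, 0.999)
print("empirical P(X>q):", np.mean(x > q))
print("composite P(X>q):", composite_sf(q))
```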

Speaker Recognition Using Dynamic Time Variation of Orthogonal Parameters (직교인자의 동적 특성을 이용한 화자인식)

  • 배철수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.9
    • /
    • pp.993-1000
    • /
    • 1992
  • Recently, many researchers have found that the speaker recognition rate is high when speaker recognition is performed using statistical processing of orthogonal parameters, which are derived from the analysis of the speech signal and contain much of the speaker's identity. This method, however, has problems caused by vocalization speed and the time-varying features of speech. To solve these problems, this paper proposes two speaker recognition methods that combine the DTW algorithm with orthogonal parameters extracted by the $Karhunen-Lo\'{e}ve$ transform: one applies the orthogonal parameters as feature vectors to the DTW algorithm, and the other applies the orthogonal parameters along the optimal path. In addition, we compare the speaker recognition rates obtained from the two proposed methods with that of the conventional statistical processing of orthogonal parameters. The orthogonal parameters used in this paper are derived from both the linear prediction coefficients and the partial correlation coefficients of the speech signal. (A minimal sketch of the DTW alignment step follows this entry.)

  • PDF
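
Below is a minimal sketch of the DTW alignment step referenced in the entry above. The orthogonal parameters ($Karhunen-Lo\'{e}ve$ transform of LPC/PARCOR coefficients) are replaced by arbitrary feature vectors; only the time-warping distance is illustrated.

```python
import numpy as np

def dtw_distance(ref, test):
    """Dynamic time warping distance between two feature-vector sequences."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])    # local distance
            cost[i, j] = d + min(cost[i - 1, j],             # insertion
                                 cost[i, j - 1],             # deletion
                                 cost[i - 1, j - 1])         # match
    return cost[n, m] / (n + m)                              # length-normalized

rng = np.random.default_rng(1)
ref_seq  = rng.normal(size=(40, 8))     # e.g. 40 frames of 8 orthogonal parameters
test_seq = rng.normal(size=(55, 8))     # same utterance spoken at a different rate
print("DTW distance:", dtw_distance(ref_seq, test_seq))
```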

Inverse Halftoning of Digital Color Image using Look-Up Table and Vector Adaptive Filter (참조표와 벡터적응필터를 이용한 디지털 컬러영상의 역하프토닝)

  • Kim, Chan-Su;Yi, Tai-Hong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.1C
    • /
    • pp.72-80
    • /
    • 2009
  • Look-up table (LUT) based inverse halftoning of digital color halftone images is proposed in this paper, which uses a vector adaptive filter for patterns that do not exist in the table. A halftone image is obtained from a continuous-tone image; the digital binary image can be restored to a continuous-tone one by an inverse halftoning method. LUT-based methods are usually fast and perform evenly over various halftoning schemes. The number of pixels in the table pattern and the way the table elements are defined for each of the R, G, and B channels largely affect the performance. The proposed method uses 16-pixel patterns in the table, considering both the diversity of expressible patterns and the memory size, and it also proposes how to combine the R, G, and B channels into one. Experimental results showed better color expression, better color restoration, and shorter processing time compared with conventional methods.
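
A minimal grayscale sketch of the look-up-table idea with a fallback for missing patterns. The 4x4 (16-pixel) neighborhood, the fixed output offset, and the plain local mean standing in for the paper's vector adaptive filter are assumptions for illustration; the R/G/B channel combination is omitted.

```python
import numpy as np
from collections import defaultdict

def pattern_key(block):
    """Pack a 16-pixel binary pattern into an integer table key."""
    return int("".join("1" if v else "0" for v in block.ravel()), 2)

def build_lut(halftone, contone):
    """Train the table from a binary halftone and its continuous-tone original."""
    sums, counts = defaultdict(float), defaultdict(int)
    H, W = halftone.shape
    for r in range(H - 3):
        for c in range(W - 3):
            k = pattern_key(halftone[r:r+4, c:c+4])
            sums[k] += contone[r+1, c+1]      # continuous value at a fixed offset
            counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}

def inverse_halftone(halftone, lut):
    H, W = halftone.shape
    out = np.zeros((H, W))
    pad = np.pad(halftone.astype(float), 2, mode="edge")
    for r in range(H - 3):
        for c in range(W - 3):
            k = pattern_key(halftone[r:r+4, c:c+4])
            if k in lut:
                out[r+1, c+1] = lut[k]                         # table hit
            else:                                              # missing pattern:
                out[r+1, c+1] = 255.0 * pad[r+1:r+6, c+1:c+6].mean()  # fallback filter
    return out
```

In the paper the table is trained on halftone/original image pairs and the vector adaptive filter treats the color channels jointly; the scalar mean above only keeps the sketch short.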

A Slice Information Based Labeling Algorithm for 3-D Volume Data (Slice 정보에 기반한 3차원 볼륨 데이터의 레이블링 알고리즘)

  • 최익환;최현주;이병일;최흥국
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.7
    • /
    • pp.922-928
    • /
    • 2004
  • We propose a new 3-dimensional labeling method for volume data based on slice information, named SIL (Slice Information based Labeling). Compared to conventional algorithms, it uses memory efficiently and can be combined with a variety of 2-dimensional labeling algorithms, so an appropriate labeling algorithm can be chosen for each application. In this study, we applied SIL to confocal microscopy images of cervix cancer cells and compared the labeling results. According to the measurements, the speed of SIL combined with CCCL (Contour based Connected Component Labeling) is almost 2 times higher than that of the other methods. In conclusion, considering that labeling performance depends on the kind of image, the proposed method provides better results for confocal microscopy cell volume data.
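
A minimal sketch of slice-wise 3-D labeling in the spirit of SIL: any 2-D labeling routine (here scipy.ndimage.label, an assumed stand-in for CCCL) labels each slice, and a union-find pass merges labels that overlap between neighboring slices.

```python
import numpy as np
from scipy import ndimage

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def label_3d_by_slices(volume):
    """volume: binary array (slices, rows, cols) -> integer labels of 3-D components."""
    labels = np.zeros(volume.shape, dtype=int)
    offset = 0
    for z in range(volume.shape[0]):
        lab2d, n = ndimage.label(volume[z])           # per-slice 2-D labeling
        labels[z] = np.where(lab2d > 0, lab2d + offset, 0)
        offset += n
    parent = {l: l for l in range(1, offset + 1)}
    for z in range(volume.shape[0] - 1):              # merge labels across adjacent slices
        overlap = (labels[z] > 0) & (labels[z + 1] > 0)
        for a, b in set(zip(labels[z][overlap], labels[z + 1][overlap])):
            union(parent, int(a), int(b))
    roots = {l: find(parent, l) for l in parent}
    return np.vectorize(lambda l: roots.get(l, 0))(labels)

vol = np.zeros((3, 5, 5), dtype=int)
vol[:, 1:3, 1:3] = 1                                   # one component spanning all slices
vol[2, 4, 4] = 1                                       # a separate single-voxel blob
print(np.unique(label_3d_by_slices(vol)))              # [0 1 4]: background plus two objects
```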

Face Detection Based on Incremental Learning from Very Large Size Training Data (대용량 훈련 데이타의 점진적 학습에 기반한 얼굴 검출 방법)

  • 박지영;이준호
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.7
    • /
    • pp.949-958
    • /
    • 2004
  • Face detection using a boosting-based algorithm requires a very large amount of face and non-face data. In addition, the fact that additional training data are always needed for better detection rates demands an efficient incremental learning algorithm. In the design of incremental-learning-based classifiers, the final classifier should represent the characteristics of the entire training dataset. Conventional methods have a critical problem in combining intermediate classifiers: weight updates depend solely on the performance on the individual dataset. In this paper, for the purpose of application to face detection, we present a new method to combine an intermediate classifier with previously acquired ones in an optimal manner. Our algorithm creates a validation set by incrementally adding sampled instances from each dataset to represent the entire training data. The weight of each classifier is determined based on its performance on the validation set. This approach guarantees that the resulting final classifier is learned from the entire training dataset. Experimental results show that the classifier trained by the proposed algorithm performs better than those trained by AdaBoost, which operates in batch mode, as well as by ${Learn}^{++}$.
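
A minimal sketch (not the paper's boosted face detector) of the combination rule described above: each arriving dataset trains an intermediate classifier and contributes sampled instances to a shared validation set, and classifier weights are taken from accuracy on that validation set. The base learner and sampling ratio are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
chunks = np.array_split(np.arange(len(X)), 3)          # three incrementally arriving datasets

classifiers, weights = [], []
val_X, val_y = [], []
for idx in chunks:
    sample = rng.choice(idx, size=len(idx) // 5, replace=False)   # held out into validation set
    train = np.setdiff1d(idx, sample)
    val_X.append(X[sample]); val_y.append(y[sample])
    vX, vy = np.concatenate(val_X), np.concatenate(val_y)

    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X[train], y[train])
    classifiers.append(clf)
    # re-weight every intermediate classifier by its accuracy on the growing validation set
    weights = [c.score(vX, vy) for c in classifiers]

def predict(x):
    votes = np.zeros(2)
    for w, c in zip(weights, classifiers):
        votes[c.predict(x.reshape(1, -1))[0]] += w      # accuracy-weighted vote
    return int(np.argmax(votes))

print("combined prediction:", predict(X[0]), " true label:", int(y[0]))
```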

The Proposal and Performance Analysis for the Detection Scheme of D-STTD using Iterative Algorithm (반복 알고리즘을 적용한 D-STTD 시스템의 검출 기법 제안 및 성능 분석)

  • Yoon, Gil-Sang;Lee, Jeong-Hwan;You, Cheol-Woo;Hwang, In-Tae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.9A
    • /
    • pp.917-923
    • /
    • 2008
  • The D-STTD system obtains diversity gain through the STTD scheme, known as the Alamouti code, and multiplexing gain through the parallel structure of STTD encoders. The combining scheme of STTD is difficult to use for D-STTD detection in the decoder because, to obtain multiplexing gain, the D-STTD system transmits mutually different data from each STTD encoder. Therefore, in this paper we combine the D-STTD system with the linear, SIC, and OSIC algorithms, which are known multiplexing detection schemes based on the MMSE criterion, and compare the performance of each system. We also propose a D-STTD detection scheme using the MAP algorithm and analyze the performance of each system. The simulation results show that the detectors using iterative algorithms perform better than the linear MMSE detector. In particular, the detector using the MAP algorithm outperforms the conventional detectors.
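
A minimal sketch of the linear MMSE detection baseline mentioned above, applied to a generic four-stream spatial-multiplexing channel; the D-STTD effective-channel stacking and the SIC/OSIC/MAP variants are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, snr_db = 4, 4, 15
sigma2 = 10 ** (-snr_db / 10)              # noise variance for unit-power symbols

# QPSK symbols from the four streams (two STTD pairs in the D-STTD setting)
bits = rng.integers(0, 2, size=(2, n_tx))
s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# Rayleigh fading channel and complex Gaussian noise
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
y = H @ s + n

# MMSE filter: W = (H^H H + sigma^2 I)^(-1) H^H
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
s_hat = W @ y
detected = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
print("symbol errors:", int(np.sum(detected != s)))
```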

Image warping using an adaptive partial matching method (적응적 부분 정합 방법을 이용한 영상 비틀림 방법)

  • 임동근;호요성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.12
    • /
    • pp.2783-2797
    • /
    • 1997
  • This paper proposes a new motion estimation algorithm that employs matching in a variable search area. Instead of using a fixed search range for coarse motion estimation, we examine a varying search range, which is determined adaptively by the peak signal-to-noise ratio (PSNR) of the frame difference. The hexagonal matching method is one of the refined methods in image warping; it produces improved image quality, but it requires a large amount of computation. The proposed adaptive partial matching method reduces the computational complexity to below about 50% of that of the hexagonal matching method, while maintaining comparable image quality. The performance of two motion compensation methods, which combine the affine or bilinear transformation with the proposed motion estimation algorithm, is evaluated based on the following criteria: computational complexity, number of coding bits, and reconstructed image quality. The quality of images reconstructed by the proposed method is substantially improved relative to the conventional BMA method and is comparable to the full hexagonal matching method; in addition, the computational complexity and the number of coding bits are reduced significantly. (A minimal sketch of the adaptive search-range selection follows this entry.)

  • PDF
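
Below is a minimal sketch of the adaptive search-range idea from the entry above: the PSNR of the frame difference selects the coarse block-matching range. The PSNR thresholds are assumptions, and the hexagonal refinement and affine/bilinear warping steps are omitted.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def search_range_from_psnr(p):
    if p > 38: return 4          # nearly static frame -> small search window
    if p > 30: return 8
    return 16                    # large frame difference -> wide search window

def block_match(prev, curr, r0, c0, block=16, rng_px=8):
    """Full search of one block of `curr` inside `prev` within +/- rng_px pixels."""
    tpl = curr[r0:r0+block, c0:c0+block].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dr in range(-rng_px, rng_px + 1):
        for dc in range(-rng_px, rng_px + 1):
            rr, cc = r0 + dr, c0 + dc
            if rr < 0 or cc < 0 or rr+block > prev.shape[0] or cc+block > prev.shape[1]:
                continue
            sad = np.abs(prev[rr:rr+block, cc:cc+block].astype(float) - tpl).sum()
            if sad < best:
                best, best_mv = sad, (dr, dc)
    return best_mv

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(64, 64))
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))       # a simple global shift
rng_px = search_range_from_psnr(psnr(prev, curr))
print("search range:", rng_px, " motion vector:", block_match(prev, curr, 24, 24, rng_px=rng_px))
```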

Outage Analysis and Optimal Power allocation for Network-coding-based Hybrid AF and DF (네트워크 코딩 기반의 협력통신에서 Hybrid AF and DF 방식의 아웃티지 성능 분석 및 최적 파워 할당 기법)

  • Bek, Joo-Ha;Lee, Dong-Hoon;Lee, Jae-Young;Heo, Jun
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.95-107
    • /
    • 2012
  • Network coding was proposed to increase the achievable throughput of multicast in a network. Recently, combining network coding with user cooperation has attracted research attention. In cooperative transmission schemes with network coding, users combine their own and their partners' messages by network coding. Previous works showed that adaptive DF with network coding can achieve diversity gain and additional throughput gain. In this paper, to improve the performance of conventional protocols and maximize the advantage of using network coding, we propose a new network-coding-based user cooperation scheme that adaptively uses amplify-and-forward and decode-and-forward according to the interuser channel status. We derive an outage probability bound for the proposed scheme and prove that it achieves full diversity order in the high-SNR regime. Moreover, based on the outage bound, we compute the optimal power allocation for the proposed scheme.
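
A minimal Monte Carlo sketch of the adaptive AF/DF selection rule described above, for a single source-relay-destination link with maximal-ratio combining at the destination. Rayleigh fading, equal power, and the rate threshold are assumptions; the network-coded two-user exchange and the paper's analytical outage bound are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 10 ** (15 / 10)                     # 15 dB average SNR on every link (assumed)
rate, trials = 1.0, 100_000               # target rate in bit/s/Hz, Monte Carlo size

# Rayleigh fading -> exponentially distributed instantaneous SNRs
g_sd, g_sr, g_rd = rng.exponential(1.0, size=(3, trials)) * snr

outages = 0
for sd, sr, rd in zip(g_sd, g_sr, g_rd):
    if 0.5 * np.log2(1 + sr) >= rate:     # interuser link decodable -> decode-and-forward
        combined = sd + rd                # MRC of direct and regenerated copies
    else:                                 # otherwise amplify-and-forward
        combined = sd + sr * rd / (sr + rd + 1.0)
    outages += 0.5 * np.log2(1 + combined) < rate   # half rate for the two-phase protocol
print("estimated outage probability:", outages / trials)
```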

Hardware/Software Co-verification with Integrated Verification (집적검증 기법을 채용한 하드웨어/소프트웨어 동시검증)

  • Lee, Young-Soo;Yang, Se-Yang
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.3
    • /
    • pp.261-267
    • /
    • 2002
  • In SOC (System-On-a-Chip) designs, reducing the time and cost of design verification is most critical to improving design productivity. This is mainly because the designs require co-verifying hardware together with software, which drastically increases verification complexity. In this paper, to cope with the verification crisis in SOC designs, we propose a new verification methodology, so-called integrated co-verification, which combines co-simulation and co-emulation in a unified and seamless way. We have applied our integrated co-verification to an ARM/AMBA platform-based co-verification environment with a commercial co-verification tool, Seamless CVE, and a physical prototyping board. The experiments have shown a clear advantage of the proposed technique over conventional ones.