• Title/Summary/Keyword: input coefficient

Search Results: 1,039

Sign-Extension Overhead Reduction by Propagated-Carry Selection (전파캐리의 선택에 의한 부호확장 오버헤드의 감소)

  • 조경주;김명순;유경주;정진균
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.6C / pp.632-639 / 2002
  • To reduce the area and power consumption of constant-coefficient multiplications, the constant coefficient can be encoded using the canonic signed digit (CSD) representation. When the partial product terms are added according to the nonzero bit (1 or -1) positions in the CSD-encoded multiplier, all sign bits must be properly extended before the addition takes place. In this paper, to reduce the overhead due to sign extension, a new method is proposed based on the fact that carry propagation in the sign-extension part can be controlled so that a desired input bit is propagated as a carry. A fixed-width multiplier design method suitable for CSD multiplication is also proposed. As an application, a 43-tap Hilbert transformer for SSB/BPSK-DS/CDMA is implemented. It is shown that the proposed method saves about 16~28% of the adders compared with conventional methods.
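
As a point of reference for the encoding step this abstract relies on, here is a minimal Python sketch of CSD recoding. It illustrates the representation only, not the authors' carry-propagation or fixed-width multiplier technique; the function name and digit ordering are illustrative choices.

```python
def to_csd(n: int) -> list:
    """Recode a non-negative integer into canonic signed digit (CSD)
    form, returning digits in {-1, 0, 1}, least significant first.
    CSD guarantees no two adjacent digits are nonzero, which minimizes
    the number of partial products in a constant multiplication."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)   # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

# 7 = 8 - 1 -> digits [-1, 0, 0, 1], i.e. +1 0 0 -1 read MSB first
assert sum(d << i for i, d in enumerate(to_csd(7))) == 7
```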

Design of Face Recognition algorithm Using PCA&LDA combined for Data Pre-Processing and Polynomial-based RBF Neural Networks (PCA와 LDA를 결합한 데이터 전 처리와 다항식 기반 RBFNNs을 이용한 얼굴 인식 알고리즘 설계)

  • Oh, Sung-Kwun;Yoo, Sung-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.5 / pp.744-752 / 2012
  • In this study, polynomial-based radial basis function neural networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system consisting of two parts, a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented to solve high-dimensional pattern recognition problems. In the preprocessing part, Principal Component Analysis (PCA), which is widely used in face recognition, expresses the classes with reduced dimensionality, maintaining the recognition rate while reducing the amount of data at the same time. However, because PCA operates on the whole face image, it cannot guarantee the recognition rate under changes of viewpoint. To compensate for this defect, Linear Discriminant Analysis (LDA) is used to enhance the separation between different classes. In this paper, we combine the PCA and LDA algorithms and design optimized pRBFNNs for the recognition module. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, formed as fuzzy rules in 'If-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as two kinds of polynomials, constant and linear, and their coefficients are identified by back-propagation using the gradient descent method. The output of the pRBFNN model is obtained by the fuzzy inference method in the inference part. The essential design parameters of the networks (including learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Differential Evolution. The proposed pRBFNNs are applied to face image datasets (e.g., Yale, AT&T) and evaluated in terms of output performance and recognition rate.
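
A minimal sketch of the PCA-then-LDA preprocessing chain described above, assuming scikit-learn and the Olivetti (AT&T-style) face dataset; the component count and train/test split are illustrative, and the polynomial RBF network itself is not reproduced.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                 # AT&T-style face data
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25,
    stratify=faces.target, random_state=0)

pca = PCA(n_components=50).fit(X_tr)           # dimensionality reduction
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)

# Low-dimensional, class-separated features fed to the recognizer
Z_tr = lda.transform(pca.transform(X_tr))
Z_te = lda.transform(pca.transform(X_te))
print(Z_tr.shape, Z_te.shape)
```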

Numerical Analysis on the Determination of Hydraulic Characteristics of Rubble Mound Breakwater (경사식 방파제의 수리특성 결정을 위한 수치해석)

  • 박현주;전인식;이달수
    • Journal of Korean Society of Coastal and Ocean Engineers / v.14 no.1 / pp.19-33 / 2002
  • A numerical method to efficiently obtain the design information needed on the hydraulic characteristics of a rubble mound breakwater is attempted here. The method couples the exterior wave field with the interior wave field, which is formulated by incorporating porous media flow inside the breakwater. An approximate method based on the long-wave assumption is used for the exterior wave field, while a boundary element method is used for the interior wave field. A hydraulic experiment was also performed to verify the validity of the numerical analysis. The numerical results were compared with experimental data and with results from existing formulae; they generally agreed in both reflection and transmission coefficients. The calculated pore pressures also showed a pattern similar to the experimental data, even though they differed significantly in magnitude for some cases. The main cause of such differences can be attributed to the strongly nonlinear wave field occurring on the frontal slope of the breakwater. Direct input of the dynamic pressures measured in the hydraulic experiment into the numerical method is suggested as a promising way to enhance the predictability of pore pressures.
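
For reference, the bulk reflection and transmission coefficients compared in this study are conventionally defined from wave heights as Kr = Hr/Hi and Kt = Ht/Hi; the sketch below shows only this arithmetic with made-up values, not the paper's coupled long-wave/boundary-element analysis.

```python
# Illustrative wave heights only (metres); not measured data.
H_incident, H_reflected, H_transmitted = 1.00, 0.42, 0.31

Kr = H_reflected / H_incident      # reflection coefficient
Kt = H_transmitted / H_incident    # transmission coefficient
print(f"Kr = {Kr:.2f}, Kt = {Kt:.2f}")
```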

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han;Kim, Eun-Hu;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1772-1781 / 2017
  • In this study, a design methodology for alleviating the overfitting problem of polynomial neural networks (PNNs) is realized with the aid of two techniques, L2 regularization and Sum of Squared Coefficients (SSC). The PNN is widely used as a mathematical modeling method, e.g., for the identification of linear systems from input/output data and for regression-based prediction. PNN is an algorithm that obtains a preferred network structure by generating consecutive layers and nodes from a multivariate polynomial subexpression. It has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting due to noise sensitivity as well as excessive training during the generation of successive network layers. To alleviate this overfitting problem and to effectively design the ensuing deep network structure, two techniques are introduced: both SSC and $L_2$ regularization are applied to the consecutive generation of each layer's nodes as well as each layer, in order to construct the deep PNN structure. $L_2$ regularization is used for minimum coefficient estimation by adding a penalty term to the cost function; it is a representative method of reducing the influence of noise by flattening the solution space and shrinking the coefficient size. The SSC technique minimizes the sum of squared polynomial coefficients instead of using only the squared errors. As a result, the overfitting problem of the deep PNN structure is stabilized by the proposed method. This study opens up the possibility of deep network structure design and big-data processing, and the superiority of the network performance is shown through experiments.
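
A minimal sketch of the $L_2$-regularized coefficient estimation described above: ridge regression adds a penalty on the coefficient norm to the squared-error cost and solves for a polynomial node's coefficients in closed form. The data and penalty strength are illustrative, and the PNN layer-generation logic is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 0.5 * x**2 - x + 0.1 * rng.standard_normal(200)   # noisy toy target

# Design matrix for a quadratic polynomial node: [1, x, x^2]
X = np.column_stack([np.ones_like(x), x, x**2])
lam = 1e-2                                            # penalty strength

# Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("estimated coefficients:", w)
```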

Image Compression using Validity and Zero Coefficients by DCT(Discrete Cosine Transform) (DCT에서 유효계수와 Zero계수를 이용한 영상 압축)

  • Kim, Jang Won;Han, Sang Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.1 no.3 / pp.97-103 / 2008
  • In this paper, a $256{\times}256$ input image is partitioned into $8{\times}8$ blocks, each classified as either a validity block or an edge block, for image compression. For a validity block, the DCT (Discrete Cosine Transform) is executed only for the DC coefficient, which is the valid coefficient. For an edge block, the positions where the quantized coefficients become zero are predicted, and a new algorithm is proposed that executes the DCT only over the reduced region. The proposed algorithm not only reduces the computational complexity of the forward DCT (FDCT) and inverse DCT (IDCT) but also decreases encoding and decoding time. The compression ratio is further increased by applying, during Huffman encoding, a horizontal or vertical zigzag scan matched to the size and classification of each block; this scan, suited to the characteristics of each classified block, shortens the run lengths and thereby improves the compression ratio.
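
A hedged sketch of the $8{\times}8$ block DCT round trip that the scheme above builds on, using SciPy; block classification, zero-coefficient prediction, and the adapted zigzag scans are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic 8x8 pixel block (values 0..255)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

coeffs = dctn(block - 128, norm='ortho')     # forward 2-D DCT (level shift)
dc = coeffs[0, 0]                            # DC coefficient only
recon = idctn(coeffs, norm='ortho') + 128    # inverse DCT round trip

assert np.allclose(recon, block)
print("DC coefficient:", round(dc, 2))
```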


Implementation of Real-time Vowel Recognition Mouse based on Smartphone (스마트폰 기반의 실시간 모음 인식 마우스 구현)

  • Jang, Taeung;Kim, Hyeonyong;Kim, Byeongman;Chung, Hae
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.531-536 / 2015
  • Speech recognition is an active research area in the human-computer interface (HCI) field. The objective of this study is to control digital devices by voice. The mouse, in turn, is a widely used peripheral in graphical user interface (GUI) computing environments. In this paper, we propose a method of controlling the mouse with the real-time speech recognition function of a smartphone. The processing steps are: extracting the core voice signal from a voice input of suitable length received in real time, extracting mel-frequency cepstral coefficient (MFCC) features and quantizing them with a learned codebook, and finally recognizing the corresponding vowel using a hidden Markov model (HMM). A virtual mouse is then operated by mapping each vowel to a mouse command. Finally, we show various mouse operations on a desktop PC display with the implemented smartphone application.
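
A minimal sketch of the MFCC feature-extraction step described above, assuming the librosa library and a hypothetical recording; codebook quantization and the HMM decoder are omitted.

```python
import librosa

# "vowel.wav" is a hypothetical short utterance, not the authors' data.
signal, sr = librosa.load("vowel.wav", sr=16000)

# 13 mel-frequency cepstral coefficients per analysis frame
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)    # (13, n_frames)
```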

Investigation of gamma radiation shielding capability of two clay materials

  • Olukotun, S.F.;Gbenu, S.T.;Ibitoye, F.I.;Oladejo, O.F.;Shittu, H.O.;Fasasi, M.K.;Balogun, F.A.
    • Nuclear Engineering and Technology / v.50 no.6 / pp.957-962 / 2018
  • The gamma radiation shielding capability (GRSC) of two clay materials (ball clay and kaolin) from Southwestern Nigeria ($7.49^{\circ}N$, $4.55^{\circ}E$) has been investigated by determining, theoretically and experimentally, the mass attenuation coefficient ${\mu}/{\rho}(cm^2g^{-1})$ of the clay materials at photon energies of 609.31, 1120.29, 1173.20, 1238.11, 1332.50, and 1764.49 keV emitted from a $^{214}Bi$ ore and a $^{60}Co$ point source. The mass attenuation coefficients were evaluated theoretically using the elemental compositions of the clay materials, obtained by the Particle-Induced X-ray Emission (PIXE) elemental analysis technique, as input data for the WinXCom software, while a gamma-ray transmission experiment using a hyper-pure germanium (HPGe) spectrometer was used to determine the mass attenuation coefficients of the samples experimentally. The experimental results are in good agreement with the theoretical calculations of WinXCom. The linear attenuation coefficient (${\mu}$), half value layer (HVL), and mean free path (MFP) were also evaluated from the obtained ${\mu}/{\rho}$ values for the investigated samples. The GRSC of the selected clay materials is compared with that of other studied shielding materials. Factors such as availability, thermo-chemical stability, and the water-retaining ability of the clay samples can further be weighed when assessing the efficacy of the materials for shielding.
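
For reference, the derived quantities listed above follow from the Beer-Lambert law, $I = I_0 e^{-\mu t}$; the sketch below uses made-up counts and sample properties, not the paper's measured data.

```python
import math

I0, I = 10000.0, 6200.0      # counts without / with sample (made up)
thickness = 2.0              # sample thickness, cm (made up)
density = 1.8                # sample density, g/cm^3 (made up)

mu = math.log(I0 / I) / thickness   # linear attenuation coefficient, 1/cm
mu_over_rho = mu / density          # mass attenuation coefficient, cm^2/g
hvl = math.log(2) / mu              # half value layer, cm
mfp = 1.0 / mu                      # mean free path, cm
print(f"mu/rho = {mu_over_rho:.4f} cm^2/g, "
      f"HVL = {hvl:.2f} cm, MFP = {mfp:.2f} cm")
```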

Object-based Image Classification by Integrating Multiple Classes in Hue Channel Images (Hue 채널 영상의 다중 클래스 결합을 이용한 객체 기반 영상 분류)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.2011-2025 / 2021
  • In high-resolution satellite image classification, when the color values of pixels belonging to one class vary, as with buildings of various colors, it is difficult to determine the color information that represents the class. In this paper, to solve this problem of determining representative class color information, we propose a method that divides the hue channel of the HSV (Hue, Saturation, Value) color space and performs object-based classification. To this end, after transforming the input image from the RGB color space into the components of the HSV color space, the hue component is divided into subchannels at regular intervals. Minimum-distance image classification is performed for each hue subchannel, and the classification result is combined with the image segmentation result. When the proposed method was applied to KOMPSAT-3A imagery, the overall accuracy was 84.97% and the kappa coefficient was 77.56%, an improvement in classification accuracy of more than 10% compared to commercial software.
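
A hedged sketch of the RGB-to-HSV conversion and regular-interval hue binning described above, assuming OpenCV and a hypothetical image file; segmentation and minimum-distance classification are omitted, and the subchannel count is an illustrative choice.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")                  # hypothetical image (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]                             # OpenCV hue range: 0..179

n_sub = 6                                      # number of hue subchannels
sub_idx = (hue.astype(int) * n_sub) // 180     # regular-interval binning
masks = [(sub_idx == k) for k in range(n_sub)] # one mask per subchannel
print([int(m.sum()) for m in masks])           # pixel count per subchannel
```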

A Study on Selecting Principle Component Variables Using Adaptive Correlation (적응적 상관도를 이용한 주성분 변수 선정에 관한 연구)

  • Ko, Myung-Sook
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.79-84 / 2021
  • A feature extraction method that reflects the features well while maintaining the properties of the data is required in order to process high-dimensional data. Principal component analysis, which converts high-dimensional data into low-dimensional data and expresses them with fewer variables than the original, is a representative feature-extraction method. In this study, we propose a principal component analysis method based on adaptive correlation for selecting the principal component variables used in feature extraction of high-dimensional data. The proposed method analyzes the principal components by adaptively reflecting the correlation between the input variables, so that highly correlated variables can be excluded from the candidate list. The principal-component hierarchy is analyzed through the eigenvector coefficient values to prevent the selection of low-ranked components, and correlation analysis minimizes the duplicated data that induces bias. Through this, we propose a way of selecting principal component variables that represent the characteristics of the actual data well, by reducing the influence of data bias at selection time.
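
A minimal sketch of the correlation-based view of PCA that the proposed method builds on, using NumPy with synthetic data; the adaptive-correlation selection rule itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X[:, 3] = X[:, 0] + 0.05 * rng.standard_normal(100)  # a near-duplicate variable

R = np.corrcoef(X, rowvar=False)          # correlation matrix of the inputs
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]         # rank components high -> low
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Variables with large coefficients in the leading eigenvectors are
# candidate principal component variables; highly correlated pairs
# (like columns 0 and 3 above) show up as duplicated structure in R.
print("explained variance ratio:", eigvals / eigvals.sum())
print("leading eigenvector coefficients:", np.round(eigvecs[:, 0], 2))
```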

Ductility demands of steel frames equipped with self-centring fuses under near-fault earthquake motions considering multiple yielding stages

  • Lu Deng;Min Zhu;Michael C.H. Yam;Ke Ke;Zhongfa Zhou;Zhonghua Liu
    • Structural Engineering and Mechanics / v.86 no.5 / pp.589-605 / 2023
  • This paper investigates the ductility demands of steel frames equipped with self-centring fuses under near-fault earthquake motions, considering multiple yielding stages. The study commences by validating a trilinear self-centring hysteretic model that accounts for the multiple yielding stages of steel frames equipped with self-centring fuses. The seismic response of single-degree-of-freedom (SDOF) systems following the validated trilinear self-centring hysteretic law is then examined in a parametric study using a near-fault ground motion database of 200 earthquake records as input excitations. Based on a statistical investigation of more than 52 million inelastic spectral analyses, the effects of the post-yield stiffness ratios, the energy dissipation coefficient, and the yielding displacement ratio on the mean ductility demand of the system are examined in detail. The results indicate that increasing the post-yield stiffness ratios, energy dissipation coefficient, and yielding displacement ratio reduces the ductility demands of self-centring oscillators responding in multiple yielding stages. A set of empirical expressions quantifying the ductility demands of trilinear self-centring hysteretic oscillators is developed through nonlinear regression analysis of the result database. The proposed regression model may offer designers a practical tool for estimating, in design and evaluation, the ductility demand of a low-to-medium-rise steel frame with self-centring fuses progressing into the ultimate stage under near-fault earthquake motions.
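
For reference, the ductility demand studied here is conventionally the peak displacement normalized by the yield displacement, $\mu = \max|u(t)|/u_y$; the sketch below shows only this measure on a synthetic displacement history, not the trilinear self-centring hysteresis or the ground-motion integration.

```python
import numpy as np

# Synthetic decaying displacement history (metres); not analysis output.
t = np.linspace(0.0, 10.0, 2001)
u = 0.03 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.15 * t)
u_yield = 0.01                       # assumed yield displacement, metres

ductility_demand = np.max(np.abs(u)) / u_yield
print(f"ductility demand mu = {ductility_demand:.2f}")
```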