• Title/Summary/Keyword: Probability Vector

Search Results: 284

The Gentan Probability, A Model for the Improvement of the Normal Wood Concept and for the Forest Planning

  • Suzuki, Tasiti
    • Journal of Korean Society of Forest Science
    • /
    • v.67 no.1
    • /
    • pp.52-59
    • /
    • 1984
  • A Gentan probability q(j) is the probability that a newly planted forest will be felled at age-class j. Future changes in the growing stock and yield of the forests can be predicted by means of this probability. The state of the forests, on the other hand, is described by an n-vector whose components are the areas of each age-class. This vector, called the age-class vector, flows in an (n-1)-dimensional simplex under the action of n×n matrices whose components are the age-class transition probabilities derived from the Gentan probabilities. In the simplex there exists a fixed point, into which an arbitrary forest age vector sinks. Theoretically this point represents a normal state of the forest. To each age-class transition matrix there corresponds a single normal state; since there are infinitely many such matrices, there are infinitely many normal states of the forests.
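The fixed-point behavior described in the abstract can be illustrated with a small numerical sketch. The Gentan probabilities, the way the transition matrix is built from them, and all numbers below are hypothetical; the point is that the matrix is column-stochastic, so repeated application drives any age-class vector to its normal state.

```python
import numpy as np

# Hypothetical Gentan probabilities q(j): probability that a newly planted
# stand is felled at age-class j (must sum to 1).
q = np.array([0.05, 0.10, 0.25, 0.35, 0.25])
n = len(q)

# Conditional felling probability h(j) = q(j) / (q(j) + q(j+1) + ... + q(n)):
# the chance a stand that survived to class j is felled there.
survival = q[::-1].cumsum()[::-1]      # S(j) = sum of q(k) for k >= j
h = q / survival

# Age-class transition matrix: felled area is replanted into class 1,
# unfelled area advances one age-class.
P = np.zeros((n, n))
P[0, :] = h
for j in range(n - 1):
    P[j + 1, j] = 1.0 - h[j]
P[0, n - 1] = 1.0                      # the oldest class is always felled

# An arbitrary age-class vector sinks into the fixed point (normal state).
a = np.full(n, 1.0 / n)
for _ in range(1000):
    a = P @ a
print(np.round(a, 4))                  # the normal state for this P
```

Because each column of `P` sums to one, the iteration stays on the simplex of area shares, and the limit vector is the unique normal state associated with this transition matrix.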


Concept Drift Based on CNN Probability Vector in Data Stream Environment

  • Kim, Tae Yeun;Bae, Sang Hyun
    • Journal of Integrative Natural Science
    • /
    • v.13 no.4
    • /
    • pp.147-151
    • /
    • 2020
  • In this paper, we propose a method to detect concept drift by applying a Convolutional Neural Network (CNN) in a data stream environment. The conventional method compares only the final output value of the CNN and reports a concept drift whenever there is a difference; as a result, it reacts sensitively to the actual input values of the data stream even when there is no significant difference, and incorrectly detects a concept drift. To reduce such errors, this paper uses not only the output value of the CNN but also its probability vector. First, the data entering the stream is patterned and learned by the neural network model; concept drift is then detected by comparing the output value and probability vector of the current data against those of the historical data under the learned model. Experiments confirmed that the proposed method reduces detection errors compared with detecting concept drift from the CNN output values alone.
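A minimal sketch of the comparison step, assuming softmax probability vectors and an illustrative L1 distance with a hypothetical threshold (the paper specifies neither):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def drift_score(hist_logits, curr_logits):
    """Mean L1 distance between historical and current probability vectors.
    Comparing whole vectors instead of only the argmax output smooths out
    spurious flips near the decision boundary."""
    p_hist = softmax(np.asarray(hist_logits, float))
    p_curr = softmax(np.asarray(curr_logits, float))
    return float(np.abs(p_hist - p_curr).sum(axis=-1).mean())

def drift_detected(hist_logits, curr_logits, threshold=0.5):
    # threshold is illustrative; it would be tuned on the stream
    return drift_score(hist_logits, curr_logits) > threshold
```

In a stream, `hist_logits` would come from the model's outputs on a reference window and `curr_logits` from the current window.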

Verification and estimation of a posterior probability and probability density function using vector quantization and neural network (신경회로망과 벡터양자화에 의한 사후확률과 확률 밀도함수 추정 및 검증)

  • 고희석;김현덕;이광석
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.2
    • /
    • pp.325-328
    • /
    • 1996
  • In this paper, we propose a method for estimating a posterior probability and probability density function (PDF) using a feed-forward neural network and the codebooks of vector quantization (VQ). We estimate the posterior probability and probability density function, compose a new parameter together with the well-known Mel-cepstrum, and verify the performance on five vowels taken from syllables using a neural network (NN) and a probabilistic neural network (PNN). With the new parameter, the probabilistic neural network showed the best results, with an average recognition rate of 83.02%.
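A posterior estimate from VQ codebooks can be sketched in PNN style; the Gaussian kernel, equal class priors, and `sigma` below are assumptions, not the paper's exact construction.

```python
import numpy as np

def pnn_posterior(x, codebooks, sigma=0.5):
    """PNN-style posterior: each class is represented by its VQ codebook,
    the class likelihood is a Gaussian-kernel sum over the codewords, and
    the posterior follows from Bayes' rule with equal priors."""
    x = np.asarray(x, float)
    scores = np.array([
        np.exp(-((np.asarray(cb, float) - x) ** 2).sum(axis=-1)
               / (2.0 * sigma ** 2)).sum()
        for cb in codebooks])
    return scores / scores.sum()

# Two hypothetical 1-D classes with two codewords each.
p = pnn_posterior([0.1], [np.array([[0.0], [0.3]]),
                          np.array([[5.0], [5.3]])])
print(np.round(p, 3))
```

A point near the first codebook gets nearly all of the posterior mass, which is the behavior a verification experiment like the paper's would check per vowel class.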


Double Faults Isolation Based on the Reduced-Order Parity Vectors in Redundant Sensor Configuration

  • Yang, Cheol-Kwan;Shim, Duk-Sun
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.2
    • /
    • pp.155-160
    • /
    • 2007
  • A fault detection and isolation (FDI) problem is considered for inertial sensors, such as gyroscopes and accelerometers, and a new FDI method for double faults is proposed using a reduced-order parity vector. The reduced-order parity vector (RPV) algorithm enables us to isolate double faults with 7 sensors. An averaged parity vector is used to reduce false alarms and wrong isolations and to improve correct isolation. The RPV algorithm is analyzed by Monte Carlo simulation and its performance is given in terms of fault detection probability, correct isolation probability, and wrong isolation probability.
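The underlying parity-vector idea can be sketched as follows; the 7-sensor geometry, the noiseless measurement, and the single-fault isolation rule are illustrative, and the reduced-order and averaging steps of the RPV algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((7, 3))    # hypothetical 7-sensor measurement geometry

# Parity matrix V: an orthonormal basis of the left null space of H, so that
# V @ H = 0 and the parity vector p = V @ z does not depend on the true state.
Q, _ = np.linalg.qr(H, mode='complete')
V = Q[:, 3:].T                     # shape (4, 7)

x = rng.standard_normal(3)         # true state (unknown in practice)
fault = np.zeros(7)
fault[2] = 5.0                     # bias fault on sensor 2
z = H @ x + fault                  # noiseless measurement for clarity

p = V @ z                          # parity vector, driven only by the fault
# A fault on sensor i pushes p along the i-th column of V, so the sensor
# with the largest normalized projection is isolated as faulty.
scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
print(scores.argmax())
```

By the Cauchy-Schwarz inequality the faulty sensor's own column attains the maximum normalized projection, which is why the score test isolates sensor 2 here.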

A Real-Time Concept-Based Text Categorization System using the Thesaurus Tool (시소러스 도구를 이용한 실시간 개념 기반 문서 분류 시스템)

  • 강원석;강현규
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.167-167
    • /
    • 1999
  • The majority of text categorization systems use the term-based classification method. However, because of the large number of terms, this method is not effective for classifying documents in a real-time environment. This paper presents a real-time concept-based text categorization system which classifies texts using a thesaurus. The system consists of a Korean morphological analyzer, a thesaurus tool, and a probability-vector similarity measurer. The thesaurus tool acquires the meanings of input terms and represents the text with a concept-vector rather than a term-vector. Because the concept-vector consists of a small number of semantic units, it enables the system to analyze the text in real time. By representing the meanings of the text, the vector supports concept-based classification. The probability-vector similarity measurer decides the subject of the text by calculating the vector similarity between the input text and each subject. The experimental results show that the proposed system can effectively analyze texts in real time and perform concept-based classification. Moreover, the experiments indicate that the thesaurus tool must be expanded for a better system.
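The similarity step can be sketched as follows. The paper does not state which similarity measure is used, so cosine similarity and the subject vectors below are assumptions:

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(concept_vector, subject_vectors):
    """Return the subject whose concept-probability vector is most
    similar to the input text's concept vector."""
    return max(subject_vectors,
               key=lambda s: cosine(concept_vector, subject_vectors[s]))

subjects = {                     # hypothetical per-subject concept distributions
    'economy': [0.7, 0.2, 0.1],
    'sports':  [0.1, 0.8, 0.1],
}
print(classify([0.6, 0.3, 0.1], subjects))  # 'economy'
```

Because the concept-vector has only as many components as there are semantic units, this comparison stays cheap enough for the real-time setting the paper targets.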

Isolated word recognition using the SOFM-HMM and the Inertia (관성과 SOFM-HMM을 이용한 고립단어 인식)

  • 윤석현;정광우;홍광석;박병철
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.6
    • /
    • pp.17-24
    • /
    • 1994
  • This paper is a study on Korean word recognition and suggests a method that stabilizes the state transitions in the HMM by applying 'inertia' to the feature vector sequences. In order to reduce the quantization distortion while considering the probability distribution of the input vectors, we used the SOFM, an unsupervised learning method, as a vector quantizer. By applying inertia to the feature vector sequences, the overlapping of the probability distributions of the response paths of each word on the self-organizing feature map can be reduced and the state transitions in the HMM can be stabilized. In order to evaluate the performance of the method, we carried out experiments on 50 DDD area names. The results showed that applying inertia to the feature vector sequences improved the recognition rate by 7.4% and makes more HMMs available without reducing the recognition rate for an SOFM with a fixed number of neurons.


Probabilistic structural damage detection approaches based on structural dynamic response moments

  • Lei, Ying;Yang, Ning;Xia, Dandan
    • Smart Structures and Systems
    • /
    • v.20 no.2
    • /
    • pp.207-217
    • /
    • 2017
  • Because of inevitable uncertainties in structural parameters, external excitations, and measurement noise, the effects of uncertainty should be taken into consideration in structural damage detection. In this paper, two probabilistic structural damage detection approaches are proposed to account for the underlying uncertainties in structural parameters and external excitation. The first approach adopts the statistical moment-based structural damage detection (SMBDD) algorithm together with a sensitivity analysis of the damage vector with respect to the uncertain parameters. This approach takes advantage of the strength of SMBDD, so it is robust to measurement noise; however, it requires that the number of measured responses be no less than the number of unknown structural parameters. To reduce the number of measurements required by the SMBDD algorithm, another probabilistic structural damage detection approach is proposed. It is based on integrating structural damage detection using temporal moments in each time segment of the measured response time history with a sensitivity analysis of the damage vector with respect to the uncertain parameters. In both approaches, the probability distribution of the damage vector is estimated from those of the uncertain parameters based on stochastic finite element model updating and probabilistic propagation. By comparing the probability distribution characteristics of the undamaged and damaged models, the probability of damage existence and the damage extent at the structural element level can be detected. Numerical examples are used to demonstrate the performance of the two proposed approaches.

Using Estimated Probability from Support Vector Machines for Credit Rating in IT Industry

  • Hong, Tae-Ho;Shin, Taek-Soo
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2005.11a
    • /
    • pp.509-515
    • /
    • 2005
  • Recently, support vector machines (SVMs) have come to be recognized as competitive tools compared with other data mining techniques for solving pattern recognition or classification decision problems. Furthermore, many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). The classification decision, such as a binary or multi-class decision problem, made by any classifier, i.e., any data mining technique, is cost-sensitive. Therefore, it is necessary to convert the output of the classifier into well-calibrated posterior probabilities. However, SVMs do not directly provide such probabilities, so some method is required to create them (Platt, 1999; Drish, 2001). This study applies a method for estimating the probability of SVM outputs to bankruptcy prediction and then suggests credit scoring methods using the estimated probability for banks' loan decision making.
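Platt's (1999) approach fits a sigmoid to the SVM decision values. A simplified sketch, using plain gradient descent instead of Platt's Newton-style optimization (the learning rate, iteration count, and toy decision values are illustrative):

```python
import numpy as np

def platt_fit(scores, labels, iters=2000, lr=0.1):
    """Fit P(y=1|s) = 1 / (1 + exp(A*s + B)) by gradient descent on the
    negative log-likelihood. Labels are targets in {0, 1}; a full Platt
    implementation would also regularize the targets."""
    s = np.asarray(scores, float)
    t = np.asarray(labels, float)
    A, B = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * s + B))
        A -= lr * np.mean((t - p) * s)     # NLL gradient w.r.t. A
        B -= lr * np.mean(t - p)           # NLL gradient w.r.t. B
    return A, B

def platt_prob(score, A, B):
    return 1.0 / (1.0 + np.exp(A * score + B))

# Hypothetical SVM decision values with default states (1 = non-default).
A, B = platt_fit([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0], [0, 0, 0, 1, 1, 1])
```

The calibrated `platt_prob` output can then feed a credit-scoring cutoff, which is the use the paper suggests for loan decisions.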


A model-free soft classification with a functional predictor

  • Lee, Eugene;Shin, Seung Jun
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.6
    • /
    • pp.635-644
    • /
    • 2019
  • Class probability is a fundamental target in classification that contains complete classification information. In this article, we propose a class probability estimation method for the case where the predictor is functional. Motivated by Wang et al. (Biometrika, 95, 149-167, 2007), our estimator is obtained by training a sequence of functional weighted support vector machines (FWSVM) with different weights, which can be justified by the Fisher consistency of the hinge loss. The proposed method can be extended to multiclass classification via the pairwise coupling proposed by Wu et al. (Journal of Machine Learning Research, 5, 975-1005, 2004). The use of the FWSVM makes our method model-free as well as computationally efficient, due to the piecewise linearity of the FWSVM solutions as functions of the weight. Numerical investigations on both synthetic and real data show the advantageous performance of the proposed method.
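The weight-grid idea behind this family of methods can be sketched with a minimal subgradient-descent weighted SVM; the grid, step sizes, and the coarse "fraction of positive votes" estimate are illustrative stand-ins for the exact FWSVM solution path:

```python
import numpy as np

def weighted_svm(X, y, pi, epochs=300, lr=0.05, lam=0.01):
    """Minimal linear SVM trained by subgradient descent on the weighted
    hinge loss, with weight (1 - pi) on class +1 and pi on class -1.
    Labels y are in {-1, +1}; all hyperparameters are illustrative."""
    w = np.zeros(X.shape[1])
    b = 0.0
    cw = np.where(y > 0, 1.0 - pi, pi)     # per-example class weights
    for _ in range(epochs):
        margin = y * (X @ w + b)
        active = cw * (margin < 1)         # examples inside the margin
        w -= lr * (lam * w - (active * y) @ X / len(y))
        b += lr * (active * y).mean()
    return w, b

def class_probability(X, y, x_new, grid=np.linspace(0.05, 0.95, 19)):
    """Estimate P(y = +1 | x_new) as the fraction of weights pi at which
    the pi-weighted SVM still labels x_new positive -- a coarse-grid
    approximation of the weighted-SVM probability scheme."""
    votes = []
    for pi in grid:
        w, b = weighted_svm(X, y, pi)
        votes.append(float(x_new @ w + b > 0))
    return float(np.mean(votes))
```

The Fisher consistency of the weighted hinge loss is what makes the pi at which the label flips an estimate of the class probability; the FWSVM exploits piecewise linearity in pi to avoid retraining on a grid.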

The Convergence Characteristics of The Time- Averaged Distortion in Vector Quantization: Part I. Theory Based on The Law of Large Numbers (벡터 양자화에서 시간 평균 왜곡치의 수렴 특성 I. 대수 법칙에 근거한 이론)

  • 김동식
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.7
    • /
    • pp.107-115
    • /
    • 1996
  • The average distortion of a vector quantizer is calculated using a probability function F of the input source for a given codebook. But since the input source is in general unknown, a time-average operation over sample vectors realized from a random vector with probability function F is employed to obtain an approximation of the average distortion. In this case, the sample set should be large so that the sample vectors represent the true F reliably. A rigorous theoretical inspection of this approximation, however, has not been performed; thus one might use the time-averaged distortion without any verification of the approximation. In this paper, the convergence characteristics of the time-averaged distortion are theoretically investigated as the number of sample vectors or the size of the codebook grows large. It is revealed that if the codebook size is large enough, then a small sample set suffices to approximate the average distortion by the calculated time-averaged distortion. Experimental results on synthetic data, which support the analysis, are also provided and discussed.
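The law-of-large-numbers behavior is easy to reproduce numerically; the Gaussian source, codebook size, and sample sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.standard_normal((64, 2))   # hypothetical 64-word codebook

def time_avg_distortion(m):
    """Quantize m i.i.d. Gaussian sample vectors to their nearest codewords
    and return the time-averaged squared-error distortion."""
    x = rng.standard_normal((m, 2))
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return float(d2.min(axis=1).mean())

# By the law of large numbers, the time average settles toward the true
# average distortion as the number of sample vectors grows.
for m in (100, 1000, 10000):
    print(m, round(time_avg_distortion(m), 4))
```

Repeating the estimate with independent sample sets shows the run-to-run spread shrinking with the sample size, which is the convergence the paper analyzes.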
