• Title/Summary/Keyword: binary model

Search Result: 1,069

An Enthalpy Model for the Solidification of Binary Mixture (엔탈피방법을 적용한 이원용액의 응고과정 해석 방법)

  • Yoo, J.S.
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.5 no.1 / pp.35-43 / 1993
  • A numerical model for the solidification of a binary mixture is proposed. The model, which employs the enthalpy method, is a modification of the continuum model: an improved relation is proposed among enthalpy, temperature, concentration, and liquid mass fraction. A one-dimensional example was selected to verify the proposed model. The results show that the new relation can be applied successfully to the solidification or melting of a binary mixture.
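The enthalpy method the abstract describes can be illustrated in the simpler pure-substance setting. The sketch below uses made-up material constants and replaces the paper's binary-mixture enthalpy-temperature-concentration relation with a simple three-branch enthalpy-to-temperature map; it is an assumption-laden illustration, not the paper's model.

```python
# Minimal 1-D enthalpy-method sketch of solidification against a chilled
# wall.  All parameters are hypothetical, and this is the pure-substance
# case, not the paper's binary-mixture relation.
N = 50
dx = 1.0 / N
c, k, Lat = 1.0, 1.0, 1.0            # heat capacity, conductivity, latent heat
dt = 0.4 * c * dx * dx / k           # below the explicit stability limit dx^2*c/(2k)

def temperature(H):
    """Enthalpy -> temperature, with the melting point at T = 0."""
    if H < 0.0:
        return H / c                 # solid
    if H <= Lat:
        return 0.0                   # phase-change (mushy) interval
    return (H - Lat) / c             # liquid

H = [Lat + c * 0.5] * N              # initially all liquid at T = 0.5
T_wall = -1.0                        # chilled left wall
for _ in range(500):
    T = [temperature(h) for h in H]
    Hn = H[:]
    for i in range(N):
        Tl = T_wall if i == 0 else T[i - 1]     # Dirichlet on the left
        Tr = T[i] if i == N - 1 else T[i + 1]   # insulated on the right
        Hn[i] += dt * k * (Tl - 2.0 * T[i] + Tr) / (dx * dx)
    H = Hn

# Liquid mass fraction recovered from enthalpy: 0 = solid, 1 = liquid.
liquid_frac = [min(max(h / Lat, 0.0), 1.0) for h in H]
```

After a few hundred steps the cells near the chilled wall have solidified while the far field is still liquid; the liquid fraction is read back from the enthalpy field, which is the method's key convenience.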


CONVERGENCE AND POWER SPECTRUM DENSITY OF ARIMA MODEL AND BINARY SIGNAL

  • Kim, Joo-Mok
    • Korean Journal of Mathematics / v.17 no.4 / pp.399-409 / 2009
  • We study the weak convergence of various models to fractional Brownian motion. First, we consider the ARIMA process and the ON/OFF source model, which allows for long packet trains and long inter-train distances. Finally, we derive the power spectral density as the Fourier transform of the autocorrelation function of the ARIMA model and the binary signal model.
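The relation the abstract uses, power spectral density as the Fourier transform of the autocorrelation function, can be checked numerically. The sketch below uses an AR(1) process as a simple stand-in (the coefficient and truncation length are assumptions, not values from the paper), comparing the truncated transform of the autocorrelation against the known closed-form AR(1) spectrum.

```python
import cmath

# Wiener-Khinchin sketch: PSD as the Fourier transform of the
# autocorrelation function, illustrated with an AR(1) process.
phi = 0.5                          # hypothetical AR(1) coefficient, |phi| < 1
rho = lambda k: phi ** abs(k)      # AR(1) autocorrelation: rho(k) = phi^|k|

def psd_from_acf(omega, kmax=200):
    """Truncated sum  S(w) = sum_k rho(k) * exp(-i w k)."""
    return sum(rho(k) * cmath.exp(-1j * omega * k)
               for k in range(-kmax, kmax + 1)).real

def psd_analytic(omega):
    """Closed form for AR(1): (1 - phi^2) / |1 - phi exp(-iw)|^2."""
    return (1 - phi * phi) / abs(1 - phi * cmath.exp(-1j * omega)) ** 2
```

Because the autocorrelation decays geometrically, the truncated sum agrees with the closed form to machine precision; at frequency zero the spectrum equals (1 + phi)/(1 - phi).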


Comparison of Three Binomial-related Models in the Estimation of Correlations

  • Moon, Myung-Sang
    • Communications for Statistical Applications and Methods / v.10 no.2 / pp.585-594 / 2003
  • It has been generally recognized that the conventional binomial or Poisson model provides poor fits to actual correlated binary data due to extra-binomial variation. A number of generalized statistical models have been proposed to account for this additional variation. Among them, the beta-binomial, correlated-binomial, and modified-binomial models are binomial-related models frequently used in modeling the sum of n correlated binary data. In many situations, it is reasonable to assume that the n correlated binary data are exchangeable, which is a special case of correlated binary data. The sum of n exchangeable correlated binary data is modeled relatively well by all three binomial-related models, but their estimates of the correlation coefficient turn out to be quite different. Hence, it is important to identify which model provides better estimates of the model parameters (success probability, correlation coefficient). For this purpose, a small-scale simulation study is performed to compare the behavior of the three models.
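The extra-binomial variation the abstract refers to is easy to exhibit by simulation. The sketch below (with made-up parameters, not the paper's simulation design) generates exchangeable correlated binary data through a beta-binomial mechanism and compares the empirical variance of the sum against the plain binomial variance and the correlation-inflated one.

```python
import random
random.seed(0)

# Exchangeable correlated binary data via a beta-binomial mechanism:
# a shared latent probability induces pairwise correlation rho.
a, b, n, reps = 2.0, 2.0, 10, 20000    # hypothetical beta parameters
p_mean = a / (a + b)                   # marginal success probability
rho = 1.0 / (a + b + 1.0)              # pairwise correlation for beta-binomial

sums = []
for _ in range(reps):
    p = random.betavariate(a, b)       # latent probability shared by all n trials
    sums.append(sum(random.random() < p for _ in range(n)))

m = sum(sums) / reps
var_emp = sum((s - m) ** 2 for s in sums) / reps
var_binom = n * p_mean * (1 - p_mean)            # plain binomial variance
var_bb = var_binom * (1 + (n - 1) * rho)         # inflated by correlation
```

The empirical variance tracks the inflated value and clearly exceeds the plain binomial one, which is exactly why the conventional binomial model fits correlated binary data poorly.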

Double Anchors Preference Model (DAPM) : A Decision Model for Non-binary Data Retrieval (양기준 선호모형: 비 정형적 자료검색을 위한 의사결정 모형)

  • Lee, Chun-Yeol
    • Asia pacific journal of information systems / v.2 no.1 / pp.3-15 / 1992
  • This paper proposes a new referential model for data retrieval as an alternative to exact matching. While exact matching is an effective data retrieval model, it rests on fairly strict assumptions and limits our capabilities in data retrieval. This study redefines data retrieval to include non-binary data retrieval in addition to binary data retrieval, proposes the Double Anchors Preference Model (DAPM), and analyzes its logical characteristics. DAPM supports non-binary data retrieval. Further, it produces the same result as exact matching for conventional binary data retrieval. These findings show that, at the logical level, the proposed DAPM retains all the desirable features for data retrieval.


Reassessment on numerical results by the continuum model (연속체모델에 의한 수치해석결과에 대한 재평가)

  • Jeong, Jae-Dong;Yu, Ho-Seon;No, Seung-Tak;Lee, Jun-Sik
    • Transactions of the Korean Society of Mechanical Engineers B / v.20 no.12 / pp.3926-3937 / 1996
  • In recent years there has been increased interest in the continuum model associated with the solidification of binary mixtures. A review of the literature, however, shows that verification of the model has been insufficient or only qualitative. The present work reassesses the continuum model on the binary-mixture solidification problems widely used for model validation. In spite of using the same continuum model, the results do not agree well with those of Incropera and co-workers, which are the benchmark problems typically used to validate binary-mixture solidification. Judging from the agreement of the present results with analytic, experimental, and other models' numerical results, this discrepancy appears to be caused by numerical errors in applying the continuum model developed by Incropera and co-workers, not by the model itself. Careful examination should precede the selection of validation problems.

Cross-architecture Binary Function Similarity Detection based on Composite Feature Model

  • Xiaonan Li;Guimin Zhang;Qingbao Li;Ping Zhang;Zhifeng Chen;Jinjin Liu;Shudan Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2101-2123 / 2023
  • Recent studies have shown that neural network-based binary code similarity detection performs well in vulnerability mining, plagiarism detection, and malicious code analysis. However, existing cross-architecture methods still suffer from insufficient feature characterization and low discrimination accuracy. To address these issues, this paper proposes a cross-architecture binary function similarity detection method based on a composite feature model (SDCFM). First, the binary function is converted into a vector representation according to the proposed composite feature model, which is composed of instruction statistical features, control flow graph structural features, and application program interface calling behavioral features. Then, the composite features are embedded by the proposed hierarchical embedding network based on a graph neural network, in which block-level and function-level features are processed separately and finally fused into the embedding. In addition, to make the trained model more accurate and stable, our method uses the embeddings of predecessor nodes to modify the node embedding during the iterative updating process of the graph neural network. To assess the effectiveness of the composite feature model, we compare SDCFM with state-of-the-art methods on benchmark datasets. The experimental results show that SDCFM performs well both on the area under the curve in the binary function similarity detection task and on vulnerable candidate function ranking in the vulnerability search task.

Application of GLIM to the Binary Categorical Data

  • Sok, Yong-U
    • Journal of the military operations research society of Korea / v.25 no.2 / pp.158-169 / 1999
  • This paper is concerned with the application of generalized linear interactive modelling (GLIM) to binary categorical data. To analyze categorical data given by a contingency table, finding a well-fitting loglinear model is commonly adopted. In the case of a contingency table with a response variable, we can fit a logit model to find a well-fitting loglinear model. For a given $2^4$ contingency table with a binary response variable, we show the process of fitting a loglinear model by fitting a logit model using GLIM and SAS, and we then estimate parameters to interpret the nature of the associations implied by the model.
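The logit-model fitting the abstract describes can be illustrated, without GLIM or SAS, on the smallest case: for a 2x2 table with a binary response, the saturated logit model's maximum likelihood estimates are just the observed log odds, so the slope equals the sample log odds ratio. The counts below are made up for illustration; they are not from the paper's $2^4$ table.

```python
import math

# Saturated logit fit to a 2x2 table: logit(p) = b0 + b1*x.
# Hypothetical (success, failure) counts by the binary covariate x.
table = {0: (30, 70),
         1: (55, 45)}

logit = lambda p: math.log(p / (1 - p))
p0 = table[0][0] / sum(table[0])    # observed success rate at x = 0
p1 = table[1][0] / sum(table[1])    # observed success rate at x = 1

b0 = logit(p0)                      # intercept: log odds at x = 0
b1 = logit(p1) - logit(p0)          # slope: log odds ratio
odds_ratio = math.exp(b1)           # association measure implied by the fit
```

Interpreting the fitted parameters mirrors the abstract's final step: exp(b1) is the odds ratio quantifying the association between the covariate and the response.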


Binary regression model using skewed generalized t distributions (기운 일반화 t 분포를 이용한 이진 데이터 회귀 분석)

  • Kim, Mijeong
    • The Korean Journal of Applied Statistics / v.30 no.5 / pp.775-791 / 2017
  • We frequently encounter binary data in real life. Logistic, probit, Cauchit, and complementary log-log models are often used for binary data analysis. To analyze binary data, Liu (2004) proposed the Robit model, in which the inverse of the cdf of Student's t distribution is used as the link function. Kim et al. (2008) also proposed a generalized t-link model to make the binary regression model more flexible. More flexible skewed distributions allow more flexible link functions in generalized linear models. In this sense, we propose a binary data regression model using the skewed generalized t distributions introduced in Theodossiou (1998). We implement R code for the proposed models using the glm function in base R and the R sgt package. We also analyze the Pima Indians data using the proposed model in R.
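The inverse link functions named in the abstract are all closed-form in the standard cases and can be compared directly. The sketch below (in Python rather than the paper's R, and without the skewed generalized t itself, which needs a special-function library) implements the logistic, probit, Cauchit, and complementary log-log inverse links; note that the Cauchit link is exactly the Robit link with one degree of freedom.

```python
import math

# Inverse link functions: each maps a linear predictor eta to a
# success probability in (0, 1).
def logistic(eta):  return 1.0 / (1.0 + math.exp(-eta))
def probit(eta):    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))
def cauchit(eta):   return 0.5 + math.atan(eta) / math.pi   # Student t, df = 1
def cloglog(eta):   return 1.0 - math.exp(-math.exp(eta))

# Heavier-tailed links approach 0 and 1 more slowly, which is what makes
# t-type links robust to extreme linear predictors:
#   cauchit(3) < logistic(3) < probit(3)
```

The flexibility argument in the abstract is visible here in miniature: at the same linear predictor, the three symmetric links assign noticeably different probabilities, and a skewed link would additionally break the symmetry around eta = 0.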

Sampling Based Approach to Bayesian Analysis of Binary Regression Model with Incomplete Data

  • Chung, Young-Shik
    • Journal of the Korean Statistical Society / v.26 no.4 / pp.493-505 / 1997
  • The analysis of binary data arises in many areas such as statistics, biometrics, and econometrics. In many cases, data are collected in which some observations are incomplete. Assume that the missing covariates are missing at random and the responses are completely observed. A method for Bayesian analysis of the binary regression model with incomplete data is presented. In particular, the desired marginal posterior moments of the regression parameter are obtained using the Metropolis algorithm (Metropolis et al., 1953) within the Gibbs sampler (Gelfand and Smith, 1990). We also compare the logit model with the probit model using the Bayes factor, which is approximated by an importance sampling method. One example is presented.
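The Metropolis step at the core of the abstract's sampler can be sketched in a few lines. The toy below uses complete, made-up data, a single coefficient, and a flat prior; the paper additionally wraps such steps inside a Gibbs sampler that imputes the missing covariates, which is omitted here.

```python
import math, random
random.seed(1)

# Random-walk Metropolis for the posterior of a single logit coefficient.
# Hypothetical data: one covariate, binary responses (not separable, so
# the flat-prior posterior is proper).
x = [-2, -1, 0, 1, 2, 3]
y = [0, 0, 1, 0, 1, 1]

def log_post(b):
    """Log-likelihood of the logit model (flat prior adds a constant)."""
    s = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-b * xi))
        p = min(max(p, 1e-12), 1 - 1e-12)       # guard the logs
        s += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return s

b, draws = 0.0, []
for _ in range(5000):
    cand = b + random.gauss(0.0, 1.0)           # random-walk proposal
    if math.log(random.random()) < log_post(cand) - log_post(b):
        b = cand                                # accept; else keep current b
    draws.append(b)

post_mean = sum(draws[1000:]) / len(draws[1000:])   # discard burn-in
```

The retained draws approximate the marginal posterior, so moments such as the posterior mean fall out as simple averages, which is the quantity the paper targets.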


Complex Segregation Analysis of Categorical Traits in Farm Animals: Comparison of Linear and Threshold Models

  • Kadarmideen, Haja N.;Ilahi, H.
    • Asian-Australasian Journal of Animal Sciences / v.18 no.8 / pp.1088-1097 / 2005
  • The main objectives of this study were to investigate the accuracy, bias, and power of linear and threshold model segregation analysis methods for the detection of major genes affecting categorical traits in farm animals. Maximum Likelihood Linear Model (MLLM), Bayesian Linear Model (BALM), and Bayesian Threshold Model (BATM) methods were applied to simulated data on normal, categorical, and binary scales as well as to disease data in pigs. Simulated data on the underlying normally distributed liability (NDL) were used to create the categorical and binary data. The MLLM method was applied to data on all scales (normal, categorical, and binary), and the BATM method was developed and applied only to binary data. The MLLM analyses underestimated parameters for binary as well as categorical traits compared to normal traits, with the bias being very severe for binary traits. The accuracy of major gene and polygene parameter estimates was also very low for binary data compared with categorical data; the latter gave results similar to normal data. When disease incidence (on the binary scale) is close to 50%, segregation analysis has more accuracy and less bias than for diseases with rare incidence. NDL data were always better than categorical data. Under the MLLM method, the test statistics for categorical and binary data were consistently and unusually high (while the opposite is expected due to the loss of information in categorical data), indicating high false discovery rates for major genes if linear models are applied to categorical traits. With Bayesian segregation analysis, the 95% highest probability density regions of the major gene variances were checked for whether they included the value of zero (a boundary parameter); by the nature of this difference between likelihood and Bayesian approaches, the Bayesian methods are likely to be more reliable for categorical data. The BATM segregation analysis of binary data also showed a significant advantage over MLLM in terms of higher accuracy. Based on these results, threshold models are recommended when trait distributions are discontinuous. Further, segregation analysis could be used in an initial scan of the data for evidence of major genes before embarking on molecular genome mapping.
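The liability-threshold mechanism underlying the abstract's binary data can be sketched directly. The parameters below are made up (not the paper's simulation design): a binary trait equals 1 when a standard-normal liability exceeds a threshold t, so the expected incidence is 1 - Phi(t), and a higher threshold yields the rare-disease case where the abstract reports segregation analysis losing accuracy.

```python
import math, random
random.seed(2)

# Threshold (liability) model: binary trait = 1 iff liability > t.
def simulate(t, n=50000):
    """Empirical incidence for a standard-normal liability and threshold t."""
    cases = sum(random.gauss(0.0, 1.0) > t for _ in range(n))
    return cases / n

def expected(t):
    """Theoretical incidence 1 - Phi(t) via the error function."""
    return 1.0 - 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

inc_common = simulate(0.0)   # threshold at the mean: incidence near 50%
inc_rare = simulate(2.0)     # high threshold: rare disease, roughly 2.3%
```

Dichotomizing the continuous liability this way is exactly the information loss the study quantifies: the binary trait retains only which side of the threshold each liability fell on.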