• Title/Summary/Keyword: random set theory

RANDOM GENERALIZED SET-VALUED COMPLEMENTARITY PROBLEMS

  • Lee, Byung-Soo; Huang, Nan-Jing
    • Journal of the Korean Mathematical Society / v.34 no.1 / pp.1-12 / 1997
  • Complementarity problem theory, developed by Lemke [10], Cottle and Dantzig [8], and others in the early 1960s and thereafter, has numerous applications in diverse fields of the mathematical and engineering sciences. It is closely related to variational inequality theory and fixed point theory. Recently, fixed point methods for solving nonlinear complementarity problems were considered by Noor et al. [11, 12]. Complementarity problems related to variational inequality problems were also investigated by Chang [1], Cottle [7], and others.

INNOVATION OF SOME RANDOM FIELDS

  • Si, Si
    • Journal of the Korean Mathematical Society / v.35 no.3 / pp.793-802 / 1998
  • We apply a generalization of Lévy's infinitesimal equation $\delta X(t) = \psi(X(s),\, s \leq t,\, Y_t,\, t,\, dt)$, $t \in \mathbb{R}^1$, to a random field $X(C)$ indexed by a contour $C$ or by a more general set. Assuming that $X(C)$ is homogeneous in $x$, say of degree $n$, we can appeal to the classical theory of variational calculus and to the modern theory of white noise analysis in order to discuss the innovation for $X(C)$.
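
For orientation, the contour-indexed analogue of Lévy's equation that this line of work studies can be written schematically as below. This display only fixes notation; the precise form of the kernel $\psi$ and of the innovation $Y$ is developed in the paper.

```latex
% Schematic contour-indexed analogue of Levy's infinitesimal equation,
% in the abstract's notation: \delta C is an infinitesimal deformation of
% the contour C, and Y_s carries the innovation along C.
\[
  \delta X(C) = \psi\bigl(X(C'),\, C' \le C,\; Y_s,\, s \in C,\; C,\; \delta C\bigr)
\]
```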

Applying the Nash Equilibrium to Constructing Covert Channel in IoT

  • Ho, Jun-Won
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.243-248 / 2021
  • Although many different types of covert channels have been suggested in the literature, there is little work on directly applying game theory to constructing covert channels; researchers have mainly focused on tailoring game theory to covert channel analysis, identification, and problem solving. Unlike these typical adaptations of game theory to covert channels, we show that game theory can be utilized to establish a new type of covert channel in IoT devices. More specifically, we propose a covert channel that can be constructed by utilizing the Nash equilibrium with sensor data collected from IoT devices. For covert channel construction, we set the random seed to the value of the sensor data and obtain payoffs from the random numbers created by running a pseudo-random number generator with the configured seed. We generate an I × J (I ≥ 2, J ≥ 2) matrix game with these generated payoffs and attempt to obtain the Nash equilibrium. The covert channel construction method is determined according to whether or not the Nash equilibrium is acquired.
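
As an illustration of the construction just described, the following sketch seeds a PRNG with a sensor reading, builds an I × J payoff matrix, and tests for a pure-strategy equilibrium. The function names and the saddle-point interpretation (one simple equilibrium notion for a single payoff matrix) are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of the construction described above; build_game and
# has_pure_nash are illustrative names, and the saddle-point test is one
# simple pure-strategy equilibrium notion for a single payoff matrix.
import random

def build_game(sensor_value, I=2, J=2):
    """Seed a PRNG with the sensor reading and fill an I x J payoff matrix."""
    rng = random.Random(sensor_value)   # sensor data -> random seed
    return [[rng.random() for _ in range(J)] for _ in range(I)]

def has_pure_nash(payoff):
    """Saddle-point test: an entry that minimizes its row and maximizes its column."""
    rows, cols = len(payoff), len(payoff[0])
    for i in range(rows):
        for j in range(cols):
            if payoff[i][j] == min(payoff[i]) == max(payoff[r][j] for r in range(rows)):
                return True
    return False

# Sender and receiver can both compute the game from the shared sensor value,
# so whether an equilibrium exists can carry one covert bit per reading.
bit = 1 if has_pure_nash(build_game(sensor_value=42)) else 0
print(bit)
```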

Neighbor Discovery in a Wireless Sensor Network: Multipacket Reception Capability and Physical-Layer Signal Processing

  • Jeon, Jeongho; Ephremides, Anthony
    • Journal of Communications and Networks / v.14 no.5 / pp.566-577 / 2012
  • In randomly deployed networks, such as sensor networks, an important problem for each node is to discover its neighbor nodes so that connectivity amongst nodes can be established. In this paper, we consider this problem by incorporating physical-layer parameters, in contrast to most of the previous work, which assumed a collision channel. Specifically, the pilot signals that nodes transmit are successfully decoded if the strength of the received signal relative to the interference is sufficiently high. Thus, each node must extract signal parameter information from the superposition of an unknown number of received signals. This problem falls naturally in the purview of random set theory (RST), which generalizes standard probability theory by assigning sets, rather than values, to random outcomes. The contributions of the paper are twofold. First, we introduce the realistic effect of physical-layer considerations into the evaluation of the performance of logical discovery algorithms; such an introduction is necessary for an accurate assessment of how an algorithm performs. Second, given the double uncertainty of the environment (that is, the lack of knowledge of the number of neighbors along with the lack of knowledge of the individual signal parameters), we adopt the viewpoint of RST and demonstrate its advantage relative to the classical matched-filter detection method.
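
The SINR decoding rule the abstract refers to can be sketched as below. This is a minimal illustration of the physical-layer model (a pilot is decoded when its received power over interference-plus-noise exceeds a threshold), not the paper's RST estimator, and all parameter values are invented.

```python
# Minimal sketch of SINR-based pilot decoding, not the paper's RST algorithm;
# noise power, threshold, and the fading model are illustrative assumptions.
import random

def discovered_neighbors(powers, noise=1e-3, sinr_threshold=2.0):
    """Indices of transmitters whose pilots pass the SINR test."""
    total = sum(powers) + noise
    # For pilot i, the interference is every other signal plus noise.
    return [i for i, p in enumerate(powers) if p / (total - p) >= sinr_threshold]

# Unknown number of neighbors with random received powers (path loss + fading).
random.seed(1)
powers = [random.expovariate(1.0) for _ in range(random.randint(1, 8))]
print(discovered_neighbors(powers))
```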

Reliability-based fragility analysis of nonlinear structures under the actions of random earthquake loads

  • Salimi, Mohammad-Rashid; Yazdani, Azad
    • Structural Engineering and Mechanics / v.66 no.1 / pp.75-84 / 2018
  • This study presents a reliability-based analysis of nonlinear structures using analytical fragility curves under random earthquake loads. The stochastic method of ground motion simulation is combined with random vibration theory to compute the structural failure probability. The formulation of structural failure probability using random vibration theory, based only on the frequency information of the excitation, provides an important basis for structural analysis in places where sufficient recorded ground motions are lacking. The importance of the frequency content of ground motions for the probability of structural failure is studied for different levels of nonlinear structural behavior. The set of simulated ground motions for this study is based on the results of probabilistic seismic hazard analysis. It is demonstrated that the scenario events identified by the seismic risk differ from those obtained by disaggregation of the seismic hazard. The validity of the presented procedure is evaluated by Monte Carlo simulation.
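
For the Monte Carlo validation step mentioned at the end, a generic failure-probability estimate looks like the sketch below; the limit-state function and the lognormal demand/capacity models are illustrative assumptions, not the paper's structural model.

```python
# Generic Monte Carlo failure-probability estimate; g, the lognormal demand
# and capacity models, and all parameter values are illustrative assumptions.
import random

def g(demand, capacity):
    """Limit-state function: failure corresponds to g < 0."""
    return capacity - demand

N = 100_000
failures = 0
for _ in range(N):
    demand = random.lognormvariate(0.0, 0.5)     # random earthquake load effect
    capacity = random.lognormvariate(0.8, 0.3)   # random structural resistance
    if g(demand, capacity) < 0:
        failures += 1

print(f"estimated failure probability: {failures / N:.4f}")
```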

The Evaluation of Failure Probability for Rock Slope Based on Fuzzy Set Theory and Monte Carlo Simulation (Fuzzy Set Theory와 Monte Carlo Simulation을 이용한 암반사면의 파괴확률 산정기법 연구)

  • Park, Hyuck-Jin
    • Journal of the Korean Geotechnical Society / v.23 no.11 / pp.109-117 / 2007
  • Uncertainty is pervasive in rock slope stability analysis for various reasons, and it may consequently cause serious rock slope failures. The importance of uncertainty has therefore been recognized, and probability theory has been used to quantify it since the 1980s. However, some uncertainties due to incomplete information cannot be handled satisfactorily by probability theory, and fuzzy set theory is more appropriate for them. In this study, the random variables are treated as fuzzy numbers, and fuzzy set theory is employed in the rock slope stability analysis. Previous fuzzy analyses, however, employed approximate methods, namely the first-order second-moment method and the point estimate method. Since those studies used only representative values from the membership function to evaluate the stability of rock slopes, only approximate results were obtained. Therefore, the Monte Carlo simulation technique is utilized in the current study to evaluate the probability of failure for rock slopes. This overcomes the shortcoming of the previous studies, which employed the vertex method, and more complete analysis results can be secured. The proposed method has been applied to a practical example. According to the analysis results, the probabilities of failure obtained from the fuzzy Monte Carlo simulation coincide with the probabilities of failure from the probabilistic analysis.
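
A fuzzy Monte Carlo run in this spirit can be sketched as follows: triangular fuzzy numbers for the strength parameters are sampled repeatedly and the resulting factor of safety is checked against 1. The membership parameters and the factor-of-safety formula below are invented for illustration; the paper's slope model is more detailed.

```python
# Hedged sketch of a fuzzy Monte Carlo failure-probability estimate; the
# triangular membership parameters and the factor-of-safety formula are
# invented for illustration, not taken from the paper.
import random

def sample_triangular(low, mode, high):
    """Draw one realization of a triangular fuzzy number treated as a density."""
    return random.triangular(low, high, mode)

N = 50_000
failures = 0
for _ in range(N):
    friction_angle = sample_triangular(28.0, 33.0, 38.0)   # degrees
    cohesion = sample_triangular(15.0, 25.0, 35.0)         # kPa
    fs = 0.015 * cohesion + 0.025 * friction_angle         # toy factor of safety
    if fs < 1.0:                                           # failure when FS < 1
        failures += 1

print(f"probability of failure: {failures / N:.4f}")
```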

Relationships Between the Characteristics of the Business Data Set and Forecasting Accuracy of Prediction models (시계열 데이터의 성격과 예측 모델의 예측력에 관한 연구)

  • 이원하; 최종욱
    • Journal of Intelligence and Information Systems / v.4 no.1 / pp.133-147 / 1998
  • Recently, many researchers have been trying to find deterministic equations that can accurately predict future events, based on chaos theory or fractal theory. The theory says that some events that seem very random but are internally deterministic can be accurately predicted by fractal equations. In contrast to conventional methods such as the AR, MA, or ARIMA models, the fractal equation attempts to discover a deterministic order inherent in a time series data set. In discovering deterministic order, researchers have found that neural networks are much more effective than the conventional statistical models. Even though the prediction accuracy of a network can differ depending on its topological structure and modifications of the algorithm, many researchers have asserted that neural network systems outperform other systems because of the non-linear behavior of the network models, massively parallel processing mechanisms, and generalization capability based on adaptive learning. However, recent surveys show that the prediction accuracy of a forecasting model can be determined by the model structure and the data structure. In experiments based on actual economic data sets, it was found that the prediction accuracy of the neural network model is similar to the performance level of the conventional forecasting models. In particular, for a data set that is deterministically chaotic, the AR model, a conventional statistical model, was not significantly different from the MLP model, a neural network model. This result shows that the forecasting model appropriate to a prediction task should be selected based on the characteristics of the time series data set. Analysis of the characteristics of the data set was performed by fractal analysis, measurement of the Hurst index, and measurement of Lyapunov exponents. In conclusion, no significant difference was found between a conventional forecasting model and a typical neural network model in forecasting future events for time series data that are deterministically chaotic.
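
Of the data-characterization tools listed, the Hurst index is the easiest to sketch. The rescaled-range (R/S) estimate below is a toy implementation under standard textbook conventions, not the authors' procedure; window sizes and the test series are arbitrary.

```python
# Toy rescaled-range (R/S) estimate of the Hurst exponent: average R/S over
# non-overlapping windows of each size, then regress log(R/S) on log(size).
import math
import random

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent as the slope of log(R/S) vs log(window size)."""
    xs, ys = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            mean = sum(w) / n
            dev = [v - mean for v in w]
            cum, s = [], 0.0
            for d in dev:                                 # cumulative deviations
                s += d
                cum.append(s)
            r = max(cum) - min(cum)                       # range R
            sd = math.sqrt(sum(d * d for d in dev) / n)   # standard deviation S
            if sd > 0:
                rs_values.append(r / sd)
        if rs_values:
            xs.append(math.log(n))
            ys.append(math.log(sum(rs_values) / len(rs_values)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1024)]
print(f"Hurst estimate for white noise (expected ~0.5): {hurst_rs(noise):.2f}")
```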

QUALITY IMPROVEMENT FOR EXPERT BASE WITH CONTROL CHART TECHNIQUES

  • Liu, Yumin; Xu, Jichao
    • Proceedings of the Korean Society for Quality Management Conference / 1998.11a / pp.189-197 / 1998
  • An axiomatic hypothesis on the objective distribution of subjective evaluations is proposed in this paper. On that basis, we set up a random response model of the expert evaluation system and a quality control principle for the expert base. Under this principle, we develop a statistical quality control theory for the expert base and, further, provide quality improvement techniques for it.
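
As a concrete example of the control chart techniques invoked here, an x-bar chart over subgroups of expert ratings with the standard 3-sigma limits might look as follows. The data and the pooled-sigma estimate are illustrative, not taken from the paper.

```python
# Illustrative Shewhart x-bar chart for expert ratings; the data and the
# 3-sigma limits are standard textbook choices, not details from the paper.
import math

ratings = [[7.2, 7.8, 7.5], [6.9, 7.1, 7.4], [8.8, 9.1, 9.0], [7.3, 7.0, 7.6]]
n = len(ratings[0])
means = [sum(g) / n for g in ratings]                  # subgroup means
grand_mean = sum(means) / len(means)
# Pooled within-subgroup standard deviation (simple estimate).
s = math.sqrt(sum(sum((x - m) ** 2 for x in g) for g, m in zip(ratings, means))
              / (len(ratings) * (n - 1)))
ucl = grand_mean + 3 * s / math.sqrt(n)                # upper control limit
lcl = grand_mean - 3 * s / math.sqrt(n)                # lower control limit

for i, m in enumerate(means):
    status = "in control" if lcl <= m <= ucl else "out of control"
    print(f"expert group {i}: mean={m:.2f} ({status})")
```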

An Investigation on the Effect of Utility Variance on Choice Probability without Assumptions on the Specific Forms of Probability Distributions (특정한 확률분포를 가정하지 않는 경우에 효용의 분산이 제품선택확률에 미치는 영향에 대한 연구)

  • Won, Jee-Sung
    • Korean Management Science Review / v.28 no.1 / pp.159-167 / 2011
  • The theory of random utility maximization (RUM) defines the probability of an alternative being chosen as the probability of its utility being perceived as higher than those of all the other competing alternatives in the choice set (Marschak 1960). According to this theory, consumers perceive the utility of an alternative not as a constant but as a probability distribution. Over the last two decades, there has been an increasing number of studies on the effect of utility variance on choice probability. The common result of the previous studies is that as the utility variance increases, the effect of the mean value of the utility (the deterministic component of the utility) on the choice probability is reduced. This study provides a theoretical investigation of the effect of utility variance on choice probability without any assumptions on the specific forms of the probability distributions. It suggests that without such distributional assumptions, firms cannot apply the marketing strategy of maximizing choice probability (or market share), but can only adopt the strategy of maximizing the minimum or maximum value of the expected choice probability. The study applies the Chebyshev inequality, shows how changes in utility variances affect the maximum and minimum of choice probabilities, and provides managerial implications.
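
The Chebyshev step can be illustrated as follows: from means and variances alone, the distribution-free bound P(|X − μ| ≥ kσ) ≤ 1/k² yields a guaranteed lower bound on the probability that one alternative's utility exceeds another's. The independence assumption and the two-alternative setting are simplifications for illustration, not the paper's exact derivation.

```python
# Distribution-free bounds on P(U_a > U_b) from means and variances alone,
# via the two-sided Chebyshev inequality; independence of the two utilities
# is an illustrative assumption.
import math

def choice_probability_bounds(mu_a, var_a, mu_b, var_b):
    """Bounds on P(U_a > U_b) for independent utilities U_a, U_b."""
    diff_mean = mu_a - mu_b
    diff_sd = math.sqrt(var_a + var_b)      # std dev of U_a - U_b
    if diff_mean <= 0:
        return 0.0, 1.0                     # Chebyshev gives no lower bound here
    k = diff_mean / diff_sd
    lower = max(0.0, 1.0 - 1.0 / k**2)      # P(U_a - U_b <= 0) <= 1/k^2
    return lower, 1.0

# As utility variance grows, the guaranteed lower bound on choice probability shrinks.
for var in (0.5, 1.0, 4.0):
    lo, hi = choice_probability_bounds(mu_a=3.0, var_a=var, mu_b=1.0, var_b=var)
    print(f"var={var}: P(choose A) in [{lo:.2f}, {hi:.2f}]")
```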

ASSVD: Adaptive Sparse Singular Value Decomposition for High Dimensional Matrices

  • Ding, Xiucai; Chen, Xianyi; Zou, Mengling; Zhang, Guangxing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2634-2648 / 2020
  • In this paper, an adaptive sparse singular value decomposition (ASSVD) algorithm is proposed to estimate the signal matrix when only one data matrix is observed under high-dimensional white noise. We assume that the signal matrix is low-rank with sparse singular vectors, i.e., it is a simultaneously low-rank and sparse matrix; it is a structured matrix since the non-zero entries are confined to some small blocks. The proposed algorithm estimates the singular values and vectors separately by exploiting the structure of the singular vectors, using recent developments in random matrix theory known as the anisotropic Marchenko-Pastur law. We then prove that when the signal is strong, in the sense that the signal-to-noise ratio is above some threshold, our estimator is consistent and outperforms many state-of-the-art algorithms. Moreover, our estimator is adaptive to the data set and does not require the variance of the noise to be known or estimated. Numerical simulations indicate that ASSVD still works well when the signal matrix is not very sparse.
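
To make the "simultaneously low-rank and sparse" structure concrete, the toy sketch below takes the leading SVD of a noisy matrix and hard-thresholds small singular-vector entries. This illustrates the model, not the ASSVD algorithm itself, whose thresholds are derived from the anisotropic Marchenko-Pastur law; the threshold value and signal strength here are arbitrary.

```python
# Toy rank-1 sparse-SVD estimate: leading SVD plus hard thresholding of the
# singular vectors. Not the ASSVD algorithm; threshold and SNR are arbitrary.
import numpy as np

def sparse_rank1_estimate(Y, vec_threshold=0.1):
    """Rank-1 estimate of a sparse low-rank signal buried in white noise."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    u, v = U[:, 0], Vt[0, :]
    u = np.where(np.abs(u) > vec_threshold, u, 0.0)   # sparsify singular vectors
    v = np.where(np.abs(v) > vec_threshold, v, 0.0)
    return s[0] * np.outer(u, v)

rng = np.random.default_rng(0)
n = 200
u_true = np.zeros(n); u_true[:10] = 1 / np.sqrt(10)   # non-zero entries confined
v_true = np.zeros(n); v_true[:10] = 1 / np.sqrt(10)   # to a small block
signal = 8.0 * np.outer(u_true, v_true)
Y = signal + rng.normal(size=(n, n)) / np.sqrt(n)     # high-dimensional white noise

est = sparse_rank1_estimate(Y)
print("relative error:", np.linalg.norm(est - signal) / np.linalg.norm(signal))
```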