• Title/Summary/Keyword: random set theory

Search results: 36

Semiotic mediation through technology: The case of fraction reasoning (Elementary students' understanding of fractions as measurement: semiotic mediation using technological tools)

  • Yeo, Sheunghyun
    • The Mathematical Education
    • /
    • v.60 no.1
    • /
    • pp.1-19
    • /
    • 2021
  • This study investigates students' conceptions of fractions from a measurement approach while providing a technological environment designed to support students' understanding of the relationships between quantities and adjustable units. Thirteen third-graders participated in this study and were involved in a series of measurement tasks through task-based interviews. The tasks were devised to investigate the relationship between units and quantities through manipulations. Screencast videos, including verbal explanations and manipulations, were collected. Drawing upon the theory of semiotic mediation, the concepts students constructed during the interviews were coded as mathematical words and visual mediators to identify conceptual profiles using a fine-grained analysis. Two students who changed their strategies for solving the tasks were selected as representative cases of the two profiles: from guessing to recursive partitioning, and from using random units to relating them to the given unit. Dragging mathematical objects played a critical role in mediating and formulating fraction understandings such as unitizing and partitioning. In addition, static and dynamic representations influenced the development of unit concepts in measurement situations. The findings contribute to the field's understanding of how students come to understand the concept of fraction as measure and the role of technology, and result in a theory-driven, empirically tested set of tasks that can be used as an alternative way to introduce fractions.

Haplotype Assembly from Weighted SNP Fragments and Related Genotype Information (Haplotype assembly from SNP fragments with confidence weights and genotypes)

  • Kang, Seung-Ho;Jeong, In-Seon;Choi, Mun-Ho;Lim, Hyeong-Seok
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.11
    • /
    • pp.509-516
    • /
    • 2008
  • The Minimum Letter Flips (MLF) model and the Weighted Minimum Letter Flips (WMLF) model are used to solve the haplotype assembly problem, but both are effective only when the error rate in the SNP fragments is low. In this paper, we first establish a new computational model that employs the related genotype information as an improvement of the WMLF model and show its NP-hardness, and we then propose an efficient genetic algorithm to solve the haplotype assembly problem. The results of experiments on random data sets and a real data set indicate that introducing genotype information into the WMLF model is quite effective in improving the reconstruction rate, especially when the error rate in the SNP fragments is high. The results also show that genotype information increases the convergence speed of the genetic algorithm.
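As a rough illustration of the kind of genetic algorithm the abstract describes, the sketch below searches for a haplotype pair that minimizes a weighted-flip objective while staying consistent with the genotype (heterozygous sites are complemented between the two haplotypes, homozygous sites are fixed). The encoding, operators, parameter values, and the 0/1/2 genotype coding are illustrative assumptions, not the authors' actual algorithm.

```python
import random

MISSING = -1  # gap character in a fragment

def derive_pair(bits, genotype):
    """Build a haplotype pair from a bit string over the heterozygous sites.
    Assumed genotype coding: 0 = homozygous 0, 2 = homozygous 1, 1 = heterozygous."""
    h1, h2, k = [], [], 0
    for g in genotype:
        if g == 1:                              # heterozygous: haplotypes differ here
            h1.append(bits[k]); h2.append(1 - bits[k]); k += 1
        else:                                   # homozygous: both alleles fixed
            allele = 0 if g == 0 else 1
            h1.append(allele); h2.append(allele)
    return h1, h2

def cost(fragments, weights, h1, h2):
    """WMLF-style objective: each fragment pays the weighted flips to its closer haplotype."""
    total = 0.0
    for frag, w in zip(fragments, weights):
        c1 = sum(wi for a, wi, h in zip(frag, w, h1) if a != MISSING and a != h)
        c2 = sum(wi for a, wi, h in zip(frag, w, h2) if a != MISSING and a != h)
        total += min(c1, c2)
    return total

def ga_assemble(fragments, weights, genotype, pop=40, gens=100, p_mut=0.02, seed=0):
    rng = random.Random(seed)
    n_het = sum(1 for g in genotype if g == 1)
    population = [[rng.randint(0, 1) for _ in range(n_het)] for _ in range(pop)]
    fitness = lambda bits: cost(fragments, weights, *derive_pair(bits, genotype))
    for _ in range(gens):
        next_gen = []
        while len(next_gen) < pop:
            p1 = min(rng.sample(population, 3), key=fitness)    # tournament selection
            p2 = min(rng.sample(population, 3), key=fitness)
            cut = rng.randrange(1, n_het) if n_het > 1 else 0   # one-point crossover
            child = [b ^ (rng.random() < p_mut)                 # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            next_gen.append(child)
        population = next_gen
    best = min(population, key=fitness)
    return derive_pair(best, genotype), fitness(best)

# toy data: 4 fragments over 5 heterozygous SNP sites with per-base confidence weights
fragments = [[0, 1, MISSING, 1, 0], [1, 0, 1, MISSING, 1],
             [0, 1, 0, 1, MISSING], [MISSING, 0, 1, 0, 1]]
weights = [[0.9] * 5 for _ in fragments]
(h1, h2), flips = ga_assemble(fragments, weights, genotype=[1, 1, 1, 1, 1])
print(h1, h2, flips)
```

Fixing homozygous sites and deriving the second haplotype by complementing the heterozygous ones keeps every candidate consistent with the genotype, which mirrors the benefit the abstract attributes to adding genotype information to the WMLF model.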

Robust Design Method for Complex Stochastic Inventory Model

  • Hwang, In-Keuk;Park, Dong-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1999.04a
    • /
    • pp.426-426
    • /
    • 1999
  • There are many sources of uncertainty in a typical production and inventory system. There is uncertainty as to how many items customers will demand during the next day, week, month, or year, and there is uncertainty about delivery times of the product. Uncertainty exacts a toll from management in a variety of ways: a spurt in demand or a delay in production may lead to stockouts, with the potential for lost revenue and customer dissatisfaction. Firms typically hold inventory to provide protection against uncertainty; a cushion of inventory on hand allows management to face unexpected demands or delays in delivery with a reduced chance of incurring a stockout. The proposed strategies are used for the design of a probabilistic inventory system. In the traditional approach to the design of an inventory system, the goal is to find the best setting of various inventory control policy parameters, such as the re-order level, review period, and order quantity, that minimizes the total inventory cost. Here the goals of the analysis are defined so that robustness becomes an important design criterion, and appropriate noise variables are identified. There are two main goals for the inventory policy design: one is to minimize the average inventory cost and the stockouts; the other is to minimize the variability of the average inventory cost and the stockouts. The total average inventory cost is the sum of three components: the ordering cost, the holding cost, and the shortage cost. The shortage cost includes the cost of lost sales, loss of goodwill, customer dissatisfaction, etc. The noise factors for this design problem are identified to be the mean demand rate and the mean lead time; both the demand and the lead time are assumed to be normal random variables. Robustness for this inventory system is therefore interpreted as insensitivity of the average inventory cost and the stockouts to uncontrollable fluctuations in the mean demand rate and mean lead time. To design this inventory system for robustness, the concept of utility theory is used. Utility theory is an analytical method for making a decision concerning an action to take, given a set of multiple criteria upon which the decision is to be based. It is appropriate for design problems whose attributes have different scales, such as demand rate and lead time, since it maps the attributes onto a common zero-to-one scale, with higher preference modeled by a higher rank. Using utility theory, three design strategies for the robust inventory system, namely a distance strategy, a response strategy, and a priority-based strategy, are developed.
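As a rough illustration of evaluating such a policy against the two noise factors, the sketch below simulates a continuous-review (r, Q) system with normally distributed demand and lead time and reports the mean and spread of the average daily cost over a few noise-factor settings. The policy values, cost parameters, and noise levels are hypothetical stand-ins, not the paper's actual model.

```python
import random, statistics

def simulate(r, Q, mean_demand, mean_lead, days=365, seed=0,
             hold=1.0, order_cost=50.0, shortage=10.0):
    """Daily simulation of a continuous-review (r, Q) policy.
    Returns (average daily cost, number of stockout days)."""
    rng = random.Random(seed)
    inventory, pipeline = r + Q, []              # pipeline holds (arrival_day, qty)
    total_cost, stockout_days = 0.0, 0
    for day in range(days):
        for due, qty in [o for o in pipeline if o[0] == day]:   # receive due orders
            inventory += qty
            pipeline.remove((due, qty))
        demand = max(0, round(rng.gauss(mean_demand, 0.2 * mean_demand)))
        if demand > inventory:                                  # stockout day
            stockout_days += 1
            total_cost += shortage * (demand - inventory)
            inventory = 0
        else:
            inventory -= demand
        position = inventory + sum(q for _, q in pipeline)
        if position <= r:                                       # place a replenishment order
            lead = max(1, round(rng.gauss(mean_lead, 0.2 * mean_lead)))
            pipeline.append((day + lead, Q))
            total_cost += order_cost
        total_cost += hold * inventory                          # holding cost
    return total_cost / days, stockout_days

# robustness check: the same policy, varying the two noise factors
noise_settings = [(18, 4), (20, 5), (22, 6)]    # hypothetical (mean demand, mean lead) levels
results = [simulate(r=120, Q=200, mean_demand=d, mean_lead=l, seed=s)
           for s, (d, l) in enumerate(noise_settings)]
costs = [c for c, _ in results]
print("mean cost:", statistics.mean(costs), "cost spread:", statistics.pstdev(costs))
print("stockout days per setting:", [s for _, s in results])
```

A robust policy, in the sense described above, is one for which both the mean and the spread of these responses stay small across the noise-factor settings.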


Evaluation of the Probability of Failure in Rock Slope Using Fuzzy Reliability Analysis (Estimation of the failure probability of rock slopes using a fuzzy reliability analysis technique)

  • Park, Hyuck-Jin
    • Economic and Environmental Geology
    • /
    • v.41 no.6
    • /
    • pp.763-771
    • /
    • 2008
  • Uncertainties are pervasive in engineering geological problems, and their presence and significance in the analysis and design of slopes have been recognized. Since these uncertainties cannot be taken into account by conventional deterministic approaches to slope stability analysis, probabilistic analysis has been considered the primary tool for representing uncertainties in mathematical models. However, some uncertainties are caused by incomplete information due to a lack of data, and such uncertainties cannot be handled appropriately by the probabilistic approach; for them, the theory of fuzzy sets is more appropriate. Therefore, in this study, a fuzzy reliability analysis is proposed in order to deal with the uncertainties that cannot be quantified in the probabilistic analysis because of limited information. As a practical example, a slope is selected and both the probabilistic analysis and the fuzzy reliability analysis are carried out for planar failure. In the fuzzy reliability analysis, the dip angle and the internal friction angle of the discontinuity are treated as triangular fuzzy numbers, since the random properties of these variables cannot be obtained completely under conditions of limited information. The fuzzy reliability index and the probabilities of failure are evaluated with fuzzy arithmetic and compared to those obtained from the probabilistic approach using Monte Carlo simulation and the point estimate method. The analysis results show that the fuzzy reliability analysis is more appropriate when the uncertainties arise from incomplete information.
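A minimal sketch of the fuzzy-arithmetic side of such an analysis is given below, using alpha-cut interval arithmetic on triangular fuzzy numbers for the friction angle and the dip angle and a simplified planar-failure factor of safety FS = tan(phi)/tan(psi). The simplified FS expression and the example values are assumptions for illustration; the paper's slope geometry and fuzzy reliability index are not reproduced.

```python
import math

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at membership level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def fs_interval(phi_cut, psi_cut):
    """FS = tan(phi)/tan(psi) increases with phi and decreases with psi,
    so the interval endpoints pair the opposite extremes."""
    phi_lo, phi_hi = (math.radians(v) for v in phi_cut)
    psi_lo, psi_hi = (math.radians(v) for v in psi_cut)
    return (math.tan(phi_lo) / math.tan(psi_hi),
            math.tan(phi_hi) / math.tan(psi_lo))

phi = (30.0, 35.0, 40.0)   # hypothetical friction angle (degrees) as a triangular fuzzy number
psi = (25.0, 28.0, 31.0)   # hypothetical discontinuity dip (degrees) as a triangular fuzzy number

for alpha in (0.0, 0.5, 1.0):
    lo, hi = fs_interval(alpha_cut(phi, alpha), alpha_cut(psi, alpha))
    print(f"alpha = {alpha:.1f}: FS in [{lo:.2f}, {hi:.2f}]")
```

Stacking these intervals over alpha levels yields a fuzzy factor of safety, from which a fuzzy reliability index or the possibility of FS falling below 1 can then be read off.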

A study on object recognition using morphological shape decomposition

  • Ahn, Chang-Sun;Eum, Kyoung-Bae
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 1999.05a
    • /
    • pp.185-191
    • /
    • 1999
  • Mathematical morphology based on set theory has been applied to various areas of image processing. Pitas proposed an object recognition algorithm using Morphological Shape Decomposition (MSD) and a new representation scheme called Morphological Shape Representation (MSR). Pitas's algorithm is a simple and adequate approach for recognizing objects that are rotated in 45-degree increments with respect to the model object, but the scheme fails in the case of arbitrary rotation. This disadvantage may be compensated for by defining smaller angle increments, but doing so greatly increases the computational complexity, because smaller steps require more rotations. In this paper, we propose a new method for object recognition based on MSD. The first step of our method decomposes a binary shape into a union of simple binary shapes, and then a new tree structure is constructed that represents the relations among the binary shapes in an object. Finally, we obtain feature information that is invariant to rotation, translation, and scaling from the tree and calculate matching scores using an efficient matching measure. Because our method does not need to rotate the object being tested, it can be more efficient than Pitas's method. MSR has an intricate structure, so it can be difficult to calculate matching scores even for a moderately complex object; our tree has a simpler structure than MSR, which makes the matching score easier to calculate. We experimented with 20 test images: scaled, rotated, and translated versions of five kinds of automobile images. The simulation results using octagonal structuring elements show a 95% correct recognition rate. Experimental results using approximated circular structuring elements are also examined, and the effect of noise on the MSR scheme is considered.
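The decomposition step can be sketched along the lines described above: the binary shape is peeled into a union of simple components, each taken as the opening of the current residual by the largest structuring element that still fits. The sketch below uses a 3x3 square structuring element and SciPy's binary morphology as stand-ins; it illustrates the general MSD idea, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def decompose(shape, structure=np.ones((3, 3), dtype=bool), max_components=50):
    """Peel a binary shape into simple components: at each step, take the opening
    of the residual by the largest structuring element (n iterations) that still fits."""
    residual = shape.astype(bool).copy()
    components = []                              # list of (n, component mask)
    while residual.any() and len(components) < max_components:
        n = 0
        while ndimage.binary_erosion(residual, structure, iterations=n + 1).any():
            n += 1
        eroded = (ndimage.binary_erosion(residual, structure, iterations=n)
                  if n > 0 else residual)
        component = (ndimage.binary_dilation(eroded, structure, iterations=n)
                     if n > 0 else eroded)       # opening = erosion followed by dilation
        components.append((n, component))
        residual = residual & ~component
    return components

# toy example: a filled rectangle with a small protrusion
img = np.zeros((20, 20), dtype=bool)
img[4:14, 4:16] = True
img[14:17, 8:10] = True
for n, comp in decompose(img):
    print("size parameter n =", n, ", pixels =", int(comp.sum()))
```

The per-component size parameters and containment relations are the kind of information that can be organized into a tree and turned into rotation-, translation-, and scale-invariant features, as the abstract describes.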


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated in computer simulation and experimentally. Computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, an artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration involving selection, crossover, and mutation operators [2]. The evaluation of the cost function, whose aim is to select better holograms, plays an important role in the implementation of the GA; however, this evaluation wastes much time Fourier transforming the parameters encoded on the hologram into the value to be evaluated, and depending on the speed of the computer it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed; the initial population then contains fewer random trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed, so that the initial population contains fewer random holograms and is supplemented with approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to obtain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modification of the initial step; hence the parameter values verified in Ref. [2], such as the probability of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, a genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
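To make the Fourier-transform cost evaluation and the seeded initialization concrete, here is a rough Python/NumPy sketch: a binary phase hologram is scored by Fourier transforming it and comparing the reconstruction intensity to a target pattern, and the initial population mixes a few seed holograms (standing in for the ANN outputs) with random ones. The target pattern, array size, selection scheme, and all parameter values are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

N = 32
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0              # hypothetical desired intensity pattern

def cost(bits):
    """Score a binary phase hologram: Fourier transform it and compare the
    normalized reconstruction intensity with the target pattern (lower is better)."""
    field = np.fft.fft2(np.exp(1j * np.pi * bits))        # binary phase: 0 or pi
    intensity = np.abs(np.fft.fftshift(field)) ** 2
    return np.mean((intensity / intensity.max() - target) ** 2)

def initial_population(pop_size, seeds=()):
    """Mix seed holograms (standing in for ANN outputs) with random ones,
    so fewer random trial holograms are needed."""
    population = [np.asarray(s, dtype=np.uint8) for s in seeds]
    rng = np.random.default_rng(0)
    while len(population) < pop_size:
        population.append(rng.integers(0, 2, size=(N, N), dtype=np.uint8))
    return population

def evolve(population, generations=100, p_mut=0.001, seed=1):
    rng = np.random.default_rng(seed)
    for _ in range(generations):
        parents = sorted(population, key=cost)[: len(population) // 2]  # keep best half
        children = []
        while len(children) < len(population):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((N, N)) < 0.5                 # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            flip = rng.random((N, N)) < p_mut               # bit-flip mutation
            children.append(np.where(flip, 1 - child, child).astype(np.uint8))
        population = children
    return min(population, key=cost)

best = evolve(initial_population(pop_size=30))
print("best cost:", cost(best))
```

Seeding `initial_population` with a few approximately correct holograms, as the hybrid algorithm does with the ANN outputs, shrinks the random part of the search and is what cuts the number of costly FFT-based fitness evaluations.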
