• Title/Summary/Keyword: Final set


Estimation of Setting Time of Cement Mortar combined with Recycled Aggregate Powder and Cement Kiln Dust based on Equivalent Age

  • Han, Min-Cheol
    • Journal of the Korea Institute of Building Construction
    • /
    • v.12 no.1
    • /
    • pp.87-97
    • /
    • 2012
  • This paper presents a method of estimating the setting time of cement mortar incorporating recycled aggregate powder (RP) and cement kiln dust (CKD) at various curing temperatures by applying an equivalent age method. To estimate setting time, equivalent age based on the apparent activation energy (Ea) was applied. Increasing the RP and CKD contents shortens both the initial and final set. The Ea values at the initial and final set obtained from the Arrhenius function differed with mixture type; they were estimated at 10~19 kJ/mol for all mixtures, which is smaller than the 30~50 kJ/mol range of conventional mixtures. When these Ea values are applied to Freiesleben Hansen and Pedersen's equivalent age function, the equivalent age is nearly constant regardless of curing temperature and RP content. This implies that the maturity concept is applicable to estimating the setting time of concrete containing RP and CKD. A high correlation was observed between estimated and measured setting times. A multiple regression model was provided to determine setting time as a function of RP and CKD contents. Thus, the setting time estimation method studied herein is applicable to concrete incorporating RP and CKD in the construction field.
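The equivalent age function named in the abstract can be sketched directly. Below is a minimal Python sketch of the Freiesleben Hansen and Pedersen equivalent age calculation with an Arrhenius temperature factor; the Ea of 15 kJ/mol, the 20°C reference temperature, and the one-hour intervals are illustrative assumptions, not values taken from the paper's mixtures.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def equivalent_age(temps_c, dt_hours, ea_j_per_mol, t_ref_c=20.0):
    """Freiesleben Hansen & Pedersen equivalent age:
    t_e = sum over intervals of exp(-Ea/R * (1/T - 1/T_ref)) * dt."""
    t_ref_k = t_ref_c + 273.15
    t_e = 0.0
    for temp_c in temps_c:
        t_k = temp_c + 273.15
        t_e += math.exp(-ea_j_per_mol / R * (1.0 / t_k - 1.0 / t_ref_k)) * dt_hours
    return t_e

# Illustrative Ea of 15 kJ/mol, within the paper's reported 10~19 kJ/mol range
age_20 = equivalent_age([20.0] * 10, 1.0, 15000.0)  # 10 h at the reference temp
age_35 = equivalent_age([35.0] * 10, 1.0, 15000.0)  # same clock time, warmer curing
```

At the reference temperature the Arrhenius factor is exactly 1, so equivalent age equals clock time; warmer curing yields a larger equivalent age, which is consistent with the shortened setting times reported above.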

A Filter Lining Scheme for Efficient Skyline Computation

  • Kim, Ji-Hyun;Kim, Myung
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.12
    • /
    • pp.1591-1600
    • /
    • 2011
  • The skyline of a multidimensional data set is the maximal subset whose elements are not dominated by other elements of the set. Skyline computation is considered very useful for decision making systems that deal with multidimensional data analyses. Recently, a great deal of interest has been shown in improving the performance of skyline computation algorithms. To speed up computation, the number of comparisons between data elements should be reduced. In this paper, we propose a filter lining scheme to accomplish this objective. The scheme divides the multidimensional data space into angle-based partitions, places a filter for each partition, and then connects the filters to establish the final filter line. The filter line can be used in the preprocessing stage to eliminate data that are not part of the skyline from the original data set. The filter line is adaptively improved during the data scanning stage. Skylines are then computed for each remaining data partition and merged to form the final skyline. Our scheme improves on a previously reported preprocessing scheme that uses simple filters. The performance of the scheme is demonstrated by experiments.
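For readers unfamiliar with skyline queries, the dominance test the abstract relies on can be sketched as follows. This is a naive quadratic baseline under a smaller-is-better convention, not the paper's angle-partitioned filter lining scheme; the function names are illustrative.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (smaller-is-better convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive O(n^2) skyline: keep points dominated by no other point.
    (The paper's filter line prunes most points before such comparisons.)"""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 9), (2, 8), (3, 3), (4, 4), (5, 1), (6, 2)]
sky = skyline(pts)  # (4, 4) and (6, 2) are dominated and drop out
```

Every pairwise `dominates` call here is exactly the kind of comparison the filter line is designed to avoid: a point dominated by a filter can be discarded during the initial scan without being compared against the whole data set.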

Design and Implementation of Object Classes for Terrain Simulation (지형형상화를 위한 객체 클래스 설계 및 구현)

  • 노용덕
    • Journal of the Korea Society for Simulation
    • /
    • v.6 no.1
    • /
    • pp.61-69
    • /
    • 1997
  • In 3D computer graphics, fractal techniques have been applied to terrain modeling. Although fractal models are a convenient way to generate terrain data, it is not easy to obtain the final result by manipulating that data directly. By using object-oriented programming techniques, however, we could reduce the programming effort required to reach the final result. In this paper, a set of classes built with object-oriented programming techniques is presented. To demonstrate the results, the data of a terrain model were generated by a fractal technique, namely the midpoint displacement method on square lattices of points.
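The midpoint displacement method mentioned in the abstract can be sketched in an object-oriented style. The sketch below is a one-dimensional profile variant for brevity (the paper uses square lattices of points, i.e. the 2-D case); the class name, roughness parameter, and recursion scheme are illustrative assumptions.

```python
import random

class TerrainProfile:
    """Object class for a fractal terrain profile built by midpoint
    displacement; heights are stored at 2**levels + 1 points."""

    def __init__(self, levels, roughness=0.5, seed=0):
        random.seed(seed)
        n = 2 ** levels
        self.heights = [0.0] * (n + 1)
        self._displace(0, n, scale=1.0, roughness=roughness)

    def _displace(self, lo, hi, scale, roughness):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        # midpoint = mean of the endpoints plus a random offset whose
        # amplitude shrinks at each recursion level
        self.heights[mid] = ((self.heights[lo] + self.heights[hi]) / 2
                             + random.uniform(-scale, scale))
        self._displace(lo, mid, scale * roughness, roughness)
        self._displace(mid, hi, scale * roughness, roughness)

profile = TerrainProfile(levels=4)  # 17 height samples
```

Wrapping the recursion in a class is the point the abstract makes: the lattice data and the generation method travel together, so changing the roughness or the displacement rule means editing one class rather than scattered procedural code.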


A Study on the set-up of Time Range for Typology of Space Observation Characters (공간주시특성의 유형화를 위한 시간범위설정에 관한 연구)

  • Kim, Jong-Ha;Jung, Jae-Young
    • Korean Institute of Interior Design Journal
    • /
    • v.21 no.4
    • /
    • pp.87-95
    • /
    • 2012
  • This study analyzes which elements of a space users observing the lobby of a public space pay the most attention to in their visual perception, focusing on the process of typologizing observation characteristics. During observation, the subjects became interested in the circumstantial clues used for space perception and in the detailed features that drew their interest. By comprehending and typologizing these observation characteristics, the observation behavior of subjects viewing the space could be analyzed. First, over nine successive observations, each subject observed for 0.32 seconds to obtain visual perception of the space but spent another 0.39 seconds exploring other observation targets or roaming the space; the subjects in the public lobby selected for this experiment thus spent more time on spatial exploration than on concentrating on a single point. Second, the typology process was analyzed through the time range. Since the subjects' frequencies varied depending on how the time range was set up, the need was raised to set up the time range for analyzing observation characteristics more objectively. Third, when observation characteristics were analyzed in 10-second time ranges, concentration in the beginning and the middle accounted for 25%, and concentration in the beginning and the final stage for 41.7%; adding concentration in the beginning alone shows that 75% of the subjects concentrated in the beginning of the observation time. Fourth, type 3, categorized as "concentration in the beginning and the middle," is the group to which 47.1% of the subjects belong; each subject concentrated 1.1 times in the beginning and 2.1 times in the final stage, showing that concentration in the final stage was 1.75 times as high as that in the beginning.


Image Edge Detection Applying the Toll Set and Entropy Concepts (톨연산과 엔트로피 개념에 기초한 화상의 경계선 추출)

  • Cho, Dong-Uk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.3
    • /
    • pp.471-477
    • /
    • 1996
  • An image edge detection method based on the toll set concept is proposed. Initially, the edge structure of an image is established following a human perception model. Toll set membership values are then computed, and the toll set intersection and union operators are applied to them. The final toll set membership values are normalized to obtain vagueness degrees, and an entropy-based thresholding operation is performed on them to determine the edges of the image.


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to one class far outnumbers the number of records belonging to the other class an 'imbalanced data set.' Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity.' In a customer churn prediction problem, 'retention' records constitute the majority class and 'churn' records the minority class. Sensitivity measures the proportion of actual retentions correctly identified as such; specificity measures the proportion of churns correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to produce a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The construction and prediction process of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I and ANN_I models are constructed using the imbalanced data set, and an SVM_B model is constructed using the balanced data set. The SVM_I model is superior in sensitivity and the SVM_B model in specificity. For a record on which the SVM_I and SVM_B models make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and the decision tree: for records on which SVM_I and SVM_B disagree, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn label as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research; what we present here is the structure or framework of the hybrid SVM model, not a specific threshold value, so the threshold in the above rules can be changed to suit the data. To evaluate the performance of our hybrid SVM model, we used the 'churn' data set in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of the SVM_I or SVM_B model alone. The points worth noticing are its sensitivity, 95.02%, and specificity, 69.24%; the sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
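The combination rule stated in the abstract can be written down directly; only the label strings and the function name below are invented, and the 0.285 threshold is, as the authors note, specific to their data.

```python
def hybrid_prediction(svm_i_pred, svm_b_pred, ann_i_output, threshold=0.285):
    """Final solution of the hybrid SVM model: agreeing SVM predictions
    stand; disagreements fall back to a threshold on the ANN_I output.
    The labels 'Retention'/'Churn' and the function name are illustrative;
    the 0.285 threshold is specific to the paper's data."""
    if svm_i_pred == svm_b_pred:
        return svm_i_pred
    return 'Retention' if ann_i_output < threshold else 'Churn'

# Disagreement case: the ANN_I output value decides
final = hybrid_prediction('Retention', 'Churn', ann_i_output=0.51)
```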

Estimation in Mixture of Shifted Poisson Distributions

  • Oh, Chang-Hyuck
    • Journal of the Korean Data and Information Science Society
    • /
    • v.17 no.4
    • /
    • pp.1209-1217
    • /
    • 2006
  • For a mixture of shifted Poisson distributions, a method of parameter estimation is proposed. The range of the shift parameters is estimated first, and for each candidate set of shift parameters the EM algorithm is applied to estimate the remaining parameters of the distribution. Among the estimated parameter sets, the one with maximum likelihood for the given data is taken as the final estimate. In simulation experiments, the suggested estimation method is shown to perform well.
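A minimal sketch of the inner loop can look like this, assuming a two-component mixture with the shift parameters held fixed; the starting values and the synthetic data are illustrative, and the full method would repeat this over the estimated range of shift sets and keep the best-likelihood fit.

```python
import math

def shifted_pois_pmf(x, lam, shift):
    """Poisson pmf translated so its support starts at `shift`."""
    k = x - shift
    if k < 0:
        return 0.0
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_fixed_shifts(data, shifts, n_iter=100):
    """EM for a two-component shifted-Poisson mixture, shifts held fixed."""
    w, lam = [0.5, 0.5], [1.0, 5.0]  # illustrative starting values
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * shifted_pois_pmf(x, lam[j], shifts[j]) for j in (0, 1)]
            tot = sum(p) or 1e-300
            resp.append([pj / tot for pj in p])
        # M-step: update mixing weights and (shifted) Poisson means
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            lam[j] = sum(r[j] * (x - shifts[j])
                         for r, x in zip(resp, data)) / max(nj, 1e-300)
    loglik = sum(math.log(sum(w[j] * shifted_pois_pmf(x, lam[j], shifts[j])
                              for j in (0, 1)) or 1e-300) for x in data)
    return w, lam, loglik

# Two clusters of counts, near 0 and near 10, matching shifts of 0 and 10
data = [0, 1, 1, 2, 10, 11, 11, 12]
w, lam, loglik = em_fixed_shifts(data, shifts=(0, 10))
```

Running this once per candidate shift set and comparing the returned `loglik` values gives the outer selection step the abstract describes.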


Classification and Regression Tree Analysis for Molecular Descriptor Selection and Binding Affinities Prediction of Imidazobenzodiazepines in Quantitative Structure-Activity Relationship Studies

  • Atabati, Morteza;Zarei, Kobra;Abdinasab, Esmaeil
    • Bulletin of the Korean Chemical Society
    • /
    • v.30 no.11
    • /
    • pp.2717-2722
    • /
    • 2009
  • The use of classification and regression tree (CART) methodology was studied in a quantitative structure-activity relationship (QSAR) context on a data set consisting of the binding affinities of 39 imidazobenzodiazepines for the α1 benzodiazepine receptor. The 3-D structures of these compounds were optimized using HyperChem software with the semiempirical AM1 method. After optimization, a set of 1481 zero- to three-dimensional descriptors was calculated for each molecule in the data set. The response (dependent variable) in the tree model was the binding affinity of each drug. Three descriptors (two topological and one 3D-MoRSE descriptor) appear in the final tree structure to describe the binding affinities. The mean relative error for the data set is 3.20%, compared with 6.63% for a previous model. To evaluate the predictive power of CART, cross-validation was also performed.
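The core CART step, choosing a split on one descriptor that minimizes the squared error around the child-node means, can be sketched as follows; the descriptor values and affinities are made-up numbers, not data from the paper.

```python
def best_split(xs, ys):
    """Core CART regression step: pick the threshold on one descriptor
    that minimises the summed squared error around the child-node means."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best_t, best_err = None, float('inf')
    for t in sorted(set(xs))[:-1]:  # candidate thresholds between observed values
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Made-up descriptor values and binding affinities with an obvious break
t, err = best_split([1, 2, 3, 10, 11, 12], [5.0, 5.2, 5.1, 8.0, 8.1, 7.9])
```

Growing a full tree repeats this search over every descriptor at every node, which is how CART ends up selecting only three of the 1481 descriptors.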

Multi-Attribute and Multi-Expert Decision Making by Vague Set (Vague Set를 이용한 다속성.다수전문가 의사결정)

  • 안동규;이상용
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.20 no.43
    • /
    • pp.321-331
    • /
    • 1997
  • Measurement of attributes is often highly subjective and imprecise, yet most MADM methods lack provisions for handling imprecise data. Frequently, decision makers must establish a ranking within a finite set of alternatives with respect to multiple attributes of varying importance. The problem is more complex if the evaluations of alternatives on each attribute are expressed not in precise numbers but in fuzzy numbers; the analysis must allow for lack of precision and partial truth. The advantages of a fuzzy approach to MADM are that a decision maker can obtain efficient solutions all at once without trial and error, and that it provides better support for judging the interactive improvement of solutions in comparison with other decision making methods. The algorithm used in this study is based on the concepts of vague set theory. Linguistic variables and vague values are used to facilitate a decision maker's subjective assessment of attribute weightings and of the appropriateness of each alternative with respect to the selection attributes, in order to obtain final scores called vague appropriateness indices. A numerical example is presented to show the practical applicability of this approach.
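A vague value can be represented as an interval [t, 1 − f] of truth- and false-membership. The sketch below scores alternatives with the common t − f score function and an attribute-weighted sum; this is one conventional aggregation, and the paper's exact construction of vague appropriateness indices may differ.

```python
def vague_score(t, f):
    """Score of a vague value [t, 1 - f]: truth-membership minus
    false-membership (one common convention)."""
    return t - f

def rank_alternatives(ratings, weights):
    """ratings[i][j] = (t, f) vague rating of alternative i on attribute j;
    weights[j] = importance of attribute j. Returns (order, scores) with
    alternative indices sorted best-first by weighted score."""
    scores = [sum(w * vague_score(t, f) for (t, f), w in zip(alt, weights))
              for alt in ratings]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores

order, scores = rank_alternatives(
    [[(0.8, 0.1), (0.6, 0.2)],   # alternative 0
     [(0.5, 0.3), (0.9, 0.0)]],  # alternative 1
    weights=[0.7, 0.3])
```

In a multi-expert setting, each expert's (t, f) ratings would first be aggregated per attribute before this weighted ranking step.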


Some Problems relating to Use of Letters of Intent in International Contracts (국제계약에 있어서 의향서의 사용과 관련한 문제점)

  • Choi, Myung-Kook
    • THE INTERNATIONAL COMMERCE & LAW REVIEW
    • /
    • v.51
    • /
    • pp.55-78
    • /
    • 2011
  • This paper derives some problems relating to the use of letters of intent, which are a common occurrence in international contracts, after considering their nature and legal issues. As reviewed, problems may occur when a party has documented a stage in the negotiations by a letter of intent. Such documents may well explicitly spell out if, and to what extent, the parties should be bound by what they have already agreed, or should carry on negotiations in order to reach the final contract. But if the documents are silent, problems arise. Contracting parties are therefore well advised to spell out if, and to what extent, they should be bound by such preliminary agreements. Here again, it may be prudent to set forth explicitly that the parties are not bound until there is a final written contract signed by authorized representatives of the parties, but that they shall abstain from measures which may defeat their stated objective of reaching final agreement, for example by diminishing the value of performance under the contemplated contract.
