• Title/Summary/Keyword: set-based analysis

Application of artificial neural network model in regional frequency analysis: Comparison between quantile regression and parameter regression techniques.

  • Lee, Joohyung;Kim, Hanbeen;Kim, Taereem;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.170-170 / 2020
  • Due to the development of computing technologies, complex computations on huge data sets are now possible on an ordinary personal computer. Therefore, machine learning methods have been widely applied in the hydrologic field, for example in regression-based regional frequency analysis (RFA). The main purpose of this study is to compare two RFA frameworks based on artificial neural network (ANN) models: the quantile regression technique (QRT-ANN) and the parameter regression technique (PRT-ANN). As its output layer, the QRT-ANN predicts quantiles for various return periods, whereas the PRT-ANN predicts the three parameters of the generalized extreme value (GEV) distribution. Rainfall gauging sites with record lengths of more than 20 years were selected, and their annual maximum rainfalls and various hydro-meteorological variables were used as the input layer of the ANN model. When training the ANN model, 70% and 30% of the gauging sites were used as the training set and testing set, respectively. For each technique, the ANN model structure, such as the number of hidden layers and nodes, was determined by leave-one-out validation based on the root mean square error (RMSE). To assess the performance of the two frameworks, the RMSEs of the quantiles predicted by the QRT-ANN are compared to those of the PRT-ANN.

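Not part of the paper above, but as a rough illustration of the two output-layer designs it compares, here is a minimal sketch assuming scikit-learn's MLPRegressor as the ANN and SciPy's genextreme for the GEV distribution; all variable names, layer sizes, and data handling are placeholders rather than the study's actual configuration.

```python
# Rough sketch (not the paper's code): QRT-ANN vs. PRT-ANN output layers.
# X would hold hydro-meteorological site descriptors; targets come from gauged sites.
import numpy as np
from scipy.stats import genextreme
from sklearn.neural_network import MLPRegressor

return_periods = np.array([10, 20, 50, 100])      # years (placeholder choice)
non_exceedance = 1.0 - 1.0 / return_periods       # quantile levels to predict

# QRT-ANN: the output layer predicts one rainfall quantile per return period.
qrt_ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
# qrt_ann.fit(X_train, quantiles_train)           # quantiles_train: (n_sites, 4)

# PRT-ANN: the output layer predicts the three GEV parameters per site.
prt_ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
# prt_ann.fit(X_train, gev_params_train)          # gev_params_train: (n_sites, 3)

def prt_quantiles(shape, loc, scale):
    """Turn predicted GEV parameters into quantiles for the return periods.
    Note that SciPy's sign convention for the GEV shape parameter differs from
    the convention common in hydrology."""
    return genextreme.ppf(non_exceedance, shape, loc=loc, scale=scale)

def rmse(predicted, observed):
    """Root mean square error used to compare the two frameworks."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)))
```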

PCA vs. ICA for Face Recognition

  • Lee, Oyoung;Park, Hyeyoung;Park, Seung-Jin
    • Proceedings of the IEEK Conference / 2000.07b / pp.873-876 / 2000
  • The information-theoretic approach to face recognition is based on compact coding, in which face images are decomposed into a small set of basis images. The most popular method for compact coding may be principal component analysis (PCA), on which eigenface methods are based. PCA-based methods exploit only the second-order statistical structure of the data, so higher-order statistical dependencies among pixels are not considered. Independent component analysis (ICA) is a signal processing technique whose goal is to express a set of random variables as linear combinations of statistically independent component variables. ICA exploits the higher-order statistical structure of the data, which contains important information. In this paper we employ ICA for efficient feature extraction from face images and show that ICA outperforms PCA in the task of face recognition. Experimental results using a simple nearest-neighbor classifier and a multilayer perceptron (MLP) are presented to illustrate the performance of the proposed method.

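As an illustration of the comparison described above (not the paper's original implementation, which predates these libraries), here is a minimal sketch using scikit-learn's PCA and FastICA for compact coding followed by a 1-nearest-neighbor classifier; the face data, component counts, and train/test split are assumed placeholders.

```python
# Rough sketch (not the paper's original code): PCA vs. ICA compact coding
# followed by a 1-nearest-neighbor face classifier.
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def recognition_accuracy(coder, X_train, y_train, X_test, y_test):
    """Fit a compact-coding transform plus a nearest-neighbor classifier
    on flattened face images and return the test accuracy."""
    model = make_pipeline(coder, KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

# X_* are flattened face images of shape (n_samples, n_pixels); y_* are identities.
# acc_pca = recognition_accuracy(PCA(n_components=50), X_train, y_train, X_test, y_test)
# acc_ica = recognition_accuracy(FastICA(n_components=50, max_iter=1000),
#                                X_train, y_train, X_test, y_test)
```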

Strategies to Improve Rural Amenity through Establishing Eco-village (생태마을 조성을 통한 농촌 어메니티 향상 방안)

  • Ban, Yong-Un;Jung, Jae-Ho;Baek, Jong-In
    • Journal of Korean Society of Rural Planning / v.14 no.4 / pp.33-45 / 2008
  • This study intended to find strategies for improving rural amenity through establishing eco-villages. It presented eco-village establishment as an alternative for solving the problems that current rural villages face, such as the farming crisis brought on by FTAs, aging, and weakening farming competitiveness. The study therefore set up ecological rural amenity strategies based on the principles and cases of both eco-village development and rural amenity, using suitability analysis together with statistical analysis of survey data. It defined eco-rural amenity and set up categories of environment, history-culture, society, image, and economy. Based on this definition and these categories, the study identified strategies to improve ecological rural amenity for each category.

An Empirical Comparison of Predictability of Ranking-based and Choice-based Conjoint Analysis (순위기반 컨조인트분석과 선택기반 컨조인트분석의 예측력에 대한 실증적 비교)

  • Kim, Bu-Yong
    • The Korean Journal of Applied Statistics / v.27 no.5 / pp.681-691 / 2014
  • Ranking-based conjoint analysis (RBCA) and choice-based conjoint analysis (CBCA) have attracted significant interest in various fields such as marketing research. When conducting research, the researcher has to select one suitable approach in consideration of their strengths and weaknesses. This article performs an empirical comparison of the predictability of RBCA and CBCA in order to provide a criterion for the selection. A new concept of measurement set is developed by combining the ranking set and choice set. The measurement set enables us to apply the two approaches separately to the same consumer group, which allows a fair comparison of predictability. RBCA and CBCA are conducted on consumer preferences for RTD-coffee; subsequently, the predicted values of market shares and hit rates are compared. The study results reveal that their predictabilities are not significantly different. Further, the results indicate that RBCA is recommended if the researcher wants to improve data quality by filtering out poor responses or to implement market segmentation. In contrast, CBCA is recommended if the researcher wants to lessen the burden on the respondents or to measure preferences under conditions similar to the actual marketplace.
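To make the hit-rate criterion mentioned above concrete, here is a minimal sketch (not taken from the paper) of how a hit rate can be computed once part-worth utilities have been estimated: the predicted choice in each holdout choice set is the profile with the highest total utility, and the hit rate is the share of respondents whose predicted and actual choices match. The numbers below are made up.

```python
# Rough sketch: hit rate for a holdout choice task, given estimated utilities.
import numpy as np

def hit_rate(utilities, chosen):
    """utilities: (n_respondents, n_profiles) estimated total utilities;
    chosen: index of the profile each respondent actually chose."""
    predicted = np.argmax(utilities, axis=1)      # highest-utility profile per person
    return float(np.mean(predicted == chosen))

# Invented example: 3 respondents evaluating a 4-profile choice set.
u = np.array([[1.2, 0.4, 0.9, 0.1],
              [0.3, 1.5, 0.2, 0.8],
              [0.7, 0.6, 1.1, 0.9]])
print(hit_rate(u, np.array([0, 1, 0])))           # 2 of 3 predicted correctly -> 0.667
```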

Optimization of Data Placement using Principal Component Analysis based Pareto-optimal method for Multi-Cloud Storage Environment

  • Latha, V.L. Padma;Reddy, N. Sudhakar;Babu, A. Suresh
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.248-256 / 2021
  • Now that we are in the big data era, data has taken on a new significance as storage capacity has exploded from terabytes to petabytes at a breakneck pace. As the use of cloud computing expands and becomes more widely accepted, many businesses and institutions are opting to store their applications and data there. Cloud storage's concept of a nearly infinite storage resource pool makes data storage and access scalable and readily available. Most of them, however, favour a single cloud because of the simplicity and inexpensive storage costs it offers in the short run. Single-cloud data storage, in turn, raises concerns such as vendor lock-in, privacy leakage, and unavailability. With geographically dispersed cloud storage providers, multicloud storage can alleviate these risks. One of the key challenges in such a storage system is to arrange user data in a cost-effective and highly available manner. A multicloud storage architecture is given in this study. Next, a multi-objective optimization problem is defined to minimise total cost and maximise data availability at the same time; it is solved using a technique based on the non-dominated sorting genetic algorithm II (NSGA-II) to obtain a set of non-dominated solutions known as the Pareto-optimal set. When consumers cannot pick from the Pareto-optimal set directly, a method based on Principal Component Analysis (PCA) is presented to find the best answer. Finally, thorough tests based on a variety of real-world cloud storage scenarios show that the proposed method performs as expected.
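As a rough sketch of the selection step described above (not the paper's implementation), the snippet below extracts the non-dominated solutions from a small set of hypothetical placement plans scored on two minimization objectives and projects them onto the first principal component of the standardized objectives; the NSGA-II search itself is omitted, and the sign of the principal component would have to be checked against its loadings before using it to rank plans.

```python
# Rough sketch: Pareto filtering plus a PCA projection of the objectives.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def non_dominated(objectives):
    """Return indices of non-dominated rows; all objectives are minimized."""
    keep = []
    for i, a in enumerate(objectives):
        dominated = any(np.all(b <= a) and np.any(b < a)
                        for j, b in enumerate(objectives) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Invented plans: columns are (total storage cost, expected unavailability).
plans = np.array([[10.0, 0.020],
                  [12.0, 0.010],
                  [ 9.0, 0.035],
                  [11.0, 0.030]])
front = non_dominated(plans)
scores = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(plans[front])).ravel()
# The PC sign is arbitrary; inspect the component loadings before ranking plans.
print("Pareto front:", front, "PC1 scores:", np.round(scores, 3))
```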

Elasto-plastic Finite Element Analysis for the Parametric Process Design of the Tension Leveller (2) - Full Set Analysis (금속인장교정기의 공정변수 설계를 위한 탄소성 유한요소해석 (2)-전체공정 해석)

  • Lee, H.W.;Huh, H.;Park, S.R.
    • Transactions of Materials Processing / v.11 no.2 / pp.147-154 / 2002
  • The tension levelling process is performed to elongate the strip plastically by a combination of tensile and bending strain, so that all longitudinal fibers in the strip have an approximately equal length and undesirable strip shapes are corrected to a flat shape. This paper is concerned with a simulation of the tension levelling process based on the analysis of the unit model of the tension leveller. An analysis technique, the sequential analysis of unit models, is suggested and verified against the assembly analysis of the unit models for an effective and economic analysis of the full set of the tension leveller. Analysis of the full tension levelling process using sequential unit models is carried out for steel strips with shape defects and provides the effect of the intermesh and the optimum amount of intermesh in the tension levelling process.

Utilizing Case-based Reasoning for Consumer Choice Prediction based on the Similarity of Compared Alternative Sets

  • SEO, Sang Yun;KIM, Sang Duck;JO, Seong Chan
    • The Journal of Asian Finance, Economics and Business / v.7 no.2 / pp.221-228 / 2020
  • This study suggests an alternative to the conventional collaborative filtering method for predicting consumer choice, using case-based reasoning. The case-based reasoning algorithm determines the similarity between the alternative sets that each subject considers. Case-based reasoning uses the inverse of a normalized Euclidean distance as the similarity measure. This normalized distance is calculated from the difference between attribute levels relative to the maximum range between the lowest and highest level. The similarity-based case-based reasoning predicts a target subject's choice by applying the utility values of the subjects most similar to the target subject to calculate the utility of the profiles the target subject chooses. This approach assumes that subjects who deliberate over a similar alternative set may have similar preferences for each attribute level in decision making. The results show that the similarity between the comparable alternatives consumers consider buying is a significant factor in predicting consumer choice. The interaction effect also has a positive influence on predictive accuracy. This implies that consumers who looked into the same alternatives will probably pick the same product in the end. The suggested alternative requires fewer predictors than conjoint analysis for predicting customer choices.
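Below is a minimal sketch of the similarity measure described above, the inverse of a range-normalized Euclidean distance between attribute-level vectors, with made-up attributes and ranges; a constant of 1 is added to the distance here only to avoid division by zero for identical sets, which is an assumption rather than the paper's exact formula.

```python
# Rough sketch of the similarity measure: inverse of a range-normalized
# Euclidean distance between two subjects' attribute-level vectors.
import numpy as np

def similarity(a, b, level_min, level_max):
    """Each attribute difference is scaled by the attribute's full range
    (max level - min level) before taking the Euclidean distance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ranges = np.asarray(level_max, float) - np.asarray(level_min, float)
    distance = np.linalg.norm((a - b) / ranges)
    return 1.0 / (1.0 + distance)   # "+ 1" is an assumption to avoid division by zero

# Invented example: two attributes (price in KRW, volume in ml) and their ranges.
print(round(similarity([1500, 250], [1800, 200],
                       level_min=[1000, 175], level_max=[2500, 500]), 3))
```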

Decision Analysis System for Job Guidance using Rough Set (러프집합을 통한 취업의사결정 분석시스템)

  • Lee, Heui-Tae;Park, In-Kyoo
    • Journal of Digital Convergence / v.11 no.10 / pp.387-394 / 2013
  • Data mining is the process of discovering hidden, non-trivial patterns in large numbers of data records so that they can be used effectively for analysis and forecasting. Because hundreds of variables give rise to a high level of redundancy and dimensionality together with time complexity, they are more likely to produce spurious relationships, and even the weakest relationships will be highly significant by any statistical test. Cluster analysis is therefore a main task of data mining: grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. In this paper, a system is implemented that introduces a new measure based on information-theoretic entropy and analyses the analogous behaviour of the objects at hand, so as to address the measurement of uncertainty in the classification of categorical data. The source data were taken from a survey on the job guidance of students at a high school in Pyeongtaek. We show how a variable-precision, information-entropy-based rough set can be used to group students in each section. It is shown that the proposed method classifies more exactly than the conventional method when there are more than 10 attributes, and that it is more effective for the job guidance of students.
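The paper's variable-precision rough-set machinery is not reproduced here, but as a small illustration of the information-theoretic entropy it builds on, the sketch below computes the Shannon entropy of a single categorical survey attribute; the attribute and responses are invented.

```python
# Rough sketch: Shannon entropy of a categorical survey attribute, the kind of
# uncertainty measure an information-entropy-based rough set builds on.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a list of categorical responses."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented responses to a single survey question with three categories.
responses = ["office", "office", "field", "research", "office", "field"]
print(round(entropy(responses), 3))   # higher entropy = more uncertain grouping
```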

Counter-Based Approaches for Efficient WCET Analysis of Multicore Processors with Shared Caches

  • Ding, Yiqiang;Zhang, Wei
    • Journal of Computing Science and Engineering / v.7 no.4 / pp.285-299 / 2013
  • To enable hard real-time systems to take advantage of multicore processors, it is crucial to obtain the worst-case execution time (WCET) for programs running on multicore processors. However, this is challenging and complicated due to the inter-thread interferences from the shared resources in a multicore processor. Recent research used the combined cache conflict graph (CCCG) to model and compute the worst-case inter-thread interferences on a shared L2 cache in a multicore processor, which is called the CCCG-based approach in this paper. Although it can compute the WCET safely and accurately, its computational complexity is exponential and prohibitive for a large number of cores. In this paper, we propose three counter-based approaches to significantly reduce the complexity of the multicore WCET analysis, while achieving absolute safety with tightness close to the CCCG-based approach. The basic counter-based approach simply counts the worst-case number of cache line blocks mapped to a cache set of a shared L2 cache from all the concurrent threads, and compares it with the associativity of the cache set to compute the worst-case cache behavior. The enhanced counter-based approach uses techniques to enhance the accuracy of calculating the counters. The hybrid counter-based approach combines the enhanced counter-based approach and the CCCG-based approach to further improve the tightness of analysis without significantly increasing the complexity. Our experiments on a 4-core processor indicate that the enhanced counter-based approach overestimates the WCET by 14% on average compared to the CCCG-based approach, while its averaged running time is less than 1/380 that of the CCCG-based approach. The hybrid approach reduces the overestimation to only 2.65%, while its running time is less than 1/150 that of the CCCG-based approach on average.
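Here is a minimal sketch of the basic counter-based idea described above, under the simplifying assumption that the L2 cache sets and memory blocks each concurrent thread can touch are already known: count the distinct blocks mapped to each cache set across all threads and compare the count with the set associativity. The data layout below is an invented placeholder, not the paper's benchmarks.

```python
# Rough sketch of the basic counter-based classification: a cache set can keep
# all of its blocks only if the number of distinct blocks mapped to it (from
# every concurrent thread) does not exceed the associativity.
from collections import defaultdict

def classify_sets(blocks_per_thread, associativity):
    """blocks_per_thread: {thread_id: [(l2_set_index, block_address), ...]}."""
    per_set = defaultdict(set)
    for accesses in blocks_per_thread.values():
        for set_index, block in accesses:
            per_set[set_index].add(block)         # count distinct conflicting blocks
    return {s: "fits (hit candidates)" if len(blocks) <= associativity else "may miss"
            for s, blocks in per_set.items()}

# Invented layout: two threads sharing a 2-way associative L2 with sets 0 and 1.
layout = {
    "thread0": [(0, 0xA0), (1, 0xB0)],
    "thread1": [(0, 0xC0), (0, 0xD0), (1, 0xE0)],
}
print(classify_sets(layout, associativity=2))
# set 0: 3 distinct blocks > 2 ways -> may miss; set 1: 2 blocks <= 2 ways -> fits
```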

Development of Objectives in Nursing Clinical Education based on the Nursing Core Competencies (핵심간호능력 중심 간호학실습교육목표 개발)

  • Kim, Mi-Won
    • Journal of Korean Academy of Nursing / v.36 no.2 / pp.389-402 / 2006
  • Purpose: The purpose of this study was to set up the Nursing Core Competencies required of staff nurses and to set up Objectives for Nursing Clinical Education based on those Nursing Core Competencies. The objectives in this study are to be achieved ultimately through clinical practice, because clinical practice is a common avenue of work and the basic objective regardless of the education system and curriculum. Method: The Nursing Core Competencies were established by literature review and verified by 15 experts. The Nursing Clinical Education Objectives were established by literature review and analysis, and a survey of their validity using a five-point Likert scale was given to 257 nursing professors, 503 head nurses, 509 staff nurses who had less than 3 years of clinical experience in 34 general hospitals, and 738 senior student nurses from 81 nursing colleges. Result: Nine nursing core competencies were set up. In addition, 39 objectives for the nursing clinical core competencies were set up. Conclusion: This study will contribute to professional nursing education for providing comprehensive nursing care by applying knowledge to nursing practice to achieve the Nursing Core Competencies as a professional nurse.