• Title/Summary/Keyword: 최적척도화 (optimal scaling)


Optimal Scaling and Partial Quantification in Multidimensional Preference Analysis (다차원선호분석의 최적척도화 및 부분수량화)

  • 황선영;정수진;김영원
    • The Korean Journal of Applied Statistics
    • /
    • v.14 no.2
    • /
    • pp.305-320
    • /
    • 2001
  • Multidimensional preference analysis is a method for examining the preferences of individuals (or groups) for a set of products, with results usually presented as a two-dimensional plot. This study proposes two meaningful optimal scaling criteria and derives the associated row and column representations, and also presents a way to introduce partial quantification into multidimensional preference analysis so that prior knowledge can be reflected. The proposed multidimensional analysis techniques are then applied to real preference data on Internet search engines.
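
For readers who want to experiment with the vector model underlying multidimensional preference analysis, here is a minimal sketch that fits a two-dimensional MDPREF-style biplot by a singular value decomposition of a row-standardized subject-by-object rating matrix. The optimal scaling criteria and partial quantification proposed in the paper are not reproduced; the function name `mdpref` and the toy ratings are purely illustrative.

```python
import numpy as np

def mdpref(pref, n_dims=2):
    """Fit a rank-2 vector model to a subjects-by-objects preference matrix.

    Each subject's ratings are centered and scaled (a crude stand-in for
    optimal scaling), then an SVD gives subject vectors (row markers) and
    object points (column markers) for a 2-D preference map.
    """
    X = np.asarray(pref, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)           # center each subject's ratings
    X = X / (X.std(axis=1, keepdims=True) + 1e-12)  # put subjects on a common scale
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    subject_vectors = U[:, :n_dims] * s[:n_dims]    # row representation
    object_points = Vt[:n_dims].T                   # column representation
    return subject_vectors, object_points

# Toy example: five respondents rating four search engines (invented data).
ratings = np.array([[5, 3, 2, 1],
                    [4, 4, 2, 2],
                    [1, 2, 5, 4],
                    [2, 1, 4, 5],
                    [3, 3, 3, 3]])
rows, cols = mdpref(ratings)
print("subject vectors:\n", rows)
print("object points:\n", cols)
```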


Modeling of Rate-of-Occurrence-of-Failure According to the Failure Data Type of Water Distribution Cast Iron Pipes and Estimation of Optimal Replacement Time Using the Modified Time Scale (상수도 주철 배수관로의 파손자료 유형에 따른 파손율 모형화와 수정된 시간척도를 이용한 최적교체시기의 산정)

  • Park, Su-Wan;Jun, Hwan-Don;Kim, Jung-Wook
    • Journal of Korea Water Resources Association
    • /
    • v.40 no.1 s.174
    • /
    • pp.39-50
    • /
    • 2007
  • This paper presents applications of the log-linear ROCOF (rate of occurrence of failure) and the Weibull ROCOF to model the failure rate of individual cast iron pipes in a water distribution system, and provides a method of estimating the economically optimal replacement time of the pipes using a 'modified time-scale'. The performance of the two ROCOFs is examined using their maximized log-likelihood estimates for two types of failure data: 'failure-time data' and 'failure-number data'. The optimal replacement time equations for the two models are developed by applying the 'modified time-scale', which ensures numerical convergence of the estimated model parameters. The methodology is applied to the cast iron pipes of a case-study water distribution system, and the log-linear ROCOF is found to have better modeling capability than the Weibull ROCOF when the 'failure-time data' are used. Furthermore, the 'failure-time data' prove more appropriate for both ROCOFs than the 'failure-number data' in terms of modeling performance for the water mains under study, implying that recording each failure time yields a better model of the failure rate than recording only the number of failures in time intervals.
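
As a point of reference for the log-linear ROCOF mentioned above, the following is a minimal maximum-likelihood sketch for 'failure-time data' under a nonhomogeneous Poisson process with intensity lambda(t) = exp(b0 + b1*t), observed on (0, T]. The failure times are invented, and the paper's 'modified time-scale' transformation and the Weibull ROCOF are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, times, horizon):
    """Negative log-likelihood of an NHPP with log-linear ROCOF
    lambda(t) = exp(b0 + b1*t), observed on (0, horizon]."""
    b0, b1 = params
    event_term = np.sum(b0 + b1 * times)               # sum of log-intensities at failures
    if abs(b1) < 1e-9:                                  # limiting case b1 -> 0
        cum_intensity = np.exp(b0) * horizon
    else:
        cum_intensity = np.exp(b0) / b1 * (np.exp(b1 * horizon) - 1.0)
    return -(event_term - cum_intensity)

# Invented failure times (years since installation) for a single pipe.
failure_times = np.array([3.1, 7.4, 9.8, 11.2, 12.0])
observation_end = 13.0

fit = minimize(neg_loglik, x0=np.array([-2.0, 0.05]),
               args=(failure_times, observation_end), method="Nelder-Mead")
b0_hat, b1_hat = fit.x
print(f"estimated ROCOF: lambda(t) = exp({b0_hat:.3f} + {b1_hat:.3f} t)")
```

With such a fit in hand, the economically optimal replacement time is typically obtained by balancing the growing expected repair cost against the replacement cost; that step is not shown here.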

Optimal Time Structure for Tag Cognizance Scheme based on Framed and Slotted ALOHA in RFID Networks (RFID 망에서 프레임화 및 슬롯화된 ALOHA에 기반한 Tag 인식 방식을 위한 최적 시간 구조)

  • Choi, Cheon-Won
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.9
    • /
    • pp.29-36
    • /
    • 2010
  • Consider an RFID network configured as a star, in which a single reader is surrounded by a crowd of tags. In such a network, before obtaining the information stored in a tag, the reader must cognize the tags while arbitrating collisions among the tags' responses. For this purpose, we present a tag cognizance scheme based on framed and slotted ALOHA, which statically provides a number of slots in each frame for the tags to respond. To evaluate cognizance performance, we choose the cognizance completion probability and the expected cognizance completion time as key performance measures and present a method to calculate them numerically. In particular, for small numbers of tags, we derive them in closed form. Next, we formulate a problem of finding an optimal time structure that either maximizes the cognizance completion probability under a constraint on the cognizance time or minimizes the expected cognizance completion time. By solving this problem, we obtain an optimal number of slots per frame for the tags to respond. Numerical results confirm that a finite optimal number of slots exists, and show that the optimal number of slots maximizing the cognizance completion probability tends to approach the optimal number minimizing the expected cognizance completion time as the constraint on the cognizance time becomes loose.
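
The closed-form and numerical results of the paper are not reproduced here; instead, the sketch below estimates the two performance measures, the cognizance completion probability within a time constraint and the expected cognizance completion time, by Monte Carlo simulation of framed and slotted ALOHA. The tag count, frame sizes, and time constraint are arbitrary example values.

```python
import random

def simulate_cognizance(n_tags, slots_per_frame, max_slots, trials=10000):
    """Monte Carlo estimate of the framed-and-slotted-ALOHA cognizance
    completion probability within max_slots, and of the mean completion
    time (in slots) over the trials that do complete."""
    completed, completion_times = 0, []
    for _ in range(trials):
        remaining, elapsed = n_tags, 0
        while remaining > 0 and elapsed + slots_per_frame <= max_slots:
            # Each remaining tag responds in a uniformly chosen slot of the frame.
            choices = [random.randrange(slots_per_frame) for _ in range(remaining)]
            singles = sum(1 for s in range(slots_per_frame) if choices.count(s) == 1)
            remaining -= singles            # singleton slots are successful cognitions
            elapsed += slots_per_frame
        if remaining == 0:
            completed += 1
            completion_times.append(elapsed)
    prob = completed / trials
    mean_time = sum(completion_times) / len(completion_times) if completion_times else float("inf")
    return prob, mean_time

# Example: sweep the frame size to locate a good number of slots per frame.
for L in (2, 4, 8, 16):
    p, t = simulate_cognizance(n_tags=5, slots_per_frame=L, max_slots=64)
    print(f"L={L:2d}  P(complete within 64 slots)={p:.3f}  mean completion time={t:.1f} slots")
```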

A Fuzzy Evaluation Method of Traveler's Path Choice in Transportation Network (퍼지평가방법을 이용한 교통노선 결정)

  • 이상훈;김덕영;김성환
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.1
    • /
    • pp.65-76
    • /
    • 2002
  • This study applies fuzzy evaluation and the AHP (analytic hierarchy process) to the search for optimal traffic routes, so that vague, subjective judgements can be assessed quantitatively. Unlike classical route search, the approach reflects the way people actually reason. Appraisal elements, weights, and appraisal values for routes were extracted from opinions gathered from driving experts, and an example route model was used to examine practical utility. Model assessment consisted of building attribute membership functions for the appraisal elements, setting appraisal values, defining weights with the AHP, expressing the weights non-additively through a $\lambda$-fuzzy measure, and aggregating with the Choquet fuzzy integral.
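
For the non-additive weighting step, the following is a minimal sketch of a $\lambda$-fuzzy (Sugeno) measure built from importance densities and of the Choquet integral used to aggregate route appraisal scores. The densities and scores are invented, and the AHP step that would normally produce the densities is not shown.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the non-zero root (lambda-fuzzy measure)."""
    g = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    s = g.sum()
    if abs(s - 1.0) < 1e-9:
        return 0.0                        # measure is already additive
    if s < 1.0:                           # root is positive
        hi = 1.0
        while f(hi) < 0:
            hi *= 2.0
        return brentq(f, 1e-9, hi)
    return brentq(f, -1.0 + 1e-9, -1e-9)  # root lies in (-1, 0)

def choquet(scores, densities):
    """Choquet integral of criterion scores with respect to a lambda-fuzzy measure."""
    g = np.asarray(densities, float)
    h = np.asarray(scores, float)
    lam = sugeno_lambda(g)
    order = np.argsort(-h)                # sort criteria by descending score
    total, prev_measure = 0.0, 0.0
    for k in range(1, len(h) + 1):
        idx = order[:k]
        if lam == 0.0:
            measure = g[idx].sum()
        else:
            measure = (np.prod(1.0 + lam * g[idx]) - 1.0) / lam
        total += h[order[k - 1]] * (measure - prev_measure)
        prev_measure = measure
    return total

# Hypothetical route appraisal: scores per criterion and AHP-derived densities.
scores = [0.8, 0.6, 0.9]        # e.g. travel time, cost, safety (scaled to [0, 1])
densities = [0.5, 0.3, 0.4]     # importance densities (need not sum to 1)
print("Choquet evaluation:", round(choquet(scores, densities), 3))
```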

Supervised Rank Normalization with Training Sample Selection (학습 샘플 선택을 이용한 교사 랭크 정규화)

  • Heo, Gyeongyong;Choi, Hun;Youn, Joo-Sang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.1
    • /
    • pp.21-28
    • /
    • 2015
  • Feature normalization is a pre-processing step widely used to reduce the effect of differing scales across feature dimensions and to lower the error rate in classification. Most existing normalization methods, however, do not use the class labels of data points and therefore cannot guarantee that the normalization is optimal from a classification point of view. A supervised rank normalization method, combining rank normalization with supervised learning, was previously proposed and showed better results than other methods. In this paper, another technique, training sample selection, is introduced into supervised feature normalization to reduce classification error further. Training sample selection is a common technique for increasing classification accuracy by removing noisy samples, and it can be applied within the supervised normalization method. Two sample selection measures, based on the classes of neighboring samples and on the distance to neighboring samples, are proposed, and both show better results than the previous supervised rank normalization method.
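
The two selection measures proposed in the paper are not reproduced here; the sketch below pairs plain rank normalization with a generic nearest-neighbour noise filter to illustrate how sample selection can precede normalization. All data, names, and thresholds are invented for illustration.

```python
import numpy as np

def rank_normalize(X):
    """Map each feature value to its empirical rank within the data, scaled to (0, 1]."""
    X = np.asarray(X, float)
    order = np.sort(X, axis=0)
    ranks = np.empty_like(X)
    for j in range(X.shape[1]):
        ranks[:, j] = np.searchsorted(order[:, j], X[:, j], side="right") / len(X)
    return ranks

def select_samples(X, y, k=3):
    """Keep a sample only if the majority of its k nearest neighbours
    (by Euclidean distance) share its class label -- a simple noise filter."""
    X, y = np.asarray(X, float), np.asarray(y)
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]                 # skip the sample itself
        if np.mean(y[nn] == y[i]) >= 0.5:
            keep.append(i)
    return np.array(keep)

# Hypothetical 2-class data with one mislabeled (noisy) point at the end.
X = np.array([[0.1, 1.0], [0.2, 0.9], [0.15, 1.1], [5.0, 9.0], [5.2, 8.8], [0.12, 1.05]])
y = np.array([0, 0, 0, 1, 1, 1])                    # the last label is suspicious
good = select_samples(X, y, k=3)
Xn = rank_normalize(X[good])
print("kept samples:", good, "\nnormalized:\n", Xn)
```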

Analysis of Factory Automation Based on the Stochastic Simulation (확률적 시뮬레이션을 이용한 공장자동화의 도입 효과 분석)

  • 박영홍
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 1999.12a
    • /
    • pp.739-745
    • /
    • 1999
  • This paper focuses on measuring the increase in work efficiency expected from factory automation through the random interactions of the organizational behavioral factors whose attributes can change with the implementation of factory automation. Specifically, the work reported here is concerned with modeling and analyzing the random interrelationships among the organizational behavioral factors that factory automation affects, in terms of productivity, over the time horizon of its implementation. In addition, it is concerned with developing a stochastic continuous simulation model to assess the impact of factory automation.


3D Model Reconstruction Algorithm Using a Focus Measure Based on Higher Order Statistics (고차 통계 초점 척도를 이용한 3D 모델 복원 알고리즘)

  • Lee, Joo-Hyun;Yoon, Hyeon-Ju;Han, Kyu-Phil
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.1
    • /
    • pp.11-18
    • /
    • 2013
  • This paper presents an SFF (shape from focus) algorithm that uses a new focus measure based on higher order statistics for exact depth estimation. Since conventional SFF-based 3D depth reconstruction algorithms use the SML (sum of modified Laplacian) as the focus measure, their performance depends strongly on image characteristics, and they are effective only for richly textured, well-focused images. This paper therefore adopts a new focus measure based on HOS (higher order statistics) in order to extract focus values even from images with relatively poor texture and focus. An initial best-focus area map is generated from the measure. Thereafter, area refinement, thinning, and corner detection are successively applied to extract the locally best-focused points. Finally, a 3D model is reconstructed from the carefully selected points by Delaunay triangulation.
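
The exact HOS measure of the paper is not reproduced; as one plausible variant, the sketch below uses the fourth central moment of a local intensity window as a higher-order-statistics focus value and, for each pixel, picks the frame of a focal stack with the largest value. The refinement, thinning, corner detection, and Delaunay steps are omitted, and the focal stack is synthetic.

```python
import numpy as np

def hos_focus(image, win=7):
    """Focus value per pixel window: the fourth central moment of intensities,
    a simple higher-order-statistics (HOS) measure; larger means sharper."""
    img = np.asarray(image, float)
    h, w = img.shape
    half = win // 2
    fm = np.zeros_like(img)
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            mu = patch.mean()
            fm[i, j] = np.mean((patch - mu) ** 4)   # 4th central moment
    return fm

def depth_from_focus(stack, win=7):
    """For each pixel, pick the stack index with the highest HOS focus value."""
    measures = np.stack([hos_focus(frame, win) for frame in stack])
    return np.argmax(measures, axis=0)              # per-pixel depth index

# Toy focal stack: three frames with decreasing contrast (a crude proxy for defocus).
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
stack = [sharp, sharp * 0.5 + 0.25, np.full((32, 32), 0.5)]
print("depth map (counts per index):", np.bincount(depth_from_focus(stack).ravel()))
```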

A Study on the Evaluation of the Service Quality of Port-MIS (항만운영정보시스템(Port-MIS) 서비스 품질 평가에 관한 연구)

  • Kim, Minjin;Shin, Seungsik
    • Journal of Korea Port Economic Association
    • /
    • v.29 no.2
    • /
    • pp.211-238
    • /
    • 2013
  • The Port Logistics Information System (Port-MIS) is the system that handles all port administration tasks, such as the entry and departure of ships, the use of facilities within ports, port traffic control, the entry and carriage of cargo, and tax collection, at 31 trade ports across the country. Recently, Port-MIS has been rebuilt as a web-based system, establishing a basis for real-time information support, a diversified civil complaint system, and optimized civil complaint services over wired and wireless internet. Previous studies of the Port Logistics Information System centered on the client/server program and EDI. It is therefore significant to study the service quality of the web-based Port-MIS, which has been in service as a new platform since April 2010, as no such study has been conducted until now.

Extended Information Entropy via Correlation for Autonomous Attribute Reduction of BigData (빅 데이터의 자율 속성 감축을 위한 확장된 정보 엔트로피 기반 상관척도)

  • Park, In-Kyu
    • Journal of Korea Game Society
    • /
    • v.18 no.1
    • /
    • pp.105-114
    • /
    • 2018
  • The data analysis methods used for customer type analysis are very important for game companies that want to understand customer types and characteristics, plan customized content, and provide more convenient services. In this paper, we propose a k-modes cluster analysis algorithm that uses information uncertainty, extending information entropy so as to reduce information loss. The similarity of attributes is therefore measured in two respects: one is the uncertainty of each attribute with respect to the center (mode) of each partition, and the other is the uncertainty of the probability distribution over that uncertainty for each attribute. In particular, because the entropy of an attribute is transformed into probabilistic information to measure its uncertainty, attribute uncertainty is taken into account on both non-probabilistic and probabilistic scales. The accuracy of the algorithm is demonstrated through extensive performance analysis and various indexes of the cluster analysis results obtained from optimal initial values.
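
The paper's extended-entropy similarity is not reproduced; the sketch below only shows the basic ingredients, namely the Shannon entropy of each categorical attribute and an entropy-weighted k-modes mismatch distance, with the weighting 1/(1 + H) chosen arbitrarily for illustration. The player records are invented.

```python
import numpy as np
from collections import Counter

def attribute_entropy(column):
    """Shannon entropy of a categorical attribute (in bits)."""
    counts = np.array(list(Counter(column).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def weighted_mismatch(x, mode, weights):
    """Entropy-weighted k-modes dissimilarity: mismatches count more on
    low-uncertainty attributes (weights here are 1 / (1 + entropy))."""
    return sum(w for xi, mi, w in zip(x, mode, weights) if xi != mi)

# Hypothetical categorical player records: [region, device, spend_tier].
data = [("kr", "mobile", "high"), ("kr", "pc", "high"),
        ("us", "mobile", "low"),  ("kr", "mobile", "low"),
        ("us", "pc", "low")]

columns = list(zip(*data))
entropies = [attribute_entropy(col) for col in columns]
weights = [1.0 / (1.0 + h) for h in entropies]       # one possible weighting choice
mode = ("kr", "mobile", "low")                        # a candidate cluster mode
for record in data:
    print(record, "->", round(weighted_mismatch(record, mode, weights), 3))
```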

Triangular Grid Homogenization Using Local Improvement Method (국소개선기법을 이용한 삼각격자 균질화)

  • Choi, Hyung-Il;Jun, Sang-Wook;Lee, Dong-Ho;Lee, Do-Hyung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.33 no.8
    • /
    • pp.1-7
    • /
    • 2005
  • This paper proposes a local improvement method that combines extended topological clean-up with optimization-based smoothing for homogenizing triangular grid systems. First, extended topological clean-up procedures are applied to improve the connectivity of grid elements. Then, local optimization-based smoothing is performed to maximize a distortion metric that measures grid quality. Using this local improvement strategy, we carry out grid homogenization for two triangular grid examples. It is shown that the suggested algorithm efficiently and substantially improves the quality of the triangular grids and can also be easily applied to the remeshing step of adaptive mesh refinement.
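
As an illustration of optimization-based smoothing driven by a distortion metric, the sketch below uses the mean-ratio quality of a triangle (1 for equilateral, approaching 0 when degenerate) and a crude coordinate search that moves one free vertex to raise the worst quality of its incident triangles. This is only a stand-in for the paper's method; the extended topological clean-up is not shown, and the patch geometry is invented.

```python
import numpy as np

def mean_ratio(tri):
    """Quality (distortion) metric of a triangle: 1 for equilateral, -> 0 when degenerate."""
    a, b, c = (np.asarray(p, float) for p in tri)
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    edges = np.sum((b - a) ** 2) + np.sum((c - b) ** 2) + np.sum((a - c) ** 2)
    return 4.0 * np.sqrt(3.0) * area / edges if edges > 0 else 0.0

def smooth_vertex(free, neighbors, steps=50, step_size=0.05):
    """Greedy local smoothing: move the free vertex to raise the minimum
    quality of its incident triangles (a crude stand-in for the paper's
    optimization-based smoothing)."""
    v = np.asarray(free, float)
    ring = [np.asarray(p, float) for p in neighbors]
    def worst(p):
        return min(mean_ratio((p, ring[i], ring[(i + 1) % len(ring)]))
                   for i in range(len(ring)))
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]) * step_size
    for _ in range(steps):
        candidates = [v] + [v + d for d in moves]
        v = max(candidates, key=worst)    # keep the position with the best worst triangle
    return v

# Toy patch: a badly placed interior vertex surrounded by four boundary vertices.
ring = [(0, 0), (2, 0), (2, 2), (0, 2)]
print("smoothed position:", smooth_vertex((1.8, 0.2), ring))
```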