• Title/Summary/Keyword: 가중치 부여 기법 (weighting technique)

Search Results: 338

A Design and Implementation of a Content-Based Image Retrieval System using Color Space and Keywords (칼라공간과 키워드를 이용한 내용기반 화상검색 시스템 설계 및 구현)

  • Kim, Cheol-Ueon;Choi, Ki-Ho
    • The Transactions of the Korea Information Processing Society / v.4 no.6 / pp.1418-1432 / 1997
  • Most general content-based image retrieval techniques use color and texture as retrieval indices. Among the color techniques, color-histogram and color-pair based retrieval suffers from a lack of spatial and textual information. This paper describes the design and implementation of a content-based image retrieval system using color space and keywords. The preprocessor for image retrieval uses the existing HSI (Hue, Saturation, Intensity) coordinate system and splits each image into a chromatic and an achromatic region; the image size is normalized to 200*N or N*200 and true color is converted to 256 colors. Two color histograms, one for the background and one for the object, are used to decide on color selection in the color space, and spatial information is obtained using maximum entropy discretization. The class, color, shape, location, and size of an image can be specified by keywords, with input colors limited to the 15 chromatic and achromatic color keywords of the Korea Industrial Standards. These retrieval properties serve as the keys in the similarity computation. The weight values for color space, ${\alpha}$ (%), and for keywords, ${\beta}$ (%), can be chosen by the user when entering the query, adjusting the values according to the properties of the image contents. In the tests, retrieval results using only the extracted features (color space and keywords) for the query image were lower than those using the weight values; with the weight values, the averages of the evaluation measures were approximately Precision 0.858, Recall 0.936, RT 1, and MT 0. These results demonstrate a higher retrieval effectiveness than content-based image retrieval using color space or keywords alone.

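As a rough illustration of the user-weighted combination of color-space and keyword similarity described in the abstract above, the following minimal sketch mixes two similarity scores with percentage weights α and β. The function and variable names are assumptions; the underlying similarity measures are not specified in the abstract.

```python
# Hypothetical sketch: combine a color-space similarity and a keyword
# similarity with user-chosen weights alpha and beta given in percent.
def combined_score(color_sim: float, keyword_sim: float,
                   alpha: float, beta: float) -> float:
    """Weighted retrieval score; alpha and beta are percentages."""
    total = alpha + beta
    if total <= 0:
        raise ValueError("alpha + beta must be positive")
    # Normalize the weights so they sum to 1 before mixing the similarities.
    return (alpha / total) * color_sim + (beta / total) * keyword_sim

# Example: emphasize the color space (70%) over keywords (30%).
print(combined_score(color_sim=0.82, keyword_sim=0.60, alpha=70, beta=30))
```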

Multidimensional entity clustering with probabilistic relationships (확률적 관계하의 다차원 개체 clustering)

  • Lee, Cheol;Kang, Seok-Ho
    • Proceedings of the Korean Operations and Management Science Society Conference / 1989.10a / pp.25-29 / 1989
  • This paper presents an expected-difference-rank clustering technique for multidimensional entity clustering problems in which the relationships among entities are probabilistic and weighted. The technique is expected to contribute to areas where solution methods remain underdeveloped relative to the need for them, such as establishing computerization master plans for distributed information systems, machine-part group formation, and part type selection in FMS. On the other hand, no theoretical study verifying the validity of the method has been presented, so further research is required to supplement it.

  • PDF

Establishment of Traffic Management Strategies Based on AHP (AHP 기법을 통한 교통관리전략의 설정)

  • Kim, Da Ye;Oh, Heung Un
    • KSCE Journal of Civil and Environmental Engineering Research / v.32 no.6D / pp.533-538 / 2012
  • The purpose of weighting traffic management strategies is to reach strategic goals through intended and balanced implementation. The AHP is a commonly used technique for deriving perspectives and strategic goals and assigning priorities to them within an organization. In this study, perspectives and strategic goals were derived through surveys and interviews with staff of a traffic management agency, and the perspectives and strategic goals were then prioritized using the AHP. Three findings were identified. First, the perspectives were weighted in the order of value for customers, future, activity, and performance, with value for customers ranked first. Second, increasing customer-centric thinking and the value of eco-friendly systems received the highest weights among the strategic goals. Third, increasing customer-centric thinking was rated most important at the headquarters and regional headquarters, whereas the value of eco-friendly systems was ranked most important by branch-office staff. It was also identified that the responses differ depending on the workplace, such as headquarters versus branch offices.
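
As a general illustration of how the AHP derives priority weights from a pairwise comparison matrix (not the data of this study), a minimal sketch follows; the comparison values are invented for three perspectives on Saaty's 1-9 scale.

```python
# Minimal AHP weighting sketch with an invented 3x3 pairwise comparison matrix.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # perspective 1 compared to perspectives 1, 2, 3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of the comparison matrix gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index and ratio (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(weights, cr)
```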

Methodological Implications of AHP for Universal Design Evaluation Method (유니버설디자인의 평가방법에 있어서 AHP 기법의 적용 가능성)

  • Gu, Seung-Hwan;Yoo, Jun-Ho
    • The Journal of the Korea Contents Association / v.12 no.7 / pp.138-146 / 2012
  • This paper concerns an evaluation method for universal design based on the AHP. To select evaluation indicators, the seven principles of universal design, the PPP design review used in Japan, and previous studies were consulted. The selected variables were analyzed with the AHP technique to derive weights for the seven universal design elements. The most important factor in universal design was 'Perceptible Information' (0.263), followed by 'Tolerance for Error' (0.207), 'Simple and Intuitive Use' (0.154), and 'Low Physical Effort' (0.143). 'Size & Space for Approach and Use' (0.108) was also shown to be important, while 'Equitable Use' (0.087) and 'Flexibility in Use' (0.038) were rated relatively low in importance. Based on these results, a universal design evaluation checklist reflecting the weights is proposed. The process for obtaining the weights can be applied to a variety of situations.
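
A minimal sketch of how the reported weights could drive a weighted checklist score is shown below; the 0-5 ratings are hypothetical and not part of the paper.

```python
# Weighted universal design checklist score using the weights reported above.
weights = {
    "Perceptible Information": 0.263,
    "Tolerance for Error": 0.207,
    "Simple and Intuitive Use": 0.154,
    "Low Physical Effort": 0.143,
    "Size & Space for Approach and Use": 0.108,
    "Equitable Use": 0.087,
    "Flexibility in Use": 0.038,
}

# Hypothetical checklist ratings on a 0-5 scale for one evaluated design.
ratings = {name: 4.0 for name in weights}

# The weights sum to 1.0, so the result is a weighted average of the ratings.
weighted_score = sum(weights[n] * ratings[n] for n in weights)
print(round(weighted_score, 3))
```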

PAPR Reduction of an OFDM Signal by use of PTS scheme with MG-PSO Algorithm (MG-PSO 알고리즘을 적용한 PTS 기법에 의한 OFDM 신호의 PAPR 감소)

  • Kim, Wan-Tae;Yoo, Sun-Yong;Cho, Sung-Joon
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.1 / pp.1-9 / 2009
  • An OFDM (Orthogonal Frequency Division Multiplexing) system is robust to frequency-selective fading and narrowband interference in high-speed data communications. However, an OFDM signal consists of a number of independently modulated subcarriers, and the superposition of these subcarriers can produce a large PAPR (Peak-to-Average Power Ratio). The PTS (Partial Transmit Sequence) scheme can reduce the PAPR by dividing the OFDM signal into subblocks and multiplying each subblock by a phase weighting factor, but the computational complexity of selecting the phase weighting factors increases exponentially with the number of subblocks. Therefore, in this paper, the MG-PSO (Modified Greedy algorithm-Particle Swarm Optimization) algorithm, which combines a modified greedy algorithm and PSO (Particle Swarm Optimization), is proposed as the phase control method in the PTS scheme. This method resolves the computational complexity while guaranteeing PAPR reduction. We analyzed the PAPR reduction performance when the proposed method is applied to telecommunication systems.
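
To make the PTS idea concrete, the sketch below partitions an OFDM symbol into subblocks and exhaustively searches a small set of ±1 phase factors for the lowest PAPR. This is the baseline combinatorial search whose cost MG-PSO is designed to avoid; all parameter choices are illustrative.

```python
# Baseline PTS sketch: interleaved subblock partition + exhaustive phase search.
import numpy as np
from itertools import product

def papr(x):
    """Peak-to-average power ratio of a time-domain signal (linear scale)."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

N, V = 64, 4  # number of subcarriers and subblocks (illustrative)
# Random QPSK symbols on the subcarriers.
X = (np.random.randint(0, 2, N) * 2 - 1) + 1j * (np.random.randint(0, 2, N) * 2 - 1)

# Interleaved partition of the frequency-domain symbol into V disjoint subblocks.
subblocks = np.zeros((V, N), dtype=complex)
for v in range(V):
    subblocks[v, v::V] = X[v::V]

time_blocks = np.fft.ifft(subblocks, axis=1)

best = None
for phases in product([1, -1], repeat=V):      # candidate phase weighting factors
    x = np.dot(np.array(phases), time_blocks)  # phase-weighted superposition
    score = papr(x)
    if best is None or score < best[0]:
        best = (score, phases)

print("best PAPR (linear):", best[0], "phases:", best[1])
```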

Stress Recovery Technique by Ordinary Kriging Interpolation in p-Adaptive Finite Element Method (적응적 p-Version 유한요소법에서 정규 크리깅에 의한 응력복구기법)

  • Woo, Kwang Sung;Jo, Jun Hyung;Lee, Dong Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4A / pp.677-687 / 2006
  • Kriging interpolation is one of the interpolation techniques generally used in the field of geostatistics. The technique involves experimental and theoretical variograms and the formulation of the kriging interpolation itself. In contrast to the conventional least squares method for stress recovery, kriging interpolation is based on a weighted least squares method that estimates the exact solution from the stress data at the Gauss points. Instead of the equal weight factor used by conventional interpolation methods, the weight factor is determined by variogram modeling of the stress data. In addition, the p-level is increased non-uniformly or selectively by automatic mesh p-refinement, driven by a posteriori error estimation based on the SPR (superconvergent patch recovery) technique proposed by Zienkiewicz and Zhu. A cut-out plate problem under tension has been tested to validate this approach, and the validity of kriging interpolation is demonstrated by comparison with the existing least squares method.
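
For context, ordinary kriging weights for a single estimation point can be obtained by solving the standard kriging system built from a variogram model, as in the sketch below. The exponential variogram, the sample locations, and the stress values are assumptions, not the paper's stress-recovery setup.

```python
# Ordinary kriging weights for one estimation point (general technique sketch).
import numpy as np

def variogram(h, sill=1.0, rng=2.0, nugget=0.0):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

# Sample (e.g., Gauss-point) locations and the point to estimate.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x0 = np.array([0.4, 0.6])

n = len(pts)
# Ordinary kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
A = np.ones((n + 1, n + 1))
A[:n, :n] = variogram(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(pts - x0, axis=1))

sol = np.linalg.solve(A, b)
weights = sol[:n]                               # kriging weights (sum to 1)
values = np.array([10.0, 12.0, 11.0, 13.0])     # illustrative stress samples
print(weights, weights.sum(), weights @ values)
```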

Hierarchic Document Clustering in OPAC (OPAC에서 자동분류 열람을 위한 계층 클러스터링 연구)

  • 노정순
    • Journal of the Korean Society for Information Management / v.21 no.1 / pp.93-117 / 2004
  • This study develops a hierarchic clustering model for document classification and browsing in OPAC systems. Two automatic indexing techniques (with and without controlled terms), two term weighting methods (term frequency and binary weight), five similarity coefficients (Dice, Jaccard, Pearson, Cosine, and Squared Euclidean), and three hierarchic clustering algorithms (Between Average Linkage, Within Average Linkage, and Complete Linkage) were tested on a collection of 175 books and theses on library and information science. The best document clusters resulted from the Between Average Linkage or Complete Linkage method with the Jaccard or Dice coefficient on automatic indexing with controlled terms in binary vectors. The clusters from Between Average Linkage with the Jaccard coefficient most closely resembled a decimal classification structure.
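
A minimal sketch of average-linkage clustering of binary term vectors with the Jaccard coefficient, one of the better-performing combinations above, is given below using SciPy; the tiny document-term matrix is made up.

```python
# Average-linkage hierarchical clustering of binary term vectors with Jaccard.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: documents, columns: controlled index terms (binary weights).
docs = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
], dtype=bool)

dist = pdist(docs, metric="jaccard")        # 1 - Jaccard similarity
tree = linkage(dist, method="average")      # between-groups average linkage (UPGMA)
print(fcluster(tree, t=2, criterion="maxclust"))   # e.g., cut into two clusters
```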

A Study on the Efficiency of Imbalanced Data Processing Techniques for Exercise Prediction in COPD Patients (COPD 환자 운동 예측을 위한 불균형 데이터 처리 기법의 효율성에 관한 연구)

  • Hyeonseok Jin;Sehyun Cho;Jayun Choi;Kyungbaek Kim
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.652-655 / 2024
  • COPD (Chronic Obstructive Pulmonary Disease) is a lung disease in which the airways narrow over a long period, and regular exercise is one of the main self-management interventions that can ease breathing and improve symptoms. Using health information data and artificial intelligence to distinguish patients who exercise regularly from those who do not, and thereby identify groups vulnerable in self-management, is a cost-effective strategy for disease management. However, it is difficult to secure a large amount of data, and because the proportions of patients who exercise regularly and those who do not are imbalanced, it is hard to improve the overall screening performance of AI models. To overcome these limitations, this study applies imbalanced data processing techniques such as oversampling, undersampling, and class weighting to the machine learning model XGBoost and the deep learning model MLP using data from the Korea National Health and Nutrition Examination Survey, compares their performance, and presents the most effective imbalanced data processing technique.

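One of the compared techniques, class weighting, can be sketched as inverse-class-frequency sample weights; the label vector below is synthetic, and such weights would typically be passed to a classifier through a sample_weight argument.

```python
# Inverse-class-frequency ("balanced") sample weights for an imbalanced label set.
import numpy as np

y = np.array([0] * 90 + [1] * 10)                # synthetic 90:10 class imbalance

counts = np.bincount(y)
class_weight = len(y) / (len(counts) * counts)   # balanced-weight heuristic
sample_weight = class_weight[y]                  # per-sample weight by class

print(class_weight)                  # the minority class gets the larger weight
print(sample_weight[:3], sample_weight[-3:])
```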

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.55-75 / 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database and is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of the data elements in a sequence is considered, so simple sequential patterns are found easily, but there is a limit to finding the more interesting sequential patterns that are useful in real-world applications. One of the essential research topics for overcoming this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered in order to obtain more interesting sequential patterns. Recently, data in various application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has turned its attention to processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, the memory usage for analysis should remain bounded even though new data elements are continuously generated, and newly generated data elements should be processed as quickly as possible so that up-to-date analysis results are available on request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering these changes in the form of data generated in real-world application fields, many studies have been performed to find various kinds of knowledge embedded in data streams. They mainly focus on efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining over finite data sets, and mining algorithms have also been proposed to reflect the changes of data streams over time in their mining results. However, these methods target naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, and take no interest in mining novel patterns that better express the characteristics of the target data streams. Defining such novel interesting patterns and developing mining methods that find them is therefore a valuable research topic for analyzing recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams based on this approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements in the pattern without any pre-defined weight information; that is, the gaps between the data elements in each sequential pattern, as well as their generation orders, are used to obtain the weight of the pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
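
Since the abstract does not give the exact weighting formula, the sketch below only illustrates the general idea of a gap-based weight: an occurrence of a pattern is weighted down as the gaps between its matched elements grow, here with an assumed exponential decay.

```python
# Illustrative gap-based weight for one occurrence of a sequential pattern.
import math

def gap_weight(positions, decay=0.5):
    """Weight in (0, 1]; larger gaps between matched elements lower it."""
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return math.exp(-decay * sum(gaps))

# Pattern <a, b, c> matched at positions 2, 3, 7 of a data sequence:
print(gap_weight([2, 3, 7]))   # gaps of 0 and 3 -> weight exp(-1.5)
```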

An Efficient Approximation method of Adaptive Support-Weight Matching in Stereo Images (스테레오 영상에서의 적응적 영역 가중치 매칭의 효율적 근사화 방법)

  • Kim, Ho-Young;Lee, Seong-Won
    • Journal of Broadcast Engineering / v.16 no.6 / pp.902-915 / 2011
  • Recently, in the field of area-based stereo matching, the Adaptive Support-Weight (ASW) method, which weights the matching cost adaptively according to luminance and geometric differences, has shown promising matching performance. However, ASW requires more computation than other matching algorithms, so its real-time implementation is impractical. By approximating the ASW formulation with a bilateral filter equation and applying the integral histogram technique, the computation time of ASW can be kept constant regardless of the support window size, but this approximation of the original ASW equation loses matching accuracy. In this paper, we propose a novel algorithm that maintains the matching accuracy of the ASW algorithm while reducing its computational cost. The proposed algorithm uses a sub-block method that groups the pixels within the support area, and it also adjusts the disparity search range depending on edge information. The proposed technique reduces the computation time efficiently while improving the matching accuracy.
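
For reference, the classic adaptive support-weight combines a color-similarity term and a spatial-proximity term into a bilateral-style weight, as in the sketch below; the parameter values are illustrative and not those of the proposed approximation.

```python
# Bilateral-style adaptive support weight for a pair of pixels.
import numpy as np

def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=10.0, gamma_s=10.5):
    """Weight grows with color similarity and spatial proximity."""
    delta_c = np.linalg.norm(np.asarray(color_p, float) - np.asarray(color_q, float))
    delta_s = np.linalg.norm(np.asarray(pos_p, float) - np.asarray(pos_q, float))
    return np.exp(-(delta_c / gamma_c + delta_s / gamma_s))

# A nearby, similarly colored pixel receives a large weight in cost aggregation.
print(support_weight((120, 80, 60), (118, 82, 61), (10, 10), (11, 10)))
```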