• Title/Summary/Keyword: Similarity reduction

Modeling for Discovery the Cutoff Point in Standby Power and Implementation of Group Formation Algorithm (대기전력 차단시점 발견을 위한 모델링과 그룹생성 알고리즘 구현)

  • Park, Tae-Jin;Kim, Su-Do;Park, Man-Gon
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.1
    • /
    • pp.107-121
    • /
    • 2009
  • Standby power arises for two reasons: first, the starting voltage must travel from the power source to the IC; second, current flows while the IC is in operation. The purpose of this paper is to describe simple modules that switch power on or off automatically, based on an analysis of the standby-power state, an analysis of cutoff-point patterns, and the application of the corresponding algorithms. To this end, the paper relies on the analysis and modeling of electrical signals. On/off cutoff criteria are also established for the reduction of standby power. To find the on/off cutoff point, an algorithm for generating similar groups and leading-pattern groups in the standby-power state is executed. The algorithm's key parameter is therefore defined as the difference calculated between the $1^{st}$ SCS, the $2^{nd}$ SCS, and the median of the sampling coefficients per second measured at a wall outlet.
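The grouping step described above can be illustrated with a heavily simplified sketch: per-second sampling coefficients (SCS) are clustered into similar groups by their deviation from the running group median. The threshold, window behavior, and readings are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of similar-group formation over per-second sampling
# coefficients; a long, stable group of low coefficients would suggest
# standby operation, i.e. a candidate cutoff point.
from statistics import median

def similar_groups(scs, threshold=0.05):
    """Group consecutive sampling coefficients whose deviation from the
    current group's median stays within `threshold` (assumed value)."""
    groups, current = [], []
    for value in scs:
        if not current or abs(value - median(current)) <= threshold:
            current.append(value)
        else:
            groups.append(current)
            current = [value]
    if current:
        groups.append(current)
    return groups

# Three "active" readings followed by four "standby" readings:
readings = [0.51, 0.50, 0.52, 0.12, 0.11, 0.10, 0.11]
print([len(g) for g in similar_groups(readings)])  # → [3, 4]
```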

GC-Tree: A Hierarchical Index Structure for Image Databases (GC-트리 : 이미지 데이타베이스를 위한 계층 색인 구조)

  • 차광호
    • Journal of KIISE:Databases
    • /
    • v.31 no.1
    • /
    • pp.13-22
    • /
    • 2004
  • With the proliferation of multimedia data, there is an increasing need to support the indexing and retrieval of high-dimensional image data. Despite many efforts, the performance of existing multidimensional indexing methods is not satisfactory in high dimensions. Dimensionality reduction and approximate-solution methods have therefore been tried to deal with the so-called curse of dimensionality, but these methods are inevitably accompanied by a loss of precision in query results. Recently, vector-approximation-based methods such as the VA-file and the LPC-file were developed to preserve the precision of query results. However, the performance of vector-approximation-based methods depends largely on the size of the approximation file, and they lose the advantage of multidimensional indexing methods, which prune much of the search space. In this paper, we propose a new index structure called the GC-tree for efficient similarity search in image databases. The GC-tree is based on a special subspace-partitioning strategy optimized for clustered high-dimensional images. It adaptively partitions the data space based on a density function and dynamically constructs an index structure. The resulting index structure adapts well to the strongly clustered distribution of high-dimensional images.

A Fast and Efficient Haar-Like Feature Selection Algorithm for Object Detection (객체검출을 위한 빠르고 효율적인 Haar-Like 피쳐 선택 알고리즘)

  • Chung, Byung Woo;Park, Ki-Yeong;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.6
    • /
    • pp.486-491
    • /
    • 2013
  • This paper proposes a fast and efficient Haar-like feature selection algorithm for training the classifier used in object detection. Many of the features selected by the existing AdaBoost-based Haar-like feature selection algorithm are either similar in shape or overlapping, because only each feature's error rate is considered. The proposed algorithm calculates the similarity of features from their shapes and the distance between them. Fast and efficient feature selection is made possible by removing from the feature set both the selected features and the features highly similar to them. The FERET face database is used to compare the performance of classifiers trained by the previous algorithm and by the proposed algorithm. Experimental results show that the classifier trained by the proposed method outperforms the one trained by the previous method. When the classifiers are trained to the same performance level, the proposed method uses 20% fewer features in classification.
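The similarity-based pruning idea can be sketched as follows. The feature encoding (shape_id, x, y, w, h), the similarity rule, and the distance threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Features with the same shape whose windows lie close together are
# treated as redundant and dropped from the candidate set.
import math

def feature_distance(f1, f2):
    """Euclidean distance between the top-left corners of two features
    given as (shape_id, x, y, w, h)."""
    return math.hypot(f1[1] - f2[1], f1[2] - f2[2])

def prune_similar(features, min_dist=4.0):
    """Greedily keep features (assumed sorted best-first, e.g. by error
    rate) that differ in shape or lie far from every kept feature."""
    kept = []
    for f in features:
        redundant = any(
            f[0] == k[0] and feature_distance(f, k) < min_dist for k in kept
        )
        if not redundant:
            kept.append(f)
    return kept

# Two near-duplicate edge features and one distinct shape:
ranked = [(0, 10, 10, 8, 4), (0, 11, 10, 8, 4), (1, 10, 10, 4, 8)]
print(len(prune_similar(ranked)))  # → 2
```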

Extended Information Entropy via Correlation for Autonomous Attribute Reduction of BigData (빅 데이터의 자율 속성 감축을 위한 확장된 정보 엔트로피 기반 상관척도)

  • Park, In-Kyu
    • Journal of Korea Game Society
    • /
    • v.18 no.1
    • /
    • pp.105-114
    • /
    • 2018
  • The data analysis methods used for customer-type analysis are very important for game companies seeking to understand the types and characteristics of their customers, plan customized content, and provide more convenient services. In this paper, we propose a k-modes cluster analysis algorithm that uses information uncertainty, extending information entropy to reduce information loss. The similarity of attributes is therefore measured in two respects: the uncertainty between each attribute and the center of each partition, and the uncertainty of the probability distribution over each attribute's uncertainty. In particular, attribute uncertainty is considered on both non-probabilistic and probabilistic scales, because the entropy of an attribute is transformed into probabilistic information to measure its uncertainty. The accuracy of the algorithm is observable in the results of cluster analysis based on optimal initial values, through extensive performance analysis and various indexes.
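A minimal sketch of the entropy measure the abstract builds on: the Shannon entropy of a categorical attribute, computed from its empirical probability distribution. The correlation-based extension is the paper's contribution and is not reproduced here; the category labels are illustrative.

```python
# Shannon entropy of a categorical attribute: H(A) = -sum p(v) log2 p(v)
# over the observed categories of A. Low-entropy attributes carry little
# information and are candidates for reduction.
import math
from collections import Counter

def attribute_entropy(values):
    """Entropy (in bits) of a categorical attribute's value list."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform two-category attribute carries one bit of uncertainty:
print(attribute_entropy(["casual", "hardcore", "casual", "hardcore"]))  # → 1.0
```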

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • Proceedings of the KAIS Fall Conference
    • /
    • 2003.11a
    • /
    • pp.223-230
    • /
    • 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, music scores). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, the K-D-B-tree, the grid file, and the TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose the SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines a SOM and an R*-tree to achieve search performance that scales better to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space. This mapping, called a topological feature map, preserves the topology of the feature vectors, retaining the mutual relationships (similarities) of the input data in feature space and clustering mutually similar feature vectors in neighboring nodes. Each node of the topological feature map holds a codebook vector, and a best-matching-image list (BMIL) holds the similar images closest to each codebook vector. In a topological feature map there are empty nodes in which no image is classified; when we build the R*-tree, we use the codebook vectors of the topological feature map, which eliminates the empty nodes that cause unnecessary disk accesses and degrade retrieval performance. We experimentally compare the retrieval time cost of the SOM-based R*-tree with that of a SOM and an R*-tree, using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree, owing to the reduced number of nodes required to build the R*-tree and the lower retrieval time cost.
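The SOM step the index is built on can be sketched minimally: feature vectors are mapped onto a 2-D grid of codebook vectors, and each image is assigned to its best-matching node (the basis of the BMIL). Grid size, learning schedule, and data below are illustrative assumptions, not the paper's settings.

```python
# A small self-organizing map: each sample pulls its best-matching unit
# (BMU) and that unit's grid neighbours toward itself, with a shrinking
# neighbourhood radius.
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, seed=0):
    """Train a SOM; returns a codebook of shape (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    codebook = rng.random((rows, cols, data.shape[1]))
    coords = np.indices((rows, cols)).transpose(1, 2, 0)  # node positions
    for epoch in range(epochs):
        radius = max(rows, cols) / 2 * (1 - epoch / epochs) + 0.5
        for x in data:
            # BMU = node whose codebook vector is closest to the sample
            dist = np.linalg.norm(codebook - x, axis=2)
            bmu = np.unravel_index(dist.argmin(), dist.shape)
            # pull nodes near the BMU toward the sample
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            codebook += lr * influence[..., None] * (x - codebook)
    return codebook

def best_matching_node(codebook, x):
    """Grid coordinates of the node an image's feature vector maps to."""
    dist = np.linalg.norm(codebook - x, axis=2)
    return np.unravel_index(dist.argmin(), dist.shape)

# Two well-separated clusters of 8-D "feature vectors":
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.05, (20, 8)),
                  rng.normal(1.0, 0.05, (20, 8))])
som = train_som(data)
print(som.shape)  # → (4, 4, 8)
```

Grouping images by `best_matching_node` yields the BMIL per node; nodes to which no image maps are the empty nodes the paper excludes when building the R*-tree.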

The Verification of Image Merging for Lumber Scanning System (제재목 화상입력시스템의 화상병합 성능 검증)

  • Kim, Byung Nam;Kim, Kwang Mo;Shim, Kug-Bo;Lee, Hyoung Woo;Shim, Sang-Ro
    • Journal of the Korean Wood Science and Technology
    • /
    • v.37 no.6
    • /
    • pp.556-565
    • /
    • 2009
  • An automated visual grading system for lumber needs correct input images. To create a correct image of domestic red pine lumber 3.6 m long feeding on a conveyor, partial images were captured with an area sensor, and a template matching algorithm was applied to merge them. Two template matching algorithms and six template sizes were tested. The feature-extraction method showed better image-merging performance than the fixed-template method. Error length was attributed to a decline in similarity caused by differences in partial brightness within a partial image, by specific patterns, and by template size. Mismatched parts were generated repeatedly along the long grain. The best template size for image merging was $100{\times}100$ pixels. In further studies, assigning an exact template size and preprocessing the images to reduce brightness differences will be needed to improve image merging.
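The core of template matching is locating a template window inside the next partial image; a pure-NumPy normalized cross-correlation (NCC) matcher sketches this. It is an illustrative baseline under assumed data, not the authors' implementation.

```python
# Brute-force NCC: slide the template over the image and return the
# position with the highest normalized correlation score.
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) of the best NCC match of template in image."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            patch = window - window.mean()
            denom = np.sqrt((patch ** 2).sum() * (t ** 2).sum())
            score = (patch * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# The template is cut from a known position, so the match recovers it:
rng = np.random.default_rng(0)
img = rng.random((8, 8))
tpl = img[3:6, 2:5].copy()
print(ncc_match(img, tpl))  # → (3, 2)
```

The recovered offset between consecutive partial images is exactly the translation needed to merge them.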

Comparative Study of the Dissolution Profiles of a Commercial Theophylline Product after Storage

  • Negro, S.;Herrero-Vanrell, R.;Barcia, E.;Villegas, S.
    • Archives of Pharmacal Research
    • /
    • v.24 no.6
    • /
    • pp.568-571
    • /
    • 2001
  • The purpose of this work was to study the effect of storage time and temperature on the in vitro release kinetics of a commercial sustained-release dosage form of theophylline at different pHs of the dissolution medium. The formulation was stored at $35^{\circ}C$ for 16 months and at $45^{\circ}C$ for 8 months, at a relative humidity of 60%. The in vitro release tests were performed at pHs 2, 4, 6, and 7.4. The mean values of the transport coefficient n were close to 0.5 under all conditions tested, which indicates that the transport system is not modified after storage of the formulation at $35^{\circ}C$ or $45^{\circ}C$. The mean values of the dissolution rate constant ranged from 0.036 to 0.043 $min^{-n}$ under all conditions tested. Significant differences (${\alpha}=0.05$) were found between pHs 2, 4 and pHs 6, 7.4 for all the model-independent parameters studied. When the formulation was kept at $35^{\circ}C$ for 16 months, the mean percentage of drug dissolved at 8 hours was 25.61% (pHs 2, 4) and 36.12% (pHs 6, 7.4), representing reductions of 26% and 24%, respectively. Similar results were obtained after storing the formulation at $45^{\circ}C$ for 8 months, corresponding to diminutions of 33.3% (pHs 2, 4) and 22.5% (pHs 6, 7.4), respectively. The values of the similarity factor $f_2$ obtained were lower than 50, which indicates a lack of similarity among the dissolution profiles after storing the formulation under the experimental conditions tested.
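The similarity factor $f_2$ used above has a standard definition in the FDA/EMA dissolution-comparison guidance, with $f_2 < 50$ indicating dissimilar profiles. A short computation, with illustrative dissolution percentages rather than the paper's data:

```python
# f2 = 50 * log10(100 / sqrt(1 + mean squared difference between the
# reference and test percent-dissolved values at each time point)).
import math

def f2_similarity(reference, test):
    """Similarity factor f2 between two dissolution profiles (% dissolved)."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

fresh  = [20, 45, 70, 85]   # % dissolved at successive time points (illustrative)
stored = [12, 30, 50, 64]   # slower release after storage (illustrative)
print(round(f2_similarity(fresh, stored), 1))  # → 38.7, i.e. below 50: dissimilar
```

Identical profiles give $f_2 = 100$; the value falls as the profiles diverge, with 50 as the conventional similarity cutoff.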

S&P Noise Removal Filter Algorithm using Plane Equations (평면 방정식을 이용한 S&P 잡음제거 필터 알고리즘)

  • Young-Su, Chung;Nam-Ho, Kim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.47-53
    • /
    • 2023
  • Devices such as X-ray, CT, and MRI scanners can generate S&P (salt-and-pepper) noise from several sources during the image acquisition process. Since S&P noise degrades image quality, noise reduction technology is essential in the image processing pipeline. Various methods for S&P noise removal have already been proposed, but all of them leave residual noise in environments with high noise density. This paper therefore proposes a filtering algorithm based on a three-dimensional plane equation, obtained by treating the grayscale value of the image as a new axis. The proposed algorithm subdivides the local mask to designate the three closest noise-free pixels as effective pixels, and applies cosine similarity to regions containing multiple candidate pixels. In addition, even when the input pixel cannot form a plane, it is classified as an exception pixel, achieving excellent restoration without residual noise.
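The geometric core of the proposed filter can be sketched as follows: each pixel is treated as a 3-D point (x, y, gray value), a plane is fitted through the three nearest noise-free pixels, and the noisy pixel is estimated from the plane equation $ax+by+cz+d=0$. The pixel coordinates and values below are illustrative, and the mask subdivision and cosine-similarity steps are omitted.

```python
# Fit a plane through three (x, y, gray) points via the cross product of
# two edge vectors, then solve the plane equation for the gray value z.
import numpy as np

def plane_estimate(p1, p2, p3, x, y):
    """Evaluate the plane through p1, p2, p3 at location (x, y)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # plane normal (a, b, c)
    a, b, c = normal
    d = -normal.dot(p1)                   # ax + by + cz + d = 0
    return (-a * x - b * y - d) / c       # solve for z at (x, y)

# Three noise-free neighbours of a corrupted pixel at (2, 2) on a smooth
# intensity ramp:
estimate = plane_estimate((1, 2, 110.0), (2, 1, 105.0), (3, 2, 130.0), 2, 2)
print(estimate)  # → 120.0
```

When the three points are collinear the normal's z-component is zero and no plane exists; that is the "exception pixel" case the abstract mentions.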

Speckle Noise Reduction and Image Quality Improvement in U-net-based Phase Holograms in BL-ASM (BL-ASM에서 U-net 기반 위상 홀로그램의 스펙클 노이즈 감소와 이미지 품질 향상)

  • Oh-Seung Nam;Ki-Chul Kwon;Jong-Rae Jeong;Kwon-Yeon Lee;Nam Kim
    • Korean Journal of Optics and Photonics
    • /
    • v.34 no.5
    • /
    • pp.192-201
    • /
    • 2023
  • The band-limited angular spectrum method (BL-ASM) suffers from aliasing errors due to spatial-frequency control problems. In this paper, a sampling-interval adjustment technique for phase holograms and a technique for reducing speckle noise and improving image quality using a deep-learning-based U-net model are proposed. With the proposed technique, speckle noise is reduced by first calculating the sampling factor and controlling the spatial frequency through adjustment of the sampling interval, so that aliasing errors can be removed over a wide propagation range. The next step improves the quality of the reconstructed image by training the deep-learning model on the phase hologram. In software simulations on various sample images, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) improved by 5% and 0.14% on average, compared with the existing BL-ASM.
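PSNR, the first of the two quality metrics reported, has a simple closed form; a minimal computation for normalized [0, 1] images follows (SSIM, which involves local means, variances, and covariances, is omitted for brevity). The arrays are illustrative.

```python
# PSNR in decibels: 10 * log10(peak^2 / MSE) between a reference image
# and its reconstruction.
import numpy as np

def psnr(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    mse = np.mean((reference - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
rec = ref + 0.01          # uniform 1% reconstruction error
print(round(psnr(ref, rec), 1))  # → 40.0
```

Higher PSNR means a more faithful reconstruction, which is the sense in which the paper's 5% average improvement is reported.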

Application of LiDAR Data & High-Resolution Satellite Image for Calculate Forest Biomass (산림바이오매스 산정을 위한 LiDAR 자료와 고해상도 위성영상 활용)

  • Lee, Hyun-Jik;Ru, Ji-Ho
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.1
    • /
    • pp.53-63
    • /
    • 2012
  • Unusual climate changes caused by excessive emission of greenhouse gases such as carbon dioxide are expected to produce economic losses amounting to 5~20% of world GDP by 2100, so technologies for reducing carbon dioxide emissions are being pursued worldwide as part of a high-value-added industry. As a country subject to the 1997 Kyoto Protocol, which targets an average $CO_2$ emission reduction of 5%, South Korea is also dedicated to research and industry for $CO_2$ emission reduction. In this study, LiDAR data and KOMPSAT-2 satellite imagery were applied to calculate forest biomass. Tree counts and tree heights derived from the raw LiDAR data, compared with field survey data, showed 90% object similarity and an average tree-height difference of 0.3 m. Calculating forest biomass from forest-type information classified from the KOMPSAT-2 imagery and tree-height data from the LiDAR data enabled estimation of $CO_2$ absorption and forest biomass by forest type, and the similarity to the field survey averaged 90% or higher.