• Title/Summary/Keyword: 독립집합 (independent set)


A Study on the Rejection Capability Based on Anti-phone Modeling (반음소 모델링을 이용한 거절기능에 대한 연구)

  • 김우성;구명완
    • The Journal of the Acoustical Society of Korea / v.18 no.3 / pp.3-9 / 1999
  • This paper presents a study of rejection capability based on anti-phone modeling for a vocabulary-independent speech recognition system. The rejection system detects and rejects out-of-vocabulary words, i.e., words not among the candidate words defined when the speech recognizer is built. Rejection systems fall into two categories by implementation method: keyword spotting and utterance verification. The keyword spotting method uses an extra filler model as a candidate word alongside the keyword models. The utterance verification method constructs an anti-model for every phoneme and then uses these anti-models to calculate a confidence score. We implemented an utterance verification algorithm that can be used in a vocabulary-independent speech recognizer. We also compared three kinds of means for calculating the confidence score and found that the geometric mean gave the best result. A sigmoid function is commonly used to normalize the confidence score; we compared the effect of its weight constant and determined the optimal value. We also compared cohort set sizes and found that larger sets gave better results. Finally, we determined the optimal confidence score threshold; using it, the overall recognition rate including rejection errors was about 76%. These results will be applied to a stock information service based on speech recognition, currently offered as an experimental service by Korea Telecom.

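As a concrete illustration of the scoring pipeline this abstract describes, here is a minimal Python sketch of utterance verification: per-phone log-likelihood ratios against anti-phone models, combined by a geometric mean and normalized with a sigmoid. The numeric values, the weight constant, and the threshold are hypothetical, not the paper's tuned values.

```python
import math

def confidence_score(log_lik_phone, log_lik_anti, weight=1.0):
    """Geometric-mean confidence over phone segments, sigmoid-normalized.

    log_lik_phone / log_lik_anti: per-phone log-likelihoods from the
    keyword model and its anti-phone model (hypothetical values here).
    """
    # Log-likelihood ratio per phone segment
    llrs = [lp - la for lp, la in zip(log_lik_phone, log_lik_anti)]
    # Geometric mean of the ratios == arithmetic mean in the log domain
    mean_llr = sum(llrs) / len(llrs)
    # Sigmoid normalization; `weight` plays the role of the tunable constant
    return 1.0 / (1.0 + math.exp(-weight * mean_llr))

# Accept the hypothesis only if the normalized score clears a threshold;
# both weight and threshold would be tuned on held-out data.
score = confidence_score([-41.2, -38.7, -45.0], [-44.8, -39.1, -47.3], weight=0.5)
accept = score >= 0.6
```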

Implemented Logic Circuits of Fuzzy Inference Engine for DC Servo Control Using Decomposition of α-level Fuzzy Sets (α-레벨 퍼지집합 분해에 의한 직류 서보제어용 퍼지추론 연산회로 구현)

  • 이요섭;손의식;홍순일
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.5 / pp.1050-1057 / 2004
  • The purpose of this study is the development of a fuzzy controller that is independent of a computer and its software for fuzzy control of servo systems. This paper describes a method of approximate reasoning for fuzzy control of servo systems based on the decomposition of α-level fuzzy sets. We propose a fuzzy logic algorithm that runs as a single body from fuzzy inference to defuzzification in cases where the output variable u directly generates the PWM signal. The effect of the number of quantized α-levels on the input/output characteristics of the fuzzy controller and on the output response of the DC servo system is investigated. It is concluded that four α-cut levels give a sufficient result for fuzzy control performance of the DC servo system. The experimental results show that the proposed hardware method is effective for practical applications of DC servo systems.
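
A minimal sketch of approximate reasoning by α-level decomposition, in the spirit of the abstract: each fuzzy set is decomposed into four quantized α-cut intervals and a crisp output is recovered from the cuts. The rule base, the triangular membership shapes, and the midpoint-based defuzzification are illustrative assumptions, not the paper's circuit design.

```python
import numpy as np

def tri_cut(a, b, c, alpha):
    """alpha-cut interval [lo, hi] of a triangular fuzzy set (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# Four quantized alpha levels, as the paper finds sufficient for DC servo control
alphas = [0.25, 0.5, 0.75, 1.0]

def infer(error, rules):
    """Mamdani-style inference by alpha-level decomposition (sketch).

    rules: list of (antecedent_mf, consequent triangle (a, b, c)).
    Returns a crisp output from the weighted alpha-cut midpoints, which
    roughly approximates centroid defuzzification.
    """
    mids, weights = [], []
    for firing, (a, b, c) in rules:
        w = firing(error)
        for alpha in alphas:
            if alpha <= w:                      # this level fires for the rule
                lo, hi = tri_cut(a, b, c, alpha)
                mids.append((lo + hi) / 2.0)
                weights.append(alpha)
    return float(np.average(mids, weights=weights)) if mids else 0.0

# Hypothetical two-rule controller: negative error lowers, positive error raises
rules = [
    (lambda e: max(0.0, min(1.0, -e)), (-1.0, -0.5, 0.0)),
    (lambda e: max(0.0, min(1.0,  e)), ( 0.0,  0.5, 1.0)),
]
u = infer(0.3, rules)   # crisp output that would drive the PWM duty cycle
```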

A Simulation-based Optimization Approach for the Selection of Design Factors (설계 변수 선택을 위한 시뮬레이션 기반 최적화)

  • Um, In-Sup;Cheon, Hyeon-Jae;Lee, Hong-Chul
    • Journal of the Korea Society for Simulation / v.16 no.2 / pp.45-54 / 2007
  • In this article, we propose a different modeling approach aimed at simulation optimization so as to meet the design specification. Generally, a multi-objective optimization problem is formulated with dependent factors as objective functions and independent factors as constraints. This paper instead takes the critical (dependent) factors as the objective function and the design (independent) factors as constraints, so that design factors can be selected directly. The objective function is normalized for the generalization of design factors, while the constraints are composed of simulation-based regression metamodels for the critical factors and the design factors' domains. An effective and fast solution procedure based on the Pareto optimal solution set is then proposed. This paper provides a comprehensive framework for system design using simulation and metamodels; the method developed in this research can therefore be adopted for other enhancements in different but comparable situations.

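The formulation this abstract describes, critical (dependent) factors in a normalized objective and design (independent) factors constrained to their domains via regression metamodels, can be sketched as follows. The metamodel coefficients, specs, and grid search are hypothetical placeholders for the paper's simulation-fitted models and Pareto-based procedure.

```python
import numpy as np
from itertools import product

# Hypothetical regression metamodels fitted from simulation runs: each
# critical (dependent) factor as a function of two design (independent) factors.
def y1(x1, x2): return 2.0 + 0.8 * x1 - 0.3 * x2 + 0.05 * x1 * x2  # e.g. throughput
def y2(x1, x2): return 5.0 - 0.2 * x1 + 0.6 * x2                   # e.g. cycle time

# Design-factor domains (the constraints) and target specs for critical factors
X1 = np.linspace(0.0, 10.0, 41)
X2 = np.linspace(0.0, 10.0, 41)
SPEC = {"y1_min": 6.0, "y2_max": 8.0}

def objective(x1, x2):
    """Normalized objective: spec violations scaled so heterogeneous units
    are comparable (the normalization idea from the abstract)."""
    g1 = max(0.0, SPEC["y1_min"] - y1(x1, x2)) / SPEC["y1_min"]
    g2 = max(0.0, y2(x1, x2) - SPEC["y2_max"]) / SPEC["y2_max"]
    return g1 + g2

# Exhaustive search over the design domain stands in for the paper's
# Pareto-set-based solution procedure.
best = min(product(X1, X2), key=lambda p: objective(*p))
print("selected design factors:", best)
```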

A Mathematical Programming Approach for Cloud Service Brokerage (클라우드 서비스 중개를 위한 수리과학 모형연구)

  • Chang, Byeong-Yun;Abate, Yabibal Afework;Yoon, Seung Hyun;Seo, Dong-Won
    • Journal of the Korea Society for Simulation / v.23 no.4 / pp.143-150 / 2014
  • Cloud computing is fast becoming the wave of the future for both home and business computing. Because of this growing acceptance, we can expect an explosion of diverse cloud service providers in the coming years. However, the cloud is not a single entity; rather, it is a set of many disconnected islands of application (SaaS), infrastructure (IaaS), and platform (PaaS) services. Cloud brokering mechanisms are essential to transform the heterogeneous cloud market into a commodity-like service. Cloud service brokers (CSBs) are a new form of business entity that aggregates the scattered set of cloud services and makes them conveniently available to diverse users. CSBs can reserve a certain percentage of their clients' (users') demand and satisfy the remaining portion on an on-demand basis. In doing so, they need to minimize the cost of both reserved and on-demand instances as well as the distance of the link between the cloud service provider (CSP) and the user. This study proposes a reservation approach with a mixed integer model that optimizes cloud service cost and quality.
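
A minimal sketch of the reserved/on-demand formulation described above, written with the PuLP MIP library (an assumed toolchain; the paper specifies a mixed integer model but not a solver). All prices, demands, distances, and the weight w are hypothetical.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

users, csps = range(3), range(2)
demand = [10, 6, 8]                      # hypothetical instance demand per user
res_cost = [[2, 3], [2, 3], [3, 2]]      # reserved price, user x CSP
ond_cost = [[5, 4], [4, 5], [5, 4]]      # on-demand price, user x CSP
dist = [[1, 4], [2, 2], [3, 1]]          # link distance, user x CSP
w = 0.1                                  # weight trading off cost vs distance

prob = LpProblem("cloud_brokerage", LpMinimize)
r = {(u, c): LpVariable(f"res_{u}_{c}", lowBound=0, cat="Integer")
     for u in users for c in csps}
o = {(u, c): LpVariable(f"ond_{u}_{c}", lowBound=0, cat="Integer")
     for u in users for c in csps}

# Objective: reserved + on-demand cost, plus a distance penalty on every link used
prob += lpSum(res_cost[u][c] * r[u, c] + ond_cost[u][c] * o[u, c]
              + w * dist[u][c] * (r[u, c] + o[u, c])
              for u in users for c in csps)

# Each user's demand must be fully covered by reserved and on-demand instances
for u in users:
    prob += lpSum(r[u, c] + o[u, c] for c in csps) == demand[u]

prob.solve()
```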

Efficient All-to-All Personalized Communication Algorithms in Wormhole-Routed Networks (웜홀 방식의 네트워크에서 효율적인 다대다 개별적 통신 알고리즘)

  • 김시관;강오한;정종인
    • Journal of KIISE:Computer Systems and Theory / v.30 no.7_8 / pp.359-369 / 2003
  • We present efficient generalized algorithms for all-to-all personalized communication operations in a 2D torus. All-to-all personalized communication, or complete exchange, is at the heart of numerous applications, such as matrix transposition, the Fast Fourier Transform (FFT), and distributed table lookup. Some algorithms have been presented for node counts of power-of-two or multiple-of-four form, but there has been no result for general cases yet. We first present a complete exchange algorithm called Double-Hop-2D for node counts of multiple-of-two form. By extending this algorithm, we then present two algorithms for an arbitrary number of nodes. The Split-and-Merge algorithm first splits the whole network into zones; after each zone performs complete exchange, a merge phase finishes the desired complete exchange. By handling extra steps in the Double-Hop-2D algorithm, the Modified Double-Hop-2D algorithm performs the complete exchange operation for general cases. Finally, we compare the required start-up times of these algorithms.
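
For concreteness, the semantics of complete exchange, independent of the torus routing that the paper's algorithms optimize, can be stated in a few lines of Python:

```python
# A minimal sketch of complete exchange (all-to-all personalized): node i holds
# one distinct block for every node j, and after the exchange node j holds the
# blocks addressed to it from all senders. Routing phases are abstracted away.
def complete_exchange(blocks):
    n = len(blocks)
    # blocks[i][j] is the message node i has prepared for node j
    return [[blocks[i][j] for i in range(n)] for j in range(n)]

# 4-node example: the result is the "transpose" of the send buffers, which is
# why matrix transposition is a canonical use of this operation.
send = [[f"m{i}->{j}" for j in range(4)] for i in range(4)]
recv = complete_exchange(send)
assert recv[2] == ["m0->2", "m1->2", "m2->2", "m3->2"]
```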

A Classified Space VQ Design for Text-Independent Speaker Recognition (문맥 독립 화자인식을 위한 공간 분할 벡터 양자기 설계)

  • Lim, Dong-Chul;Lee, Hanig-Sei
    • The KIPS Transactions:PartB / v.10B no.6 / pp.673-680 / 2003
  • In this paper, we study the enhancement of VQ (Vector Quantization) design for text-independent speaker recognition. Concretely, we present a non-iterative method for building a vector quantization codebook; because learning is non-iterative, the computational complexity is drastically reduced. The proposed Classified Space VQ (CSVQ) design method for text-independent speaker recognition generalizes the semi-noniterative VQ design method for text-dependent speaker recognition, and contrasts with the existing design method, which runs an iterative learning algorithm for every training speaker. The characteristics of a CSVQ design are as follows. First, the proposed method performs non-iterative learning by using a Classified Space Codebook. Second, each speaker's quantization regions coincide with the quantization regions of the Classified Space Codebook, and each speaker's quantization point is the optimal point for that speaker's statistical distribution within a region of the Classified Space Codebook. Third, the Classified Space Codebook (CSC) is constructed through the Sample Vector Formation Method (CSVQ1, 2) and the Hyper-Lattice Formation Method (CSVQ3). In the numerical experiment, we use 12th-order mel-cepstrum feature vectors of 10 speakers and compare the proposed method with the existing one, varying the codebook size from 16 to 128 for each Classified Space Codebook. The recognition rate of the proposed method is 100% for CSVQ1 and 2, equal to that of the existing method. The proposed CSVQ design method is therefore a new alternative that reduces computational complexity while maintaining the recognition rate, and CSVQ with a CSC can be applied to general-purpose recognition.
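
A minimal numpy sketch of the non-iterative idea described above, under the assumption that a shared classified-space codebook defines the quantization regions and each speaker's codeword is simply the centroid of that speaker's training vectors in each region; the data and sizes are hypothetical.

```python
import numpy as np

def build_speaker_codebook(space_codebook, speaker_vectors):
    """Non-iterative VQ sketch in the spirit of the classified-space idea.

    space_codebook: (K, D) shared codebook defining K quantization regions.
    speaker_vectors: (N, D) training vectors of one speaker.
    Each speaker codeword is the centroid of that speaker's vectors inside a
    shared region, so no iterative (LBG-style) re-training per speaker is needed.
    """
    # Assign every speaker vector to its nearest shared-region codeword
    d = np.linalg.norm(speaker_vectors[:, None, :] - space_codebook[None], axis=2)
    regions = d.argmin(axis=1)
    codebook = space_codebook.copy()
    for k in np.unique(regions):
        codebook[k] = speaker_vectors[regions == k].mean(axis=0)
    return codebook

# Hypothetical 12-dimensional mel-cepstral vectors, shared codebook size 16
rng = np.random.default_rng(0)
shared = rng.normal(size=(16, 12))
spk = rng.normal(size=(200, 12))
cb = build_speaker_codebook(shared, spk)
```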

Musicals and Memories of the March 1 Independence Movement - Centered on the musical Shingheung Military School, Ku: Songs of the Goblin, Watch (기념 뮤지컬과 독립운동의 기억 -<신흥무관학교>, <구>, <워치>를 중심으로)

  • Chung, Myung-mun
    • (The) Research of the performance art and culture / no.43 / pp.229-261 / 2021
  • On the musical stage in 2019, many works depicted the Japanese colonial period, owing to the timeliness of 2019 as the centennial of the March 1st Movement and of the establishment of the Provisional Government of the Republic of Korea. The way historical facts are remembered and commemorated reflects the power relationship between the subjects of memory and their time, namely the politics of memory. Until now, stage dramas dealing with the era of Japanese rule have focused on commemoration of the modern nation and national defense, including feelings of misfortune and respect for patriots. This study analyzed the metaphors of commemoration emphasized to the audience in the commemorative musicals Shingheung Military School, Ku: Songs of the Goblin, and Watch, performed in 2019, and examined how they adjust memory and commemoration. These works highlight the narratives of ordinary people as well as of recorded figures against the backdrop of the Manchurian independence movement and Hongkou Park, thereby expanding the object of commemoration. Through this, active armed resistance, self-examination, and reflection were highlighted. Shingheung Military School revealed the earnestness of the ordinary people who carried the independence movement through the movements of its central figures. Ku: Songs of the Goblin revises memory by reproducing forgotten subjects and offering an apology through a time slip. Watch strengthened the spectacle of its staging through documentary techniques such as photography, newsreels, and newspaper articles, but it also reveals the limitations of being confined to the records. For the centennial of the March 1st Movement and of the Provisional Government, these works deploy devices that actively reveal that the "people's movement" is connected to the present. To this end, they reflected newly produced values and memories onto the official records and devoted themselves to the daily lives and emotions of the crowd. In addition, both empirical consideration and calligraphy were utilized to increase reliability. These attempts are meaningful in that they achieved contemporary empathy.

A probabilistic information retrieval model by document ranking using term dependencies (용어간 종속성을 이용한 문서 순위 매기기에 의한 확률적 정보 검색)

  • You, Hyun-Jo;Lee, Jung-Jin
    • The Korean Journal of Applied Statistics / v.32 no.5 / pp.763-782 / 2019
  • This paper proposes a probabilistic document ranking model incorporating term dependencies. Document ranking is a fundamental information retrieval task: sorting the documents in a collection according to their relevance to a user query (Qin et al., Information Retrieval Journal, 13, 346-374, 2010). A probabilistic model computes the conditional probability of each document's relevance given the query. Most widely used models assume term independence because computing the joint probabilities of multiple terms is challenging, yet words in natural language texts are highly correlated. In this paper, we assume a multinomial distribution model to calculate the relevance probability of a document while considering the dependency structure of words, and propose an information retrieval model that ranks documents by estimating this probability with the maximum entropy method. Ranking simulation experiments in various multinomial situations show better retrieval results than a model that assumes word independence. Document ranking experiments using the real-world dataset LETOR OHSUMED also show better retrieval results.
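
A minimal sketch of multinomial ranking with an optional dependency correction: the `joint` table stands in for the paper's maximum-entropy estimates of joint term probabilities, and the add-one smoothing and toy documents are illustrative assumptions.

```python
import math
from collections import Counter

def rank(query_terms, docs, joint=None):
    """Multinomial ranking sketch: score = log P(query | doc).

    `joint`, if given, maps (doc_id, term_a, term_b) to an estimated joint
    probability of the pair; otherwise term independence is assumed
    (the common baseline the paper improves on).
    """
    scores = {}
    for doc_id, text in docs.items():
        tf = Counter(text)
        n = sum(tf.values())
        # Independence baseline with add-one smoothing
        logp = sum(math.log((tf[t] + 1) / (n + len(tf) + 1)) for t in query_terms)
        # Replace independent pair products with joint estimates where available
        if joint:
            for a, b in zip(query_terms, query_terms[1:]):
                pj = joint.get((doc_id, a, b))
                if pj:
                    pa = (tf[a] + 1) / (n + len(tf) + 1)
                    pb = (tf[b] + 1) / (n + len(tf) + 1)
                    logp += math.log(pj) - math.log(pa * pb)
        scores[doc_id] = logp
    return sorted(scores, key=scores.get, reverse=True)

docs = {"d1": "cloud service broker cost".split(),
        "d2": "fuzzy servo control cloud".split()}
print(rank(["cloud", "service"], docs))
```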

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, detecting market timing means determining when to buy and sell so as to obtain excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using the control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or expert interviews. Minimum entropy scaling implements an algorithm that recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data, then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect the performance of trading based on rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and status in their corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method on the validation sample. In addition, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
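
Of the four discretization methods, equal frequency scaling is the easiest to state precisely; a short numpy sketch (with a hypothetical technical-indicator series) follows.

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Equal frequency scaling sketch: choose cuts so that roughly the same
    number of samples falls into each interval (one of the four
    discretization methods compared in the abstract)."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]   # interior quantiles
    return np.quantile(values, qs)

def discretize(values, cuts):
    # Every value inside an interval is mapped to the same categorical label
    return np.digitize(values, cuts)

# Hypothetical indicator series over 660 trading days (e.g. a momentum value)
x = np.random.default_rng(1).normal(size=660)
cuts = equal_frequency_cuts(x, 4)
labels = discretize(x, cuts)
```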

Estimated Soft Information based Most Probable Classification Scheme for Sorting Metal Scraps with Laser-induced Breakdown Spectroscopy (레이저유도 플라즈마 분광법을 이용한 폐금속 분류를 위한 추정 연성정보 기반의 최빈 분류 기술)

  • Kim, Eden;Jang, Hyemin;Shin, Sungho;Jeong, Sungho;Hwang, Euiseok
    • Resources Recycling / v.27 no.1 / pp.84-91 / 2018
  • In this study, a novel soft-information-based most probable classification scheme is proposed for sorting recyclable metal alloys with laser-induced breakdown spectroscopy (LIBS). Regression analysis on LIBS-captured spectra for estimating the concentrations of common elements can efficiently classify unknown, arbitrary metal alloys, even when a particular alloy is not included in training. Therefore, partial least squares regression (PLSR) is employed in the proposed scheme, with spectra of certified reference materials (CRMs) used for training. With the PLSR model, the concentrations of a test spectrum are estimated independently and compared to those of the CRMs to find the most probable class. Joint soft information can then be obtained by assuming a multivariate normal (MVN) distribution, which makes it possible to account for the probability measure or prior information and improves classification performance. For evaluation, MVN soft information is computed from PLSR on LIBS spectra of 9 metal CRMs and tested for classifying unknown metal alloys. Furthermore, the likelihood is visualized with a radar chart to effectively search for the most probable class among the candidates. In leave-one-out cross-validation tests, the proposed scheme not only shows improved classification accuracy but is also helpful for adaptive post-processing to correct misclassifications.
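
A minimal sketch of the estimate-then-score pipeline this abstract describes, using scikit-learn's PLSRegression and SciPy's multivariate normal. The synthetic spectra, the component count, and the diagonal estimation-error covariance are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import multivariate_normal

# Hypothetical stand-ins: LIBS spectra (rows) and element concentrations of CRMs
rng = np.random.default_rng(0)
X_train = rng.normal(size=(90, 200))        # 9 CRMs x 10 shots, 200 channels
Y_train = np.repeat(rng.uniform(size=(9, 5)), 10, axis=0)  # 5 element fractions
crm_conc = Y_train[::10]                    # reference concentrations per class

pls = PLSRegression(n_components=8).fit(X_train, Y_train)

def most_probable_class(spectrum, cov_scale=0.01):
    """Score each CRM class by the MVN likelihood of the PLSR-estimated
    concentrations around the class reference concentrations (sketch)."""
    est = pls.predict(spectrum.reshape(1, -1))[0]
    cov = cov_scale * np.eye(len(est))      # assumed estimation-error covariance
    loglik = [multivariate_normal.logpdf(est, mean=mu, cov=cov) for mu in crm_conc]
    return int(np.argmax(loglik)), loglik   # class index + soft information

cls, soft = most_probable_class(X_train[0])
```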