• Title/Summary/Keyword: error control


An Improvement of Stochastic Feature Extraction for Robust Speech Recognition (강인한 음성인식을 위한 통계적 특징벡터 추출방법의 개선)

  • 김회린;고진석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.2
    • /
    • pp.180-186
    • /
    • 2004
  • The presence of noise in speech signals degrades the performance of recognition systems when there are mismatches between the training and test environments. To make a speech recognizer robust, it is necessary to compensate for these mismatches. In this paper, we studied an improvement of stochastic feature extraction based on band-SNR for robust speech recognition. First, we proposed a modified version of the multi-band spectral subtraction (MSS) method which adjusts the subtraction level of the noise spectrum according to the band-SNR. In the proposed method, referred to as M-MSS, a noise normalization factor was newly introduced to finely control the over-estimation factor depending on the band-SNR. We also modified the architecture of the stochastic feature extraction (SFE) method, obtaining better performance when the spectral subtraction was applied in the power spectrum domain rather than in the mel-scale domain; this method is denoted M-SFE. Lastly, we applied the M-MSS method to the modified stochastic feature extraction structure, which is denoted the MMSS-MSFE method. The proposed methods were evaluated on isolated word recognition under various noise environments. Relative to the ordinary spectral subtraction (SS) method, the average error rates of the M-MSS, M-SFE, and MMSS-MSFE methods were reduced by 18.6%, 15.1%, and 33.9%, respectively. From these results, we conclude that the proposed methods are good candidates for robust feature extraction in noisy speech recognition.
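
The abstract does not give the exact M-MSS formulation; the sketch below only illustrates the general technique it builds on, namely multi-band spectral subtraction with an over-subtraction factor scaled by per-band SNR and a noise normalization factor. The function and parameter names (`alpha_max`, `noise_norm`, the SNR-to-alpha mapping) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def multiband_spectral_subtraction(power_spec, noise_spec, n_bands=4,
                                   alpha_max=4.0, noise_norm=1.0, floor=0.01):
    """Illustrative multi-band spectral subtraction.

    power_spec : (n_frames, n_bins) noisy power spectrum
    noise_spec : (n_bins,) estimated noise power spectrum
    The over-subtraction factor shrinks as the band SNR improves; `noise_norm`
    plays the role of the noise normalization factor mentioned in the abstract
    (its exact definition here is assumed, not taken from the paper).
    """
    n_frames, n_bins = power_spec.shape
    edges = np.linspace(0, n_bins, n_bands + 1, dtype=int)
    cleaned = np.empty_like(power_spec)
    for b in range(n_bands):
        lo, hi = edges[b], edges[b + 1]
        band_sig = power_spec[:, lo:hi]
        band_noise = noise_spec[lo:hi]
        # Per-frame band SNR in dB
        snr_db = 10.0 * np.log10(band_sig.sum(axis=1) /
                                 (band_noise.sum() + 1e-12) + 1e-12)
        # Higher SNR -> less over-subtraction (clipped to [1, alpha_max])
        alpha = np.clip(alpha_max - 0.15 * snr_db, 1.0, alpha_max)
        sub = band_sig - noise_norm * alpha[:, None] * band_noise[None, :]
        # Spectral floor keeps the subtracted spectrum non-negative
        cleaned[:, lo:hi] = np.maximum(sub, floor * band_sig)
    return cleaned
```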

A Motor-Driven Focusing Mechanism for Small Satellite (소형위성용 모터 구동형 포커싱 메커니즘)

  • Jung, Jinwon;Choi, Junwoo;Lee, Dongkyu;Hwang, Jaehyuck;Kim, Byungkyu
    • Journal of Aerospace System Engineering
    • /
    • v.12 no.4
    • /
    • pp.75-80
    • /
    • 2018
  • A satellite camera requires a focusing mechanism to control the focus of its optical system, which is essential for proper functioning. However, research on focusing mechanisms for satellite optical systems in Korea is at an early stage, and the technology developed so far is limited to thermal control types. Therefore, in this paper, we propose a motor-driven focusing mechanism applicable to small satellite optical systems. The proposed mechanism is designed to generate z-axis displacement of the secondary mirror by a motor. In addition, three flexure hinges were installed on the supporter to apply preload to the mechanism, minimizing alignment errors arising from manufacturing and assembly tolerances within the mechanism. After fabrication of the mechanism, the alignment errors (de-space, de-center, and tilt) were measured with LVDT sensors and laser displacement meters. In conclusion, the proposed focusing mechanism achieved an adequate degree of alignment and is applicable to small satellite optical systems.

Determination of Parameters for the Clark Model based on Observed Hydrological Data (실측수문자료에 의한 Clark 모형의 매개변수 결정)

  • Ahn, Tae Jin;Jeon, Hyun Chul;Kim, Min Hyeok
    • Journal of Wetlands Research
    • /
    • v.18 no.2
    • /
    • pp.121-131
    • /
    • 2016
  • Determining a feasible design flood is essential for controlling flood damage in river management. The concentration time and storage constant in the Clark unit hydrograph method mainly affect the magnitude of the peak flood and the shape of the hydrograph. Model parameters should be calibrated using observed discharge, but owing to a lack of observed data they have commonly been adopted from empirical formulas. This study suggests a concentration time and storage constant based on the observed rainfall-runoff data at the GongDo stage station in the Ansung river basin. To do this, five criteria were suggested for computing the root mean square error (RMSE) and the residual between observed and computed values. Once the concentration time and storage constant had been determined from three rainfall-runoff events selected at the station, the five criteria based on the observed hydrograph and the hydrograph computed by the Clark model were evaluated to determine the values of the concentration time and storage constant. A criterion is proposed for determining the concentration time and storage constant based on the results of the observed hydrograph and the Clark model. It is also shown that the exponent of the concentration time-cumulative area curve should be determined based on the shape of the watershed.
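
The paper's five criteria are not reproduced in the abstract; as an illustration only, the sketch below routes a time-area (translation) hydrograph through a linear reservoir, which is the standard Clark unit hydrograph formulation, and scores the result against an observed hydrograph with RMSE. Variable names and the time step are assumptions.

```python
import numpy as np

def clark_unit_hydrograph(time_area, storage_k, dt=1.0):
    """Route a time-area (translation) hydrograph through a linear reservoir.

    time_area : inflow ordinates of the translation hydrograph, one per dt
    storage_k : storage constant R, in the same time units as dt
    Standard Clark routing:  O_t = C0 * Ibar_t + C1 * O_{t-1},
    where Ibar_t is the average inflow over the interval,
    C0 = dt / (R + 0.5 * dt) and C1 = 1 - C0.
    """
    inflow = np.asarray(time_area, float)
    c0 = dt / (storage_k + 0.5 * dt)
    c1 = 1.0 - c0
    out = np.zeros_like(inflow)
    for t in range(1, len(inflow)):
        ibar = 0.5 * (inflow[t] + inflow[t - 1])
        out[t] = c0 * ibar + c1 * out[t - 1]
    return out

def rmse(observed, computed):
    """Root mean square error between observed and computed hydrographs."""
    observed = np.asarray(observed, float)
    computed = np.asarray(computed, float)
    return float(np.sqrt(np.mean((observed - computed) ** 2)))
```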

New Worstcase Optimization Method and Process-Variation-Aware Interconnect Worstcase Design Environment (새로운 Worstcase 최적화 방법 및 공정 편차를 고려한 배선의 Worstcase 설계 환경)

  • Jung, Won-Young;Kim, Hyun-Gon;Wee, Jae-Kyung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.10 s.352
    • /
    • pp.80-89
    • /
    • 2006
  • The rapid development of process technology and the introduction of new materials not only make process control more difficult but also, as a result, increase process variations. These process variations are barriers to the successful implementation of circuit designs because of the disparities between the layout data and what is realized on the wafer. This paper proposes a new design environment that determines the interconnect worstcase with accuracy and speed, so that interconnect effects due to process-induced variations can be applied to designs at 0.13 μm and below. Common Geometry and Maximum Probability methods have been developed and integrated into the new worstcase optimization algorithm. The delay time of a 31-stage ring oscillator, manufactured in UMC 0.13 μm logic, was measured, and the results proved the accuracy of the algorithm. When the algorithm was used to optimize worstcase determination, the relative error was less than 1.00%, two times more accurate than conventional methods. Furthermore, the new worstcase design environment improved optimization speed by 32.01% compared to conventional worstcase optimizers. Moreover, the new worstcase design environment accurately predicted the worstcase of non-normal distributions, which conventional methods cannot do well.
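
The Common Geometry and Maximum Probability methods themselves are not described in the abstract. The sketch below is not the paper's method; it is only a generic Monte Carlo worst-case corner search meant to illustrate the underlying problem of finding a worst case under non-normal parameter distributions, which the abstract says conventional corner methods handle poorly. The parameter distributions and the simple delay model are assumptions.

```python
import numpy as np

def worstcase_delay_corner(n_samples=100_000, quantile=0.9987, seed=0):
    """Empirical worst-case corner search over sampled process variations.

    The delay model (delay ~ R * C) and the parameter distributions are
    illustrative assumptions, not the paper's model.  quantile=0.9987
    corresponds roughly to a one-sided +3-sigma corner.
    """
    rng = np.random.default_rng(seed)
    # Metal width: normal; inter-layer dielectric thickness: skewed (lognormal)
    width = rng.normal(loc=0.13, scale=0.013, size=n_samples)             # um
    ild_t = rng.lognormal(mean=np.log(0.35), sigma=0.08, size=n_samples)  # um
    r = 1.0 / np.clip(width, 1e-3, None)   # resistance rises as width shrinks
    c = 1.0 / np.clip(ild_t, 1e-3, None)   # capacitance rises as ILD thins
    delay = r * c
    idx = np.argsort(delay)[int(quantile * n_samples)]
    return {"width": width[idx], "ild_thickness": ild_t[idx], "delay": delay[idx]}

print(worstcase_delay_corner())
```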

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System (음성인식을 위한 혼돈시스템 특성기반의 종단탐색 기법)

  • Zang, Xian;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.5
    • /
    • pp.8-14
    • /
    • 2009
  • In speech recognition research, pinpointing the endpoints of a speech utterance even in the presence of background noise is of great importance. Noise present during recording introduces disturbances that complicate the extraction of stationary parameters for each speech section. One major cause of error in the automatic recognition of isolated words is inaccurate detection of the beginning and end boundaries of the test and reference templates, hence the need for an effective method of removing the unnecessary regions of a speech signal. The conventional methods for speech endpoint detection are based on two linear time-domain measurements: short-time energy and short-time zero-crossing rate. They perform well for clean speech, but their precision is not guaranteed in the presence of noise, since the high energy and zero-crossing rate of the noise are mistaken for part of the uttered speech. This paper proposes a novel approach to finding a clear threshold between noise and speech based on Lyapunov exponents (LEs). The proposed method adopts nonlinear features to analyze the chaotic characteristics of the speech signal instead of depending on energy, which is unreliable in noise. Its advantage over the conventional methods lies in the fact that it detects the endpoints through the nonlinearity of the speech signal, an important characteristic that the conventional methods neglect. The proposed method extracts features based only on the time-domain waveform of the speech signal, demonstrating its low complexity. Simulations showed the effective performance of the proposed method in a noisy environment, with an average speaker-independent recognition rate of up to 92.85%.
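
The Lyapunov-exponent computation is not specified in the abstract; the sketch below shows only the conventional baseline it is compared against, i.e. endpoint detection from short-time energy and zero-crossing rate. Frame length and thresholds are assumptions.

```python
import numpy as np

def energy_zcr_endpoints(signal, frame_len=256, energy_ratio=4.0, zcr_ratio=2.0):
    """Conventional endpoint detection from short-time energy and ZCR.

    Frames whose energy exceeds `energy_ratio` times the median frame energy
    are taken as speech; the detected region is then extended over adjacent
    frames whose zero-crossing rate exceeds `zcr_ratio` times the median ZCR,
    to keep low-energy unvoiced sounds at the boundaries.
    Returns (start_frame, end_frame) or None if no speech frame is found.
    """
    n_frames = len(signal) // frame_len
    frames = np.reshape(np.asarray(signal, float)[:n_frames * frame_len],
                        (n_frames, frame_len))
    energy = np.sum(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    speech = np.flatnonzero(energy > energy_ratio * np.median(energy))
    if speech.size == 0:
        return None
    start, end = int(speech[0]), int(speech[-1])
    zcr_hi = zcr > zcr_ratio * np.median(zcr)
    while start > 0 and zcr_hi[start - 1]:
        start -= 1
    while end < n_frames - 1 and zcr_hi[end + 1]:
        end += 1
    return start, end
```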

Evaluation of Travel Time Prediction Reliability on Highway Using DSRC Data (DSRC 기반 고속도로 통행 소요시간 예측정보 신뢰성 평가)

  • Han, Daechul;Kim, Joohyon;Kim, Seoungbum
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.4
    • /
    • pp.86-98
    • /
    • 2018
  • Since 2015, the Korea Expressway Corporation has provided predicted travel time information produced from DSRC systems over the extended expressway network in Korea. When made available to the public, this information helps travelers choose optimal routes while reducing traffic congestion and travel cost. However, suitable evaluations of the reliability of the travel time forecast information have not been conducted so far. First, this study identifies a measure of effectiveness for evaluating the reliability of travel time forecasts through a review of the literature. Second, using this performance measure, the study quantitatively evaluates current highway travel time forecast information and examines the forecast error through exploratory data analysis. Most highway lines provided reliable forecast information; however, significant over- and under-forecasting was found on a few links within several long lines, and such local errors reduce the overall reliability of the travel time forecasts for the corresponding highway lines. This study can help set priorities for quality control of the travel time forecast information system, and it highlights the importance of periodic and sustained management of travel time forecast information.
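
The abstract does not state which measure of effectiveness was adopted; as an assumed illustration, the sketch below computes two commonly used reliability measures, MAPE and RMSE, for predicted versus observed link travel times. The example values are hypothetical.

```python
import numpy as np

def forecast_reliability(observed, predicted):
    """Common reliability measures for travel time forecasts (illustrative).

    observed, predicted : arrays of link travel times in the same units.
    Returns mean absolute percentage error (MAPE, %) and RMSE.
    """
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    mape = 100.0 * np.mean(np.abs(predicted - observed) / observed)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return {"MAPE_percent": float(mape), "RMSE": float(rmse)}

# Hypothetical example: travel times (minutes) on one link over six intervals
print(forecast_reliability([12.0, 13.5, 15.0, 14.2, 13.0, 12.5],
                           [11.5, 14.0, 16.2, 14.0, 12.4, 12.8]))
```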

Commissioning of a micro-MLC (mMLC) for Stereotactic Radiosurgery (방사선수술용 4뱅크 마이크로 다엽콜리메이터의 인수 검사)

  • Jeong, Dong-Hyeok;Shin, Kyo-Chul;Kim, Jeung-Kee;Kim, Soo-Kon;Moon, Sun-Rock;Lee, Kang-Kyoo
    • Progress in Medical Physics
    • /
    • v.20 no.1
    • /
    • pp.43-50
    • /
    • 2009
  • The 4-bank micro-MLC (mMLC; Acculeaf, Direx, Israel) was commissioned for clinical use in linac-based stereotactic radiosurgery. The geometrical parameters controlling the leaves were determined, and comparisons between measured doses and those calculated by the dose calculation model were performed in terms of absolute dose (cGy/100 MU). When the calculated dose was evaluated for various field sizes at depths of 5 and 10 cm in water under fixed SSD (source-to-surface distance) and fixed SCD (source-to-chamber distance) geometries, most of the differences were within 1% for 6 MV and 15 MV x-rays. The penumbral widths at the isocenter were approximately 0.29~0.43 cm depending on the field size for 6 MV x-rays and 0.36~0.51 cm for 15 MV x-rays. The average transmission and leakage for 6 MV and 15 MV x-rays were 6.6% and 7.4%, respectively, with a single level of leaves fully closed. With both levels of leaves fully closed, the measured transmission was approximately 0.5% for both 6 MV and 15 MV x-rays. Through the commissioning procedure, we were able to verify the dose characteristics of the mMLC and approximately evaluate the error ranges for the treatment planning system.
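
As a minimal illustration of the measured-versus-calculated comparison described above, the short sketch below computes the percent difference in absolute dose for one field size/depth combination; the numbers are hypothetical placeholders, not the paper's data.

```python
def percent_dose_difference(measured, calculated):
    """Percent difference of calculated vs measured absolute dose (cGy/100 MU)."""
    return 100.0 * (calculated - measured) / measured

# Hypothetical example values for a single field size and depth
print(f"{percent_dose_difference(measured=82.4, calculated=83.0):+.2f} %")
```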


The Study on Application of Activity-Based Costing System on the Department of Clinical Pathology (임상병리과의 활동기준원가 관리 적용에 관한 연구)

  • Jung, Soo-Kyung;Jung, Key-Sun;Choi, Hwang-Gue;Rhyu, Kyu-Soo
    • Korea Journal of Hospital Management
    • /
    • v.5 no.1
    • /
    • pp.129-155
    • /
    • 2000
  • In this empirical study, activity-based costing, a newly introduced approach that has proved to be an improvement over conventional costing systems in product or service costing, is applied to the department of clinical pathology of K university hospital. The study subjects were 233 test procedures performed in the clinical laboratory of K university hospital. Activity analysis was done by interviews, questionnaires, and time study, and the resources consumed by each activity and their costs were then traced and applied to the laboratory tests. The main purposes of this study were to compare the test costs under activity-based costing with those under conventional costing and with medical insurance test fees, and to provide accurate cost information for hospital decision makers. The major findings of this study were as follows. 1. The cost drivers for applying activity-based costing in the clinical laboratory were cases of sample collection, cases of specimens, cases of tests, and volume-related allocation bases such as direct labor hours and the total revenue of each test. 2. The profits of the clinical laboratory fields analyzed by conventional costing differed from those analyzed by activity-based costing, especially in the field of urinalysis (overestimated by approximately 750%). 3. The standard full costs under conventional costing were quite different from the costs computed using activity-based costing, and the difference was most significant for tests requiring long labor time. 4. A comparison between the costs computed using activity-based costing and the medical insurance fees showed that some test fees were significantly lower than the costs, especially in the non-automated fields. As described in this study, activity-based costing provides more accurate cost information than conventional costing systems do. This is especially important in the health care industry, including hospitals, in which planning and controlling the costs of the services provided are key to maintaining a healthy financial status for the organization. Despite the contribution of activity-based costing, the economic as well as technical feasibility of implementing such a cost accounting system in an organization must be evaluated. In developing activity-based costing systems, an activity analysis has to be conducted to identify the activities that consume resources. This involves a detailed study of the organization's logistics and accounting information systems, and it is an expensive project in itself. Besides, it can be quite difficult and time-consuming to identify and trace resource consumption to a specific activity. Thus an activity-based costing system should be implemented only when the decrease in the cost of error far exceeds the increase in the cost of measurement. By combining activity-based costing with standard costing, health care administrators can better plan and control the costs of the health services provided while ensuring that the organization's bottom line is healthy.
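
The paper's cost data are not reproduced here; as a hedged sketch of the allocation idea, the code below contrasts a conventional volume-based allocation (by test revenue) with an activity-based allocation that traces each cost pool to tests through its own cost driver. The activity pools, drivers, and numbers are hypothetical.

```python
def conventional_costs(overhead, revenue_by_test):
    """Allocate the whole overhead in proportion to each test's revenue."""
    total = sum(revenue_by_test.values())
    return {t: overhead * r / total for t, r in revenue_by_test.items()}

def activity_based_costs(cost_pools, driver_usage):
    """Allocate each activity cost pool to tests by its own cost driver.

    cost_pools   : {activity: pool cost}
    driver_usage : {activity: {test: driver quantity consumed}}
    """
    costs = {}
    for activity, pool in cost_pools.items():
        usage = driver_usage[activity]
        total_driver = sum(usage.values())
        for test, qty in usage.items():
            costs[test] = costs.get(test, 0.0) + pool * qty / total_driver
    return costs

# Hypothetical two-test example
pools = {"sample_collection": 300.0, "instrument_run": 700.0}
usage = {"sample_collection": {"urinalysis": 80, "chemistry": 20},
         "instrument_run":   {"urinalysis": 10, "chemistry": 90}}
print(activity_based_costs(pools, usage))
print(conventional_costs(1000.0, {"urinalysis": 500.0, "chemistry": 500.0}))
```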


An Estimation of Mean Background Concentrations of Greenhouse Gases Observed on Ulleungdo (울릉도 온실기체 관측자료를 이용한 배경대기 평균농도 산정)

  • Lim, Yun-Kyu;Moon, Yun-Seob;Kim, Jin-Seog;Song, Sang-Keun;Hong, Ji-Hyung
    • Journal of the Korean earth science society
    • /
    • v.33 no.1
    • /
    • pp.32-38
    • /
    • 2012
  • Mean background concentrations of greenhouse gases such as CO₂ and CH₄ were estimated on Ulleungdo using a PICARRO cavity ring-down spectroscopy (CRDS) analyzer. To improve the accuracy of the CO₂ and CH₄ concentrations, a standardized QA/QC (quality assurance/quality control) procedure was employed with three steps: 1) a physical-limit inspection for hourly mean values (e.g., exclusion of hours in which ≤50% of the data were available), 2) a step inspection for daily mean values (e.g., use of days with ≥15 hourly observations), and 3) a fast Fourier transform (FFT) analysis using curve-fitting methods to investigate climatic characteristics. The monthly mean concentrations of CO₂ and CH₄ derived from the three-step QA/QC procedure were then compared with those observed at Anmyundo (Korea) and Ryori (Japan). Overall, the error of the mean CO₂ and CH₄ concentrations estimated in this study distinctly decreased. However, compared with the concentrations monitored at Ryori, the CO₂ concentration estimated at Ulleungdo is somewhat lower than that at Anmyundo due to missing data, and the difference is statistically significant. On the other hand, the former shows a statistically significantly higher CH₄ value than the latter.
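
A minimal sketch of the first two QA/QC steps described above, assuming hourly data in a pandas DataFrame; the column names and the exact completeness bookkeeping beyond the thresholds quoted in the abstract are assumptions.

```python
import pandas as pd

def qaqc_daily_means(hourly, min_hourly_fraction=0.5, min_daily_obs=15):
    """Steps 1-2 of the QA/QC procedure (illustrative).

    hourly : DataFrame indexed by timestamp with columns 'co2', 'ch4',
             plus 'n_valid' = number of valid raw samples in each hour and
             'n_total' = number of samples expected per hour.
    Step 1: discard hourly means built from <=50% of the raw data.
    Step 2: keep daily means only for days with >=15 valid hourly values.
    """
    ok = hourly["n_valid"] / hourly["n_total"] > min_hourly_fraction
    hourly_ok = hourly.loc[ok, ["co2", "ch4"]]
    daily = hourly_ok.resample("D").agg(["mean", "count"])
    keep = daily[("co2", "count")] >= min_daily_obs
    return daily.loc[keep, [("co2", "mean"), ("ch4", "mean")]]
```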

Model Identification for Control System Design of a Commercial 12-inch Rapid Thermal Processor (상업용 12인치 급속가열장치의 제어계 설계를 위한 모델인식)

  • Yun, Woohyun;Ji, Sang Hyun;Na, Byung-Cheol;Won, Wangyun;Lee, Kwang Soon
    • Korean Chemical Engineering Research
    • /
    • v.46 no.3
    • /
    • pp.486-491
    • /
    • 2008
  • This paper describes a model identification method that was applied to commercial 12-inch RTP (rapid thermal processing) equipment with the ultimate aim of developing a high-performance advanced controller. Seven thermocouples are attached to the wafer surface, and twelve tungsten-halogen lamp groups are used to heat the wafer. To obtain a MIMO balanced state-space model, multiple SIMO (single-input multiple-output) identifications with high-order ARX models were conducted, and the resulting models were combined, transformed, and reduced to a MIMO balanced state-space model through a balanced truncation technique. The identification experiments were designed to minimize wafer warpage, and an output linearization block was proposed to compensate for the nonlinearity of the radiation-dominated heat transfer. From identification at around 600, 700, and 800 °C, respectively, it was found that the linearized output y = T² (with T in kelvin) and a state dimension of 80-100 are most desirable. With this choice, the root-mean-square value of the one-step-ahead temperature prediction error was found to be in the range of 0.125-0.135 K.
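
The paper's identification data and model orders are not available here; the sketch below only illustrates two ingredients the abstract names — the output linearization y = T² and a least-squares ARX fit scored by the one-step-ahead temperature prediction RMSE — for a single input/output channel. The ARX orders and variable names are assumptions, and the full method additionally combines the SIMO models and reduces them by balanced truncation.

```python
import numpy as np

def fit_arx_one_step(u, temp_K, na=4, nb=4):
    """Fit a SISO ARX model by least squares on the linearized output y = T^2.

    u      : lamp power input sequence
    temp_K : measured temperature sequence in kelvin
    na, nb : ARX orders (illustrative; the paper uses high-order models
             before reduction to an 80-100 state MIMO model)
    Returns the one-step-ahead temperature prediction RMSE in kelvin.
    """
    temp_K = np.asarray(temp_K, float)
    u = np.asarray(u, float)
    y = temp_K ** 2                               # output linearization y = T^2
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        # Regressor: [-y(k-1)..-y(k-na), u(k-1)..u(k-nb)]
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n0:], rcond=None)
    y_hat = phi @ theta
    temp_hat = np.sqrt(np.clip(y_hat, 0.0, None))  # back to kelvin
    return float(np.sqrt(np.mean((temp_hat - temp_K[n0:]) ** 2)))
```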